Improved Seizure Prediction with Reinforcement Learning Controlled Filtering Hardware

Analog filtering circuits are typically static, amplifying and filtering data exactly as specified by the designer. This is sufficient when the features of the desired output are known, but when they are not, the designer must make assumptions, and the resulting data may not contain all the information needed to characterize the signal of interest. Machine learning (ML) algorithms have repeatedly demonstrated their ability to learn from large, complex datasets, identify hidden features, and make advantageous decisions in ways that meet or exceed human ability, but they are only as good as the data from which they learn, so poorly acquired data limits their performance. What if analog filtering circuits were co-designed with ML feedback to create an entirely new class of learning signal acquisition hardware?

Reinforcement learning (RL), one of the three paradigms of ML, rewards a computational agent for desired behaviors as it explores an interactive environment, learning through trial and error by using feedback from its own actions and experiences to optimize a cumulative reward. We propose an RL algorithm that derives its states and rewards from the validation outputs of a classification-based supervised learning (SL) algorithm, another ML paradigm. The RL algorithm acts on tunable analog filtering hardware that filters the data from which the SL algorithm learns. This dual optimization will isolate the key characteristics of the signal of interest through hardware filtering while simultaneously improving the performance of the SL algorithm.
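
A minimal software sketch of this closed loop is given below, under stated assumptions: a digital band-pass filter stands in for the tunable analog hardware, an epsilon-greedy bandit stands in for the full RL agent, and the reward is the validation accuracy of a simple SL classifier on the filtered data. The synthetic data, candidate passbands, and all function names are illustrative, not part of the proposal.

```python
"""Sketch of the proposed loop: an agent tunes a filter, a classifier's
validation score on the filtered data is the reward. All specifics here
(data, bands, features) are hypothetical stand-ins."""
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 256  # sampling rate in Hz (assumed)

def make_signals(n=200, length=512):
    """Synthetic two-class data: class 1 carries a weak 20 Hz burst."""
    t = np.arange(length) / fs
    X = rng.normal(0.0, 1.0, (n, length))
    y = rng.integers(0, 2, n)
    X[y == 1] += 0.4 * np.sin(2 * np.pi * 20 * t)
    return X, y

def bandpass(X, lo, hi):
    """Software stand-in for the tunable analog band-pass filter."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, X, axis=1)

def validation_reward(X, y):
    """Reward = validation accuracy of the SL classifier on filtered data."""
    feats = np.log(np.var(X, axis=1, keepdims=True))  # crude band-power feature
    Xtr, Xva, ytr, yva = train_test_split(feats, y, test_size=0.3, random_state=0)
    return LogisticRegression().fit(Xtr, ytr).score(Xva, yva)

# Epsilon-greedy bandit over candidate passbands: a deliberately simple
# stand-in for the RL agent that would set the analog filter's parameters.
candidate_bands = [(1, 8), (8, 16), (16, 32), (32, 64)]
q = np.zeros(len(candidate_bands))
counts = np.zeros(len(candidate_bands))
X_raw, y = make_signals()
for step in range(60):
    a = rng.integers(len(candidate_bands)) if rng.random() < 0.2 else int(np.argmax(q))
    r = validation_reward(bandpass(X_raw, *candidate_bands[a]), y)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]  # incremental mean of observed rewards

print("learned passband:", candidate_bands[int(np.argmax(q))])
```

In the proposed system, the bandit would be replaced by a full RL agent whose actions set the analog filter's tunable elements, and the SL classifier's validation outputs on the hardware-filtered data would supply the states and rewards.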