Researchers from HSE University and the Artificial Intelligence Research Institute (AIRI) have successfully lowered the latency between a change in brain activity and the presentation of the corresponding neurofeedback signal by a factor of 50. The results were obtained by employing a neural network trained in low-latency filtering of brain activity signals from diverse individuals. This approach opens up new prospects for the treatment of attention deficit disorder and epilepsy. A paper with the study findings has been published in the Journal of Neural Engineering.
Neurofeedback, a form of biofeedback, has been in use since the 1960s. Its core concept involves individuals receiving objective information about the parameters of their own brain activity, as recorded using an electroencephalogram (EEG), and subsequently learning to regulate their brain waves based on this feedback. For instance, a person can improve their relaxation skills by receiving feedback about the alpha rhythms in their parietal lobe, since an increase in their intensity typically coincides with a state of relaxation. Neurofeedback technology has a broad range of applications, spanning from the treatment of conditions like attention deficit hyperactivity disorder (ADHD), epilepsy, and depression to enhancing stress resilience and athlete training.
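As a rough illustration of this feedback loop (a minimal sketch, not the authors' implementation; the single parietal channel, 500 Hz sampling rate, and 8–12 Hz band edges are assumptions), a conventional alpha-intensity estimator might look like this in Python:

```python
# Minimal sketch of a conventional alpha-intensity estimator (not the
# authors' implementation). Assumed: one parietal EEG channel sampled at
# 500 Hz and an 8-12 Hz alpha band.
import numpy as np
from scipy.signal import butter, lfilter, hilbert

FS = 500  # Hz, assumed sampling rate

def alpha_intensity(eeg: np.ndarray) -> np.ndarray:
    """Return the instantaneous alpha-band envelope of one EEG channel."""
    b, a = butter(4, [8, 12], btype="band", fs=FS)
    alpha = lfilter(b, a, eeg)       # causal band-pass around 10 Hz
    return np.abs(hilbert(alpha))    # amplitude envelope drives the feedback

# Example on 10 s of synthetic EEG-like noise
rng = np.random.default_rng(0)
envelope = alpha_intensity(rng.standard_normal(10 * FS))
print(envelope.mean())
```

Note that the Hilbert transform above operates on the entire recording at once; a real-time system must approximate it from past samples only, and that approximation is exactly where the delay discussed below originates.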
However, in practice, not all individuals undergoing neurofeedback training experience substantial improvements: approximately 40% show minimal or no progress. According to HSE researchers, one of the primary reasons for this is the significant delay that occurs between the alteration in brain activity and the presentation of a feedback signal reflecting this change.
"Previously, we discovered that during occipital alpha rhythm training, the frequency of brain activity bursts per unit of time changes, while their duration and amplitude remain constant. The concept behind this training is that individuals can learn to induce a state that increases the frequency of these bursts, which makes timely positive reinforcement of such transitions crucial. However, in the majority of systems used today, the feedback signal is delivered with a delay exceeding 500 ms. Under such circumstances, establishing a correlation between the feedback and the corresponding event becomes difficult."
Alexei Ossadtchi, Research Team Leader, Director of the Centre for Bioelectric Interfaces at the HSE University Institute for Cognitive Neuroscience, Head of the Neurointerfaces Group at AIRI
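To make the "bursts per unit of time" mentioned in the quote concrete, here is a hypothetical burst counter that thresholds the alpha envelope; the threshold and minimum burst duration are illustrative assumptions rather than values from the study.

```python
# Hypothetical alpha-burst counter (illustrative; parameters are assumptions).
import numpy as np

def count_bursts(envelope: np.ndarray, fs: float,
                 threshold: float, min_dur: float = 0.1) -> int:
    """Count intervals where the envelope stays above `threshold`
    for at least `min_dur` seconds."""
    above = envelope > threshold
    edges = np.diff(above.astype(int))   # +1 rising edge, -1 falling edge
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:                         # burst already in progress at start
        starts = np.r_[0, starts]
    if above[-1]:                        # burst still in progress at end
        ends = np.r_[ends, above.size]
    durations = (ends - starts) / fs
    return int(np.sum(durations >= min_dur))
```

A training protocol would then reward the trainee whenever the burst rate rises, which is why reinforcement arriving half a second late is so hard to attribute to the burst that triggered it.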
Decreasing the delay in presenting the feedback signal increases the likelihood of activating the neuroplasticity mechanisms required to attain a lasting effect from the training. In a previous study, all participants in groups experiencing a minimal delay, set at 250 ms, successfully increased the frequency of alpha rhythm bursts per unit of time. However, in groups experiencing delays of approximately 500 ms or more, only about 60% of participants were able to accomplish the task.
According to the researchers, further reduction of the delay is likely to result in an even more pronounced acceleration of the learning process and in the attainment of long-term training effects. However, the most substantial component of the delay in presenting the feedback signal is linked to fundamental constraints.
The issue lies in Gabor's uncertainty principle, which limits how precisely a signal can be localized in time and frequency simultaneously. To isolate a narrow-band rhythm, the recorded signal must be observed over a time interval of approximately 200–300 ms. This means that the filtering process (selecting the relevant brain rhythms) takes time and thus delays the signal. The researchers have suggested employing a neural network model of the target signal to accelerate its detection amidst the rest of the brain's activity.
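The cost of conventional filtering can be made explicit. Gabor's uncertainty principle (Δt · Δf ≥ 1/4π) forces a narrow-band filter to observe an extended window of signal, and a linear-phase FIR filter delays its output by half its own length. The sketch below (the sampling rate and filter length are assumptions) shows that a ~250 ms band-pass filter alone contributes roughly 125 ms of delay, before envelope extraction adds more:

```python
# How much delay does conventional filtering itself impose? (illustration;
# the 500 Hz sampling rate and ~250 ms filter length are assumptions)
import numpy as np
from scipy.signal import firwin, group_delay

FS = 500
NTAPS = int(0.25 * FS) + 1            # ~250 ms linear-phase FIR band-pass
taps = firwin(NTAPS, [8, 12], pass_zero=False, fs=FS)

w, gd = group_delay((taps, [1.0]), fs=FS)   # gd is in samples
in_band = (w >= 8) & (w <= 12)
print(f"delay in the alpha band: {1000 * np.median(gd[in_band]) / FS:.0f} ms")
# prints ~125 ms: half the filter length, before any envelope smoothing
```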
The scientists trained multiple neural networks on extensive datasets of brain activity recorded from different individuals, assessed their robustness by introducing noise, and then applied them to data from 25 subjects undergoing alpha-rhythm training. Various architectures were tested, and the Temporal Convolutional Network (TCN) demonstrated the best performance.
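The press release does not detail the winning architecture, so the following is only a generic causal TCN of the kind such a filter could be built from, sketched in PyTorch; the layer count, channel width, and kernel size are assumptions. The key property is causality: each output sample depends only on past input.

```python
# Generic causal TCN sketch in PyTorch (an assumed architecture, not the
# one from the paper): dilated convolutions that only look at past samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1-D convolution padded on the left so output at time t uses x[<=t]."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(F.pad(x, (self.pad, 0)))

class TCN(nn.Module):
    """Maps raw EEG to a per-sample estimate of the alpha-rhythm envelope."""
    def __init__(self, channels=16, layers=4, kernel_size=5):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(layers):                 # dilations 1, 2, 4, 8
            blocks += [CausalConv1d(in_ch, channels, kernel_size, 2 ** i),
                       nn.ReLU()]
            in_ch = channels
        self.body = nn.Sequential(*blocks)
        self.head = nn.Conv1d(channels, 1, 1)   # per-sample envelope output

    def forward(self, x):                       # x: (batch, 1, time)
        return self.head(self.body(x))

model = TCN()
eeg = torch.randn(1, 1, 500)                    # 1 s at an assumed 500 Hz
print(model(eeg).shape)                         # torch.Size([1, 1, 500])
```

Because every convolution here is causal, the estimate at each sample is available as soon as that sample arrives; the remaining latency is essentially acquisition and processing overhead, which is consistent with the ~10 ms figure quoted below.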
"Based on the TCN, a filter was constructed to isolate rhythmic activity, leading to a reduction in the delay of presenting a feedback signal which reflected the instantaneous intensity of the alpha rhythm to just 10 ms. Hence, we have lowered the delay by approximately fifty-fold as compared to the majority of neuroefeedback systems. At the same time, we monitored the transitions of neural populations from the excitation phase to the inhibition phase with virtually no delay," explains Alexei Ossadtchi.
According to the study authors, their findings can justify a reassessment of the efficacy of neurofeedback in addressing various neurological disorders, since employing the method with reduced delay can substantially increase the proportion of patients who respond positively to this therapy. Furthermore, it opens up the possibility of developing closed-loop brain stimulation paradigms for the treatment of severe neurological disorders: artificial feedback loops that the brain cannot distinguish from its natural feedback mechanisms could induce targeted plastic changes within the brain's neural networks.
Journal reference:
Semenkov, I., et al. (2023). Real-time low latency estimation of brain rhythms with deep neural networks. Journal of Neural Engineering. https://doi.org/10.1088/1741-2552/acf7f3