Research could pave way for using machine learning to guide treatment for Parkinson’s

Skoltech scientists have shown that a pair of artificial neural networks can learn to suppress the self-sustained collective signaling patterns that are typical of degenerative neurons in the brain. This could pave the way for using machine learning to guide an effective treatment for such neurological diseases as Parkinson’s. The two sides of this story, physics and machine learning, recently appeared in major journals in their respective fields: the physical aspects of neuronal synchronization were published in Chaos: An Interdisciplinary Journal of Nonlinear Science, and the machine learning framework appeared in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2020, an A* event).

Parkinson’s disease is a debilitating neurological condition estimated to affect 10 million people worldwide, and as the global population ages, it is expected to become more prevalent. One current hypothesis about the origins of the disease implicates networks of degenerative neurons in the brain that fire synchronously, creating unwanted signals that impede normal brain function.

To alleviate the symptoms of Parkinson’s and some other diseases, doctors often resort to deep brain stimulation (DBS), in which certain brain regions are continuously stimulated via implanted micro-electrodes. Deep brain stimulation can relieve the limb tremor associated with the disease, and although the exact mechanism is still unknown, researchers hypothesize that it “destroys” the firing synchrony of the degenerative network of neurons.

However, as the researchers note in their papers, controlling a large network of interacting neurons is a complicated nonlinear problem. So Dmitry Dylov, Assistant Professor at the Center for Computational and Data-Intensive Science and Engineering (CDISE) at Skoltech, and his colleagues decided to use a machine learning approach called reinforcement learning to create an algorithm that would learn to guide the deep brain stimulation device adaptively.

“There are many complicated neuronal models that try to explain the synchronous activity. Only recently, our colleagues at the University of Potsdam (in an effort led by Professor Rosenblum) understood how to get closed-loop feedback control of such neuronal ensembles without any knowledge about the model at hand. It inspired us to offer reinforcement learning for the same task: the framework we developed does not need to know anything about the system that you want to control. All you need to do is to say: this action was good, and that action was bad… many times.”

Dmitry Dylov, Assistant Professor at the Center for Computational and Data-Intensive Science and Engineering (CDISE) at Skoltech

For this, they used two artificial neural networks called the Actor and the Critic, which were trained to suppress the collective mode, or the synchronized neuronal “noise”, while relying on nothing but a reward for asynchrony. The Actor is the network that chooses and sends the suppression stimuli to the environment; after each of the Actor’s actions, the Critic evaluates the chosen strategy by assessing an advantage function, which represents the “reward” for both networks.
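For readers who want the mechanics, a minimal actor-critic sketch in Python (with PyTorch) is given below. It is not the authors’ implementation: the network sizes, the Gaussian policy over a single stimulation amplitude, and the temporal-difference advantage estimate are illustrative assumptions consistent with the description above.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps an observation of the ensemble (e.g., its mean field) to a
    Gaussian policy over a single stimulation amplitude."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),  # mean of the stimulation amplitude
        )
        self.log_std = nn.Parameter(torch.zeros(1))

    def forward(self, obs):
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())

class Critic(nn.Module):
    """Estimates the value of the current state of the ensemble."""
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs):
        return self.net(obs)

def update(actor, critic, opt_actor, opt_critic,
           obs, action, reward, next_obs, gamma=0.99):
    """One advantage-actor-critic step. The reward would penalize
    synchrony, e.g., the amplitude of the collective mode."""
    # Temporal-difference advantage: how much better the outcome was
    # than the Critic expected.
    advantage = reward + gamma * critic(next_obs).detach() - critic(obs)

    # Critic learns to predict the return more accurately.
    critic_loss = advantage.pow(2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    # Actor makes actions with positive advantage more likely.
    actor_loss = -(actor(obs).log_prob(action) * advantage.detach()).mean()
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
```

Note that `update` never touches the underlying neuronal model: it sees only observations, actions, and a scalar reward, which is exactly the model-free property highlighted in Dylov’s quote above.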

Instead of real patient data, the scientists relied on two numerical models that simulate the pathological dynamics of neuronal ensembles: periodic Bonhoeffer-van der Pol oscillators, which mimic regularly spiking cells, and chaotically bursting Hindmarsh-Rose neurons. In experiments with both models, the Actor-Critic pair of neural networks ultimately learned to control the collective firing of an ensemble of thousands of neurons.
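To make the synthetic testbed concrete, here is a sketch of the first kind of model: an ensemble of globally coupled Bonhoeffer-van der Pol (FitzHugh-Nagumo-type) oscillators. The parameter values and the coupling scheme are illustrative assumptions rather than the exact setup of the papers; the point is that the mean field X(t) is the collective mode the controller observes and tries to flatten.

```python
import numpy as np

def simulate(n=1000, steps=20000, dt=0.01, coupling=0.03, stim=None, rng=None):
    """Integrate n globally coupled Bonhoeffer-van der Pol oscillators
    and return the mean field X(t), a proxy for pathological synchrony."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.5, n)       # fast, voltage-like variables
    y = rng.normal(0.0, 0.5, n)       # slow recovery variables
    a = rng.normal(0.7, 0.05, n)      # mild heterogeneity across "neurons"
    mean_field = np.empty(steps)
    for t in range(steps):
        X = x.mean()                      # collective mode seen by the agent
        u = stim(t, X) if stim else 0.0   # globally applied stimulation
        dx = x - x**3 / 3.0 - y + 0.8 + coupling * (X - x) + u
        dy = 0.08 * (x + a - 0.8 * y)
        x, y = x + dt * dx, y + dt * dy   # simple Euler step
        mean_field[t] = X
    return mean_field

# Uncontrolled run: the coupled ensemble synchronizes, which shows up
# as a large-amplitude oscillation of the mean field.
X = simulate()
print("mean-field std (synchrony proxy):", X[-5000:].std())
```

In this kind of setup, a controller such as the Actor sketched earlier would supply the `stim` callback and be rewarded for keeping the mean field flat.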

“It’s somewhat audacious to attempt to ‘control’ large groups of malfunctioning neurons, so the fact that it worked was fascinating by itself. We also discovered that our neural networks learnt how to suppress the undesired signal even if we skipped several of the stimulation pulses in the train. It’s a very important finding for future implementation by engineers, because DBS devices need to send electric pulses into the brain only when absolutely necessary, to avoid damage and the brain’s adaptation to the stimuli,” Dylov notes.
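The sketch below illustrates, under stated assumptions rather than the authors’ protocol, two waveform properties discussed here and in the quote further below: biphasic pulses that carry zero net charge into the tissue, and random skipping of scheduled pulses in the train.

```python
import numpy as np

def pulse_train(steps, period=200, width=5, amplitude=1.0,
                skip_prob=0.3, rng=None):
    """Return a stimulation waveform of length `steps`.

    Each pulse is biphasic: a positive phase followed by an equal and
    opposite negative phase, so its integral (net charge) is zero. With
    probability `skip_prob`, a scheduled pulse is omitted entirely,
    mimicking the skipped-pulse robustness test described above.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    u = np.zeros(steps)
    for start in range(0, steps - 2 * width, period):
        if rng.random() < skip_prob:
            continue                                     # skipped pulse
        u[start:start + width] = amplitude               # positive phase
        u[start + width:start + 2 * width] = -amplitude  # negative phase
    return u

u = pulse_train(10_000)
print("net charge of the train:", u.sum())  # zero by construction
```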

The scientists note that the workflow they have proposed is universal, in that it can be used for any predefined stimulation pattern, and “could pave the way towards clinical realization of the DBS via reinforced learning”. This could take the form of a library of pre-trained neural networks embedded into the software controlling a DBS device: a personalized approach for patients with different signaling patterns and at different progression stages of the disease, regardless of its aetiology.

The researchers emphasize that, even though a causal link between the synchrony of neurons and the pathology is yet to be proven conclusively, their model can potentially still be used to guide deep brain stimulation in medical practice. “We find Reinforced Learning to be an ideal candidate for clinical approbation as a ‘smart’ control algorithm to be embedded into the deep brain stimulation devices,” the authors conclude.

“In our experiments, we considered as many of the practical aspects as possible. For example, we varied the duration of pulses, considered pulses that carry zero charge into the brain tissues, reduced the amplitude of the stimuli to the minimum, etc. Given all that, embedding our pre-trained model into an implantable deep brain stimulation device is actually very easy. However, the next step would be to test the ‘smart’ gadgets in vivo, on real brains. So, we must make sure it is absolutely safe, entailing all the regulatory prerequisites that accompany such endeavors. A natural progression is to switch from the synthetic brain to animal models first. Then, one can talk about clinical validation,”

Dmitry Dylov

Other institutions involved in this research include the Institute of Physics and Astronomy at the University of Potsdam and the Microsoft Research Lab in Montreal.

Journal reference:

Krylov, D., et al. (2020). Reinforcement learning for suppression of collective activity in oscillatory ensembles. Chaos: An Interdisciplinary Journal of Nonlinear Science. https://doi.org/10.1063/1.5128909
