Integrating local motion signals gives rise to the perception of global pattern motion

Study shows perceptual learning of global pattern motion relies on local motion signals

Researchers have long known of the brain's ability to learn based on visual motion input, and a recent study has uncovered more insight into where the learning occurs.

The brain first registers changes in visual input (local motion) in the primary visual cortex. These local motion signals are then integrated at later visual processing stages and interpreted as global motion by higher-level areas.

But when subjects in a recent experiment using moving dots were asked to detect global motion (the overall direction of the dots moving together), the results showed that their learning relied more on local motion processes (the movement of dots in small areas) than on global motion areas.
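The stimulus described above is a classic random-dot display: only a fraction of dots move coherently in a shared direction, and the global direction emerges from pooling the local motion vectors. The sketch below is a toy illustration of that pooling step, not the study's actual method; the dot count, coherence level, and vector-averaging rule are assumptions for the example.

```python
import math
import random

def make_stimulus(n_dots, coherence, signal_angle, rng):
    """Random-dot stimulus: a `coherence` fraction of dots move in
    signal_angle; the rest move in uniformly random directions."""
    n_signal = int(round(n_dots * coherence))
    angles = [signal_angle] * n_signal
    angles += [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_dots - n_signal)]
    return angles

def global_direction(local_angles):
    """Estimate the global motion direction by vector-averaging the
    local motion directions (the incoherent dots largely cancel out)."""
    x = sum(math.cos(a) for a in local_angles)
    y = sum(math.sin(a) for a in local_angles)
    return math.atan2(y, x)

rng = random.Random(0)
# 1000 dots, 50% coherence, signal direction 45 degrees (pi/4 radians)
angles = make_stimulus(1000, 0.5, math.pi / 4, rng)
est = global_direction(angles)
```

Even at 50% coherence, the vector average recovers a direction close to the signal direction, because the random local vectors sum to a comparatively small residual.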

"We had expected that higher-level processing could be more involved in task-relevant perceptual learning investigated in this study," said Dr. Shigeaki Nishina, who conducted the research at Boston University and is now at the Honda Research Institute Japan. "Contrary to the expectation, the result suggested local motion signals are predominantly used for task-relevant perceptual learning of global motion, which was surprising to us."

Nishina said the results, which appear in the latest issue of the Journal of Vision (http://www.journalofvision.org/9/9/15/), show that the improvement in detecting global motion is due not to learning of the global motion itself but to learning of the local motion of the moving dots in the test.

The researchers said the study of perceptual learning can give scientists deeper insight not only into sensory systems but also into the adaptable nature of the brain as a whole.

"This line of study could give a guideline for optimizing human machine interface," said Nishina. "When we use a new machine, we need to learn how to get information from the machine. In our study, local motion signals were more important for the brain to learn a task based on global motion. This suggests that the optimal information for efficient learning could be different from the visual information that is directly related to the task to be learned."

In addition, Nishina said that knowing where the brain processes task-relevant perceptual learning can lead to a further understanding of how the brain makes decisions based on sensory input.

"We expect that our results will help the understanding of decision-making process and constructing a more concrete model of the process," he said.

The Journal of Vision is an online-only, peer-reviewed, open-access publication devoted to visual function in humans and animals. It is published by the Association for Research in Vision and Ophthalmology. It explores topics such as spatial vision, perception, low vision, color vision and more, spanning the fields of neuroscience, psychology and psychophysics. JOV is known for hands-on datasets and models that users can manipulate online.
