Understanding how the brain works with computers, and vice versa

For many years, Tomaso Poggio's lab at MIT ran two parallel lines of research.

Some projects were aimed at understanding how the brain works, using complex computational models. Others were aimed at improving the ability of computers to perform tasks that our brains do with ease, such as making sense of complex visual images.

But recently Poggio has found that the work has progressed so far, and the two tasks have begun to overlap to such a degree, that it's now time to combine the two lines of research.

He'll describe his lab's change in approach, and the research that led up to it, at the American Association for the Advancement of Science annual meeting in Boston, on Saturday, Feb. 16.

The turning point came last year, when Poggio and his team were working on a computer model designed to figure out how the brain processes certain kinds of visual information. As a test of the vision theory they were developing, they tried using the model vision system to actually interpret a series of photographs. Although the model had not been developed for that purpose—it was just supposed to be a theoretical analysis of how certain pathways in the brain work—it turned out to be as good as, or even better than, the best existing computer-vision systems, and as good as humans, at rapidly recognizing certain kinds of complex scenes.

“This is the first time a model has been able to reproduce human behavior on that kind of task,” says Poggio, the Eugene McDermott Professor in MIT's Department of Brain and Cognitive Sciences and Computer Science and Artificial Intelligence Laboratory.

As a result, “My perspective changed in a dramatic way,” Poggio says. “It meant that we may be closer to understanding how the visual cortex recognizes objects and scenes than I ever thought possible.”

The experiments involved a task that is easy for people, but very hard for computer vision systems: recognizing whether or not there were any animals present in photos that ranged from relatively simple close-ups to complex landscapes with a great variety of detail. It's a very complex task, since “animals” can include anything from snakes to butterflies to cattle, against a background that might include distracting trees or buildings. People were shown the scenes for just a fraction of a second, a task that relies on a particular part of the human visual system, known as the ventral visual pathway, to recognize what is seen.
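The article does not give the model's details, but the kind of feedforward processing it describes, filtering an image, pooling the responses locally, and reading out a yes/no "animal present" answer, can be sketched in a few lines. The sketch below is purely illustrative: the Gabor-like filters, pooling windows, linear classifier, and synthetic stand-in images are all assumptions for demonstration, not details of Poggio's model or data.

```python
# A minimal, illustrative sketch (not the lab's actual code) of a feedforward
# pipeline for rapid animal / no-animal categorization: filter the image,
# max-pool the responses locally, and feed the pooled features to a simple
# linear readout. All parameters and the synthetic data are assumptions.
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter
from sklearn.svm import LinearSVC

def gabor_kernel(size=9, theta=0.0, sigma=3.0, wavelength=6.0):
    """Oriented Gabor-like filter, a common stand-in for early visual filtering."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def feedforward_features(image, orientations=4, pool=8):
    """One filtering stage followed by local max pooling, per orientation."""
    feats = []
    for i in range(orientations):
        k = gabor_kernel(theta=i * np.pi / orientations)
        response = np.abs(convolve2d(image, k, mode="same"))   # filtering stage
        pooled = maximum_filter(response, size=pool)[::pool, ::pool]  # max pooling
        feats.append(pooled.ravel())
    return np.concatenate(feats)

# Synthetic stand-in for labelled photographs (1 = animal present, 0 = absent).
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

X = np.array([feedforward_features(img) for img in images])
clf = LinearSVC(dual=False).fit(X, labels)            # simple linear readout
print("training accuracy:", clf.score(X, labels))
```

The design point the sketch tries to convey is the one emphasized in the article: a purely feedforward pass over the image, with no deliberation, which is why such a model can be compared with human performance on scenes flashed for only a fraction of a second.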

The visual cortex is a large part of the brain's processing system, and one of the most complex, so reaching an understanding of how it works could be a significant step toward understanding how the whole brain works—one of the greatest problems in science today.

“Computational models are beginning to provide powerful new insights into the key problem of how the brain works,” says Poggio, who is also co-director of the Center for Biological and Computational Learning and an investigator at the McGovern Institute for Brain Research at MIT.

Although the model Poggio and his team developed produces surprisingly good results, “we do not quite understand why the model works as well as it does,” he says. They are now working on developing a comprehensive theory of vision that can account for these and other recent results from the lab.

“Our visual abilities are computationally amazing, and we are still far from imitating them with computers,” Poggio says. But the new work shows that it may be time for researchers in artificial intelligence to start paying close attention to the latest developments in neuroscience, he says.
