Shape and meaning: A study explores how the brain encodes visual objects

Opening our eyes and seeing the world before us, full of objects, is a simple act we may take for granted. Yet our brain constantly carries out an enormous analysis just to let us see a flower, a pen, the faces of our children. Where exactly in the brain does shape become meaning? A group of scientists coordinated by Davide Zoccolan of SISSA in Trieste, in collaboration with the team headed by Riccardo Zecchina of the Polytechnic University of Turin (within the Programma Neuroscienze 2008/2009 financed by Compagnia di San Paolo), studied a specific area of the brain situated precisely halfway between visual and semantic analysis, shedding light on its function. The study has just been published in PLOS Computational Biology.

The anterior inferotemporal cortex (IT) is the most advanced of the brain's visual sensory areas: the last station for the purely visual processing of the image that forms on our retina before the information is passed on to brain areas responsible for higher-order cognitive functions. Or at least this is what most scientists assumed until a few years ago, when two studies, in 2007 and 2008, cast doubt on that view by suggesting that the IT may be involved in the semantic processing of visual objects.

In other words, scientists had until then thought that this brain area represented objects mainly according to their visual properties (such as shape), whereas the new studies claimed that the meaning of the objects played a predominant role there.

Zoccolan analyzed electrophysiological data he had gathered in primates a few years earlier, while working in the laboratory of James DiCarlo at MIT in Boston. The data were processed with various machine-learning techniques, including a clustering algorithm recently developed by Zecchina's team. These methods helped verify whether the IT categorizes objects according to hierarchies based on similarity of shape or similarity of meaning. In short, the question was whether, for the IT, an orange looks more similar to a ball children play with (both are round) or to a banana (both belong to the 'fruit' category).
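The article does not describe the analysis pipeline in detail, but the logic can be sketched. Below is a minimal, hypothetical Python example, not the study's actual code and not Zecchina's algorithm: each object is represented by the response vector of a simulated neural population, the vectors are grouped by generic hierarchical clustering, and the resulting partition is compared against a shape-based and a semantic labeling of the same objects using the adjusted Rand index. The object names and labelings are invented for illustration.

```python
# Hedged sketch of a shape-vs-semantics clustering comparison.
# Simplified stand-in: the study used recorded primate IT data and a
# clustering algorithm from Zecchina's team, not the generic
# hierarchical clustering shown here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Simulated data: 8 objects x 100 "neurons". Real input would be
# trial-averaged firing rates per object.
objects = ["orange", "ball", "banana", "apple", "cube", "dog", "cat", "car"]
responses = rng.normal(size=(len(objects), 100))

# Two competing labelings of the same objects (hypothetical).
shape_labels = [0, 0, 1, 0, 2, 3, 3, 2]     # round / elongated / boxy / legged
semantic_labels = [0, 1, 0, 0, 1, 2, 2, 3]  # fruit / toy / animal / vehicle

# Hierarchical clustering of the population response vectors.
Z = linkage(responses, method="average", metric="correlation")
clusters = fcluster(Z, t=4, criterion="maxclust")

# Which labeling does the neural partition agree with more?
print("agreement with shape:    ", adjusted_rand_score(shape_labels, clusters))
print("agreement with semantics:", adjusted_rand_score(semantic_labels, clusters))
```

On the random data above, both scores hover near zero; with real population responses, a markedly higher score for one labeling would indicate which hierarchy, visual or semantic, the code reflects.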

"Our data indicate that most objects are categorized according to their visual similarity, while semantic membership seemed strongly represented only in the case of a specific class of objects: four-legged animals," explains Zoccolan. "The traditional IT model, considered as the area which encodes mainly visual information, remains still valid although our study does not rule out the notion that certain semantic classes which are particularly relevant to primates may be also represented here."

"Besides this confirmation we have also observed something unexpected", adds Zoccolan. "It has been known for a long time that the IT is a processing station of complex information, " explains the scientist, "basically, the most advanced stage of visual information processing, where objects are categorized explicitly, which means that the entire structure of complex objects, rather than single parts of them, is encoded. What we have observed instead is that the IT retains also a 'raw', lower-level coding, for instance whether an object's color is dark or light, if it's big or small, and so on. This is basically an innovative assumption that changes our interpretation of the inferotemporal area's function."
