Researchers illuminate the brain processes that support the emergent meaning of combined words

Humans accomplish a phenomenal number of tasks by combining pieces of information. We perceive objects by combining edges, categorize scenes by combining objects, interpret events by combining actions, and understand sentences by combining words. But researchers don't yet have a clear understanding of how the brain forms and maintains the meaning of the whole, such as a sentence, from its parts. Carnegie Mellon University researchers in the School of Computer Science's (SCS) Machine Learning Department (MLD) have shed new light on the brain processes that support the emergent meaning of combined words.

Mariya Toneva, a former MLD Ph.D. student who is now a faculty member at the Max Planck Institute for Software Systems, worked with Leila Wehbe, an assistant professor in MLD, and Tom Mitchell, the Founders University Professor in SCS, to study which regions of the brain process the meaning of combined words and how the brain maintains and updates that meaning. The work could contribute to a more complete understanding of how the brain processes, maintains and updates word meaning, and could redirect research toward areas of the brain suitable for future wearable neurotechnology, such as devices that decode what a person is trying to say directly from brain activity. Such devices could help people with diseases like Parkinson's or multiple sclerosis that limit muscle control.

Toneva, Mitchell and Wehbe used neural networks to build computational models that predict which areas of the brain process the new meaning that arises when words are combined. They tested these models by recording the brain activity of eight people as they read a chapter of "Harry Potter and the Sorcerer's Stone." The results suggest that some regions of the brain process both the meaning of individual words and the meaning of combined words, while others process only the meanings of individual words. Crucially, the authors also found that one of the neural recording tools they used, magnetoencephalography (MEG), did not capture a signal reflecting the meaning of combined words. Since future wearable neurotechnology devices might rely on recording tools similar to MEG, a potential limitation is their inability to detect the meaning of combined words, which could affect their capacity to help users produce language.
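
The article does not include the study's code, but the general encoding-model approach it describes can be illustrated with a minimal sketch. In the example below, the array shapes, the word-level and phrase-level feature matrices, and the choice of ridge regression are illustrative assumptions rather than the authors' exact pipeline: one model predicts recorded brain activity from individual-word features alone, a second adds combined-phrase features from a neural network, and the region-by-region improvement in held-out prediction accuracy points to where combined meaning may be processed.

```python
# Illustrative encoding-model comparison (a sketch, not the authors' code).
# Assumptions: word_feats capture individual-word meaning, phrase_feats add
# combined-phrase meaning from a neural network, and brain is a
# (time x regions) recording; random placeholders stand in for real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_samples, n_regions = 1000, 50
word_feats = rng.standard_normal((n_samples, 300))    # word-level features
phrase_feats = rng.standard_normal((n_samples, 300))  # combined-meaning features
brain = rng.standard_normal((n_samples, n_regions))   # recorded activity (placeholder)

def encoding_score(X, Y, n_splits=5):
    """Mean cross-validated correlation between predicted and actual activity, per region."""
    scores = np.zeros(Y.shape[1])
    for train, test in KFold(n_splits=n_splits).split(X):
        model = Ridge(alpha=1.0).fit(X[train], Y[train])
        pred = model.predict(X[test])
        for r in range(Y.shape[1]):
            scores[r] += np.corrcoef(pred[:, r], Y[test][:, r])[0, 1] / n_splits
    return scores

words_only = encoding_score(word_feats, brain)
words_plus_phrase = encoding_score(np.hstack([word_feats, phrase_feats]), brain)

# Regions whose prediction improves when combined-meaning features are added
# are candidates for processing the emergent meaning of combined words.
gain = words_plus_phrase - words_only
print("Regions with the largest gain:", np.argsort(gain)[-5:])
```

In the actual study, the features would come from a neural network language model and the brain data from fMRI or MEG recordings; this sketch only conveys the logic of comparing word-level and combined-word feature spaces.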

The team's work builds on past research from Wehbe and Mitchell that used functional magnetic resonance imaging to identify the parts of the brain engaged as people read a chapter of the same Potter book. The result was the first integrated computational model of reading, identifying which parts of the brain are responsible for such subprocesses as parsing sentences, determining the meaning of words and understanding relationships between characters.
