The role of prediction and attention in understanding of speech
Listening to a conversation at a cocktail party presents a great challenge for the auditory system. Without realizing it, one must extract the sound of a single voice from a complex mixture of sounds in order to understand and track it. Researchers at Queen's University, led by Dr. Ingrid Johnsrude, are studying how our brains meet that challenge and allow us to distinguish specific voices in crowded, noisy and distracting environments. Her studies have revealed that the brain does not rely solely on the incoming sounds that reach the ear to understand and retain speech, but also draws on information from other senses and on prior knowledge to facilitate comprehension. These results were presented at the 8th Annual Meeting of the Canadian Association for Neuroscience, held in Montreal, Canada, May 25 to 28, 2014.
Dr. Johnsrude's studies exposed test subjects to degraded or clear speech in the presence or absence of distraction. By examining the activation of different brain regions while subjects listened under these different conditions, her research has revealed that the early processing of sound, which occurs in a brain region called the primary auditory cortex, depends on higher-level linguistic knowledge encoded in other regions of the brain.
Following a conversation in a noisy environment also requires one to disregard surrounding noises and distractions and focus specifically on a conversational partner. While clear speech was understood and remembered whether or not subjects were distracted by other tasks, attention proved critically important for understanding degraded speech.
What you hear and understand of a conversation is influenced by what you are used to hearing, so a familiar voice is easier to understand than that of a stranger. This is especially true for older adults: with age, they have more difficulty understanding new voices in a cocktail party situation, but show no decline in their ability to understand familiar voices in the same situation.
"We're all familiar with the glass-half-empty view of aging - that, as you get older, everything gets worse," says Dr. Johnsrude. "You need glasses, your memory goes, and it's harder to hear when you're conversing in a busy place like a restaurant or a party, where many people are talking at once. We wanted to investigate the glass-half-full side of aging. One thing that older people have more of than younger people is experience. I study how the experience of older people, like their familiarity with the voice of their significant other, helps them compensate for age-related declines in other abilities."
Furthermore, Dr. Johnsrude was able to show that activation of a particular brain region, the higher-order speech-sensitive cortex, can be viewed as a neural signature of effortful listening. Measuring the effort required to understand speech, using the techniques she developed, may provide a novel way to assess the efficacy and comfort of hearing prostheses, and help researchers optimize the benefits obtained from these devices.