The cocktail party problem

Known as "the cocktail party problem," the ability of the brain's auditory processing centers to sort a babble of different sounds, like cocktail party chatter, into identifiable individual voices has long been a mystery.

Now, researchers analyzing how both humans and monkeys perceive sequences of tones have created a model that can predict the central features of this process, offering a new approach to studying its mechanisms.

The research team of Christophe Micheyl, Biao Tian, Robert Carlyon, and Josef Rauschecker published their findings in the October 6, 2005, issue of Neuron.

For both humans and monkeys, the researchers used an experimental method in which they played repeating triplet sequences of tones at two alternating frequencies. When the frequencies are close together and alternate slowly, the listener perceives a single stream that sounds like a galloping horse. However, when the tones are widely separated in frequency or played in rapid succession, the listener perceives two separate streams of beeps.
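
The stimulus design is easy to picture in code. The sketch below (not the authors' actual stimulus code) generates the classic repeating "ABA_" triplet pattern from two pure tones; all parameter values, function names, and the 44.1 kHz sample rate are illustrative assumptions.

```python
# Minimal sketch (not the authors' stimulus code) of the "ABA_" triplet
# paradigm described above. Parameter values here are illustrative assumptions.
import numpy as np

def pure_tone(freq_hz, dur_s, sr=44100):
    """A pure tone with 5 ms raised-cosine ramps to avoid onset clicks."""
    t = np.arange(int(dur_s * sr)) / sr
    tone = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * sr)
    env = np.ones_like(tone)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return tone * env

def aba_sequence(f_a=1000.0, semitones=6, tone_dur=0.1, n_triplets=20, sr=44100):
    """Repeating A-B-A-silence triplets; the silent gap completes each cycle."""
    f_b = f_a * 2 ** (semitones / 12)   # B tone lies `semitones` above A
    gap = np.zeros(int(tone_dur * sr))
    triplet = np.concatenate([
        pure_tone(f_a, tone_dur, sr),
        pure_tone(f_b, tone_dur, sr),
        pure_tone(f_a, tone_dur, sr),
        gap,
    ])
    return np.tile(triplet, n_triplets)

# Small separations / slow rates: one "galloping" stream.
# Large separations / fast rates: two streams. Intermediate values are bistable.
signal = aba_sequence(semitones=6, tone_dur=0.1)
```

Rendering the output at small versus large semitone separations reproduces the galloping-versus-two-streams contrast the article describes.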

Importantly, at intermediate frequency separations or speeds, the listener's perception can shift after a few seconds from the single galloping sound to the two streams of beeps. The researchers could use this phenomenon to explore the neurobiology of auditory stream perception, because they could observe how perception changed while the stimulus itself remained constant.

In the human studies, Micheyl, working in Andrew Oxenham's laboratory at MIT, asked subjects to listen to such tone sequences and to signal whenever their perception changed. The researchers found that the subjects showed the characteristic perceptual shifts at the intermediate frequency separations and speeds.

Then, Tian, working in Rauschecker's laboratory at Georgetown University Medical Center, recorded signals from neurons in the auditory cortex of awake monkeys as the same sequences of tones were played to the animals. These neural responses could be used to indicate how the monkeys perceived the tone sequences.

From the monkey data, the researchers developed a model intended to predict when human listeners would hear one auditory stream versus two across different frequency separations and tone presentation rates.
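
One simple way such a prediction could work is sketched below. It assumes, as one reading of the adaptation account suggests, that cortical responses to the B tone decay over successive triplets and that "two streams" is predicted once that response falls below a criterion fraction of the A-tone response. The time constant, criterion, and function names are hypothetical, not values taken from the Neuron paper.

```python
# Hedged sketch of a threshold-style streaming prediction. The exponential
# decay, the 0.5 criterion, and all parameter values are assumptions made
# for illustration, not the authors' fitted model.
import numpy as np

def predict_two_streams(b_rel_initial=0.9, tau_s=4.0, triplet_period_s=0.4,
                        n_triplets=40, criterion=0.5):
    """Per-triplet prediction: True once segregation into two streams is predicted."""
    t = np.arange(n_triplets) * triplet_period_s
    b_rel = b_rel_initial * np.exp(-t / tau_s)  # adapting B response, relative to A
    return b_rel < criterion

pred = predict_two_streams()
# Index of the first triplet predicted to be heard as two separate streams:
build_up_triplet = int(np.argmax(pred))
```

In this toy version, a larger frequency separation would map onto a smaller initial relative B response, so the criterion is crossed sooner and segregation is predicted earlier, mirroring the perceptual build-up the human listeners reported.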

"Using this approach, we demonstrate a striking correspondence between the temporal dynamics of neural responses to alternating-tone sequences in the primary cortex…of awake rhesus monkeys and the perceptual build-up of auditory stream segregation measured in humans listening to similar sound sequences," concluded the researchers.

In a commentary on the paper in the same issue of Neuron, Michael DeWeese and Anthony Zador wrote that the new approach "promises to elucidate the neural mechanisms underlying both our conscious experience of the auditory world and our impressive ability to extract useful auditory streams from a sea of distracters."
