November 21, 2007
Researchers have added a new piece to the puzzle of how the brain selectively amplifies those distinctions that matter most from the continuous cascade of sights, sounds, and other sensory input.
Whether recognizing a glowering face among smiling ones or the unmistakable sound of a spouse calling one's name, such “categorical perception” is central to sensory function.
Specifically, Rajeev Raizada and Russell Poldrack reported identifying a brain region that selectively amplifies behaviorally significant speech sounds. They also emphasized that their experimental approach represents a useful advance because it not only detects brain activity associated with a given task, but also identifies the type of computation the brain is performing during that task.
The researchers reported their findings in the November 21, 2007, issue of the journal Neuron, published by Cell Press.
In their experiments, the researchers used a set of ten computer-synthesized speech sounds that represented a continuum from the sound “ba” to the sound “da.” They labeled these on a 1 to 10 scale, with 1 being pure “ba” and 10 being pure “da.”
The researchers first played human volunteers different pairs of sounds from the continuum, for example sounds 4 and 7, and asked them to indicate which pairs they perceived as different from one another.
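The study does not publish its analysis code, but the logic of this behavioral step can be sketched in a few lines. In the illustrative Python below (the pair judgments, variable names, and the brute-force search are all hypothetical, not taken from the paper), a subject's perceptual category boundary along the 1 to 10 continuum is estimated as the point that best explains which pairs that subject called "different":

```python
# Hypothetical sketch: estimate a subject's category boundary on the
# 1-10 "ba"-"da" continuum from pairwise same/different judgments.
# The data and function names below are illustrative, not from the study.

# Each entry: ((stimulus_a, stimulus_b), judged_different)
pair_judgments = [
    ((1, 4), False), ((2, 5), False), ((3, 6), True),
    ((4, 7), True),  ((5, 8), True),  ((6, 9), False),
    ((7, 10), False),
]

def boundary_score(boundary, judgments):
    """Count how many judgments a candidate boundary explains:
    a pair should be judged 'different' exactly when the two
    stimuli fall on opposite sides of the boundary."""
    correct = 0
    for (a, b), judged_different in judgments:
        straddles = (a < boundary) != (b < boundary)
        if straddles == judged_different:
            correct += 1
    return correct

# Try boundaries halfway between adjacent continuum steps (1.5 ... 9.5)
candidates = [step + 0.5 for step in range(1, 10)]
best_boundary = max(candidates, key=lambda b: boundary_score(b, pair_judgments))
print(f"Estimated category boundary: {best_boundary}")
```

For the made-up judgments above, the best-fitting boundary falls between sounds 5 and 6: pairs that straddle that point were called "different," and pairs on one side of it were called "the same."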
They next played the same pairs of sounds to the subjects while the subjects' brains were scanned using functional magnetic resonance imaging (fMRI). This widely used technique uses harmless radio waves and magnetic fields to measure blood flow in brain regions, which reflects neural activity.
By searching for brain areas distinctively activated by the sound pairs that subjects had perceived as different, the researchers could pinpoint any brain region involved in the selective amplification underlying categorical perception.
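In sketch form, the comparison asks, region by region, whether responses are larger for pairs the subject perceived as different (across the category boundary) than for pairs perceived as the same. The minimal Python illustration below uses invented response values and a made-up index name; the actual study relied on a full fMRI analysis pipeline, not this toy calculation:

```python
# Hypothetical sketch of the selective-amplification contrast:
# does a candidate region respond more to sound pairs the subject
# perceived as different than to pairs perceived as the same?
# All numbers are invented for illustration.

from statistics import mean

# Mean fMRI response (arbitrary units) of one candidate region to each pair,
# grouped by the subject's own behavioral judgment of that pair.
responses_perceived_different = [1.8, 2.1, 1.9, 2.3]   # e.g. pairs like 4 vs 7
responses_perceived_same      = [0.9, 1.0, 0.8, 1.1]   # e.g. pairs like 1 vs 4

def amplification_index(diff_pairs, same_pairs):
    """Difference in mean response between across-boundary pairs and
    within-category pairs; a strongly positive value is the signature
    of amplification tied to the subject's perceptual boundary."""
    return mean(diff_pairs) - mean(same_pairs)

index = amplification_index(responses_perceived_different, responses_perceived_same)
print(f"Selective amplification index: {index:.2f}")
```

In the study itself, this kind of contrast was evaluated across the brain, and the left supramarginal gyrus was the region where the across-boundary advantage stood out.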
The fMRI scans singled out a sensory processing area of the brain known as the left supramarginal gyrus as a distinctive categorical processing region. This area activated when subjects heard sound pairs that they had earlier reported perceiving as different. In contrast, the researchers found that the left supramarginal gyrus did not respond significantly to sound pairs that the subjects perceived as the same.
What's more, the researchers found that neither lower-level auditory regions of the brain nor another speech-processing area responded significantly to the distinctive sound pairs.
The researchers wrote, “A key aspect of the method proposed here, and one that is especially relevant to categorical perception, is that we were seeking not just neural amplification per se, but in particular selective amplification. Thus, an area such as the left supramarginal gyrus not only amplified differences between the phonetic stimuli, but moreover it specifically amplified only the differences that corresponded to crossing each subject's perceptual category boundary.
“The way in which the brain selectively amplifies stimulus differences can help to reveal how its representations of the world are structured,” they wrote. “Such amplification can be said to be involved in a neural representation, as opposed to being just incidental activity, only if it is related to perception and behavior…. Used together, these tools can help to reveal when the brain sees the world in shades of gray, and when it sees in black-and-white.”