Scientists at the University of California, Berkeley have devised a method that may one day let them hear thoughts. In a small study, they were able to predict the words people were hearing based solely on their brain activity.
“As you listen to a sound, it activates certain parts of the auditory cortex of your brain,” said Brian Pasley, a UC Berkeley neuroscientist and lead author of the study published today in PLoS Biology. “We're interested in how the brain converts sound into meaning, so we looked at an early step in a long process.”
The team of researchers enlisted the help of 15 epilepsy patients who had electrodes implanted in their brains. As the patients listened to a series of words, such as “structure,” “doubt” and “property,” the researchers recorded their brain activity. They then developed a computer model to match the sounds with the brain signals. “You can think of the brain as a piano and the recordings as the keys,” said Pasley. “You could turn off the sound and an expert pianist would still have a good idea of what note was being produced. We're trying to be the expert pianist.”
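To make the decoding idea concrete, here is a minimal toy sketch of how a stimulus-reconstruction model of this general kind can be set up, using simulated data and a simple ridge-regression decoder; the dimensions, data, and model choice are illustrative assumptions, not details taken from the study itself.

```python
# Illustrative sketch only: a toy stimulus-reconstruction model that maps
# simulated electrode recordings back onto spectrogram features.
# All data and dimensions are invented for demonstration purposes.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_timepoints = 2000   # time samples of "listening" data
n_freq_bands = 32     # spectrogram features we want to reconstruct
n_electrodes = 64     # simulated cortical recording channels

# Hypothetical spectrogram of the heard speech (the "notes" in the piano analogy).
spectrogram = rng.standard_normal((n_timepoints, n_freq_bands))

# Simulated brain activity: each electrode responds to a weighted mix of
# frequency bands, plus noise (the "keys" being pressed).
mixing = rng.standard_normal((n_freq_bands, n_electrodes))
neural = spectrogram @ mixing + 0.5 * rng.standard_normal((n_timepoints, n_electrodes))

# Fit a decoder on part of the data, then reconstruct the held-out sound.
X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.25, random_state=0
)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
reconstructed = decoder.predict(X_test)

# How closely does the reconstruction match the original spectrogram?
corr = np.mean([
    np.corrcoef(reconstructed[:, band], y_test[:, band])[0, 1]
    for band in range(n_freq_bands)
])
print(f"Mean reconstruction correlation across bands: {corr:.2f}")
```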
To the researchers' surprise, the reconstruction was an eerie version of the actual word: a mash-up of all the sounds that matched the pattern of brain activity. Pasley hopes to fine-tune the model to discern different types of words, such as nouns and verbs, and even their meanings. “This study mainly focused on lower-level acoustic characteristics of speech. But I think there's a lot more happening in these brain areas than acoustic analysis,” he said. “We sort of take it for granted, the ability to understand speech. But your brain is doing amazing computations to accomplish this feat.”
Researchers believe this could help paralyzed people communicate. If hearing a word and imagining a word activate similar brain areas, it might be possible to develop a “prosthetic device” for speech, Pasley said. “We'd like to learn more about the imagery process - how similar or different it is from when we're actually hearing sounds,” he said. “If it's similar, this approach could have some clinical application down the line.” However, he added, “There are ethical concerns… Not with the current research, but with the possible extensions of it. There has to be a balance. If we are somehow able to encode someone’s thoughts instantaneously that might have great benefits for the thousands of severely disabled people who are unable to communicate right now. On the other hand, there are great concerns if this were applied to people who didn’t want that.”
Professor Robert Knight, one of the researchers from the University of California, Berkeley, said, “This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig's disease and can't speak. If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit.”
Jan Schnupp, Professor of Neuroscience at Oxford University, described the study as “remarkable”. He said, “Neuroscientists have long believed that the brain essentially works by translating aspects of the external world, such as spoken words, into patterns of electrical activity. But proving that this is true by showing that it is possible to translate these activity patterns back into the original sound (or at least a fair approximation of it) is nevertheless a great step forward, and it paves the way to rapid progress toward biomedical applications.”