In a recent study in Frontiers in Neuroscience, seven patients had electrode sheets placed on their brains, which collected neural data while the patients read passages aloud from the Gettysburg Address, JFK’s inaugural speech, and Humpty Dumpty.
As each patient spoke, a computer algorithm learned to associate speech sounds—such as “foh”, “net”, and “ik”—with different firing patterns in brain cells. Eventually it learned to read those cells well enough to guess which sound a patient was producing with up to 75 percent accuracy. But the program doesn’t need 100 percent accuracy to put those sounds together into the word “phonetic”. Because our speech only takes certain forms, the system’s algorithm can correct for these errors “just like autocorrect,” says Peter Brunner, one of the co-authors of the study.
“Siri wouldn’t be more accurate than 50 or 70 percent,” he says. “Because it knows what the potential options are that you choose, or the typical sentences that you say, it can actually utilize this information to get the right choice.”
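Brunner’s autocorrect analogy can be sketched in a few lines of code. The following is a minimal illustration, not the study’s actual decoder: the vocabulary, phoneme labels, and function names here are invented for the example. The idea is that even when individual sounds are misdecoded, constraining the output to real words lets the system recover the intended one.

```python
# Illustrative sketch of autocorrect-style phoneme correction.
# VOCAB, its phoneme spellings, and the decoded input are all hypothetical.

VOCAB = {
    "phonetic": ["f", "oh", "n", "eh", "t", "ik"],
    "frenetic": ["f", "r", "eh", "n", "eh", "t", "ik"],
    "magnetic": ["m", "a", "g", "n", "eh", "t", "ik"],
}

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def correct(decoded):
    """Pick the vocabulary word whose phonemes best match the noisy decode."""
    return min(VOCAB, key=lambda word: edit_distance(VOCAB[word], decoded))

# One misdecoded phoneme ("s" instead of "f") still resolves to "phonetic",
# because no other word in the vocabulary is a closer match.
print(correct(["s", "oh", "n", "eh", "t", "ik"]))
```

The real system presumably works over probabilities rather than hard guesses, but the constraint is the same: knowing “the potential options,” as Brunner puts it, turns an imperfect sound decoder into a usable word decoder.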
It is important to record the data directly from the brain, says Brunner, because picking up neural activity from the scalp only gives a “blurred version” of what is happening inside. He likened scalp recording to hovering 1,000 feet above a baseball stadium: you can vaguely tell that the crowd is cheering, but you can’t make out any individual face.
In this case, the patients were already undergoing an epilepsy procedure in which the skull is opened and an electrode grid is placed on the brain to map areas where neurons are misfiring. The resourceful team of researchers from the National Center for Adaptive Neurotechnologies and the State University of New York at Albany used this time to conduct their own research. However, it means the study was limited by each patient’s individualized epilepsy treatment, such as where the electrodes were placed on the brain.
Because every person’s brain is so unique, and the neural activity must be picked up directly from the brain, it would be difficult to create a general brain-to-text device for the average consumer, says Brunner. However, this technology has a lot of potential for people with neurological diseases, such as ALS, who lose the ability to move and to speak. Instead of using an external device, as Stephen Hawking did, to pick out words on a screen for a computer to read, the computer would simply speak the patient’s mind.
“This is just the beginning,” said Brunner. “The prospects of this are really endless.”