Researchers Translate Thoughts into Speech, Potentially Allowing “Locked-In” Patients to Communicate
In an effort to unlock the speech capacity in patients who cannot speak because of so-called “locked-in syndrome,” University of Utah researchers have successfully demonstrated that they can translate brain signals into words using electrode grids placed beneath the skull. Sort of.
The method leaves plenty of room for improvement, but it does prove out an approach that could one day give patients a reliable thought-to-speech channel after traumatic brain injuries or illnesses leave them unable to communicate. Using two grids of 16 microelectrodes placed over two regions of the brain known to generate speech, the team recorded brain signals for 10 useful words – yes, no, hot, cold, thirsty, hungry, goodbye, hello, more and less – and used that data to distinguish between any two of those words a patient was thinking between 76 and 90 percent of the time.
But when they tried to distinguish among all ten words at once, the success rate dropped to between 28 and 48 percent. That’s better than chance – which would be one-in-ten, or just 10 percent – but well short of practically useful.
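The study’s actual decoding method isn’t described here, but the gap between two-way and ten-way accuracy falls out of simple probability: picking the right word out of two candidates is a much easier task than picking it out of ten. A minimal sketch on synthetic data illustrates this, using a hypothetical nearest-template classifier – the word list is from the study, but the feature dimensions, noise level, and classifier are assumptions, not the researchers’ technique:

```python
import random

random.seed(0)

WORDS = ["yes", "no", "hot", "cold", "thirsty",
         "hungry", "goodbye", "hello", "more", "less"]
DIM = 16       # one synthetic feature per electrode in a 16-channel grid (assumption)
NOISE = 3.0    # assumed noise level; not taken from the study
TRIALS = 200   # simulated recordings per test

# Each word gets a fixed random "template" signal; a trial is a noisy copy of it.
templates = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in WORDS}

def record_trial(word):
    """Simulate one noisy recording of a word's brain signal."""
    return [v + random.gauss(0, NOISE) for v in templates[word]]

def classify(signal, candidates):
    """Pick the candidate word whose template is closest (squared distance)."""
    def dist(w):
        return sum((s - t) ** 2 for s, t in zip(signal, templates[w]))
    return min(candidates, key=dist)

def accuracy(candidates):
    """Fraction of simulated trials classified correctly within a candidate set."""
    hits = 0
    for _ in range(TRIALS):
        w = random.choice(candidates)
        hits += classify(record_trial(w), candidates) == w
    return hits / TRIALS

# Average two-way accuracy over all 45 word pairs, then run the full ten-way task.
pairs = [(a, b) for i, a in enumerate(WORDS) for b in WORDS[i + 1:]]
pairwise = sum(accuracy(list(p)) for p in pairs) / len(pairs)
ten_way = accuracy(WORDS)
print(f"average pairwise accuracy: {pairwise:.0%}")
print(f"ten-way accuracy:          {ten_way:.0%}")
```

With the same synthetic signals, pairwise accuracy comes out well above the ten-way figure, and the ten-way figure still beats the 10 percent chance floor – the same pattern the researchers reported.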
The electrodes used were non-penetrating: they sit between the patient’s brain and skull but do not actually poke into the brain. That puts them closer to the signal source – and makes them more sensitive to specific brain waves – than externally worn EEG caps, while remaining less invasive than penetrating electrodes. Because they can pick up weak electrical signals within the brain, they offer finer resolution than other brain-monitoring sensors and could provide the sensitivity needed to make thought-to-speech translation reliable.
But first the researchers will have to refine their translation techniques to raise success rates from roughly one-in-four to something more like three-in-four, and ideally distinguish among more than just 10 words. To that end, the next round of tests will use larger, 11-by-11 arrays containing 121 electrodes each. Those larger implants should yield far more brain-signal data, which could in turn improve translation accuracy to the point where thought-to-speech translation becomes a viable clinical option.