A new AI-powered brain implant allowed a paralyzed man to speak again

This is a big step forward from current assistive speech technology.
Neurosurgeon Edward Chang performs brain surgery at UCSF. Barbara Ries


In 2003, a then-20-year-old man suffered a severe stroke after a car crash. It left him paralyzed, with only his eye movements and minimal head movements intact. Because the muscles of his vocal tract were also paralyzed, he could no longer speak.

Eighteen years later, scientists have announced that, with an implanted array of electrodes and the help of artificial intelligence, this man has regained a partial ability to speak again. 

In a three-hour-long surgery, scientists opened the man’s skull and laid an array of 128 electrodes on top of his sensorimotor cortex, which houses the neural regions implicated in speech processing. Before closing him up, they connected those electrodes to a port that extends from the participant’s head and relays the electrode signals to a computer via a detachable cable. 

Illustration showing placement of the ECoG electrode array on the participant’s speech motor cortex and the headstages used to connect the electrodes to the computer. Ken Probst, UCSF

When the man, who goes by the nickname Pancho according to The New York Times, thinks of words, those communication centers of the brain activate, and the electrodes detect the resulting signals. In 50 sessions spanning 81 weeks, Pancho went through lists of common words while connected to an artificial intelligence interface. Scientists trained this deep-learning model to recognize Pancho’s neural patterns and match them to the words he intended. Through this system, Pancho is able to communicate beyond just “yes” and “no.” He was able to say things like “They are going outside” and “Bring my glasses, please.”
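The study’s actual decoder is a trained deep neural network operating on brain recordings; as a rough, hypothetical sketch of the core idea only — turning a vector of electrode features into probabilities over a small vocabulary and picking the most likely word — a toy classifier might look like this (the weights and vocabulary here are invented for illustration):

```python
import numpy as np

# Hypothetical sketch: map a feature vector from the 128 electrodes to
# probabilities over a tiny vocabulary. The real system used a trained
# deep-learning model; the weights below are random stand-ins.
VOCAB = ["yes", "no", "hello", "glasses", "outside"]
rng = np.random.default_rng(0)
W = rng.normal(size=(len(VOCAB), 128))  # stand-in for learned weights
b = np.zeros(len(VOCAB))

def decode_word(features: np.ndarray) -> str:
    """Return the vocabulary word with the highest softmax probability."""
    logits = W @ features + b
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return VOCAB[int(np.argmax(probs))]

signal = rng.normal(size=128)  # simulated electrode features
print(decode_word(signal))     # prints one of the five vocabulary words
```

In the real system, each attempted word produces a burst of activity across the electrode array; training consists of showing the model many such bursts paired with the word Pancho was trying to say.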

But the algorithm was imperfect, and sometimes had problems with similar-sounding words. Sentences like “Hello, how are you?” became “Hungry how am you?” So the team of researchers created a second artificial intelligence, one that modeled natural language. This kind of AI takes things like syntax into account, predicting and decoding sentences based on the rules of language and how likely it is for certain words to follow others. Altogether, these models could successfully decode 75 percent of Pancho’s words. This unprecedented scientific feat was published in The New England Journal of Medicine.
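The second model’s contribution — preferring likely word sequences over unlikely ones — can be sketched, under heavy simplification, as rescoring the neural classifier’s guesses with a word-sequence prior. All probabilities below are invented for illustration; the published system used a far richer language model:

```python
# Hypothetical sketch of language-model rescoring: the neural classifier's
# per-word scores are combined with the probability that each word follows
# the previous one, so likely sequences ("how are you") beat unlikely
# ones ("how am you"). All numbers here are made up for illustration.
classifier_scores = {"are": 0.40, "am": 0.45, "is": 0.15}   # neural decoder output
bigram_prior = {("how", "are"): 0.60,                        # P(word | previous word)
                ("how", "am"): 0.05,
                ("how", "is"): 0.35}

def rescore(prev_word, scores, prior):
    """Pick the word maximizing classifier score x language-model prior."""
    return max(scores, key=lambda w: scores[w] * prior.get((prev_word, w), 0.0))

print(rescore("how", classifier_scores, bigram_prior))  # prints "are"
```

Note that the classifier alone would have picked “am” (its highest raw score), but the language prior flips the decision to “are” — the same kind of correction that turns “how am you” back into “how are you.”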

When the study first began, Pancho had not spoken for more than a decade. After such a long period of disuse, researchers were unsure whether his brain had even retained the mechanisms and circuitry for speech. 


“We didn’t know if the speech commands in the brain would still work after 15 years,” Edward Chang, chair of neurological surgery at the University of California, San Francisco, and the study’s lead researcher, told NPR. “And even if we could revive those dormant brain signals for speech, could we actually translate those into full words?” 

Luckily, Pancho’s speech circuits remained intact. Granted, he is currently only able to speak at a rate of 15 to 18 words per minute (far below the 125 to 150 words per minute of typical conversational speech), but the team is optimistic that they can make the system faster, more accurate, and eventually wireless. 

For Pancho, having this new way to communicate and be understood is “a life-changing experience,” he said to The Times through email. “Not to be able to communicate with anyone, to have a normal conversation and express yourself in any way, it’s devastating, very hard to live with … It’s very much like getting a second chance to talk again.”