If you think of Pink Floyd’s classic track “Another Brick in the Wall (Part 1),” chances are you will instantly hear that iconic, dronelike chorus hook in your head. Its plodding melody is a key component of Roger Waters’ prosody—the vocal variations like intonation, stress, rhythm, and accent that make human speech sound… well, human. Getting a text-to-speech program to recite “We don’t need no education,” however, usually produces a very different, mechanical kind of monotone. But what if such tools could not only understand your mind’s inner voice, but also recreate your intended prosody more accurately?
Thanks to new breakthroughs from researchers at the University of California, Berkeley, experts are closer than ever to making that a reality.
[Related: This AI-powered brain scanner can paraphrase your thoughts.]
According to findings published on August 15 in PLOS Biology, a team of neuroscientists has reconstructed an audio clip of—you guessed it—Pink Floyd’s “Another Brick in the Wall (Part 1)” using only electrical activity recorded from listeners’ brains. As UC Berkeley’s announcement notes, this marks the first time researchers have successfully recreated a song’s instrumentation, rhythm, and vocal melodies from brain recordings alone.
The impressive feat was over a decade in the making. Between 2008 and 2015, researchers enlisted 29 epilepsy patients who were already scheduled to receive sets of nail-like electrode brain implants as part of their treatment. These arrays gave the team an opportunity to record the patients’ brain activity directly. From there, researchers set about matching areas of neural activity to individual audio frequency bands—128 of them, to be exact. As The New York Times noted on August 15, this meant training 128 separate computer models to decode the data, which, when combined, offered a striking recreation of Pink Floyd’s song.
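The study’s actual pipeline is far more involved, but the core idea—train one decoder per audio frequency band, then stack the 128 predictions back into a spectrogram—can be sketched in a few lines. Everything below is illustrative: the shapes, the synthetic stand-in data, and the choice of closed-form ridge regression are assumptions, not the paper’s method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T time steps of neural features
# (e.g., electrode signals) and 128 target frequency bands.
T, n_electrodes, n_bands = 200, 32, 128

# Synthetic stand-in data; the study used real intracranial recordings.
X = rng.standard_normal((T, n_electrodes))
true_W = rng.standard_normal((n_electrodes, n_bands))
Y = X @ true_W + 0.1 * rng.standard_normal((T, n_bands))  # "spectrogram"

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression weights for a single frequency band."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

# Train 128 independent decoders, one per band, then stack their
# predictions into a reconstructed time-by-frequency spectrogram.
W = np.column_stack([fit_ridge(X, Y[:, b]) for b in range(n_bands)])
reconstruction = X @ W  # shape (T, 128)
```

A reconstructed spectrogram like this would then be inverted back into an audio waveform—the step that makes the clip actually listenable.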
[Related: The science is clear: Metal music is good for you.]
“It’s a wonderful result,” study co-author Robert Knight, a neurologist and professor of psychology at UC Berkeley, said in a statement. “As this whole field of brain-machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it,” such as those suffering from ALS or similar speech-compromising conditions.
“It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect. I think that’s what we’ve really begun to crack the code on,” added Knight. In addition to their promising advances in audio reconstruction, Knight’s team was also able to confirm that the brain’s right hemisphere is more attuned to music than the left.
Why researchers landed on “Another Brick in the Wall (Part 1)” of all songs is less symbolic than practical. The epilepsy patients skewed older, and most already enjoyed the song. Speaking with The New York Times, one researcher reasoned their data would be less reliable or useful “if [participants] said, ‘I can’t listen to this garbage.’”