Food for thought: your brain weighs the possible meanings of a word before you’ve even heard its final sound. That’s the conclusion scientists at the University of Rochester reached, and proved for the first time, using functional MRI (fMRI), a brain-imaging tool, to watch split-second activity. Scientists had long postulated that listeners can keep up with spoken language, which runs at up to five syllables per second, only by drawing on a small subset of words they already know. It’s much like Google predicting your search terms before you finish typing them.
To test whether the brain predicts meaning this way, the researchers had to create a new language. Familiar English words like “kick” carry too many nuances of meaning, they say, to yield clean results. So the scientists targeted the motion-sensitive part of the brain, known as V5, by inventing words that share the opening syllables of motion verbs but end differently. Test subjects learned new words like “biduko,” meaning “the shape will move across the screen,” and “biduka,” meaning “the shape will change color.” When the subjects sat in front of a computer monitor that displayed new shapes while these words were played, the fMRI results showed brain activity in the V5 area a split-second before participants heard the final syllable. The team will continue running more complicated tests to see how the brain responds to syntax, sound and touch sensations.
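The kind of prediction the study describes can be sketched as simple prefix matching over a known vocabulary: as each syllable arrives, the set of consistent words shrinks, and meaning can be anticipated before the word is complete. This is a loose analogy, not the researchers’ model; the two-word lexicon below mirrors the invented language from the study, with the meanings paraphrased from the article.

```python
# A toy sketch of prefix-based word prediction, analogous to how a listener
# (or a search engine) narrows candidates before a word is finished.
# Illustrative assumption: a tiny lexicon standing in for the study's stimuli.

LEXICON = {
    "biduko": "the shape will move across the screen",
    "biduka": "the shape will change color",
}

def candidates(prefix):
    """Return every word in the lexicon consistent with the syllables heard so far."""
    return sorted(w for w in LEXICON if w.startswith(prefix))

# After the shared syllables "bidu-", both meanings remain possible...
print(candidates("bidu"))    # ['biduka', 'biduko']
# ...and only the final syllable disambiguates.
print(candidates("biduko"))  # ['biduko']
```

Because both words begin “bidu-,” any prediction made before the last syllable must cover both meanings, which is exactly why the shared-prefix design let the researchers catch V5 activating early.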