Researchers studying mental illness have figured out how to induce schizophrenic symptoms in a computer, causing it to place itself at the center of bizarre delusions, such as claiming responsibility for a terrorist bombing. The results bolster a hypothesis that faulty information processing can lead to schizophrenic symptoms.
Computer scientists at the University of Texas at Austin built a neural network called DISCERN that can learn natural language. The researchers trained it on a series of simple stories, teaching it to store information as relationships between words and sentences, much the way a person learns a story.
Then they started over, but cranked up DISCERN's learning rate, so it assimilated words faster and ignored less of the noise in the data.
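The article doesn't describe DISCERN's actual architecture or training code, but the knob being turned is easy to picture. Here is a minimal, hypothetical Python sketch (the stories, the tiny next-word predictor, and the learning-rate values are all illustrative assumptions, not the researchers' setup): the same toy network is trained twice, once with a modest learning rate and once with an exaggerated one standing in for "hyperlearning."

```python
import numpy as np

# Two toy "stories" sharing some vocabulary (purely illustrative; not the
# stories used to train DISCERN).
STORIES = [
    "tom went to the store and bought bread".split(),
    "the agent planted a bomb near the station".split(),
]

# Shared vocabulary and word indices.
vocab = sorted({w for story in STORIES for w in story})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

def train_next_word_net(stories, lr, epochs=200, seed=0):
    """Train a tiny next-word predictor (a softmax over a VxV weight matrix) with SGD."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((V, V))  # W[i] = logits for the word following word i
    for _ in range(epochs):
        for story in stories:
            for prev, nxt in zip(story, story[1:]):
                i, j = idx[prev], idx[nxt]
                logits = W[i]
                probs = np.exp(logits - logits.max())
                probs /= probs.sum()
                grad = probs.copy()
                grad[j] -= 1.0        # gradient of cross-entropy w.r.t. the logits
                W[i] -= lr * grad     # SGD update; lr is the knob being "cranked up"
    return W

def next_word(W, word):
    """Most likely next word according to the trained weights."""
    return vocab[int(np.argmax(W[idx[word]]))]

normal = train_next_word_net(STORIES, lr=0.1)  # baseline learning rate
hyper = train_next_word_net(STORIES, lr=5.0)   # exaggerated rate: the "hyperlearning" analogue

for w in ("tom", "the", "agent"):
    print(f"{w!r}: normal -> {next_word(normal, w)!r}, hyper -> {next_word(hyper, w)!r}")
```

The point of the sketch isn't the toy model's output; it's that the only difference between the two runs is the learning rate, the analogue of the manipulation the researchers describe.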
Some mental health experts believe people with schizophrenia cannot forget or ignore as many stimuli as they should, which makes it difficult to synthesize and process the right information. This "hyperlearning" phenomenon causes them to lose the connections among individual stories, and with them the distinction between reality and illusion. Dopamine is a key factor in the process of understanding and differentiating experiences.
Telling the computer to “forget less” was akin to flooding the system with dopamine, confounding its ability to discern relationships between words, sentences and events, according to a news release from UT.
“DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall,” according to the news release. In one answer, it claimed responsibility for a terrorist bombing.
The experiment doesn't prove the hyperlearning hypothesis, but it does lend it additional credence, according to the researchers, who published their crazed-computer findings in the journal Biological Psychiatry. It also suggests that neural networks can be a useful analogue for the information-processing centers of the brain, according to graduate student Uli Grasemann, who participated in the research.
“We have so much more control over neural networks than we could ever have over human subjects,” he said. “The hope is that this kind of modeling will help clinical research.”
[via ScienceBlog]