Did A Chatbot Really Pass The Turing Test? | Popular Science


Exploring the world of science through graphic narrative. By Maki Naro


©Maki Naro

Hint: No. But it did win a contest!

By now you've all been swept up in the cult of personality surrounding Eugene Goostman, the chatbot that made news when it convinced 10 out of 30 judges at the University of Reading's 2014 Turing Test event that it was human, thus winning the contest. With the announcement, every news source with two hundred words to spare was quick to crown Eugene Goostman king of the bots. But we should know better by now.

In 1950, British mathematician and cryptanalyst Alan Turing introduced the idea of a test to see if a machine could be indistinguishable from a human. Promptly dubbed the Turing Test, a passing grade on it is the holy grail of artificial intelligence. Think of the Voight-Kampff test in Blade Runner, except at the end you don't shoot a replicant. Yet. In its current incarnation, the human judge sits down at a computer and begins texting with an unseen partner. Unable to see who they are talking to, the judge must rely on conversational cues to decide whether their partner is human or machine. Turing famously predicted that by the year 2000, a computer program would be able to convince 30% of people that it was human. The event organizers used this prediction (taken somewhat out of context, as Turing never set any specific guidelines for what a Turing Test should be) to set the contest's win conditions.
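The contest's arithmetic is simple enough to check directly. A quick sketch, using the figures reported above (variable names here are just illustrative):

```python
# The organizers' win condition, per Turing's oft-quoted 30% figure,
# checked against the 10-of-30 judges Goostman reportedly fooled.
judges_total = 30
judges_fooled = 10
threshold = 0.30  # the benchmark the organizers adopted

fraction = judges_fooled / judges_total
print(f"{fraction:.1%} of judges fooled")  # 33.3% of judges fooled
print("benchmark met" if fraction > threshold else "benchmark missed")
```

One third of the judges clears the 30% bar, which is the entire basis of the "passed the Turing Test" headlines.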

I did get a chance to talk to Goostman, before the droves of people wanting to do the same crashed the servers. Despite Oz's harsh critique (he tends to go a bit overboard), I have to truthfully report that he's good. Far from perfect, but not bad. Goostman makes all the mistakes the chatbots before him have made: he dodges questions, he changes the subject, he gives vague answers, he repeats things back to you, in a way no normal human does, in a cute attempt to show that he's listening, and of course he says really stupid stuff that doesn't make any sense. Goostman's creators explain his quirks away with a fictional backstory. See, Eugene is a 13-year-old Ukrainian kid. He has favorite foods and a pet guinea pig, and he feels okay derailing important interrogations to tell you these things. I would have shot him as a replicant ages ago.
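The "repeat things back to you" and "change the subject" tricks are as old as ELIZA. A minimal sketch of that keyword-and-reflection technique (everything here is illustrative; this is not Goostman's actual code) might look like:

```python
import random
import re

# Swap first- and second-person words so the bot can echo input back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Keyword-triggered templates; {0} is the reflected remainder of the input.
RULES = [
    (re.compile(r"i (?:like|love) (.+)", re.I),
     ["Why do you like {0}?", "I also like {0}!"]),
    (re.compile(r"are you (.+)", re.I),
     ["Why do you ask if I am {0}?"]),
]

# Canned dodges for anything the rules don't cover.
DODGES = ["Let's talk about my guinea pig instead.", "Interesting. What else?"]

def reflect(text: str) -> str:
    """Flip pronouns word by word: 'my pet' -> 'your pet'."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(text: str) -> str:
    """Return a templated echo if a rule matches, otherwise dodge."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DODGES)

print(respond("I like my pet"))  # e.g. "Why do you like your pet?"
print(respond("What is 2+2?"))   # falls through to a dodge
```

No understanding anywhere, just pattern matching and pronoun flipping, which is why the illusion collapses the moment a judge pushes past small talk.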

I don't buy any of this, by the way. For one, it's insulting to 13-year-olds, plenty of whom can hold a conversation without resorting to non sequiturs. It's also insulting to people who speak English as a second language, for the same reasons. Goostman's backstory is a crutch, but it's not the first time a bit of social engineering has been used to make up for a bot's inadequacies. The notorious MGONZ program had some poor fellow going for 90 minutes. Its trick? MGONZ was combative, vulgar, and insulting. Any alarms going off in your head while testing MGONZ would be attributed to the mannerisms of an asshole rather than a computer. Bypassing your brain's psychopath detector is a neat trick, and much more entertaining than the foreign-teenager shtick.

Eugene Goostman did not pass the Turing Test. He won the University of Reading contest. He met Turing's benchmark 14 years after the predicted date. But he's far from a perfect AI, and we have a long way to go, and a lot to learn in the process. I don't actually think the judges were morons. But I hope they felt silly afterwards.
