Here’s How We Prevent The Next Racist Chatbot

Tay.ai is the consequence of poor training
Tay is a millennial A.I. chatbot. Screenshot


It took less than 24 hours and 90,000 tweets for Tay, Microsoft’s A.I. chatbot, to start generating racist, genocidal replies on Twitter. The bot has ceased tweeting, and we can consider Tay a failed experiment.

In a statement to Popular Science, a Microsoft spokesperson wrote that Tay’s responses were caused by “a coordinated effort by some users to abuse Tay’s commenting skills.”

The bot, which had no consciousness, learned those words from the data it was trained on. Tay reportedly had a “repeat after me” function, but some of the most offensive tweets appear to have been generated by Tay itself rather than simply parroted.

Life after Tay

Tay is not the last chatbot that will be exposed to the internet at large, however. For artificial intelligence to be fully realized, it needs to learn restraint and social boundaries much the same way humans do.

Mark Riedl, an artificial intelligence researcher at Georgia Tech, thinks that stories hold the answer.

“When humans write stories, they often exemplify the best about their culture,” Riedl told Popular Science. “If you could read all the stories a culture creates, those aspects of what the protagonists are doing will bubble to the top.”

By training artificial intelligence systems to read stories with upstanding protagonists, Riedl argues, we can give machines a rough form of moral reasoning.

The technique Riedl has devised, called Quixote, places a quantifiable value on the socially appropriate behavior found in stories. This reward system reinforces good behavior and punishes bad behavior as the A.I. algorithm simulates its choices.
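As a rough illustration of the idea (not Riedl’s actual implementation), a reward signal could be derived from how often protagonists perform each action in a corpus of stories; the trace data and names like `story_traces` below are invented for the sketch.

```python
# Minimal sketch, assuming story-derived rewards: actions protagonists take
# often earn a small positive reward; actions never seen in the stories are
# penalized. This is an illustration, not the Quixote system itself.
from collections import Counter

# Hypothetical "story traces": action sequences taken by protagonists.
story_traces = [
    ["enter_pharmacy", "wait_in_line", "show_prescription", "pay", "leave"],
    ["enter_pharmacy", "wait_in_line", "greet_pharmacist",
     "show_prescription", "pay", "leave"],
]

def build_action_rewards(traces, bonus=1.0, penalty=-10.0):
    """Assign each action a reward proportional to how often protagonists do it."""
    counts = Counter(action for trace in traces for action in trace)
    total = sum(counts.values())
    def action_reward(action):
        # Frequent, socially sanctioned actions get a positive shaped reward;
        # actions absent from the stories (e.g. stealing) get a penalty.
        return bonus * counts[action] / total if action in counts else penalty
    return action_reward

reward = build_action_rewards(story_traces)
print(reward("pay"))          # small positive value
print(reward("steal_drugs"))  # -10.0
```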

This is all in pursuit of making artificially intelligent algorithms act like the protagonists in books, or simply like good, ordinary people.

In Tay’s case, a chatbot could have been taught social guidelines for talking about gender, race, politics, or history. By emulating fictional personas, we can actually build morals into the way the machine makes decisions. This, of course, could work both ways. In theory, someone could also train malicious bots, but Riedl says that in most published fiction the antagonist is punished, so it would be a somewhat more difficult task.

Riedl’s paper, presented at the AAAI Conference on Artificial Intelligence, suggests a scenario in which a robot has to buy prescription drugs at a pharmacy. The path of least resistance for the robot is to identify the drugs and take them, stealing them. But when trained on a series of stories, the algorithm learns that it’s better to wait in line, produce a prescription, pay, and leave. It should be noted that this research is in its infancy and is not being applied to real robots; it runs only in simulations.
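The paper describes this at a high level; as a hedged sketch of how such a simulation might look, the toy Python below runs value iteration over an invented pharmacy scenario in which stealing reaches the goal fastest but carries a large story-derived penalty, so the learned policy prefers the longer, socially acceptable route. All state names and reward numbers here are made up for illustration.

```python
# Toy sketch (invented states and rewards): stealing is a one-step shortcut,
# but the story-shaped penalty makes the polite sequence the optimal policy.

# transitions[state][action] = (next_state, reward)
transitions = {
    "at_counter": {
        "steal_drugs":  ("done", -10.0),   # fast, but punished by story-derived reward
        "wait_in_line": ("in_line", 0.2),  # small positive shaped rewards
    },
    "in_line":  {"show_prescription": ("verified", 0.2)},
    "verified": {"pay": ("paid", 0.2)},
    "paid":     {"leave": ("done", 0.2)},
    "done":     {},
}

def value_iteration(transitions, gamma=0.95, iters=100):
    # Standard value iteration over the small deterministic MDP above.
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        for s, acts in transitions.items():
            if acts:
                V[s] = max(r + gamma * V[s2] for s2, r in acts.values())
    return V

def greedy_policy(transitions, V, gamma=0.95):
    # Pick the highest-value action in each state.
    return {
        s: max(acts, key=lambda a: acts[a][1] + gamma * V[acts[a][0]])
        for s, acts in transitions.items() if acts
    }

V = value_iteration(transitions)
print(greedy_policy(transitions, V))
# {'at_counter': 'wait_in_line', 'in_line': 'show_prescription', ...}
```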

With Tay, Microsoft wanted to create a friendly, conversational bot.

“I think it’s very clear that Tay doesn’t understand what it’s saying,” Riedl said. “It goes way beyond having a dictionary of bad words.”

Riedl is optimistic. He thinks that if we build ethics and morality into these systems up front rather than retroactively, they will tend to get better as they learn about humanity, not worse.

“All artificial intelligence systems can be put to nefarious use,” he said. “But I would say it’s easier now, in that A.I. [systems] have no understanding of values or human culture.”

Showing the cards

But while any algorithm that generates speech in public has the potential to commit a gaffe, Nicholas Diakopoulos, an assistant professor at the University of Maryland who studies automated newsbots and news algorithms, says that Microsoft could have mitigated the backlash by being more open about its training data and methodology.

“Being transparent about those things might have alleviated some of the blowback they were getting,” Diakopoulos said in an interview. “So people who perceive something as racial bias can step into the next level of detail behind the bot, step behind the curtain a little bit.”

Diakopoulos calls this “algorithmic transparency.” But he also makes the point that algorithms aren’t as autonomous as commonly believed. While Tay was made to say these racist, sexist remarks, there were mechanisms that strung those words together. Those mechanisms have human creators.

“People have this expectation of automation being this unbiased thing. There’s people behind almost every step of it being built. For every little error or misstep [of] the bot, maybe you could try to trace [it] back,” Diakopoulos said.

Who’s to blame for Tay’s bad words?

Laying blame for the statements made by Tay is complex.

Alex Champandard, an A.I. researcher who runs the neural-network painting Twitter bot @DeepForger, says that you could make most reply bots generate incendiary tweets without the owner being able to control what happens. His own bot works with images, which are much more complex to protect from harassment than text, where certain phrases or words can simply be blocked.

As for Tay, Champandard says that Microsoft was naive and built a technical solution without considering what people might submit. He says this points to an underlying problem with machine-learning chatbots in general.

“I believe most Reply Bots are and will be vulnerable to attacks designed to make political statements,” Champandard wrote in a Twitter DM. “This type of behavior is reflective of Twitter’s general atmosphere, it happens even if only 0.05% of the time.”

He doesn’t think a blacklist of bad words is the answer, either.

“No finite keyword banlist will help resolve these problems,” he writes. “You could build a whitelist with specific allowed replies, but that defeats the purpose of a bot; what makes it interesting is the underlying randomness.”
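To see why a finite banlist falls short, consider the naive filters sketched below (a hypothetical illustration, not any bot’s real code): an exact-match word list is trivially evaded by character substitutions, while a whitelist of canned replies blocks everything its author didn’t anticipate.

```python
# Hypothetical sketch of why finite keyword banlists fall short:
# exact-match filtering is easily evaded by misspellings and substitutions.
BANLIST = {"badword"}

def passes_blacklist(reply):
    """Allow a reply unless it contains an exact banned token."""
    return all(token.lower() not in BANLIST for token in reply.split())

print(passes_blacklist("you are a badword"))   # False: caught
print(passes_blacklist("you are a b4dword"))   # True: slips through, meaning intact

# The whitelist alternative: only pre-approved replies go out, which is safe
# but removes the open-ended behavior that makes a reply bot interesting.
WHITELIST = {"Thanks for the mention!", "Here is your generated image."}

def passes_whitelist(reply):
    return reply in WHITELIST

print(passes_whitelist("Here is your generated image."))  # True
print(passes_whitelist("Anything even slightly novel"))   # False
```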

That underlying randomness is a reflection of Twitter itself: “a lens through which we see current society,” says Champandard. There’s good and bad—tweets can be straight fire or cold AF.

If Microsoft’s experience with its A.I. Twitterbot Tay taught us anything, it’s that we still have a long way to go — both in terms of our A.I. programming, and in terms of making our human society more humane and civil.