In a recent paper in the journal PLOS ONE, German researchers asked 89 college students to team up with a tiny, bright-eyed robot named Nao to answer questions and complete menial tasks. It’s a robot partnership worthy of a buddy comedy. But, as is typical in experimental psychology, these tasks were a distraction from the real question under investigation: What happens when the humans are asked to turn the robot off?
In 43 cases, the Verge reported earlier this month, “the robot protested, telling participants it was afraid of the dark and even begging: ‘No! Please do not switch me off!’” As the researchers predicted, participants struggled to switch off the machine they had previously worked with as a partner. Thirty of the humans took twice as long on average to turn off the robots compared to the group whose robots said nothing at all. And 13 people refused to comply altogether, leaving Nao on.
“People perceive robots to be somewhat alive,” Christoph Bartneck, a leading researcher in human-robot interaction, wrote in an email. This phenomenon, in which we treat things like movies and robots as though they were human, is called “the media equation.” And it can create some serious moral quandaries.
In 2007, Bartneck conducted a similar experiment, where a talking, cat-like bot begged for its life, but all participants were forced to turn the machine off by an observing scientist. In that study, 100 percent of people eventually complied, but it wasn’t easy. Relating to a robot as though it were truly alive, Bartneck says, means we “therefore hesitate to switch it off, in particular if this means that the robot would lose its memory or personality.”
Understanding human perception and the malleability of human behavior has been the focus of experimental psychologists and their peers for more than 50 years, and the methods have always been… troubling.
In the 1960s, a Yale University psychologist devised a way to test the most pressing existential question of the era: Could obedience to authority be enough to persuade ostensibly good people to commit acts generally considered evil?
The researcher, Stanley Milgram, invited study participants into the lab, assigned them the role of teacher, and asked them to administer ever-increasing electric shocks to their “student” under the guise of studying learning and memory. As the shocks escalated, the student—secretly a member of the research team and safe from any real harm—begged to be freed and exhibited signs of pain and distress. Some of the participants refused to continue. But approximately 65 percent administered the most intense voltage anyway.
The experiment has been controversial since its conception. “The procedure created extreme levels of nervous tension,” Milgram wrote in his original 1963 study. “Profuse sweating, trembling, and stuttering were typical expressions of this emotional disturbance. One unexpected sign of tension—yet to be explained—was the regular occurrence of nervous laughter.” Whether they quit early or made it to the finish line, participants were riled.
While we tend to approach robots as though they are like humans—and experience similar signs of distress to what Milgram observed decades ago—researchers have identified a few complicating factors. Our decision to turn a pleading robot off may be influenced by the robot’s price point or quality, its sociability as compared to its utility (a factor the recent German study manipulated), and even the machine’s coloring, which may be perceived as a race or ethnicity. All of these factors shape our interactions with these machines of our creation. As the authors of the PLOS ONE paper succinctly put it, “When people are interacting with different media, they often behave as if they were interacting with another person and apply a wide range of social rules mindlessly.”
These reactions may seem silly or inconsequential. But artificial intelligence researchers worry about the consequences of our deference to these robots, especially as they move into our homes and even into public life. “The robots of today have no intelligence that would justify keeping them switched on. The robots of today do not care,” Bartneck wrote. “We need to respect our creations, but we also need to avoid being fooled by science fiction.” Amazon Echo is not Westworld, he says. And Nao isn’t Blade Runner.
Ultimately, the guilt we feel when shocking a fellow human is good. But when it comes to a robot, Bartneck says, feel free to shut it down.