Robots Are Strong: The Sci-Fi Myth of Robotic Competence

Might makes mechanical right in the 1953 movie Robot Monster.

The first in a series of posts about the major myths of robotics, and the role of science fiction in creating and spreading them. Other topics: Robots Are Smart (the myth of inevitable AI) and Robots Are Evil (the myth of killer machines).

Last week, I created a minor disturbance in the Internet, with a not-so-simple question—should a robotic car sacrifice its owner's life, in order to spare two strangers?

It was never meant to be a rhetorical question. After talking to Patrick Lin, the California Polytechnic State University robo-ethicist who initially presented the topic of ethical vehicles in an op-ed for Wired, as well as discussing the legal ramifications with a law professor and vice dean at Brooklyn Law School, I was convinced: For the good of our species, the answer is a wincing, but whole-hearted affirmative. If an autonomous vehicle has to choose between crashing into the few, in order to save the many, those ethical decisions should be worked out ahead of time, and baked into its algorithms. To me, all other options point to a chaos of litigation, or a monstrous, machine-assisted Battle Royale, as everyone's robots—automotive or otherwise—prioritize their owners' safety above all else, and take natural selection to the open road.
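If "baked into its algorithms" sounds abstract, it can be caricatured in a few lines. This is a deliberately toy illustration, with made-up maneuver names and nothing resembling a real vehicle's control stack: a pre-committed rule that minimizes expected casualties, even when the owner is among them.

```python
# Toy illustration only: a pre-committed "minimize casualties" rule,
# decided ahead of time rather than improvised at the moment of the crash.
# Maneuver names are invented; no production system works this way.

def choose_maneuver(options):
    """Pick the maneuver with the fewest expected fatalities.

    options: dict mapping maneuver name -> expected fatalities.
    Ties are broken alphabetically, for determinism.
    """
    return min(sorted(options), key=lambda m: options[m])

# The dilemma from the article: swerve (killing the owner, one death)
# or stay the course (killing two strangers).
scenario = {"swerve_into_wall": 1, "hit_pedestrians": 2}
print(choose_maneuver(scenario))  # -> swerve_into_wall
```

The point of the sketch isn't the arithmetic; it's that someone has to write that `min` before the car ever leaves the lot.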

The reaction to this story was varied, as it should be for such a complex thought experiment. But a few trends emerged.

First, there were the usual comment-section robo-phobics, who may or may not have read past my headline, or further than the summaries presented by response pieces on Gizmodo and Slate. The most-liked comment on Facebook (so far): "Enough to justify NEVER making robots or self driving cars. Ethical and moral decisions should be made by humans, not their creations." Similarly, people talked about Skynet, because Terminator references are as unkillable as the movie's fictional robot assassins, despite being just as humorless, and based on nothing that's ever happened in real life.

But the more interesting responses were the dismissals, which came in two varieties: advising that all robot cars should simply follow legendary SF author Isaac Asimov's First Law of Robotics (that a robot may not injure a human being or, through inaction, allow one to come to harm), or predicting that robot cars will be so infallible that lethal collisions will be obsolete. "I don't think this would be an issue," wrote Reddit user iamnotmagritte. "The cars would probably communicate with each other, avoiding the situation altogether, and in the case this isn't enough, letting each other know what paths to take to avoid either crashing." Tyler Lopez, who wrote the related Slate piece, and made some excellent points about the technical and legal inability of current autonomous cars to solve any sort of "who do I kill?" trolley problem, shared that sentiment during a Twitter discussion. With vehicles and road infrastructure networked together, he proposed, "the dangers associated with the moral algorithm would be solved by network algorithms instead."

Robots, in other words, simply have to be told not to kill anyone, much less two people, and they’ll carry out that mission with machine precision.

This is distressing. To me, it’s proof of something I’ve suspected for years, but haven’t been able to articulate, or at least mention without seeming like even more of an insufferable snob. But here goes: Humans, on the whole, do not understand how robots work.

This shouldn’t be a huge surprise. Robotics is an immensely complex field of study, and only a vanishingly small fraction of the human race is trained or employed as roboticists. But you could say the same of physics, and yet the average person doesn’t feel qualified to casually weigh in on the mechanics of gravitational lensing, or the spooky feasibility of alternate universes branching out with each decision we make.

So what is it about robots that makes people assume they understand them?

This isn’t a rhetorical question either. The answer is Isaac Asimov. Or, more generally, science fiction. SF writers invented the robot long before it was possible to build one. Even as automated machines have become integral to modern existence, the robot SF keeps coming. And, by and large, it keeps lying. We think we know how robots work, because we’ve heard campfire tales about ones that don’t exist.

I’ll tackle other major myths of robotics in future posts, but let’s start with the one most germane to robot cars, and the need to develop ethical frameworks for any autonomous machine operating in our midst.

Let’s talk about the myth of robotic competence.

* * *

“Trust me,” Bishop says, before the knife starts moving. In one of the most famous scenes in a movie filled with famous scenes, the android (played by Lance Henriksen) stabs the table between his splayed fingers, and those of the Space Marine his hand is pinning down. He stabs other gaps, and the pace builds until the blade is a blur, gouging the table in a staccato frenzy. When he’s done, we see that Bishop has nicked himself slightly. But the poor Marine is unharmed. A bravura performance that’s merely a side benefit of being an artificial person.

This is how Aliens introduces its resident robot, and his inhuman degree of competence. Along with possessing uncanny knife skills and hand-eye coordination, Bishop is unflinchingly brave, inhumanly immune to claustrophobia, able to expertly pilot combat spacecraft (not exactly standard training for a medical officer), and barely fazed by having his body torn in half. Really, there was no reason to send humans on that doomed bug hunt. A crew of armed synthetics—cleared to do harm, as Bishop was not—could have waltzed off of LV-426 with nary a drop of their white blood spilled.

So why, exactly, is Bishop such a remarkable specimen? It's not that Aliens peered into our future, and divined the secrets of robotic efficiency that modern roboticists have yet to discover. It's because, like most SF, the movie is a work of adventure fiction. And when a story's primary goal is to thrill, its robots have to be thrilling.

Aliens is merely continuing a tradition that dates back to literature's first unofficial android, Frankenstein's monster, an assembled being whose superhuman physical and mental gifts aren't based on the quality of raw materials—he wasn't stitched together from Olympic athletes and Nobel winners. The monster's perfection is just as unexplained as Bishop's, or that of countless other fictional automatons, from Star Trek's Data to Almost Human's Dorian. You could guess at the reasons, of course. Where humans are a random jumble of genetic traits, some valuable, others maladaptive, robots are painstakingly optimized. Machines don't tire, or lose their nerve. Though their programming can be compromised, or it might suddenly sprout inconvenient survival instincts, their ability to accomplish tasks is assured. Robots are as infallible as the Swiss clocks they descended from.

The myth of robotic competence is based on a hunch. And it’s a hunch that, for the most part, has been proven dead wrong by real-life robots.

Actual robots are devices of extremely narrow value and capability. They do one or two things with competence, and everything else terribly, or not at all. Auto-assembly bots can paint or spot-weld a vehicle in a fraction of the time that a human crew might require, and with none of the health concerns. That’s their knife trick. But ask them to install upholstery, and they would most likely bash the vehicle to pieces.

Robot cars, at the moment, have a similarly savant-like range of expertise. As The Atlantic recently covered, Google's driverless vehicles require detailed LIDAR maps—3D models created from lasers sweeping the contours of a given roadway—to function. Autonomous cars have to do impressive things, like detecting the proximity of surrounding cars, and determining right of way at intersections. But they are algorithmically locked onto their laser roads. They stay the prescribed course, following a trail of sensor-generated breadcrumbs. Compared to what humans have to contend with, these robots are the most sheltered sort of permanent student drivers. No one is quizzing them by sending pedestrians or drunk drivers darting into their path, or diverting them through unmapped, snow-covered country lanes. Their ability to avoid fatal collisions remains untested.
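The "locked onto their laser roads" point can itself be caricatured in code. This is a toy sketch with hypothetical names (real systems, Google's included, are enormously more sophisticated): a follower that only knows pre-surveyed waypoints has literally nothing to say about anywhere else.

```python
# Toy sketch of a breadcrumb-follower. All names are invented;
# this is a caricature of map-dependence, not of any real system.

def next_waypoint(mapped_route, position):
    """Return the next pre-surveyed waypoint, or None when off the map."""
    if position not in mapped_route:
        return None  # off the laser road: no breadcrumb to follow
    i = mapped_route.index(position)
    return mapped_route[i + 1] if i + 1 < len(mapped_route) else None

route = ["A", "B", "C", "D"]
print(next_waypoint(route, "B"))           # -> C
print(next_waypoint(route, "snowy_lane"))  # -> None: the student driver is lost
```

The caricature exaggerates, but the shape is right: competence inside the map, silence outside it.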

Of course, they’ll get better. Sensors will improve and multiply, control algorithms will become more robust, and perhaps the robots will begin talking amongst themselves, warning each other of imminent danger. But the leap from a crash-reduced world to a completely crash-free one is an assumption, and not well-supported by the harsh realities of robotics in particular, and mechanical devices in general. Machines break. Software stumbles. The automotive environment is one of the most punishing and challenging in all of engineering, requiring components to stand up to wild swings of temperature, treacherous road conditions, and the unexpected failure of other components within an intricate, interlocking system. There’s only one way to assume that robots will always know that a tire is about to blow, or be able to broadcast the emergency to all nearby cars, each of which will respond with the instant, miraculous performance of a Hollywood stunt driver. For that, you can’t be a roboticist, or someone whose computer has crashed inexplicably, or whose WiFi has ever gone down, or whose streaming video has momentarily stuttered. To buy into the myth of robotic competence—or hyper-competence, really—you have to believe that robots are perfect, because SF says so.

* * *

When anyone cites Isaac Asimov’s Laws of Robotics, it’s an unintentional cry for help. It means that he or she sees robots as a modern fairy tale, the Google-built successors of the glitchy old golem of Jewish myth.

But Asimov's Laws weren't designed to solve future dilemmas related to robots with the power of life and death. They are narrative devices, whose tidy, oversimplified directives allow for the loopholes, contradictions and logical gaps that can lead to compelling stories. If you're quoting any of those laws, you're falling for a dual trap, employing the same dangerously narrow reasoning as the makers and deployers of Asimov's robots (who are supposed to be doing it wrong). And even worse, you're relying on fantasies to guide your thinking about reality. Even if it were possible to simply order all robots to never hurt a person, unless they are suddenly able to conquer the laws of physics, or banish the Blue Screen of Death in all its vicissitudes, large automated machines are going to roll or stumble or topple into people. This might be rare, but it will be an inevitability for some number of poor souls. Plus, the military, still the largest provider of R&D funding for robotics, might have something to say about that First Law being applied to all such machines.
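Why "just follow the First Law" is an empty instruction can be put in code, too. Another toy sketch with invented names: a rule check can only veto the plans it can foresee, and physics doesn't consult the rulebook.

```python
# Toy sketch (invented names): a naive First Law filter versus the world.
# The filter can reject foreseeable harm; it cannot govern what actually happens.

import random

def first_law_approves(planned_path, humans):
    """Naive First Law check: reject any plan that intersects a human."""
    return not any(h in planned_path for h in humans)

def execute(planned_path, failure_rate=0.1, rng=random):
    """The world, not the rulebook, decides where the machine ends up."""
    if rng.random() < failure_rate:  # blown tire, dropped sensor, crashed process
        return "somewhere_unplanned"
    return planned_path[-1]

plan = ["lane_1", "lane_2"]
assert first_law_approves(plan, humans={"crosswalk"})  # the Law is satisfied...
# ...and yet execute(plan) can still end up "somewhere_unplanned",
# with or without a human standing there.
```

The gap between `first_law_approves` and `execute` is the whole argument: an ethical directive is a filter on intentions, not a guarantee about outcomes.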

Which isn't to say that SF means to mislead us about robotics, or should be ignored. I've talked to many roboticists and artificial intelligence researchers who were inspired by hyper-competent bogeymen, from 2001's HAL 9000 to the Terminator's T-800. The dream of robotic power is intoxicating. That the systems these scientists create are usually pale shadows of human competence is a mere fact of robotics. After all, the point of automation, in almost all cases, isn't to create a superhuman capability. It's to take people out of the equation, to save money, or save their lives, or save them the time and trouble of doing something boring. A Predator drone is not a better aircraft than a manned F-16 fighter just because it's robotic. In fact, it's not a better aircraft at all. Drones are, without exception, the least impressive military vehicles in the sky. But they're small, and cheaper to buy and deploy than a proper airborne killing machine. They're "good enough" technology, if your mission is to assassinate a ground target, in a region where air defense technology amounts to running for cover. But pit them against traditional attack craft, or systems designed to down encroaching aircraft, and armed drones will excel only at becoming smoking ruins.

In very specific, very limited applications, robots are strong. In most cases, though, they are weak. They are cost-effective surrogates. Or they are incredibly humble devices, like the awkward, bumbling humanoids of the DARPA Robotics Challenge, who are celebrated for gradually struggling through tasks (driving a vehicle, walking over rubble, using a power tool) that any able-bodied person would accomplish in a fraction of the time. Journalists are often complicit in this myth-building. They inflate automated capabilities, romanticizing the decision-making that goes into how a robot approaches a task, or turning every discussion of exoskeletons and advanced prosthetics for the disabled into a bright-eyed prophecy of Iron Man-like abilities to come. Where journalists should be dismantling false, SF-sourced preconceptions about robotic technology, they're instead referencing those tales of derring-do, and reinforcing the sense that SF was right all along. Whether in make-believe settings, or the distorted scene-setting of media coverage, robots are strong, because anything less would be a buzzkill.

Which makes me the guy earnestly pooping on everyone's robot party. I think there's another option, though. Robots can be impressive without being overstated. A robotic limb can be a remarkable achievement because it restores independence to an amputee, and not because Almost Human imagines that a bionic leg is great for kicking people across the room. It's possible to love SF's thought experiments and vague predictions, while recognizing that it's not in SF's best interest to be rigidly accurate. Robots don't tend to shamble into dour literature about college professors and their desperate affairs. Fictional machines are the better, upgraded angels of our nature, protecting their makers with impossible intellects and physical prowess. More often they're rising against their masters in flawless, one-sided coups that are roughly as feasible (and impossible to banish from pop culture) as zombie outbreaks. Robots are perfect because that's the version of robots that's most fun.

That's a foolish way to think or talk about real robots, which are destined to break down and fall short. Automation is transforming our society in ways that are both disturbing and exciting. Assassination (in some places) is easier with robots. Collisions could one day be reduced with robots. Machine autonomy will annihilate whole professions and create or enhance others. Robots have only begun to reconfigure human life. So shut up, for a moment, about SF’s artificial heroes and villains, and the easy, ill-informed fantasies that fill the gaps of technical understanding. There are too many actual, fallible robots to talk about. And there's only so much time in our short, brutish, meatbag lives to discuss what we're going to do with them.