The Looming Threat of Artificial Unintelligence

Stop fantasizing about super-smart AI, and start worrying about dumb algorithms

[Image: Ultron. Credit: Marvel Studios]

Brace yourself. In these crucial weeks before the May release of Avengers: Age of Ultron, editors and writers are going to unleash an onslaught of think pieces about the real-life threat of artificial intelligence (AI). Whatever box office records the upcoming movie does or doesn't break, it will offer yet another vision of AI insurgency, in the form of Ultron. Created to protect humanity from a variety of threats, the embittered, James Spader-voiced peacekeeping software decides to throw the baby out with the bathwater, and just massacre all of us. It's the latest, but certainly not the last time that Hollywood will turn the concept of AI superintelligence into action movie fodder. And for media outlets, it provides another opportunity to apply light reporting, and deeply furrowed brows, to the greatest problem in AI, one that also happens not to be a problem at all.

More likely, the AI that hurts us will be very, very dumb.

I'm not arguing that AI is entirely harmless. If anything, it's inevitable that autonomous algorithms will cause harm to humans. But just as there's a difference between recognizing the inherent danger of defunct satellites turning into lethal space junk, and ranting about a future filled with orbital lasers and mind-control satellites, the risks associated with AI should be assessed for what they are, and with at least a modicum of sanity. The AI that's poised to ruin lives has nothing in common with supervillains like Ultron, and won't be what anyone would consider superintelligent. More likely, the AI that hurts us will be very, very dumb.

Racism By The Numbers

There is clear momentum behind the concept of AI safety. When the non-profit Future of Life Institute released an open letter on AI safety in January, a great many people who have no professional involvement with AI signed on. But it wasn't an entirely amateur-hour affair. The signatories included computer scientists, roboticists, and legal experts. One of the latter was Ryan Calo, a law professor at the University of Washington who specializes in robotics. For Calo, the near-term risks associated with AI have nothing to do with intelligence, but rather with autonomy. "I'm worried that there could be some unanticipated, emergent phenomenon," says Calo. "For instance, maybe it'll turn out that lots and lots of people were denied a loan offer or a credit card offer, and it's because an algorithm found that they were surfing on a predominantly African-American social network. And no one involved in that will have been purposely racist, but it'll be a huge scandal."

Calo's hypothetical is disturbing for a variety of reasons, none of which have anything to do with existential doom. First, there's the fact that some number of people could be disenfranchised essentially by accident. The definition of autonomy, in the context of AI, is the ability to take action without human intervention. A programmer wouldn't have to connect the insidious dots on behalf of a program. An AI could arrive at deplorable conclusions based on what appears to be pure, quantitative analysis. Those conclusions wouldn't necessarily be accurate. Data brokers and advertisers are shameless and pervasive in their tracking of our online activity, collecting everything from which sites we visit to what tax bracket individual users appear to occupy. All it might take is a single correlation between foreclosures and homeowner ethnicities, and it's entirely feasible that an AI could make a wrong-headed connection. Though it's not the most evocative example of AI, the more advanced behavioral tracking software currently in use falls under the technology's relatively large umbrella (search engines are in there, as well). Racist algorithms wouldn't even be particularly cutting-edge.
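The mechanism here is sometimes called proxy discrimination, and it takes remarkably little machinery to produce. The toy simulation below is entirely hypothetical (the groups, features, and thresholds are invented for illustration): the scoring rule never sees anyone's race, but a correlated feature, membership in a particular social network, smuggles the same signal in.

```python
# Hypothetical sketch of proxy discrimination. No field is labeled "race,"
# yet approval rates diverge by group, because a mined correlation
# penalizes a feature that one group happens to have far more often.
import random

random.seed(0)

def make_applicant(group):
    # Invented assumption: group B members are far more likely to use
    # network X, while creditworthiness is identically distributed.
    return {
        "group": group,                           # never shown to the model
        "uses_network_x": random.random() < (0.8 if group == "B" else 0.1),
        "income_score": random.gauss(0.5, 0.15),  # same for both groups
    }

def approve(applicant):
    # The "model": past foreclosures happened to cluster among network X
    # users, so the learned rule docks them 0.2 points. No one coded in
    # bias on purpose; the correlation did the work.
    score = applicant["income_score"] - (0.2 if applicant["uses_network_x"] else 0.0)
    return score > 0.45

applicants = [make_applicant(g) for g in "AB" * 5000]
for g in "AB":
    pool = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in pool) / len(pool)
    print(f"group {g}: approval rate {rate:.0%}")
```

Running it, group B is approved at roughly half group A's rate, even though both groups were generated with identical income distributions. That is the scandal Calo describes: a discriminatory outcome with no discriminatory intent anywhere in the pipeline.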

Calo's imagined AI wouldn't actually be racist. A machine that's incapable of opinion isn't capable of true bias. That lack of social and emotional intelligence presents a huge problem. "The law has to manage risk. It has to make us feel like it's there to compensate victims," says Calo. "But the way it often does that is by looking for fault. And if those things are missing, you might have a lot of victims without perpetrators. I'm worried about the fact that the law isn't up to it."

"You might have a lot of victims without perpetrators."

Blameless War Crimes

This issue of autonomous accountability could have more dramatic implications for AI-driven robots ordered to take lives. In science fiction, armed bots are rarely seen committing a single battlefield atrocity. If they're going to kill the "wrong" person, they're going to attempt to kill all persons on the planet, often at the behest of a central superintelligence. What's infinitely more technically feasible is a remotely operated weapons platform that uses AI to deal with degraded communications.

Military personnel never fail to point out that when an unmanned system attacks a target, there's always "a human in the loop." But what if a drone or ground robot is in the middle of a firefight with designated targets, and its communications link drops out? Should it grind to a halt, or retreat, potentially exposing itself to enemy fire (or, worse, capture)? Or should it respond as many of the robots in this June's DARPA Robotics Challenge will when faced with a loss of contact, and forge ahead with its task until communications are restored?
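The unsettling part of that question is that the answer isn't decided in the moment at all. Whatever the machine does when the link drops is a fallback policy someone wrote long before any firefight, which this hypothetical sketch (all names and options invented for illustration) makes explicit:

```python
# Hypothetical sketch of the comms-loss dilemma. The fallback is a design
# decision baked in ahead of time, not a judgment made under fire.
from enum import Enum

class Fallback(Enum):
    HALT = "hold position and wait for the link"
    RETREAT = "withdraw to the last safe waypoint"
    CONTINUE = "finish the current task autonomously"

def on_link_loss(policy: Fallback, weapons_hot: bool) -> str:
    # The instant the link drops, "a human in the loop" stops being true;
    # what happens next is whatever this function says happens next.
    if policy is Fallback.CONTINUE and weapons_hot:
        return "engaging designated targets with no human oversight"
    return policy.value

print(on_link_loss(Fallback.CONTINUE, weapons_hot=True))
```

Only one branch of that policy produces lethal force without oversight, but choosing HALT or RETREAT instead trades the robot's safety for the bystanders', which is exactly the trade-off the DARPA-style "forge ahead" behavior quietly resolves in advance.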

That's one scenario in which an armed robot might commit what's arguably a war crime. An autonomous machine might suddenly target civilians who are adjacent to confirmed targets, or gun down combatants trying to surrender. Other scenarios are less complicated—the robot could simply malfunction, and open fire on a crowded market, using advanced fire control software that qualifies as AI to accurately murder innocent bystanders.

The risk here isn't that glitches will metastasize into uprisings. The threat is that AI could commit isolated acts of murder, and no one would be punished. “In order for you to hold a superior officer accountable for the war crimes of his or her subordinates, that superior has to have effective control—and that's the legal term—effective control over their subordinates,” says Calo. “So the question is, do you have effective control as a military commander over autonomous assets in the field? And if not, does that mean you can't be held accountable for them?”

If there's anything worse than a war crime, it's one that's impossible to prosecute. Unless roboticists and lawmakers prepare for fringe cases, and anticipate the ethical and legal no-man's-land that profoundly unintelligent autonomy can wander into, outrages will occur without anyone to blame.

Stock Crashes, Crushed Limbs, And Other Everyday Disasters

Of course, most AI failures won't result in death. Autonomous stock-picking software will undoubtedly fumble again, as it did during the "flash crash" of 2010, when the Dow inexplicably dropped 1,000 points in a matter of minutes. Other AI systems will misinterpret sensor data, and send robots toppling onto human co-workers in factories or elderly residents in nursing homes. Money will be squandered, and bones will be broken. These outcomes are boring compared to the lurid, all-or-nothing big-screen fantasies about machine superintelligence. But if researchers and roboticists are serious about promoting AI safety, it's the boring stuff that requires attention, because it's so inevitable.

In fact, one of the potential pitfalls of AI comes from assuming that it's smarter than it is. Stock-picking algorithms are given free rein to spend as they see fit, exercising at least as much professional autonomy as their seemingly obsolete human counterparts. But robots are even more inherently deceptive. "Humans are pathological anthropomorphizers, so we imbue agency in things, and a higher level of agency than they actually have," says Alan Winfield, a roboticist at the UK's Bristol Robotics Laboratory, and another signatory on the FLI's open letter. "This is particularly true with robots. It's just a human response to things that are animated. We tend to look at something that's moving, and attribute intentionality to it. We think it must have something between its ears, even if it doesn't have ears or anything between them."

As robots become more integrated into our lives, this tendency to overestimate their intelligence and general competence could lead to physical injuries. “We are moving in a direction where people and robots will be working side by side,” says Tony Stentz, director of the National Robotics Engineering Center at Carnegie Mellon University. Stentz envisions factories where a robotic system can use its strength to heft a component into position, while a human uses his or her dexterity to tighten a bolt. Such teamwork is promising, but only if everyone involved knows the limits of that machine's abilities, and doesn't make foolish assumptions. “It's a challenge for robots to have the same capability as even someone who is unskilled, who has basic perceptive capabilities and motor skills just from being alive,” says Stentz. Understanding what a robot can and cannot do could mean the difference between a productive workday and a compound fracture. And the onus is also on the AI researchers and roboticists to create systems that are both safe, and painfully clear about their purpose and abilities.

Understanding what a robot can and cannot do could mean the difference between a productive workday and a compound fracture.

Overestimating AI might lead to problems as concrete as a hospital bill, or as murky as the apparent trend towards what Calo calls "outsourcing decision-making." The state of Maryland is already using algorithms to determine which parolees deserve a greater degree of supervision. And the creator of that software believes it could also make sentencing recommendations. Efficiency is a laudable goal, but should such profoundly important decisions be handled by software?

Again, the threat posed by unfettered autonomy doesn't have to follow any slippery slopes, or lead to a sci-fi-inflected global crisis. A single family could be devastated by an avoidable workplace injury, or the loss of their life savings as the result of an AI's impenetrably boneheaded behavior on the stock market. Victims of minor or major calamities could find themselves without financial or legal recourse, because the buck stops at the feet of a machine that's well and truly stupid. These issues are what most researchers and experts are referring to, when they talk about AI safety. "It's about unintended consequences," says Winfield. "It's nothing to do with preventing superintelligent AI from taking over the world. We're talking about rather dumb, unintelligent bits of AI having potentially major consequences."