Last week, the Future of Life Institute (FLI) released an open letter calling for a ban on autonomous weapons. The institute defines these as systems that can “select and engage targets without human intervention,” and proposes that “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades…”

But the content of that letter is pretty irrelevant. It’s a breezy, 437-word document that’s more Facebook post than rallying cry. It’s also the second such warning about the dangers of AI that FLI has made this year. In January, they issued an open letter about AI safety, referencing issues of privacy and workplace injury, as well as the existential threat of machines that might wipe out humankind. That document attracted widespread attention, in large part due to famous signatories like Elon Musk and Stephen Hawking. Now, both have also signed the new letter.

But Musk and Hawking’s involvement isn’t of much note, either. They’ve gone on the record in the past with shrill and unsupported fears of AI, using the language and logic of science fiction rather than any research-based conclusions. Their bias is established, and despite their clear brilliance in other matters, the topic is outside of their professional and academic purview. Hawking studies high-energy physics, and most famously fretted over AI in an op-ed tied to the 2014 movie Transcendence. And Musk’s many futuristic ventures don’t yet include AI. Even the co-founders of Vicarious, the AI firm that Musk has personally invested in, have effectively debunked his claims that researchers are actually working to avoid an apocalyptic outbreak of runaway machine intelligence.

So what’s important about FLI’s letter, if not its content, or its most prominent signatories?

It’s that virtually every major player in AI and robotics has endorsed it. The growing army of signatories currently includes more than 50 Google engineers and researchers, many of whom are from DeepMind, the AI firm that Google acquired last year for $400M. Also on the list are Yann LeCun, director of AI research for Facebook, and Yoshua Bengio, an AI researcher from the University of Montreal. They’re pioneers in the field of deep learning, a subset of AI that’s often associated with the potential to create truly human-like machine intelligence. When I interviewed them for a story about the dangers of AI fear-mongering, they viewed such hand-wringing as largely irrelevant, and detrimental to the field. In Bengio’s case, he even worried about researchers being targeted by people tricked into seeing AI as an apocalyptic threat. And yet, the very people who are concerned about the increasing backlash against anything robotic are on board with the idea of banning autonomous weapons.

What we’re seeing is the beginning of an inevitability. The open letter comes on the heels of the second United Nations conference on the subject of banning lethal autonomous weapon systems, or LAWS (an unfortunate acronym, in the context of a ban). Those meetings have yet to produce a binding agreement or proposal, and were presented as an ongoing discussion, rather than a prelude to political action. But the final line of FLI’s letter is more direct. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Human rights groups want these systems banned. A huge number of AI researchers and roboticists do, too. That consensus is only going to get bigger as autonomous killing machines become more feasible, or actually find their way onto the battlefield. This is a doomed technology.

The questions that remain, however, are not trivial, or obvious: What exactly are we banning, and when?

* * *

The central goal of the anti-autonomous weapons movement can be summed up in a single term: meaningful human control.

You can read a lot into those three words, including some misconceptions. The movement has its fringe elements, but most critics of LAWS aren’t afraid of a Terminator-style robot uprising. The desire to control armed machines is about preserving the human decision to kill. There are philosophical reasons for not allowing robots to determine when to use lethal force—that it should be a hard decision, for example, with an emotional cost—as well as more practical concerns. FLI’s letter warns about the risk of proliferation, that with their eventual low cost and inherent ease of use, “autonomous weapons will become the Kalashnikovs of tomorrow.” If a swarm of disposable bomb-carrying robots can be fielded by anyone, then what prevents their use by everyone?

Heather Roff, a political scientist and visiting professor at the University of Denver, also worries about LAWS creating a new class of blameless atrocity, where the killing of bystanders or surrendering hostiles can be chalked up to a glitch. “Suddenly everything becomes an accident,” says Roff. “There is no more definition of war crime, because there’s no intention.” The line between the misuse of autonomous force and a genuine malfunction already seems hopelessly blurred, even before LAWS have reached the battlefield.

In 2013, a UK-based nonprofit called Article 36 coined the term “meaningful human control,” in an attempt to pin down what critics of autonomous weapons are actually seeking. But as effective as the term has been in unifying anti-LAWS sentiment, there’s no real sense of what it means. “It gives us a really useful analytical framework,” says Roff. “In the past year, meaningful human control was cited very often, in papers and presentations. Right now, the question is, Yeah, we all like the way it sounds, but what does it mean?”

So Roff and Article 36 are collaborating on a research project to explore the specific parameters of meaningful human control. That includes giving the anti-LAWS community a more concrete sense of what it’s against, to help codify what it’s proposing. For the past six months, Roff has been building a database of semi-automated killers, starting with systems from the five nations that export and import the most weapons. “Everybody discusses artificial intelligence and autonomous weapons and semi-autonomous weapons,” says Roff, “but there seems to be a lacuna of what that means, and what we’re talking about.”

The Phalanx CIWS (close-in weapon system) can automatically target and fire at incoming missiles.

What’s the difference in autonomy between, for example, a Phalanx system on a naval vessel, which can be set to scan the horizon for incoming missiles and fire on targets at will, and the Harpy, a drone that can loiter over an area and then nosedive into the first radar emitter it detects (the assumption being that it’s detonating over a Stinger or similar surface-to-air missile launcher)? And as new systems show up, what criteria will we use to determine which might fall under a potential ban?

On July 1, Roff and Article 36 were approved for a grant of $104,000 from FLI, to fund a year of research into autonomous weapon systems. The money was part of a $10M donation from Elon Musk, to be distributed among proposals for AI safety-related projects. Though Roff is still looking for funding for another year (the proposal mapped out two years of research), she can finally pay her graduate students for their participation, and devote more time to the question of what actually constitutes autonomy in weapons, and meaningful control.

Here’s an example of why this kind of basic research is relevant. What if, to avoid any ugly outcomes involving the killing of humans, a nation developed drones that were strictly anti-materiel, meaning they could only attack other drones? That would seem to counter concerns that groups like Islamic State wouldn’t be constrained by a LAWS ban. A swarm of drone-hunters would be the robotic equivalent of a missile defense system, like those ship-based Phalanx systems. Problem solved, right?

But what happens when a counter-drone swarm discovers enemy bots within a crowd of civilians? If, like the Phalanx, the drones are designed to react faster than a human could respond (a person would have to cue up and approve 1,000 different targets in a matter of moments), they might take action immediately, descending and self-destructing in the immediate vicinity of civilians. Authorities could express their regret at what amounts to collateral damage, and possibly blame the other side for finding a loophole in their autonomous rules of engagement. Now repeat that event, in countless permutations, throughout various conflicts around the world. And that’s not to mention the potential for LAWS to be used in covert actions, with governments denying their involvement in an assassination. “All of a sudden the world looks a little more bleak,” says Roff. “Artificial intelligence starts to be really scary.”

* * *

In the past, I’ve argued that the researchers and human rights groups advocating a ban on LAWS were wasting their time and resources. No one in the military wants systems that are a line of bad code away from unleashing friendly fire on their own personnel.

It’s embarrassing how wrong I was.

The going assumption in the anti-LAWS community is that, when you read between the lines of Pentagon-sourced material, such as DARPA’s requests for proposals or the U.S. Navy’s projections of where warfare is headed, autonomous weapons are coming. I’m not entirely convinced. There’s a connect-the-dots quality to some of this rhetoric that’s uncomfortably close to conspiracy theory. The internet has brought much-needed scrutiny of the defense industry’s every move, and my view was that, until a nation actually attempts to develop such weapons, why bother debating one out of an infinite number of applications for robotics? The field is already saddled with outlandish fears. Rampaging killbots are yet another sci-fi-inspired fever dream.

But I wasn’t grasping the complexities of this issue, or the growing momentum among researchers who study robotics. Roff’s work is just one of many examples of attempts to apply data and scientific rigor to what originated as an impassioned, but somewhat vague political argument.

I’m most embarrassed, however, at missing an obvious point: There’s no harm in banning autonomous weapons. With researchers like Roff actively mapping out the parameters for such regulations, it’s clear that a ban on LAWS wouldn’t be a blanket restriction on other kinds of autonomous robots. And an international ban would give nations political options for dealing with governments who deploy them anyway.

The only harm is in waiting for autonomous weapons to start killing. What if we could have banned chemical weapons before the first clouds rolled over soldiers, killing between 30,000 and 90,000 of them outright in WWI (estimates vary widely), and causing an unknown number of lethal cancers in others? And what if the public had known about nuclear weapons during their development in the 1940s? Cities like Hiroshima and Nagasaki might still have been targeted, and fire-bombed with conventional weapons. A-bomb apologists have argued that more Allied lives might have been lost if Japan hadn’t been stunned into surrender by a horrifying new technology, capable of leveling entire cities with a single bomb. Still, is there any doubt that, when faced with the prospect of the global proliferation of radioactive doomsday weapons, the world would have at least considered banning their creation?

In hindsight, it seems inevitable that chemical and nuclear weapons would be fully or at least partially banned. Those actions came too late for thousands of victims. Autonomous weapons might never be considered as inhumane or as politically destabilizing, but the benefits of automating lethality hardly seem worth the potential for increased suffering and chaos, whether by design or by accident. We have a unique opportunity to preempt some amount of tragedy before it occurs, and before there’s a chance for runaway proliferation.

It’s a case I can’t even muster the full effort to make, because it so obviously makes itself. But for this inevitable ban to be effective, researchers like Roff have their work cut out for them. The academic basis of any such regulations has to be data-driven and respectful of human dignity, without being merely pacifist. “I’m a just war theorist, so implicitly that means that sometimes I think war is okay,” says Roff. “It’s about thinking this issue through. Because technology is not value-neutral. It is value-loaded.”