The International Community Is About To Debate Killer Robots

To ban or let be?
[Image: Remote Control Army Robot. Stephen Baack, U.S. Army photo]


Nobody wants a robot apocalypse. From the mechanical workers’ revolt in R.U.R. (the play that gave us the word “robot”) to the bleak, nuke-scarred hellscapes of the Terminator and Matrix films, the idea of humanity destroyed by tools of its own creation is compelling, if still the domain of fiction. To keep the apocalypse firmly in the realm of the speculative, the International Committee of the Red Cross today released a statement that is unusual for a humanitarian group: “Decisions to kill and destroy are a human responsibility.”

The Red Cross isn’t encouraging human decisions to kill and destroy. Instead, it’s arguing that if such decisions are going to be made (and little in human history suggests they won’t be), then it’s vital that actual humans hold that authority and power, not lethal autonomous weapon systems. Or, in the vernacular of the movement, “killer robots.”

The statement comes as part of the Meeting of Experts on Lethal Autonomous Weapons Systems, held under the Convention on Certain Conventional Weapons. The meeting is taking place in Geneva this week, with the goal of creating rules for weapons so that wars, if they must be fought, are fought as humanely as possible. The Convention has previously restricted weapons like flamethrowers, which are exceptionally cruel and of limited military utility. “Lethal autonomous weapon systems” is a far broader category than “hurling flaming goop,” and while few people explicitly want robots running around deciding who should die, it’s the definitions of “lethal,” “autonomous,” “weapon,” and “systems” that will shape not just our understanding of these machines, but also the laws and rules that govern how they are used.

Half of the Red Cross’ statement focuses on defining and clarifying the terms of the debate. For the ICRC, “‘autonomous weapon systems’ is an umbrella term encompassing any weapon system that has autonomy in the critical functions of selecting and attacking targets.” Meaningful human control of these weapons requires “strict operational constraints with respect to the task carried out, the targets attacked, the operational environment, the geographical space and time of operation, the scope to enable human oversight of the operation of the weapon system, and the human ability to deactivate it if need be.” Or, put plainly, there need to be strict controls on the where, when, who, how, and to what extent of a lethal autonomous robot’s use, and at any stage a human needs to be able to stop the weapon.

[Image: bomber carrying the Long Range Anti-Ship Missile]

To comply with humanitarian law, the Red Cross says, weapons must be predictable: a drone swarm that acts in unusual ways beyond human control would be prohibited. And the Red Cross isn’t sure that human control at the moment a weapon is fired is enough to prevent the hazards of autonomy if there is “minimal or no human control at the stage of the weapon system’s operation.” Consider the Long Range Anti-Ship Missile, developed by Lockheed Martin for DARPA and, ultimately, the U.S. Navy. One of the missile’s defining characteristics is that it can autonomously plot paths to targeted ships and identify targets by the signals they emit. By the Red Cross guidelines, it isn’t enough for a human to pick out a target when the weapon is fired; a human also has to approve any change in target, or else the weapon violates the humanitarian norms of war.

The Red Cross isn’t the only group trying to figure out autonomous weapons. Last month, at “Securing Tomorrow,” a Washington Post event on the future of the military, Deputy Secretary of Defense Bob Work addressed the topic of autonomy.

So it’s not exactly a matter of if lethal machines will exist, but more a question of the degree to which humans are involved in that decision-making process, and what kinds of autonomy will matter.

By Twitter direct message, Popular Science spoke to Mark Gubrud, a physicist and adjunct professor in the Peace, War, and Defense curriculum at the University of North Carolina. Gubrud thinks that defining autonomy is the wrong question. Instead, he says, “The right question is where must and where can we draw the line to stop this arms race before it takes us any further into danger?”

Gubrud also has a more precise definition of human control in mind: “If a (non-human) system makes a decision under internal programming plus environmental inputs, that is not human control,” he says. “You may have programmed it, and you may be satisfied that it is making the right decisions, but you are not controlling it when it makes those decisions. Human control is when a human makes the decisions. The whole point of calling something autonomous is that it is operating outside of human control, making decisions on its own.”

[Image: Spot and Marines at Quantico]

Writing for the Bulletin of the Atomic Scientists, security studies professor Heather M. Roff argues that the dangers posed by sophisticated, learning machines on the battlefield are so great that they should be outright prohibited.

Last year, the Center for a New American Security, a Washington, D.C. think tank, published a report that tries to break the big debate about killer robots into many smaller debates over degree of autonomy, human involvement, and type of decision.

“The ICRC statement is a responsible and forward-looking contribution to the dialogue over autonomous weapon systems. It correctly points out that autonomy in weapon systems is a fact of life now, and that maintaining human control over the use of force is essential,” said Michael C. Horowitz, one of the authors of that report, when reached for comment.

He continued, “The ICRC references the importance of the dictates of the public conscience in understanding the legality of autonomous weapon systems. Given a general lack of public awareness on the topic of autonomous weapon systems around the world, we need to be careful about leaping to judgments about what the public conscience ‘is’ on this topic. One key issue the ICRC statement raises is whether requirements for judging the predictability and reliability of any hypothetical autonomous weapon systems should be the same – or higher – than they are for other weapon systems.”

In a forthcoming work, Horowitz suggests different ways of defining autonomous weapons and argues that distinguishing between types of autonomy is a key way forward.

We are, as a species, facing a potentially unprecedented question: to protect ourselves from other humans, are we willing to entrust killing power to machines? And if so, to what extent? It’s clear that the Red Cross thinks we need to consider our next steps very carefully, and no one I’ve spoken with about lethal autonomy disagrees. When it comes to armed machines, there is no clear binary choice.

 
