Think Tank Publishes Report On Tanks That Think

How to talk about killer robots that make their own decisions

Perhaps no military technology is more viscerally upsetting than the idea of a machine, armed with a gun, deciding on its own to kill people. It’s a theme throughout dystopian fiction and film, and it animates protests against drones, even though military drones still have humans at the controls. Autonomy for weapons, in which a gun turret or some future machine is programmed to pull the trigger on its own, is a definite possibility in future wars. A new report from the Center for a New American Security, a Washington, D.C. think tank, aims to guide us calmly toward an understanding of this future of armed thinking machines.

For ease of digestion, the report breaks down the big question of killer robots into smaller questions. The first is about the nature of autonomy itself when it comes to machines.

Autonomy is complex, but when it comes to machines, the most important questions are what control humans still have over the technology and which decisions the machine can make on its own. Everyone from the Pentagon on down is concerned with making sure humans stay involved in lethal decisions, and officials definitely don’t want to simply hand robots weapons, much less super deadly ones.

The technical term for keeping humans involved is “in the loop,” but where humans fall in that loop is what matters most. A robot that scans the environment for targets and asks a human for approval before firing at each one is different from, say, a robot that selects and attacks targets on the battlefield entirely on its own. These are distinctions that military thinkers, robot designers, and people drafting new laws of war will all have to grapple with, so it’s good to have the questions framed right up front.

The report’s authors, Paul Scharre and Michael C. Horowitz, break “in the loop” down further. A weapon that identifies targets but asks a human before firing is “human in the loop,” while a machine that picks targets and attacks on its own is “human out of the loop.” There’s a middle ground, however, for defensive weapons that fire automatically, like an anti-missile battery that has to react faster than a human can or it won’t work at all. Scharre and Horowitz call weapons like this “human on the loop”: the weapon can fire without human approval, but a human can step in and stop it from firing again.
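
To make the three-way distinction concrete, here is a minimal sketch, in Python, of how the control modes might be modeled in software. It is not drawn from the report; the class, names, and decision logic are illustrative assumptions only.

    # Illustrative sketch (not from the report) of the three human-control modes.
    from enum import Enum


    class HumanControl(Enum):
        IN_THE_LOOP = "human approves each engagement before the weapon fires"
        ON_THE_LOOP = "weapon fires on its own, but a human supervisor can halt it"
        OUT_OF_THE_LOOP = "weapon selects and attacks targets without human input"


    def may_fire(mode: HumanControl, human_approved: bool, human_halted: bool) -> bool:
        """Decide whether firing is permitted under each hypothetical control mode."""
        if mode is HumanControl.IN_THE_LOOP:
            return human_approved      # nothing happens without explicit approval
        if mode is HumanControl.ON_THE_LOOP:
            return not human_halted    # fires by default unless a human intervenes
        return True                    # out of the loop: no human check at all

The point of the sketch is only that the difference between the categories comes down to where, if anywhere, a human check sits in the firing decision.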

What’s important about these distinctions is that they are, like autonomy itself, matters of degree. Carving autonomy into categories makes it possible for defense planners, lawmakers, and human rights groups to treat each type differently, especially the most worrisome one, “human out of the loop.”

If military robots are going to be part of the wars of the future, it’s good to have a shared terminology for understanding which robots do what. “An Introduction to Autonomy in Weapon Systems” doesn’t offer all the answers for how the world and military should treat autonomous robots, but it sets out useful vocabulary for anyone who wants to have that conversation in the future.

Read the full report here.
