Last summer, the Pentagon’s Defense Science Board commissioned a study to examine a particular challenge for the Department of Defense, with participants drawn from consulting, the defense and technology industries, the military, and academia. In possibly the worst John Lennon cover ever made, participants were asked to “Imagine if… We could covertly deploy networks of smart mines and UUVs [Unmanned Underwater Vehicles] to blockade and deny the sea surface, differentiating between fishing vessels and fighting ships… …and not put U.S. Service personnel or high-value assets at risk.”

That scenario, and several others like it, was at the core of the study on autonomy: what autonomous machines, computers, and systems mean for the Pentagon and the wars of the future. This matters a great deal, because what the Pentagon thinks of autonomy will shape the weapons it orders, the way it fights wars, and, likely, the way the laws of war are written.

Here’s how the report defines autonomy:

Okay, so what does that mean for a machine with guns attached? First, the report clarifies, expect to see a lot more autonomy in non-killing jobs than in ones with weapons. From the report:

There are many, many jobs in the military that aren’t really about direct combat, from running communications to manning radar to simply driving the trucks that haul ammunition from its delivery point to the planes that fly it abroad, to the bases where troops pick it up. So expect to see autonomous cars and radar systems in the military first.

There’s only one specific deadly application of A.I. recommended in the report. The Defense Science Board suggests “U.S. Navy and DARPA should collaborate to conduct an experiment in which assets are deployed to create a minefield of autonomous lethal UUVs.”

Why?

Mainly, to prove that America can deny a section of the sea to an enemy without risking American sailors. This week has seen a series of confrontations between U.S. Navy patrol ships and Iranian vessels in the Persian Gulf. While the encounters have yet to turn deadly, one way to guarantee that any future deadliness is one-sided is to prove that the Pentagon can put deadly underwater robots in an area, control them, and then trust them to destroy only the specific ships they’re supposed to.

The minefield is just one of 28 recommendations in the report on autonomy, and it’s the only one that specifically addresses a robot making a deadly decision. Yet it sets a good outline for how deadly autonomous machines will enter use: as a surprise, in a niche role, and described entirely as a tool for protecting the humans who deployed it. So like most new weapons, then.

Read the full report, published in June, and check out the DefenseOne write-up from this week.