In Future Wars, We’ll Have To Fool Robots

Taking battlefield lessons from the recent Tesla car failures

“All warfare is based on deception,” wrote Sun Tzu, the ancient Chinese strategist, in “The Art of War.” As far as we know, Sun Tzu never grappled with the specific problems of electronic image processing by machines, but the principle still holds in modern times. If war is based on deception, future wars are going to involve tricks to fool robots.

Or at least, that’s the theory from infrastructure theorist and author Geoff Manaugh. Manaugh’s inspiration came from a tragedy: the first known death attributed to a self-driving car failure. A Tesla running on Autopilot mistook the white side of a tractor-trailer for empty sky and didn’t brake in time. The car drove under the trailer, killing its driver and sparking a federal investigation. It’s probably safe to say this is a failure both federal investigators and Tesla engineers are working to prevent in the future.

Manaugh’s thoughts, meanwhile, are about replicating that kind of failure in a military setting. He writes:

Manaugh’s work deals primarily with cities and the built environment of human life, so his further recommendations are specific to that. For example:

Understanding how machines see, and what those machines see, will be especially important both for improving and undermining robotic vision, in domestic life and at war. Deception is fundamental to war, and fundamental to deception is knowing how the sense (or sensor) one is trying to deceive works.

Read Manaugh’s full post here.

Kelsey D. Atherton

Kelsey D. Atherton is a defense technology journalist based in Albuquerque, New Mexico. His work on drones, lethal AI, and nuclear weapons has appeared in Slate, The New York Times, Foreign Policy, and elsewhere.