On some future battlefield, a military robot will make a mistake. Designing autonomous machines for war means accepting some degree of error in the future, and more vexingly, it means not knowing exactly what that error will be.

As nations, weapon makers, and the international community work on rules about autonomous weapons, talking honestly about the risks from data error is essential if machines are to deliver on their promise of limiting harm.

A new report from a UN institute tackles this conversation directly. Published today, “Known Unknowns: Data Issues and Military Autonomous Systems” comes from the United Nations Institute for Disarmament Research, and its aim is to help policymakers better understand the risks inherent in autonomous machines. Those risks range from how data processing can fail to how data collection can be actively gamed by hostile forces. A major component of the risk is that data collected and used in combat is messier than data gathered in a lab, and that messiness changes how machines act.

The real-world scenarios are troubling. Maybe the robot’s camera, trained for the desert glare of White Sands Missile Range, misinterprets a headlight’s reflection on a foggy morning. Perhaps the algorithm that aims the robot’s machine gun miscalculates the distance, shifting a crosshair from the front of a tank to a piece of playground equipment. Maybe an autonomous scout, reading location data off a nearby cell phone tower, is deliberately fed bad information by an adversary and marks the wrong street as a safe path for soldiers.

Autonomous machines can only be autonomous because they collect data about their environment as they move through it, and then act on that data. In training environments, the data that autonomous systems collect is relevant, complete, accurate, and high quality. But, the report notes, “conflict environments are harsh, dynamic and adversarial, and there will always be more variability in the real-world data of the battlefield than the limited sample of data on which autonomous systems are built and verified.”
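
To make that gap concrete, consider a rough sketch in Python (the model, the two “sensor readings,” and the numbers are all invented for illustration; nothing here comes from the report): a simple classifier that scores well on data resembling its training set loses accuracy as soon as the same kind of data arrives noisier and systematically shifted.

```python
# Minimal sketch of the training-versus-battlefield data gap described above.
# The two "sensor readings" per object are invented stand-ins for real features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, noise, shift):
    """Two classes of objects, each described by two noisy sensor readings."""
    labels = rng.integers(0, 2, n)
    centers = np.where(labels[:, None] == 1, [2.0, 2.0], [0.0, 0.0])
    readings = centers + rng.normal(shift, noise, (n, 2))
    return readings, labels

# "Lab" conditions: clean, low-variance, well understood.
X_train, y_train = make_data(2000, noise=0.5, shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# "Field" conditions: the same objects, but noisier and systematically shifted
# (think fog, dust, sensor drift, unfamiliar terrain).
X_lab, y_lab = make_data(1000, noise=0.5, shift=0.0)
X_field, y_field = make_data(1000, noise=1.5, shift=1.0)

print("accuracy on lab-like data:  ", model.score(X_lab, y_lab))
print("accuracy on field-like data:", model.score(X_field, y_field))
```

The drop itself is easy to measure after the fact. The report’s concern is that in combat, no one knows ahead of time which shift will show up, or how the system will respond to it.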

[Related: Russia is building a tank that can pick its own targets. What could go wrong?]

One example of this kind of error comes from a camera sensor. During a presentation in October 2020, an executive at a military sensor company showed off a targeting algorithm, boasting that it could distinguish between military and civilian vehicles. In the same demonstration, the video labeled a person walking through a parking lot and a tree as the same kind of target.

When military planners build autonomous systems, they first train them on data in a controlled setting. With training data, it should be possible to get a target-recognition program to tell the difference between a tree and a person. Yet even an algorithm that performs correctly in training could, in combat, lock onto trees instead of people, which would be militarily ineffective. Worse still, it could lock onto people instead of trees, which could lead to unintended casualties.

Hostile soldiers or irregulars, looking to outwit an attack from autonomous weapons, could also try to fool the robot hunting them with false or misleading data. This is sometimes known as spoofing, and examples exist in peaceful contexts. For example, by using tape on a 35 mph speed limit sign to make the 3 read a little more like an 8, a team of researchers convinced a Tesla car in self-driving mode to accelerate to 85 mph.

In another experiment, researchers fooled an object-recognition algorithm into classifying an apple as an iPod simply by sticking a paper label reading “iPod” onto the apple. In war, an autonomous robot designed to clear a street of explosives might overlook an obvious booby-trapped bomb if the device carries a written label that says “soccer ball.”
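
The common thread in these tricks is that a small, deliberately chosen change to an input can flip a model’s output. A toy sketch in Python shows the principle (the linear “threat” classifier and its numbers are invented stand-ins, not the actual sign or apple experiments): the tweak to each individual feature is tiny, but it is pointed precisely at the model’s decision boundary.

```python
# Sketch of deliberate input manipulation ("spoofing") against a toy model.
# An illustration of the general idea only, not the sign or apple attacks above.
import numpy as np

rng = np.random.default_rng(1)

# A toy linear classifier: positive score means "threat", otherwise "harmless".
weights = rng.normal(0, 1, 50)

def classify(x):
    return "threat" if x @ weights > 0 else "harmless"

# Start from an input the model flags as a threat.
x = rng.normal(0, 1, 50)
if classify(x) != "threat":
    x = -x

# Adversarial tweak: nudge every feature slightly in the direction that lowers
# the score (against the weight vector), like tape on a road sign. The nudge is
# chosen to be just big enough to push the score across the decision boundary.
score = x @ weights
epsilon = (score + 1e-3) / np.abs(weights).sum()
x_spoofed = x - epsilon * np.sign(weights)

print("original:", classify(x))                       # threat
print("spoofed: ", classify(x_spoofed))               # harmless
print("largest change to any one feature:", epsilon)  # small relative to the data
```

Attacks on real deep-learning systems search for such perturbations numerically rather than reading them off a weight vector, but the battlefield implication is the same: the change can be small enough that a human looking at the scene never notices it.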

An error anywhere in the process, from collection to interpretation to communicating that information to humans, could lead to “cascading effects” that result in unintended harm, says Arthur Holland Michel, associate researcher in the Security and Technology programme at the UN Institute for Disarmament Research, and the author of this report.

“Imagine a reconnaissance drone that, as a result of spoofing or bad data, incorrectly categorizes a target area as having a very low probability of civilian presence,” Holland Michel tells Popular Science via email. “Those human soldiers who act on that system’s assessment wouldn’t necessarily know that it was faulty, and in a very fast-paced situation they might not have time to audit the system’s assessment and find the issue.”

If testing revealed that a targeting camera might mistake trees for civilians, the soldiers would know to look for that error in battle. If the error is one that never appeared in testing, like an infrared sensor seeing the heat of several clustered radiators and interpreting that as people, the soldiers would not even have reason to believe the autonomous system was wrong until after the shooting was over.

Talking about how machines can produce errors, especially unexpected errors, is important because otherwise people relying on a machine will likely assume it is accurate. Compounding this problem, it is hard in the field to discern how an autonomous machine made its decision.

[Related: An Air Force artificial intelligence program flew a drone fighter for hours]

“The type of AI called Deep Learning is notoriously opaque and is therefore often called a ‘black box.’ It does something with probability, and often it works, but we don’t know why,” Maaike Verbruggen, a doctoral researcher at the Vrije Universiteit Brussel, tells Popular Science via email. “But how can a soldier assess whether a machine recommendation is the right one, if they have no idea why a machine came to that conclusion?”
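
A brief sketch in Python shows what that opacity looks like from the operator’s side (the network, the data, and the inputs are invented for illustration): the model returns a bare probability with no reasons attached, and it can do so confidently even for an input unlike anything it has ever seen.

```python
# Sketch of the "confident but unexplained" problem: a small neural network
# reports a probability even for an input unlike anything it was trained on.
# Toy data only; stands in for a real targeting or recognition model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Two familiar classes of objects, each described by two features.
X0 = rng.normal([0.0, 0.0], 0.5, (500, 2))
X1 = rng.normal([3.0, 3.0], 0.5, (500, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 500 + [1] * 500)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

familiar = np.array([[3.1, 2.9]])       # close to the training data
unfamiliar = np.array([[40.0, -25.0]])  # nothing like the training data

print("familiar input:  ", model.predict_proba(familiar))
print("unfamiliar input:", model.predict_proba(unfamiliar))
# Both come back as bare probabilities. The model offers no reasons, and it
# often reports the unfamiliar case with high confidence anyway.
```

Nothing in that output tells the operator whether the input was routine or bizarre, which is exactly the assessment Verbruggen says a soldier cannot make.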

Given the uncertainty in the heat of battle, it is reasonable to expect soldiers to follow machine recommendations and assume they are error-free. Yet error is an inevitable part of using autonomous machines in conflict. Trusting that the machine acted correctly does not free soldiers from their obligations under international law to avoid unintentional harm.

While there are weapons with autonomous features in use today, no nation has explicitly said it is ready to trust a machine to target and fire on people without human involvement in the process. However, data errors can create new problems, leaving humans responsible for a machine that behaved in ways no one anticipated. And as machines become more autonomous, that danger is only likely to grow.

“When it comes to autonomous weapons, the devils are in the technical details,” says Holland Michel. “It’s all very well to say that humans should always be held accountable for the actions of autonomous weapons but if those systems, because of their complex algorithmic architecture, have unknown failure points that nobody could have anticipated with existing testing techniques, how do you enshrine that accountability?”

One possible use for fully autonomous weapons is to target only other machines, such as uninhabited drones, and never people or vehicles carrying people. But in practice, how such a weapon collects, interprets, and uses data becomes tremendously important.

“If such a weapon fails because the relevant data that it collects about a building is incomplete, leading that system to target personnel, you’re back to square one,” says Holland Michel.