How do you make AI trustworthy? Here’s the Pentagon’s plan.

The Department of Defense wants to scale up how it uses artificial intelligence, and it wants everyone to be able to trust those algorithms.
A member of the Air Force wears a VR headset in September 2020. Senior Airman Daniel Hernandez / US Air Force

On a battlefield in the future, a soldier may have a split second to decide whether the information an algorithm has sent them is accurate. The future of war may hinge on those moments, and before we get there, the Department of Defense wants to make sure that everyone, from the American public to the officers in command to the people doing the actual fighting, trusts the artificial intelligence the United States brings to war.

“Our operators must trust the outcomes of AI systems,” Deputy Defense Secretary Kathleen H. Hicks, the number two civilian at the Department, said at a conference on June 22. “Our commanders must trust the legal, ethical, and moral foundations of explainable AI, and the American people must come to trust the values the Department of Defense has integrated into every application.”

Her remarks came as part of the Defense Department’s Artificial Intelligence Symposium and Tech Exchange. The event was part of a broader emphasis on incorporating the promise of AI into the day-to-day work of the military. It marked the start of the Department’s “AI and Data Acceleration Initiative” (ADA), with the goal of launching experiments in how the military uses data and AI, and then using those results to kick off even more research.

Artificial intelligence is a big category, and the scope of the programs Hicks promoted is similarly ambitious. One part of the ADA initiative involves “creating operational data teams” and dispatching them to each of the 11 combatant commands across the Department of Defense. 

These 11 organizations fall into two broad categories. There are seven geographic commands, like Indo-Pacific Command or Space Command, which direct and oversee military operations within their geographic boundaries. There are also four “functional” commands: Transportation Command, Cyber Command, Strategic Command, and Special Operations Command.

Each command, to some degree, deals with data that is unique in its specifics but similar in how it is collected and can be used, and starting with data at the command level lets the Department focus on useful experimentation.

In a release from the Pentagon that accompanied Hicks’ remarks, the Department said that these operational data teams will “catalog, manage and automate data feeds that inform decision making,” turning the data the commands already collect into a useful tool.
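The release doesn’t describe what that cataloging would look like in practice. As a purely illustrative sketch, in Python and with invented field names, a catalog entry for one of these data feeds might record the basics a team would need before automating anything:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DataFeed:
    """One entry in a hypothetical command-level data catalog."""
    name: str                  # e.g. "vehicle maintenance logs"
    source: str                # the system or unit producing the data
    update_cadence: timedelta  # how often new records arrive
    retention: timedelta       # how long records are kept before deletion
    machine_readable: bool     # can software ingest it without human cleanup?

# Cataloging answers the basic questions -- what exists, where it comes
# from, and whether an algorithm can consume it -- before any automation.
feeds = [
    DataFeed("drone surveillance footage", "ISR aircraft",
             timedelta(minutes=1), timedelta(days=365), True),
    DataFeed("motor pool maintenance logs", "unit maintenance crews",
             timedelta(days=1), timedelta(days=90), False),
]
```

The specific fields here are guesses; the point is that automating a data feed presupposes this kind of inventory of what the feed actually contains.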

[Related: Russia is building a tank that can pick its own targets. What could go wrong?]

Artificial intelligence, in this instance, is a tool that ingests data and converts it into useful information. At present, the Department collects data almost everywhere, with varying degrees of retention and processing. Some information, like drone-camera surveillance footage of a suspected enemy, is stored, scrutinized, and built into planning for future attacks. Other information is more ephemeral, recorded on patrol and then deleted to save storage space. A lot of it is staggeringly mundane, like maintenance logs for vehicles in a unit’s motor pool.

All of it, in theory, can be better collected, better processed, and better understood. To get to that point, the Pentagon has to understand what it is collecting and how, which is part of the role of the teams dispatched to the combatant commands. Beyond that, the military will have to design and use AI that can absorb and refine that information. Some of this processing can happen in cloud servers built for the military; some can happen on processors attached to sensors as the data is collected. Artificial intelligence algorithms learn from the data they see, and when that happens quickly, it is impossible for humans to vet the process. This is what makes trust so essential to any military implementation of AI.
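Where that processing happens is itself a design decision. A toy Python sketch of the cloud-versus-edge tradeoff described above, with the threshold and inputs invented for illustration:

```python
from enum import Enum

class Venue(Enum):
    EDGE = "processor attached to the sensor"
    CLOUD = "military cloud server"

def choose_venue(latency_budget_ms: float, link_available: bool) -> Venue:
    """A toy routing rule: process the data wherever it can arrive in time.

    Real deployments would also weigh security, bandwidth, and model size;
    this only captures the basic cloud-versus-edge tradeoff.
    """
    # If a decision is needed faster than a round trip to the cloud,
    # or the network link is down, inference has to happen on the sensor.
    if latency_budget_ms < 100 or not link_available:
        return Venue.EDGE
    return Venue.CLOUD

print(choose_venue(latency_budget_ms=20, link_available=True))    # Venue.EDGE
print(choose_venue(latency_budget_ms=5000, link_available=True))  # Venue.CLOUD
```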

Hicks said that the goal is for AI that is “safe, ethical, and deployed at the speed of relevance.”

While the Defense Department hopes this new push on AI will prove useful across the board, it is absolutely essential for a concept called “Joint All Domain Command and Control,” often abbreviated JADC2. The concept calls for incorporating sensor information from across the military, so that a plane flown by the Air Force could share information from a Navy ship with the driver of an Army tank or a Marine on foot, and the reverse, too.

[Related: Autonomous war machines could make costly mistakes on future battlefields]

By sharing sensor information like this, the military hopes that everyone fighting the same battle gets the best information available. If the ship’s radar detects an incoming formation of hostile drones, the Marine in the field doesn’t need to know the exact bearings from the radar pings. They just need to know the direction of the danger, and what action to take in response.
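In code, that translation step might look something like the following Python sketch. Everything here, from the field names to the five-kilometer threshold, is invented for illustration, and it ignores the real geometry of converting the ship’s bearing into the Marine’s frame of reference; the point is only the reduction of a detailed track to a simple, actionable alert:

```python
from dataclasses import dataclass

@dataclass
class RadarContact:
    """A detailed track, roughly as a ship's radar might record it."""
    bearing_deg: float  # direction to the contact, 0 = north
    range_m: float      # distance to the contact, in meters
    track_type: str     # e.g. "hostile drone formation"

def to_field_alert(contact: RadarContact) -> str:
    """Reduce a full radar track to what a Marine on foot needs:
    a rough direction of the threat and a recommended response."""
    # Collapse the precise bearing into an eight-point compass direction.
    points = ["north", "northeast", "east", "southeast",
              "south", "southwest", "west", "northwest"]
    direction = points[round(contact.bearing_deg / 45) % 8]
    # An arbitrary threshold standing in for a real rules-of-engagement call.
    action = "take cover" if contact.range_m < 5000 else "stay alert"
    return f"{contact.track_type} approaching from the {direction}: {action}"

alert = to_field_alert(RadarContact(bearing_deg=92.4, range_m=3200,
                                    track_type="hostile drone formation"))
print(alert)  # hostile drone formation approaching from the east: take cover
```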

If it works, if the military can turn those sensor readings and disparate communication protocols into easily digested information that can be safely and securely transmitted to people right when they need it, then the Pentagon’s emphasis on battlefield AI will have yielded the results it wants.