Russia is turning to computers to guide the aim of its tanks in battle. As part of the development of the Armata tank, Russia’s next main piece of battlefield armor, the tank will feature a gun that can automatically find and track targets, requiring human intervention only to approve or decline the shot it has lined up. It is both an incremental change in using sensors to fight battles more effectively and a massive step toward a future in which machines decide whom to kill in war.

Long in development, the T-14 Armata is intended both to eventually replace the existing tanks in Russia’s arsenal and to be exported abroad. Key to the Armata’s design is an uninhabited turret. By removing the need for humans to be physically present next to the gun, the Armata can have a lower profile, making it harder to see and hit in a fight. It also lets the tank take better advantage of cover from natural terrain.

Uninhabited turrets are a growing trend in armored vehicles, including the Stryker armored personnel carrier used by the US military. Remote control weapon stations, a fancy term for what is essentially a robot turret with at least one gun and some sensors, are a common way to keep people safer inside a vehicle: operators use video feeds and a remote control device (like a tablet or joystick) to see and shoot at perceived enemies without exposing themselves to danger.

The Armata has already seen some combat testing, although without the autonomous targeting in place. 

What stands out about the Armata is that the uninhabited turret was built into the design from the start, so the humans inside the vehicle never have to take on the extra peril of sitting directly next to the gun. Instead, the crew of an Armata tank is tucked safely inside a more durable crew capsule in the main body of the vehicle. For the system to work, though, it will need a tremendous amount of information from sensors to hone its targeting.

[Related: Russian fighter pilots could soon fly alongside bomb-filled combat drones]

“Armata can be used both with a crew and without a crew—the robot will control the tank, it will choose the target itself. But whether a decision is made to shoot or not to shoot, a person still makes a decision to press the button,” Sergei Chemezov, head of Rostec, the state corporation that makes the Armata, told the news service TASS on April 24.

Instead of aiming the weapon and deciding when to shoot, the human operator will review the target selection already made by the Armata’s automated systems, and then either approve the shot or call it off. The independent military analysis service Janes describes the Armata’s fire-control system as resembling a video game, with the targeting crosshairs displayed on an LCD screen.

How, exactly, this automated system would work factors into broader debates over how nations should govern lethal autonomous machines.

“Weapons are commonly classified as either having a human ‘in the loop,’ which means a human selects a target and decides to engage; ‘on the loop,’ which means the machine selects the target and engages, but a human supervises and can abort the process; and ‘out of the loop,’ which means that there is no human involved in the target selection and engagement,” Maaike Verbruggen, a doctoral researcher at the Vrije Universiteit Brussel, tells Popular Science.

As described by Chemezov, the Armata’s targeting falls between a human “in the loop” and “on the loop,” as the weapon automatically finds a target and then requests human authorization to shoot.
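
For readers who think in code, one way to picture that distinction is as a gate between target selection and firing. The sketch below is purely illustrative: it is written in Python with invented names (AutonomyMode, Target, engage) and is not based on the Armata’s actual fire-control software, only on the workflow Chemezov and Verbruggen describe, in which the machine proposes a target and a human must explicitly authorize the shot.

```python
from enum import Enum, auto
from dataclasses import dataclass

class AutonomyMode(Enum):
    IN_THE_LOOP = auto()   # human selects the target and decides to engage
    ON_THE_LOOP = auto()   # machine selects and engages; human can abort
    OUT_OF_LOOP = auto()   # machine selects and engages with no human role

@dataclass
class Target:
    track_id: int
    confidence: float  # how sure the sensors are about the classification

def engage(target: Target, human_approves) -> bool:
    """Hypothetical Armata-style flow: the machine finds and tracks the
    target, but a human must press the button before any shot is fired."""
    if not human_approves(target):
        return False  # the human calls the shot off
    return True       # fire only after explicit human authorization

# Example: an operator policy that declines low-confidence tracks.
fired = engage(Target(track_id=7, confidence=0.41),
               human_approves=lambda t: t.confidence > 0.9)
print(fired)  # False
```

The worry Verbruggen raises maps directly onto this structure: if human_approves degrades into a function that always returns True, the system behaves, in practice, like one with the human merely “on the loop.”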

“The problem is that this classification risks that the role of the human will slowly shrink until it merely presses the red button of approval, but is not critically engaged with the process anymore,” says Verbruggen. “The role of the human is reduced to rubber stamping the actions of the machine.”

The danger that a human, especially in combat, would simply defer to the judgment of a machine and approve nearly any shot it selects is one of the fears animating efforts to establish an international standard for controlling autonomous weapons. That fear has been central to the push for “meaningful human control,” a term that encompasses stricter limits on automated weapons, like restricting a weapon’s autonomy when there’s a chance that civilians are present.

[Related: This self-driving robot tank can parachute out of a plane]

Russia’s Ministry of Defense “essentially states that autonomous and AI-enabled systems cannot function without human input – yet at the same time, its R&D institutions are discussing the eventual full autonomy for such systems in combat,” says Samuel Bendett, an analyst at the Center for Naval Analyses and adjunct senior fellow at the Center for a New American Security.

Russia has framed this turn to robotics and automated fighting tools as one that will both save lives and help the country as it struggles to find enough people to fill the ranks of its military in the future.

Still, safeguarding soldiers is only part of a military’s obligation in battle. Another is ensuring that its weapons find only legal, appropriate targets. If the Armata tank automatically interprets school buses as tanks, or crowds of civilians as enemy combatants, the automation system becomes a danger. Avoiding that requires the human approving each automated target selection to pay close attention, which is much harder than it sounds.

Verbruggen compares the situation to that of a human operator in an autonomous car—a person may grow complacent if the vehicle has been in charge for a while, and may be less alert if they’re suddenly forced to take over. 

With the Armata, a soldier may be safely ensconced inside the tank, but they still have to make fast decisions about its weapon. In that situation, they may be inclined to simply trust the automated targeting, even without knowing exactly why a given target was picked.