Killer Drones: When Will Our Weaponized Robots Become Autonomous?

America’s drone fleet has become an increasingly relied-upon wing of its counter-insurgency strategy and plays a key geopolitical role, particularly in Pakistan, where unmanned aircraft routinely venture into sovereign territory and deliver lethal payloads to targets on the ground. But the Washington Post asks: just how far away are we from real “killer robots”? The answer, in this morning’s piece of recommended reading, is that we’re already there.

We know that various research and academic institutions are working on robot autonomy (regular readers see stories and videos of these autonomous ‘bots right here on PopSci all the time), but what’s a bit mind-blowing is just how far along some of this technology is. At Fort Benning, a team of Georgia Tech computer scientists is helping the military demonstrate software that can, without a shred of human input, acquire targets on the ground and make life-or-death decisions about them.

That is, the only thing that’s missing is the capability to fire. Add that, and you’ve got a killer robot.

Of course, these are just demonstrations (for now). But they create a blueprint for the inevitable future of warfare: when time is critical and running decisions up the chain isn’t feasible, software will decide what constitutes a target, what falls within the bounds of the “rules of war,” and whether it’s safe to commence firing. If a target satisfies whatever requirements have been seeded into the software’s code, it’s bombs away.
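To make that logic concrete, here’s a minimal sketch in Python of what such a rule-gated firing decision might look like. Everything here, the `Target` fields, the thresholds, the checks, is invented for illustration; it is not the actual software demonstrated at Fort Benning.

```python
# Hypothetical sketch of a rule-gated engagement decision.
# Every name and threshold below is invented for illustration.
from dataclasses import dataclass

@dataclass
class Target:
    classification: str       # e.g. "armed_vehicle", "civilian", "unknown"
    confidence: float         # classifier confidence, 0.0 to 1.0
    in_engagement_zone: bool  # inside the authorized area of operations
    collateral_estimate: int  # predicted non-combatant casualties

def authorize_engagement(t: Target) -> bool:
    """The 'requirements seeded into the code': every check must
    pass before the system treats firing as permitted."""
    if t.classification not in ("armed_vehicle", "hostile_combatant"):
        return False          # not a valid military objective
    if t.confidence < 0.95:
        return False          # identification not certain enough
    if not t.in_engagement_zone:
        return False          # outside authorized boundaries
    if t.collateral_estimate > 0:
        return False          # proportionality rule fails
    return True               # all coded requirements satisfied

# An unknown contact is never engaged, no matter how confident the classifier.
print(authorize_engagement(Target("unknown", 0.99, True, 0)))  # False
```

The unsettling part, of course, is that every one of those hard-coded thresholds stands in for a judgment a human commander would otherwise make.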

It all sounds a bit Skynet, but it’s moving forward at a rapid pace within the U.S. military, driven both by need (putting fewer human lives in harm’s way is obviously preferable) and by a Cold War-esque mentality: if America isn’t at the front of autonomous warfare, it can only be behind. That sentiment is not entirely misplaced: South Korea has already deployed semi-autonomous armed robotic systems along the demilitarized zone bordering North Korea, and the Chinese have a dog in the hunt for autonomous weapons systems as well.

So what is the state of “lethal autonomy”? To put a number on it: at least a decade (probably more) away from becoming battlefield reality. At last month’s AUVSI unmanned robotics conference, I sat in on a lecture titled “Armed and Autonomous” that focused on deploying armed UAVs into contested airspace: using unmanned planes to deliver air-to-surface and air-to-air weapons in areas where anti-air defenses are still intact.

What might surprise many is that the computer programs necessary to evade air defenses and execute these kinds of missions autonomously already exist. The backbone technology is there; we just don’t trust it enough to actually deploy it. The idea of unleashing armed and autonomous robots, aerial or otherwise, is naturally abhorrent to us because robots, at least the robots we have now, are incapable of making common-sense decisions or of distinguishing, with 100 percent accuracy, between friend and foe, surrendering troops and hostile combatants, the benign and the threatening.

But that capability gap between human and machine, as WaPo reports, is shrinking. The question is: when will it have shrunk enough that we trust robots with life-and-death decisions? As we’ve been coldly reminded by incidents in Iraq and Afghanistan, even highly trained soldiers don’t always make the right call on the ground. At what error rate are we willing to say autonomous robots are ready for war?

Click through below for the Post piece. It’s a quick and engaging morning read.

Washington Post