In August, Stuart Russell, a computer scientist at the University of California, Berkeley, authored an open letter calling for a ban on “lethal autonomous weapons.” To those outside the military-industrial complex, this could seem a bit premature, sort of like calling for a ban on Star Trek phasers or the Death Star. Reality says otherwise.
Humans have a venerable tradition of automating warfare. Land mines are a kind of robot, though a very dumb one. Heat-seeking missiles are smarter, albeit not by a lot. “There’s a continuum,” Russell says, and we’re further along it than we realize. “If you wanted to produce something very effective, pretty reliable, and if it became a military priority—in 18 months you could mass-produce some kind of intelligent weapon.” Indeed, autonomous killing machines already exist: The Super aEgis II, a South Korean-made weapons platform, can recognize humans and target them. (It will request permission from a living operator before firing its .50-caliber gun, but that’s more a courtesy than a requirement.)
Russell writes that “autonomous weapons will become the Kalashnikovs of tomorrow”—cheap and abundant. And that shifts the rules of war. “AI weapons could change the scale at which small groups of people can affect the rest of the world,” he says. “They can do the damage of nuclear weapons with less money and infrastructure.”
Proponents of AI weapons point to some upsides: Robots going to war would mean fewer human casualties. But to the 20,000 people (the majority of whom are scientists) who signed the letter, the costs far outweigh the benefits. Later this year, Russell and others will push for legislative stopgaps and a change in international law, similar to the treaties that prohibit biological weapons. Meetings are set at the United Nations and the World Economic Forum. Once killer AI is here, there’s no going back.