Yesterday, a United Nations expert called for a moratorium on the development of “lethal autonomous robotics,” or, in layman’s terms, “killer robots.”
His argument: once killer robots take part in war, there will be no going back. Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, told the Human Rights Council that the time to regulate killer robots is now, before they reach the battlefield, arguing that “decisions over life and death in armed conflict may require compassion and intuition.” He also urged the council to form a panel that would study whether international laws in place today adequately address the use of killer robots.
Thing is, killer robots already exist. And they’re about the least compassionate machines we could imagine.
I’m talking about land mines, those notorious devices that detonate when stepped on. Land mines are programmed to kill when certain conditions are met. That is the same principle that would guide a killer robot.
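To make the comparison concrete, here is a minimal sketch of the only “decision” a land mine makes. Everything in it is hypothetical; the trigger weight is an assumed, illustrative figure, not a real specification.

```python
# A minimal, hypothetical sketch of a land mine's entire "decision" process:
# a single threshold check, with no notion of who or what tripped it.
TRIGGER_PRESSURE_KG = 10.0  # assumed trigger weight, for illustration only

def mine_detonates(pressure_kg: float) -> bool:
    """One condition: enough pressure, and the mine fires.
    Soldier, civilian, child, or animal, the logic cannot tell them apart."""
    return pressure_kg >= TRIGGER_PRESSURE_KG
```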
But there are some key differences: A killer robot might make decisions based on sensor inputs, algorithms, and pre-programmed combat behaviors. It might be programmed to follow the laws of war, and it might use surveillance technologies to distinguish unarmed civilians from armed combatants. The same principles that power facial recognition software could let a robot target its weapons at other weapons, firing to disable guns rather than to kill people.
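Here is what such a decision pipeline might look like in outline. Everything in this sketch is hypothetical: the names (Track, engagement_decision), the confidence threshold, and the available actions are invented for illustration and describe no real weapons system.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

class Classification(Enum):
    CIVILIAN = auto()
    COMBATANT = auto()
    UNKNOWN = auto()

@dataclass
class Track:
    """One sensed person, as a hypothetical recognition system might report it."""
    label: Classification   # classifier's best guess at who this is
    confidence: float       # classifier confidence, 0.0 to 1.0
    weapon_location: Optional[Tuple[float, float]]  # aim point if a weapon was detected

CONFIDENCE_FLOOR = 0.95  # below this, the system must not act

def engagement_decision(track: Track) -> str:
    """Default to restraint: any failed check means hold fire."""
    if track.confidence < CONFIDENCE_FLOOR:
        return "hold"            # uncertain identification, so do nothing
    if track.label is not Classification.COMBATANT:
        return "hold"            # civilians and unknowns are off-limits
    if track.weapon_location is not None:
        return "disable_weapon"  # aim at the gun, not the person
    return "hold"                # confirmed combatant, but no weapon to disable
```

The contrast with the mine’s one-line trigger is the point: every check in a pipeline like this is a place where law, policy, and engineering can intervene.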
Land mines, on the other hand, cannot distinguish civilians from soldiers, soldiers of one nation from another, or a large child or an animal from a small soldier. A mine’s trigger cannot easily be shut off, and it is engineered for durability, not intelligence. At their worst, killer robots could be as deadly and as indiscriminate as land mines. Chances are, though, they will be far more sophisticated.
The task before lawmakers is not to ban a technology out of fear but to adapt the law to the technology once it exists. Making legislative decisions about new technology is tricky business. In the United States, electronic communication is governed by the Electronic Communications Privacy Act, a law passed in 1986, well before email was a regular fixture of life. Provisions that made sense to congressmen trying to imagine email, such as treating messages left on a server for more than 180 days as abandoned and obtainable without a warrant, have become serious weaknesses in privacy and personal security, all because the technology wasn’t understood when the law was written. And the stakes are far lower in governing electronic communications than in authorizing robots to kill.
Killer robots are coming. Efforts to halt their introduction or ban their development are not only likely to fail; they will also drown out legitimate concerns about how to deploy the technology safely beneath Luddite fear-mongering.