Ethics-bot?

As unmanned drones become a larger part of how America makes war, fully autonomous fighting robots seem less a possibility than an eventuality. But how do we ensure that these future autonomous weapons conform to the ethics we would expect from a human combatant?

To prevent a robot from ever taking the field without the same code of conduct flesh-and-blood soldiers follow, Ron Arkin, a robotics engineer at the Georgia Institute of Technology, has begun writing computer programs that could help robots follow an ethical code in the heat of battle. And although his military-funded project is not designed to produce ethics software for actual future robots, it has generated interesting results nonetheless.

The main problem revolves around the complexity of human ethics. It was easy for Arkin to design a program, based on an actual 2006 encounter in Afghanistan, that would prevent an autonomous UAV from firing on subjects in a graveyard, a violation of the laws of war. However, many of the other missions Arkin selected as examples had far less clear-cut rules. And since robots can only follow clear-cut rules, programming for those engagements proved much more difficult.
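In spirit, that graveyard rule is a hard veto layered over the weapon-release decision. Here is a minimal sketch of what such a rule-based check might look like; the zone names and function are invented for illustration and are not Arkin's actual software:

```python
# Hypothetical sketch of a hard, rule-based check layered over weapon
# release. Not Arkin's actual software; zone names are invented.

PROTECTED_ZONES = {"graveyard", "hospital", "school"}

def release_permitted(target_zone: str, confirmed_hostile: bool) -> bool:
    """Veto fire into a protected site; otherwise defer to engagement criteria."""
    if target_zone in PROTECTED_ZONES:
        return False  # clear-cut rule: the laws of war forbid firing here
    return confirmed_hostile

# The graveyard strike is vetoed no matter what else is true:
assert not release_permitted("graveyard", confirmed_hostile=True)
```

The clear-cut cases are exactly this easy; the hard part, as Arkin found, is everything that doesn't reduce to a membership test.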

One odd outgrowth of this program is software that aims to provide not more human-like rules for machines, but more robot-like advice for humans. Noting that robots don't seek revenge or harbor prejudices, the project has spawned investigation into an artificially intelligent adviser for soldiers, one that would give clear ethical suggestions in situations where humans might be distracted by emotion.
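An advisory version might be the same kind of rule repackaged as a suggestion to a human rather than a veto; again, everything below is an invented illustration:

```python
# Hypothetical advisory variant: the same rule surfaced as plain-language
# guidance for a soldier instead of a hard veto. Invented for illustration.

PROTECTED_ZONES = {"graveyard", "hospital", "school"}

def ethical_advice(target_zone: str) -> str:
    if target_zone in PROTECTED_ZONES:
        return f"Hold fire: a {target_zone} is protected under the laws of war."
    return "No rule-based objection; apply standard rules of engagement."

print(ethical_advice("graveyard"))
# Hold fire: a graveyard is protected under the laws of war.
```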

Man and machine, working together for a more ethical world. Almost takes the fun out of being a liquid metal robot, doesn’t it?

[via New Scientist]