A Geneva Convention for Robots?

Asimov's Three Laws just aren't going to cut it. A British artificial intelligence expert says we need to get serious about establishing a code of ethics for autonomous battlefield robots. Noel Sharkey of the University of Sheffield is concerned that the research focus on more capable, autonomous machines, especially in the U.S., could lead to robots that kill indiscriminately.

He worries that the emphasis on developing independent battle-focused machines could become a way of passing the buck for fatal errors: "Hey, it wasn't our fault, the robot did it." You could argue that Sharkey is picking this fight a bit early in the game, since we're not exactly at the Optimus Prime vs. Megatron stage yet, but the advances that have come out of the DARPA Grand Challenges alone suggest it might not be long before we have mechanized grunts. And while those robots might be mechanically capable, Sharkey doesn't think they'll be smart enough to make the right calls. As he puts it: "We are going to give decisions on human fatality to machines that are not bright enough to be called stupid."—Gregory Mone