By Gregory Mone | Posted 08.24.2007 at 2:49 pm
Asimov's Three Laws just aren't going to cut it. A British artificial intelligence expert says we need to get serious about establishing a code of ethics for autonomous battlefield robots. Noel Sharkey of the University of Sheffield is concerned that the research focus on more capable, autonomous machines, especially in the U.S., could lead to robots that kill indiscriminately.
He worries that the emphasis on developing independent battle-focused machines could become a way of passing the buck for fatal errors: "Hey, it wasn't our fault, the robot did it." You could argue that Sharkey is starting this fight a bit early in the game, since we're not exactly up to the Optimus Prime vs. Megatron phase yet, but the advances that have come out of the DARPA Grand Challenges alone suggest it might not be long before we have mechanized grunts. And while those robots might be mechanically capable, Sharkey doesn't think they will be smart enough to make the right calls: "We are going to give decisions on human fatality to machines that are not bright enough to be called stupid."—Gregory Mone
The new Hawk-Eye Tennis Officiating System is bringing high drama and high tech to the tradition-bound tournament
By Jackson Lynch | Posted 09.06.2006 at 2:00 am
Its calls may come from a complex computer algorithm, but the Hawk-Eye Tennis Officiating System is ratcheting up the human drama at this year's U.S. Open. Rather than relying solely on officials to make line calls, the entire crowd now acts as referee, hollering "Challenge!" after controversial judgments.
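The article doesn't describe how Hawk-Eye's algorithm actually works, but multi-camera ball-tracking systems of this general kind typically locate the ball by triangulation: each camera defines a ray toward the ball, and the 3-D position is the point that best fits all the rays at once. The sketch below is purely illustrative (the function name and camera geometry are invented, not Hawk-Eye's), showing a least-squares intersection of rays from two hypothetical cameras:

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares intersection of rays, each given by a camera
    center and a viewing direction toward the ball."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)          # unit direction of the ray
        P = np.eye(3) - np.outer(d, d)     # projector onto the plane
                                           # perpendicular to the ray
        A += P
        b += P @ c
    # Solve for the point minimizing total squared distance to all rays
    return np.linalg.solve(A, b)

# Two hypothetical cameras both sighting a ball at (1, 2, 0)
centers = [np.array([0.0, 0.0, 3.0]), np.array([5.0, 0.0, 3.0])]
ball = np.array([1.0, 2.0, 0.0])
directions = [ball - c for c in centers]
print(triangulate(centers, directions))  # recovers [1. 2. 0.]
```

With noisy real-world sightings the rays no longer intersect exactly, and the least-squares solution gives the best compromise point; a real officiating system would also fit the ball's trajectory over time rather than a single frame.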