Believe it or not, your robots may soon be lying to you. But you don't have to take our word for it; Georgia Tech researchers, with funding from the Office of Naval Research, have been toying with algorithms that allow a robot to determine whether or not it wishes to deceive another robot or human, and then to carry out a deceptive strategy to that end.
For the purposes of the study, the researchers focused on the cognitive abilities of a robot attempting to hide from another, pursuing robot. Their first objective was to create a means for the robot to decide whether or not deception was in order. If the robot decided that deceiving its pursuer was indeed in its self-interest, it would proceed to the second step: executing a deceptive strategy.
Tapping interdependence theory and game theory to formulate their algorithms, they came up with two conditions that must be satisfied to set a deceptive tactic in motion: there must be a conflict of some sort between the pursued and the pursuer, and the deceptive robot must somehow benefit from its devious actions. Once the first condition was met, the deceiving 'bot began selecting tactics based on what it knew about the pursuing robot.
The robots were placed in a course in which they played 20 games of robo-hide-and-seek. There were three possible hiding spots, the paths to which were marked by different colored markers. The seeker was programmed to locate the hiding robot based on which markers it knocked over on the way to its hiding spot. But in 75 percent of the trials, the hiding 'bot would knock down the markers on the way to one hiding spot, then, once past the markers, would turn and hide somewhere else, indicating a false path to the pursuing robot.
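To make the setup concrete, here's a minimal sketch of the two-stage logic the article describes: first check the two interdependence-theory conditions (a conflict exists, and deception would pay off), then lay a false trail on 75 percent of trials. All names, structures, and parameters here are our own illustrative assumptions, not the researchers' actual algorithms.

```python
# Hypothetical sketch of the deception logic described above.
# Names and structure are assumptions, not the Georgia Tech code.

import random
from dataclasses import dataclass


@dataclass
class Situation:
    conflict: bool  # is there a conflict between hider and seeker?
    benefit: bool   # would deceiving the seeker benefit the hider?


def should_deceive(s: Situation) -> bool:
    # Both conditions from the study must hold before any deception.
    return s.conflict and s.benefit


def choose_hiding_spot(spots, false_trail_rate=0.75, rng=random):
    """Knock over markers toward one spot, then (usually) hide elsewhere."""
    signaled = rng.choice(spots)  # path whose markers get knocked over
    if rng.random() < false_trail_rate:
        # 75% of trials in the study: double back and hide somewhere else
        actual = rng.choice([p for p in spots if p != signaled])
    else:
        actual = signaled
    return signaled, actual
```

A seeker that only reads the knocked-over markers would then head for `signaled`, while the hider sits at `actual` three times out of four.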
Why might this be helpful, other than to create a race of evil Decepticons? In military robots, the ability to hide and otherwise deceive could be vitally important for keeping both technology and information out of the hands of the enemy. Such an ability could also be handy in search and rescue situations where a robot must placate a panicking or erratic victim.
this is like that article from last year... still cool though.
Still cool. Until they start deciding it's in their best interests to deceive us.
what if they made one that could change the shape of its treads so that it could go through sand and make a false trail
ugh, stupid keyboard, never buy "aegis". the mouse doesn't scroll, the keyboard messes up, and right next to the print screen button is a power button.
another step closer to irobot
"Is the reactor about to explode?"
"No Captain the reactor is fine"
someday they'll have secrets...
someday they'll have dreams...
armored core: silent line. the A.I.O., we can't trust them. kudos if you know what i mean.
What would you like master Asimo?
This is ridiculous. A prior article showed a huge gun mounted on a remotely controlled unit, and now teaching robots to lie? Hmmm, I wonder what the people programming this behavior might think and feel, when a robot tricks them in the future? Or perhaps the robots will decide humans are a threat and eliminate the very people who taught them to lie. It is a huge mistake to underestimate the danger of this.