The uncanny valley appears pretty frequently in these pages, at least in presentation — like the disembodied baby head above, for instance, or the wonderfully horrible Telenoid. These robots and others represent the gulf in our robot affinity that gapes open when machines approach a certain level of human likeness.
Masahiro Mori described this phenomenon 42 years ago, when he was a robotics professor at the Tokyo Institute of Technology. His paper went largely unnoticed for decades, but more recently it has become a touchstone for robotics, especially as robots become more lifelike. Yet the paper, for whatever reason, was never published in English in its entirety. Now here it is, in a new translation approved by Mori and appearing in IEEE Spectrum.
Mori notes the eerie sensation that arises when we are tricked into thinking an artificial limb is real, and then realize it's not — it "becomes uncanny," and we lose our affinity for it. He expresses this phenomenon in a graph.
"I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley," the new translation reads.
He also charts our affinities and lack thereof for still and moving objects, noting that our affinity is pretty high for a stuffed animal or a humanoid robot. But movement is key to our affinity — a humanoid robot would not move like a human, so it would be incredibly creepy, he says. "Imagine a craftsman being awakened suddenly in the dead of night. He searches downstairs for something among a crowd of mannequins in his workshop. If the mannequins started to move, it would be like a horror story," he writes.
A still corpse is also down in the valley. At the deepest point: Zombies.
When I observe an automated factory, the purpose of the automation becomes apparent and I feel safe.
When I see a human with a prosthetic, I know the human controls this limb and again, I feel safe.
When a human-type robot begins to interact with humans and I have no idea of its programming, the question then comes to my mind: has it been programmed to hurt me in some way?
When I see a HUMAN, I ask the same question: has it been programmed to hurt me in some way? There's a fine line between coding and socialization/indoctrination, my friend.
My "point" was referencing what Robot said. Similarities are not always a cause for security. Even humans are black boxes subject to conditioning, whose behavior is difficult to anticipate. An autonomous drone that hits a village of innocent people is just as problematic as a suicide bomber who incinerates a train full of people. You must have misunderstood me: in my opinion there IS no difference between coding and socialization/indoctrination.
@neuroguy88, totally agree about socialization/indoctrination. We all know that guns don't kill people, people kill people. And people also re-program other people to kill and hate other people, so don't blame the tools; blame the persons wielding and/or controlling the tool, whether the tool is a robot or another human. Think about it. I'm all for having robots, but the three laws of robotics can and will be hacked sooner or later.
But it gives me another idea about how zombies may come about: nanobot infection.... AGHH!!!