One day this spring, Hanson and I visited Movellan's UCSD lab, a sunny room crowded with books and art and people and computers. Movellan has asked Hanson to build him a head, and is hoping to give it social skills. He and Marian Bartlett, a cognitive scientist who co-directs the UCSD Machine Perception Lab, have collaborated in the development of software featuring an animated schoolteacher who helps teach children to read. The child reads text on the screen. The schoolteacher can recognize if the child looks frustrated, and soon will be able to respond verbally. The character also makes expressions that correspond to the story the child is reading. Movellan plans to program one of Hanson's heads to do what the teacher character does, then test it with children. The scientific question, Hanson says, is "whether people respond more powerfully to a three-dimensional embodied face versus a computer-generated face."
Inspired by this sort of practical use of his human-like robotic head, Hanson has taken to calling K-Bot "a face for social robotics," and says he's "throwing down a glove" for robotics engineers. This is why he has little patience for the Uncanny Valley: It's a concept that plays on fear rather than possibility, that asserts we should shy away from making robots look too human, rather than asking what positive benefits there might be to the truly lifelike robot. "Achieving the subtlety of human appearance is a challenge that should really be undertaken," he says. Only realistic heads will challenge AI researchers to integrate the various robot capabilities (adaptive vision, natural language processing and more) to create "integrated humanoid robotics," Hanson says.
A face robot like K-Bot could also help psychologists figure out exactly which facial movements convey one person's fear, sadness, anger or joy to the mind of another. Today, psychologists try to do that by seeing how people interpret the raised eyebrows, furrowed brows and other expressions of actors in video clips or animated characters, says psychologist Craig Smith of Vanderbilt University. But even actors have difficulty precisely manipulating their expressions, so the experiments aren't always completely controlled, and animated characters may be too unrealistic. A humanoid head that makes accurate facial expressions, in which every facial movement could be precisely controlled, would enable researchers to find out, in three dimensions and in real time, the purpose of specific facial muscles in communicating emotion, Smith says. That, he says, would solve a mystery that's "been a puzzle since Darwin."
Late on the afternoon of our visit to Movellan's lab, Hanson and Triesch sat in the courtyard of a campus coffee shop, a cool breeze rustling the eucalyptus trees. They'd been planning to write a scientific paper about Hanson's facial robots but hadn't decided how to focus it. "What if we write a paper on how to cross the Uncanny Valley?" Hanson suggested. Triesch stretched out his long legs, looked at Hanson, and nodded: "I think it would be great."
Soon the two were in Triesch's conference room, plotting the Uncanny Valley on a white marker board. Hanson pointed to the lip of the valley. "Mori says, 'Go here. Don't go further. Don't, no matter what you do, go further!'" he said. Triesch's brow furrowed. Realism can't be plotted on one axis, Hanson continued; it depends on shape, timing, movement and behavior. The idea, he said, is "really pseudoscientific, but people treat it like it's science."