It’s the fourth day of a scientific conference in Denver–four busy February days in a huge rabbit-warren convention center with long hallways and fluorescent lighting and serious scientists giving serious PowerPoint presentations in darkened auditoriums; four days of breakthroughs and advances–nanotech to biotech, anthropology to zoology, the whole mind-spinning stew. Four days, for the assembled journalists, of making sense of it all and banging out stories on the fly–and now comes word of what could be a light interlude: Keep an eye out for the guy carrying the head. Say what? The robotic human head.

The press people for the American Association for the Advancement of Science, the conference’s sponsor, say the demonstration’s on for tomorrow morning.

For now, though: another darkened auditorium, another presentation, this one on biologically inspired intelligent robots, robots that emulate the form and function of real creatures. Yoseph Bar-Cohen of NASA’s Jet Propulsion Laboratory, a roundish, gray-haired dynamo, gives a whirlwind tour of the possibilities, which he says are not far off–insect-like bots that walk and fly and crawl and hop, others that dive and swim. Cynthia Breazeal from the MIT Media Lab shows videos of the world’s most lovable robot, the infant-like Kismet, looking up innocently at a woman who’s practically cooing at it; Breazeal talks about how she gave Kismet emotions and why. Finally, there’s David Hanson, a grad student in interactive arts and engineering at the University of Texas at Dallas. He’s got thick dark hair, a square jaw, urban-hip artsy sideburns, and he’s moving a bit jerkily in a nervous-but-trying-to-stay-calm sort of way. This, it turns out, is the guy with the head–but the head is out of commission today and he’s just showing slides: a smiling urethane self-portrait, a tan bot named Andy-roid, a pirate robot with earring and eye patch. Overlook the fact that they’re disembodied heads and they all look remarkably lifelike.

And that, it turns out, makes Hanson’s heads unique. The humanoids that have made news the past few years–Asimo, Grace, Kismet–are fine robots all, talented, versatile, smart, friendly. Asimo, the plastic-suited Honda humanoid, walks on two legs and welcomes visitors to the factory that builds it. Carnegie Mellon’s Grace, a six-foot-tall conglomeration of metal parts on wheels topped with an animated computer-monitor face, registered itself for a conference last year, found its way to the right room, and gave a presentation. Kismet, the media darling of a few years back, looks people in the eye, smiles when they do, and learns just like a baby would, by watching and copying. Who wouldn’t like these three? Other robots are being designed to work as nurses, tutors, servants and companions. But despite their talents, every one of these robots looks … well, like a robot. They’re sometimes appealing in a cartoonish sort of way, but they’re metallic, awkward, clunky.

Not Hanson’s heads. And for that reason, the next morning at 10:30 sharp the reporters are waiting–a roomful of them–and TV cameras are here to capture the debut of Hanson’s latest, most advanced model. Hanson, 33, walks in and sets something on a table. It’s a backless head, bolted to a wooden platform, but it’s got a face, a real face, with soft flesh-toned polymer skin and finely sculpted features and high cheekbones and big blue eyes. Hanson hooks it up to his laptop, fiddles with the wires. He’s not saying much; it might be an awkward moment except for the fact that everyone else is too busy checking out the head to notice. Then Hanson taps a few keys and … it moves. It looks left and right. It smiles. It frowns, sneers, knits its brows anxiously. Now the questions start, and Hanson is in his element: The head’s got 24 servomotors, he says, covering the major muscles in the human face. It’s got digital cameras in its eyes, to watch the people watching it, and new software will soon let the head mimic viewers. Its name is K-Bot, and it’s modeled after Kristen Nelson, his lab assistant.

And K-Bot is a hit. In the weeks following the head’s debut, stories appear in newspapers and on television on six continents. Hanson receives an abundance of e-mails and phone calls: from scientists who want to collaborate, from companies that make prosthetics and surgical-training devices, from movie producers, from companies that make sex dolls. Androidworld.com, a Web site that serves up humanoid parts, software and news, places Hanson’s robot at the top of its list of 22 head projects, enthusing: “WOW–this guy is clearly one of the top head builders in the WORLD.”

For a 33-year-old UTD grad student, it’s an extraordinary burst of attention. But at least in the short term, the whole thing plays out just the way the buzz had billed it: Hanson’s K-Bot serves, for a moment, as a light interlude. No one asks why, of all the roboticists in the world, only Hanson appears to be attempting to build a robotic head that is indistinguishable in form and function from a human. No one points out that he is violating a decades-old taboo among robot designers. And no one asks him how he’s going to do it–how he plans to cross to the other side of the Uncanny Valley.

David Hanson has the sort of mixed pedigree that might just be a prerequisite for tilting at robotic windmills. On his Web page, he identifies himself as a sculptor-roboticist.

He spent two years in the late ’80s as an aimless physics major at the University of North Texas in Denton, where he pursued, as he puts it, “wild imaginative flights of fancy”–for example, turning his apartment into a “tropical paradise” (plants, parrots, tree frogs, a running stream) for a four-day party he and his friends called Disturbathon. “The high jinks,” he says, “were top-notch.” Such projects fueled an absenteeism habit; Hanson’s grades suffered, which cost him his financial aid and forced him to leave school. Then, in 1992, he was accepted at the Rhode Island School of Design, one of the country’s top art schools.

At RISD, Hanson alienated professors by building the Primordial Ooze Bath, an enormous installation in which art patrons crawled, slid, and swam about in gelatinous seaweed extract. Back during his years as a lonely, oddball teenager, Hanson had immersed himself in drawing and sci-fi–Philip K. Dick and Isaac Asimov were favorites–and at RISD he managed, after a fashion, to meld his two passions. He took an artificial-intelligence class at nearby Brown University, and in 1995 he focused an independent-study project on “out-of-body experiences,” building a remotely operated humanoid head on a retractable five-foot stalk. The head, which he sculpted as a self-portrait, wheeled from room to room and chatted with people (via a remote operator). “The idea was always hanging in my mind of turning a sculpture into a smart sentient being,” he says.

After graduation, Hanson worked as an artist for six years, ending up at Disney in Los Angeles, where he sculpted theme-park characters, researched new materials, and hobnobbed with animatronics experts. In 2000, he saw Bar-Cohen speak at a conference on high-tech materials; at the Jet Propulsion Lab, Bar-Cohen was developing electroactive polymers to use as artificial muscles in NASA robots. Inspired, Hanson showed Bar-Cohen his portfolio, and Bar-Cohen decided to take a flier on this talented, motivated Walt Disney artist; he asked Hanson to write a book chapter describing how a network of artificial muscles could animate a robot. In early 2002, Bar-Cohen again tapped the talents of his young protégé, now a grad student at UTD; Bar-Cohen was preparing a presentation to NASA bigwigs on the agency’s emerging robotics technology when he realized he needed some visual oomph–so he sent Hanson a plastic model of a human skull and gave him a week to build a head.

The evening Hanson got the skull, in April 2002, he grabbed a pair of calipers and struck out for a popular bar in an artsy Dallas warehouse district called Exposition Park. There he quickly scanned the room and spotted Kristen Nelson–a willowy blue-eyed brunette he knew casually–chatting with a guy at the bar. Hanson walked past once or twice, and they smiled at each other. Finally he walked up and said hello. “Can I measure your skull?” he asked.

Or maybe that’s not exactly, or entirely, what Hanson asked. By the time of the AAAS conference and the unveiling of K-Bot in February, Hanson’s robot-model instincts had borne fruit beyond his wildest hopes: He and Nelson were engaged. Not surprisingly, their “meet cute” story has seen its share of tellings and retellings; when the two recount the evening, they finish each other’s sentences and expand upon each other’s details. Hanson says he asked Nelson if he could measure her skull. She remembers it slightly differently:

“He asked, ‘Can I make you into a robot?’”

Can he make her into a robot? Can he make a robot into her? Should he even try? A month before Hanson and Nelson launched into barroom banter, Hanson had discovered that the almost universal answer from roboticists to that last question would be a resounding no: David Hanson should not try to make his robot look too much like Kristen Nelson, because to do so would mean risking a tumble into the depths of the Uncanny Valley.

In 1970, a Japanese roboticist named Masahiro Mori published what would become a highly influential insight into the interplay between robotic design and human psychology. Mori’s central concept holds that if you plot similarity to humans on the x-axis against emotional reaction on the y, you’ll find a funny thing happens on the way to the perfectly lifelike android. Predictably, the curve rises steadily, emotional embrace growing as robots become more human-like. But at a certain point, just shy of true verisimilitude, the curve plunges down, through the floor of neutrality and into real revulsion, before rising again to a second peak of acceptance that corresponds with 100 percent human-like. This chasm–Mori’s Uncanny Valley–represents the notion that something that’s like a human but slightly off will make people recoil. Here, there be monsters.
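Mori’s curve is easy to picture. The short Python sketch below plots one hypothetical version of it; the functional form (a smooth ramp with a narrow dip carved out just short of full realism) is invented purely for illustration, since Mori drew his curve freehand rather than from measured data.

# A schematic of Mori's conjectured curve, not measured data. "Likeness"
# runs from 0 (plainly mechanical) to 1 (indistinguishable from human);
# the Gaussian term carves the valley just short of full realism.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

plt.plot(likeness, affinity)
plt.axhline(0, color="gray", linewidth=0.5)  # the "floor of neutrality"
plt.xlabel("Similarity to a human")
plt.ylabel("Emotional reaction")
plt.title("The Uncanny Valley (schematic)")
plt.show()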

Breazeal, creator of Kismet, has, like many of her colleagues, taken both inspiration and warning from the Uncanny Valley. Kismet’s gentle expression and enormous baby-blue eyes are designed to get the robot as close as possible on the acceptance curve to Mori’s first peak, but it’s so indisputably still a robot that there’s no chance of it toppling over the precipice. To relate socially to a machine, Breazeal says, people must accept it. A mechanical human face that doesn’t look quite right is “disquieting,” she says. A realistic face that doesn’t move right would be “doubly creepy.”

Breazeal was the first to let Hanson know he was setting off into this uncharted territory. Hanson met her at a conference in early 2002 and struck up a conversation about robotic heads. “She seemed to totally reject the notion of reproducing the human face,” he says. “I felt a little bit sad that this hero of mine would hold a view that was so opposite to my own. But I did feel defiant as well. And I felt a certain pleasure, like I was onto something.”

The first head Hanson built for Bar-Cohen–Andy-roid–is, frankly, a rudimentary prototype. A mere four servos allow it to make just a handful of rather unconvincing expressions. Once he finished with that model, though, Hanson plunged heedlessly into a pursuit of robotic verisimilitude way beyond anything ever attempted. He pored over Gray’s Anatomy and clicked obsessively through medical Web sites, noting the major human facial muscles, from the occipitofrontalis, which elevates the eyebrow and wrinkles the forehead, to the depressor anguli oris, which pulls the corners of the mouth down into a frown. He took in the pioneering work of psychologist Paul Ekman, who has classified thousands of facial expressions, specifying which combinations of individual facial muscles move in what manner to create each one; he pondered the mechanics of how specific muscles and tendons and ligaments work together to move portions of the face. He studied the facial form, composition, proportions and contours of everyone he knew; he spent hours in front of the mirror making faces.

Next he experimented with plastic molds and materials, fitting the head from the inside with 24 servomotors, two microprocessors, anchors, and nylon fishing line to tug on the skin. Then he wired the head and programmed the software to control it.
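To get a feel for how such a rig might be driven, consider the minimal, hypothetical sketch below: each expression is stored as a recipe of muscle activations, in the spirit of Ekman’s catalog, and each activation is converted into the pulse width that a standard hobby servomotor reads as a position. The servo names, recipes and numbers are all invented for illustration; this is not Hanson’s actual code.

# A hypothetical sketch of the control scheme described above, not
# Hanson's actual software. Each expression is a recipe of muscle
# activations; each activation maps onto the pulse width a standard
# hobby servo interprets as position. All names and values invented.

EXPRESSIONS = {
    # muscle-servo name: activation from 0.0 (rest) to 1.0 (full pull)
    "smile": {"zygomaticus_major_L": 0.8, "zygomaticus_major_R": 0.8,
              "orbicularis_oculi": 0.3},
    "frown": {"depressor_anguli_oris_L": 0.7, "depressor_anguli_oris_R": 0.7,
              "corrugator": 0.4},
    "worry": {"occipitofrontalis": 0.6, "corrugator": 0.5},
}

def pulse_width_us(activation: float) -> int:
    """Map a 0-1 activation onto the 1000-2000 microsecond pulse range
    that standard hobby servomotors read as position."""
    clamped = max(0.0, min(1.0, activation))
    return int(1000 + 1000 * clamped)

def command_for(expression: str) -> dict[str, int]:
    """Pulse width to send each named servo; servos not in the recipe
    stay at rest (1000 us)."""
    return {servo: pulse_width_us(level)
            for servo, level in EXPRESSIONS[expression].items()}

print(command_for("smile"))
# {'zygomaticus_major_L': 1800, 'zygomaticus_major_R': 1800,
#  'orbicularis_oculi': 1300}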

It was tough going. To get accurate, believable expressions, Hanson fiddled endlessly with the placement of the servomotors and the lines that tug the skin. The urethane skin of his early prototype heads was too stiff and heavy for the servomotors to move, and so he had invented a new polymer, which he dubbed F’rubber (foam + rubber x Fred MacMurray). Now, he and Nelson worked to perfect the F’rubber formula, mixing 970 combinations of ingredients in the bathroom of the apartment they’d moved into in Hollywood, until they found one polymer that was elastic and flexible and remarkably stable. A month before K-Bot’s February debut, Hanson spent three to four hours a day for a week in a local hardware store, piecing together brass plumbing parts to make a movable neck for the robot. Clerks asked him repeatedly if he was all right. Ultimately, Hanson built K-Bot with $400 in parts from hobby, crafts and hardware stores, paid for by his student loans.

As Hanson’s work progressed, it became ever more clear that making lifelike robot heads meant more than building a convincing surface and creating realistic facial expressions. So late last year he began to consider K-Bot’s brain. The Internet led him to a Los Angeles company, Eyematic, which makes state-of-the-art computer-vision software that recognizes human faces and expressions. Hanson sought out co-founder and chief technology officer Hartmut Neven, who gave him a beta version of the software. Then, through a mutual acquaintance Hanson had met at a scientific conference, he approached Jochen Triesch, a cognitive scientist at the University of California, San Diego, who was using robot heads to test theories about the mental processes underlying vision and rudimentary social skills. Also at UCSD was Javier Movellan, who was working on technologies that would allow a social robot to tutor schoolchildren. Hanson began commuting regularly to UCSD from Hollywood, three hours each way by train.

One day this spring, Hanson and I visited Movellan’s UCSD lab, a sunny room crowded with books and art and people and computers. Movellan has asked Hanson to build him a head, and is hoping to give it social skills. He and Marian Bartlett, a cognitive scientist who co-directs the UCSD Machine Perception Lab, have collaborated on software featuring an animated schoolteacher who helps teach children to read. The child reads text on the screen. The schoolteacher can recognize if the child looks frustrated, and soon will be able to respond verbally. The character also makes expressions that correspond to the story the child is reading. Movellan plans to program one of Hanson’s heads to do what the teacher character does, then test it with children. The scientific question, Hanson says, is “whether people respond more powerfully to a three-dimensional embodied face versus a computer-generated face.”
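In outline, the loop such a tutor runs is simple. The toy sketch below is hypothetical: it fakes the frustration signal from reading time and error counts, where the real UCSD system infers it from the child’s face with computer vision, and every name and threshold here is invented.

# A toy, hypothetical sketch of the tutoring loop described above. The
# real UCSD system reads frustration from the child's face; this stub
# fakes it from reading time and error counts instead.
from dataclasses import dataclass

@dataclass
class Page:
    text: str
    mood: str  # the expression matching the story: "happy", "scared", ...

def frustration(reading_time_s: float, errors: int) -> float:
    """Stand-in for an expression classifier; returns 0 (calm) to 1."""
    return min(1.0, 0.1 * errors + 0.02 * max(0.0, reading_time_s - 20))

def tutor(pages: list[Page]) -> None:
    for page in pages:
        print(f"[face: {page.mood}] {page.text}")
        # Pretend the child took 35 seconds and stumbled three times:
        if frustration(reading_time_s=35.0, errors=3) > 0.5:
            print("[voice] Take your time. Shall we read it together?")

tutor([Page("The dragon sneezed, and the castle shook.", "surprised")])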

Inspired by this sort of practical use of his human-like robotic head, Hanson has taken to calling K-Bot “a face for social robotics,” and says he’s “throwing down a glove” for robotics engineers. This is why he has little patience for the Uncanny Valley: It’s a concept that plays on fear rather than possibility, that asserts we should shy away from making robots look too human, rather than asking what positive benefits there might be to the truly lifelike robot. “Achieving the subtlety of human appearance is a challenge that should really be undertaken,” he says. Only realistic heads will challenge AI researchers to integrate the various robot capabilities–adaptive vision, natural language processing and more–to create “integrated humanoid robotics,” Hanson says.

A face robot like K-Bot could also help psychologists figure out exactly which facial movements convey one person’s fear, sadness, anger or joy to the mind of another. Today, psychologists try to do that by seeing how people interpret the raised eyebrows, furrowed brows and other expressions of actors in video clips or animated characters, says psychologist Craig Smith of Vanderbilt University. But even actors have difficulty precisely manipulating their expressions, so the experiments aren’t always completely controlled, and animated characters may be too unrealistic. A humanoid head that makes accurate facial expressions, in which every facial movement could be precisely controlled, would enable researchers to find out, in three dimensions and in real time, the purpose of specific facial muscles in communicating emotion, Smith says. That, he says, would solve a mystery that’s “been a puzzle since Darwin.”

Late on the afternoon of our visit to Movellan’s lab, Hanson and Triesch sat in the courtyard of a campus coffee shop, a cool breeze rustling the eucalyptus trees. They’d been planning to write a scientific paper about Hanson’s facial robots but hadn’t decided how to focus it. “What if we write a paper on how to cross the Uncanny Valley?” Hanson suggested. Triesch stretched out his long legs, looked at Hanson, and nodded: “I think it would be great.”

Soon the two were in Triesch’s conference room, plotting the Uncanny Valley on a white marker board. Hanson pointed to the lip of the valley. “Mori says, ‘Go here. Don’t go further. Don’t, no matter what you do, go further!’” he said. Triesch’s brow furrowed. Realism can’t be plotted on one axis, Hanson continued; it depends on shape, timing, movement and behavior. The idea, he said, is “really pseudoscientific, but people treat it like it’s science.”

Indeed, despite its status as dogma, the Uncanny Valley is nothing more than a theory. “We have evidence that it’s true, and evidence that it’s not,” says Sara Kiesler, a psychologist at Carnegie Mellon University who studies human-robot interaction. She calls the debate “theological,” with both sides arguing with firm convictions and little scientific evidence, and says that the back-and-forth is most intense when it comes to faces. “I’d like to test it,” she says, “with talking heads.”

In a pivotal 2001 paper in Science, Olaf Sporns of Indiana University in Bloomington and six other leading roboticists described a new breed of robots that navigate the world and learn on their own. The new bots, which include Sporns’s Darwin V, have mobile bodies and sensors that let them perceive their environment, much as we do. They’re endowed with a developmental program that starts learning at birth. And they need human caretakers to teach them what they need to know about the world. As Triesch, who programs his robots on similar principles, says, “We’re getting more into raising robots like children.”

It will take decades at least to raise robots that are as smart and independent as we are, but the work has begun. Robots that learn on their own, robots that walk, robots that socialize with people, are all now in various stages of development. “A realistic autonomous humanoid is the Holy Grail,” Sporns says. And, on the far side of the Uncanny Valley, robots would have a realistic, emotionally expressive face–a face that challenges robot-brain builders to make smarter robots, a face that fools us into treating a machine as if it were human. A face a person could grow attached to.

In his 2002 book, Flesh and Machines, leading MIT roboticist Rodney Brooks, who oversaw Kismet’s development, writes that “mankind’s centuries-long quest to build artificial creatures is bearing fruit.” We’ll have different relationships with these machines than with any earlier machines, he suggests. “The coming robotics revolution,” Brooks writes, “will change the fundamental nature of our society.”

On a cool, sunny day this past spring, Hanson, Nelson and I scrambled up and down the steep rises and canyons of Griffith Park, the ubiquitous Hollywood sign perched on a nearby hillside, the sky bright blue above the L.A. smog. We rested on a hilltop, where we could see for miles. Humans are facing an identity crisis, Hanson said, one that just a few people know about but many sense. “If we can mechanize what makes us human, that will make us feel like a mechanism,” he said. Maybe that’s what really lies behind the resistance to realistic humanoids, the reluctance to venture into the Uncanny Valley. And when we do cross over? At the February AAAS conference, someone asked Hanson his ultimate goal. A compassionate robot, he said: a peer, a friend. The goal, he said, is “letting it loose.”

Dan Ferber is a freelance writer based in Urbana, Illinois. He is a contributing correspondent for Science.

ROCKET OF DOOM

Armadillo Aerospace, led by videogame programming whiz John Carmack, plans to launch a manned suborbital rocket by the end of next year. Unfortunately for us, Carmack left his spaceship-in-progress back home in the shop. He did, however, show an early Armadillo design for a stainless-steel, hydrogen-peroxide-powered rocket engine [left]. In addition, he brought his aluminum, liquid-oxygen-and-ethanol engine that will power the suborbital ship [right]. Armadillo welder James Bauer [center] holds the engine design used by the small-scale Armadillo rocket flown at the X Prize Cup.

VON BRAUN WOULD BE PROUD

Members of the Tripoli amateur rocketry association polish a one-third-scale model of the World War II-era German V2 rocket. Tripoli’s planned launches at the X Prize Cup were scrubbed because of high winds. The group was demonstrating its technology but is not planning a larger model for human flight.

REAL HARDWARE

The solid-fueled SpaceLoft, built by UP Aerospace and shown here with company president Jerry Larson, was the only rocket at the X Prize Cup actually capable of reaching space. Its predecessor, the GoFast rocket, became the first amateur-built rocket to reach space in May of last year.

THE BALLOON STRATEGY

Brian Feeney, a self-taught industrial designer from Toronto, has put together what he calls the world’s largest volunteer engineering effort, the da Vinci Project, in a bid to launch himself into space. He’ll ascend to 80,000 feet in this capsule-and-rocket-block combo dangling from the bottom of the world’s largest helium balloon, at which point he’ll light the rocket to reach space. After reentry the balloon, capsule, and rocket block will land separately on parachutes.

PROBLEMATIC TEST FLIGHT

Armadillo’s 10-foot-tall technology demonstrator took off, hovered, and then descended gently under power, just like its planned suborbital big brother. After the first of its three planned flights at the X Prize Cup, though, the rocket touched down with only one of its four legs on its steel-plate landing pad. The other legs sank into the mud beside the pad, and the rocket tipped over and crashed onto its side. It was damaged too badly to fly again.

CANADIAN MOCK-UP

Shown here, a full-scale model of the V2-derived manned suborbital rocket under construction by Canadian Arrow. Like Armadillo, the Canadians left their actual flight hardware at home.

ENGINE EXPLOSION

Starchaser’s three-man space capsule will ride to suborbital space atop an 80-foot rocket. The rocket’s engine exploded on the stand at the close of the X Prize Cup, making EZ Rocket’s two flights the only successful demonstrations at the event.

THE NEXT BEST HOPE FOR SPACE

Rocketplane Ltd. in Oklahoma may be the next private company to fly people into space. Flight testing of its jet-and-rocket-engine hybrid is scheduled to begin next year, with suborbital tourist flights to begin in 2007. The rocketplane will jet to 20,000 feet before lighting a liquid-fueled rocket engine in its tail for the boost to space. After reentry it will restart its twin jet engines for landing.

READY FOR THE RACES

The only manned craft to fly at the X Prize Cup was XCOR’s EZ Rocket, a modified Long-EZ airplane powered by twin liquid-oxygen-and-alcohol-burning rocket engines and flown by former space shuttle commander Rick Searfoss. The EZ Rocket is the prototype for a quartet of planned Rocket Racers that are to fly at next year’s X Prize Cup.