5 paths to the walking, talking, pie-baking humanoid robot
It’s striding toward us from the kitchen, smoothly and silently. As I set down my overnight bag and turn to question my friend Jack, it ambles gracefully into the foyer. I can sense Jack watching me out of the corner of his eye, looking for a reaction as his newest purchase stops and stands beside us on two thin mechanical legs and clasps two four-fingered hands behind its back. It’s smaller than the average person, lithe, entirely unthreatening; I could take it in a fight. The face isn’t human, but it’s not the face of an appliance either. And like any good butler, it (he?) humbly bows.
The robotic butler is coming. It’s being developed, piece by piece, in university labs and government research centers across the globe. Some scientists are designing dexterous fingers; some are focused on creating sensitive yet rugged artificial skin. Groups in Japan and Germany, at Cornell University and at tiny Olin College in Needham, Massachusetts, are trying to solve the extraordinarily difficult problem of getting a machine to walk efficiently and effectively on two legs. And then there’s the matter of building a robot brain, giving a machine the kind of artificial intelligence that allows it to control its limbs and fingers, interact with family and strangers and, perhaps most important, bake a perfect apple pie.
Of course, many top roboticists are motivated by more immediate concerns than the future market for household robo-servants. Cornell engineer Andy Ruina, for example, builds walking robots so he can better understand human locomotion and find a way to assist the elderly when their legs start to fail. But others, like Carnegie Mellon University computer scientist James Kuffner and Olin engineer Gill Pratt, are convinced that the disparate work being done today will eventually converge into a single platform: a machine, Pratt says, that will mow the lawn and rid us of the tedium of housework, a real-life Rosie the Robot. Developing this multitasking, human-shaped machine will require breakthroughs in five key areas: interaction, locomotion, navigation, manipulation and intelligence. Luckily, the past few years have seen an explosion of research and advances in each one.
Path 1: Interaction
The robot, whom Jack has (unoriginally) dubbed Jeeves, looks me in the eye and says, “Hello.” A surge of envy stops me from answering. I’ve heard about these new humanoids, but they’re far out of my price range. “This is Greg,” Jack cuts in. “He’s a friend of mine.” “Nice to meet you, Greg,” Jeeves says in a pleasantly low voice. “May I take your bag? It looks heavy.” I consider. “Sure. How about a beer while you’re at it?” Jeeves tilts his head to one side. His mechanical brow furrows. Jack rephrases my order: “Greg would like you to bring him a beer.”
Getting along with robots, experts agree, won’t be all that hard. Colin Angle, the CEO of iRobot in Burlington, Massachusetts, says that 60 percent of Roomba owners feel close enough to their robotic vacuum cleaners to give them names (Jeeves and Rosie are the most common). For more advanced machines, scientists working in the nascent field of human-robot interaction have shown that seemingly minor social cues greatly increase people’s comfort levels. A raised eyebrow or
tilted head can go a long way toward making humanoids seem more human. And since we get suspicious when someone doesn’t look us in the eye, robots will definitely meet our gaze.
These won’t be just vapid stares. A robot butler, on encountering a new face, will scan it, comparing the skin tone and prominent features to entries in its digital library of faces. If the robot has met the person before, it will know. If not, it will probably ask the stranger his name and then store that information so it can greet him in a more familiar way the next time.
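The greet-or-enroll loop described above can be sketched in a few lines. This is a toy illustration, not any real system's code: the four-number "embeddings," the similarity threshold, and the `FaceLibrary` class are all invented stand-ins for what a real face-recognition model would produce.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class FaceLibrary:
    def __init__(self, threshold=0.95):
        self.known = {}          # name -> stored embedding
        self.threshold = threshold

    def identify(self, embedding):
        """Return the best-matching name, or None for a stranger."""
        best_name, best_score = None, self.threshold
        for name, stored in self.known.items():
            score = cosine_similarity(embedding, stored)
            if score >= best_score:
                best_name, best_score = name, score
        return best_name

    def enroll(self, name, embedding):
        """Store a stranger's face so the next greeting is familiar."""
        self.known[name] = embedding

lib = FaceLibrary()
lib.enroll("Jack", [0.9, 0.1, 0.3, 0.2])
print(lib.identify([0.9, 0.1, 0.3, 0.2]))   # a known face -> "Jack"
print(lib.identify([0.1, 0.9, 0.2, 0.8]))   # a stranger -> None
```

When `identify` returns `None`, the robot would ask for a name and call `enroll`, so the next visit gets a familiar greeting.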
Hartwig Holzapfel, a computer scientist and linguist at the University of Karlsruhe in Germany, is already building this basic interactive functionality into a humanoid called Armar-3. The next big challenge, he says, will be creating robots that understand our commands. The translation process will probably start with a speech-recognition system that interprets the words in the request. The text will then be compared with a library of phrases stored in the robot’s memory. If the phrase is too obscure and there’s no obvious match, the robot could ask for a clarification. Or it might simply produce a questioning expression. Finally, once the speaker rephrases the order into a recognizable command, the robot identifies a match, which activates a series of algorithms that start it on a path to, say, the fridge.
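A toy version of that interpretation loop: recognized text is matched against a library of stored phrases, and a weak match triggers a request for clarification. The phrase library, the command strings, and the similarity cutoff here are all made up for illustration.

```python
import difflib

# Hypothetical phrase library: spoken phrase -> internal command.
COMMAND_LIBRARY = {
    "bring me a beer": "FETCH(beer, fridge)",
    "take my bag": "CARRY(bag, closet)",
    "start dinner": "COOK(menu)",
}

def interpret(utterance, cutoff=0.6):
    """Match an utterance to a stored phrase, or ask for a rephrase."""
    matches = difflib.get_close_matches(
        utterance.lower(), COMMAND_LIBRARY.keys(), n=1, cutoff=cutoff)
    if matches:
        return COMMAND_LIBRARY[matches[0]]
    return "CLARIFY"  # tilt head, furrow brow, wait for a rephrase

print(interpret("Bring me a beer"))                # FETCH(beer, fridge)
print(interpret("Run along and fetch that beer"))  # CLARIFY
```

This is why Jeeves's brow furrows at "How about a beer while you're at it?" until Jack rephrases it into a form the library recognizes.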
For some scientists, though, this level of interaction just isn’t deep enough. At the Massachusetts Institute of Technology Media Lab, cognitive scientist Deb Roy and his team are training their robot, Trisk, to attach significance to words by grounding definitions in experience. Instead of simply programming the meaning of the word “weight” into Trisk’s brain, for example, they have the robot lift objects to experience their relative heaviness. Roy’s work could lead to robots that understand what we’re saying, not because a definition is programmed into their CPUs but because they can match words to their own experience.
Path 2: Locomotion
After depositing my bag in a closet, Jeeves walks toward the basement, where Jack keeps an extra fridge for drinks. “Run along and fetch that beer,”
I yell after Jeeves, starting to enjoy this whole robo-servant thing. Jeeves interprets my command literally (stupid robot!) and accelerates to a jog. He turns the corner into the kitchen and hurries down the basement steps, grabs a bottle, and sprints back upstairs.
Today’s best-known humanoid, the mini Storm Trooper-like Honda Asimo, can run, climb stairs, even do the hula. And because it’s designed not to fall over, it’s safe. But with motors in every joint controlling every motion of its limbs, Asimo devours energy. Its battery life can be as short as 30 minutes; it wouldn’t last through a dinner party.
Recently, though, roboticists have started approaching biped locomotion from another angle. Last year, several groups working in concert introduced machines designed to walk in the loose, free-swinging fashion of humans. Instead of driving all the motions with motors and carefully calibrating each step, the leg action is more like that of a pendulum. The result is walking robots that are much more efficient. Unfortunately, they’re also much less stable. Andy Ruina, the Cornell engineer who led one of the teams, is quick to criticize his group’s robot: “It can only do one thing. It can just walk in a straight line. It can’t even stand up.”
For a humanoid to be able to race through a house without falling over and finish its chores on a single charge, researchers need to find the optimal point between stability and efficiency. While some roboticists point, as a future solution, to the development of artificial muscles (materials that contract or expand in response to an electric charge or laser pulse), new, improved actuators could provide a more immediate fix. If smaller, less power-hungry systems were driving Asimo’s legs, it wouldn’t need to run to the recharging station as often.
Path 3: Navigation
As Jeeves hustles through the kitchen with the beer, the family dog, a yapping terrier who hasn’t yet adjusted to the new help, plants itself in a doorway and refuses to budge. The robot, avoiding confrontation, opts for an alternative route. He turns, steps over a childproof gate into the family room, and encounters a minefield: blocks and stuffed animals strewn about by Jack’s two-year-old. Jeeves has a standing order to pick up such things (in addition to vacuuming, scrubbing the floors, dusting, and washing the dishes), but the beer is a higher priority right now. He gingerly steps through the room, careful to avoid each toy, and delivers the bottle.
There’s significant debate among roboticists about whether a robo-butler would need legs, since even wheelchairs can now climb stairs, but there are some distinct advantages to the bipedal approach. A wheeled humanoid might have to clear a path for itself before rolling through a cluttered room. A biped could just tiptoe its way through, and also climb over obstacles.
Whether walking or rolling, the robo-butler is going to have to find its way around. First, it will need the right hardware to sense obstacles. Some
scientists advocate loading humanoids with sensors (laser range finders, infrared, 3-D vision) that would provide a detailed, continuously updated, 360-degree model of the shape, size and placement of everything in a room. Then there are the purists who believe that a humanoid should not have abilities above and beyond the most direct human equivalent. This view, more common in Japan than in the U.S., is a quasi-philosophical devotion to the challenge of building a mechanized human. This camp says that a robo-butler should walk on two legs because that’s how humans do it, not because legs are better than wheels. To this group, dispatching laser pulses to estimate the distance of objects instead of just relying on binocular vision, as humans do, would undermine the effort to replicate human abilities in a machine. It would be cheating.
James Kuffner, who does path-planning work with both virtual and real humanoids, likens the decision-making process associated with robot navigation to chess. When crossing a room, the robot uses object-recognition software to help it determine what could be moved out of the way and what it would need to circumvent: an ottoman versus a large couch. Each potential step is considered in terms of the long-term outcome: getting to the other side of the room. The robot picks a path that is the right mix of fast, safe and efficient, and starts to walk. All the while, it updates its model of the environment, checking to see if anything has changed and ensuring that it has chosen the best route.
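The ottoman-versus-couch distinction can be captured in a tiny cost-aware search sketch, assuming a made-up grid world: free floor costs 1 to cross, movable clutter like an ottoman costs extra (you have to push it aside), and fixed furniture like a couch must be circumvented entirely. Real planners work over far richer world models; this just illustrates the trade-off.

```python
import heapq

# Hypothetical cell types: "." free, "o" movable (pricey), "#" impassable.
COST = {".": 1, "o": 5, "#": None}

def plan(grid, start, goal):
    """Uniform-cost search: return the cheapest path as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    best = {start: 0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            step = COST[grid[nr][nc]]
            if step is None:            # a couch: must go around
                continue
            new_cost = cost + step
            if new_cost < best.get((nr, nc), float("inf")):
                best[(nr, nc)] = new_cost
                heapq.heappush(
                    frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return None  # no route at all

room = ["..#..",
        "..#o.",
        "....."]
print(plan(room, (0, 0), (0, 4)))  # detours around "#" and skips "o"
```

Re-running `plan` every few steps, as the model of the room updates, gives the continuous re-checking Kuffner describes.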
Path 4: Manipulation
In the living room, Jeeves asks if there are any other requests. “When will dinner be ready?” Jack wonders. He and his wife, Mia, who should be home soon, have requested one of their favorite dishes, veal sautéed in garlic and olive oil. “At 7:00,” Jeeves answers. After quickly picking up the mess in the family room, Jeeves removes the ingredients from the refrigerator and starts to prep. The olive oil is in a cabinet above the stove. Jeeves’s fingers and palm, covered in sensitive artificial skin, act almost as a second pair of eyes as he reaches out, carefully touches his fingers to the bottle, and picks it up gently by the neck.
Autonomous manipulation-the ability to grasp and study unknown objects without crushing or dropping them-is a growing area of research. It furthers intelligence work by outfitting robots with the tools they need to interact with and learn about their environment, but it’s also important on a more basic level. We’re going to want our humanoid to push a vacuum, stack and empty a dishwasher, open doors, and use knives to chop garlic and parsley. To do all that, it will need hands.
NASA’s Robonaut, designed to perform maintenance and repair work on the International Space Station, has thin, human-size hands capable of wielding a variety of tools. And last year, roboticists at the University of Tokyo developed a hand that can catch a ball projected at 186 mph. These are significant advances, but another critical part of manipulation, roboticists say, is feel. “Our skin is a ridiculously good sensor,” observes Oliver Brock, a roboticist at the University of Massachusetts Amherst, who is developing a hand designed to open doors.
At MIT, roboticist Eduardo Torres-Jara is fine-tuning Obrero, a one-armed robot with artificial skin on its fingertips and palm that can not only sense the presence and magnitude of forces applied to it, but the direction from which those pressures are being applied. If a bottle of olive oil were to start slipping out of a robo-chef’s hand, the artificial skin would tell the robot how it’s falling and allow it to recover its grip before the bottle fell to the floor.
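A minimal sketch of that grip-recovery reflex, under invented numbers: the slipping bottle shows up as tangential force on the fingertip, and the robot squeezes harder until friction can hold the load. The friction coefficient, forces, and control step are all hypothetical; a real skin like Obrero's reports rich, directional force maps rather than two scalars.

```python
FRICTION = 0.5  # assumed bottle-on-fingertip friction coefficient

def adjust_grip(normal_force, tangential_force, step=1.0):
    """Increase squeeze force until friction can hold the tangential load."""
    # Coulomb condition: the grip holds while F_t <= mu * F_n.
    while tangential_force > FRICTION * normal_force:
        normal_force += step  # squeeze a little harder
    return normal_force

# The bottle starts to slide: 3 N of sideways load, only 4 N of grip.
print(adjust_grip(normal_force=4.0, tangential_force=3.0))  # -> 6.0
```

The direction information the skin provides tells the controller which fingers to press with; the magnitude tells it how much.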
Path 5: Intelligence
Once he’s set the table for three, Jeeves announces that dinner is ready. Jack, Mia and I are in the middle of a light-hearted argument over the new air-traffic laws for personal sky cars. As I take my seat, Jeeves leans over to set a steaming plate of food before me. There’s a brief lull in the discussion, so I ask Jeeves for his opinion on the matter.
Just how smart a household humanoid will be remains an open question. There are two basic approaches to intelligence, bottom-up and top-down. The former would involve an artificial brain that learns and evolves on its own, acquiring intelligence as it matures. Presumably, this brand of butler would eventually develop a stance on sky-car traffic regulations.
Top-down artificial intelligence, which is popular in American robotics labs, is the workmanlike approach, relying on dedicated algorithms to guide the robot through its tasks. With a top-down brain, the robo-butler probably wouldn’t be able to formulate opinions. It would be more like a personal computer, says Carnegie Mellon’s Kuffner. The owner might start with a scaled-down version capable of basic housework, then add programs as they would on a PC. Instead of Photoshop, though, you’d download Turkey Roasting.
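That PC-like model can be sketched as a registry of installable skills: each household task is a plain program registered by name, and new ones can be installed later. Every skill name here is invented for illustration; this is not how any actual robot software is organized.

```python
SKILLS = {}  # name -> installed task program

def skill(name):
    """Decorator that installs a task program under a given name."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("vacuum")
def vacuum(room):
    return f"vacuuming the {room}"

def run(name, *args):
    """Dispatch a request to an installed skill, if there is one."""
    if name not in SKILLS:
        return f"skill '{name}' not installed"
    return SKILLS[name](*args)

print(run("vacuum", "family room"))  # vacuuming the family room
print(run("roast turkey"))           # not installed yet

# Later: download and install a new program, like adding Photoshop to a PC.
@skill("roast turkey")
def roast_turkey():
    return "preheating the oven"

print(run("roast turkey"))           # preheating the oven
```

Nothing in the registry can answer a question about sky-car regulations, which is exactly the limitation of the top-down approach.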
The intelligence debate may ultimately be decided by consumers. Will they prefer a Jeeves that stays silent until spoken to and communicates only through social cues? Or will they want him to entertain their friends and keep them company in their old age? It’s a choice to be made at some key moment on the path to that humanoid future. “We’re getting closer,” says Kuffner, who contends that we’ll have robots making us meals in, at most, 50 years. “All the technology is improving every year-and at a rapid pace.”
Contributing editor Gregory Mone just bought a Roomba robotic vacuum.