Path 1: Interaction
The robot, whom Jack has (unoriginally) dubbed Jeeves, looks me in the eye and says, "Hello." A surge of envy stops me from answering. I've heard about these new humanoids, but they're far out of my price range.

"This is Greg," Jack cuts in. "He's a friend of mine."

"Nice to meet you, Greg," Jeeves says in a pleasantly low voice. "May I take your bag? It looks heavy."

I consider. "Sure. How about a beer while you're at it?"

Jeeves tilts his head to one side. His mechanical brow furrows. Jack rephrases my order: "Greg would like you to bring him a beer."
Getting along with robots, experts agree, won't be all that hard. Colin Angle, the CEO of iRobot in Burlington, Massachusetts, says that 60 percent of Roomba owners feel close enough to their robotic vacuum cleaners to give them names (Jeeves and Rosie are the most common). For more advanced machines, scientists working in the nascent field of human-robot interaction have shown that seemingly minor social cues greatly increase people's comfort levels. A raised eyebrow or
tilted head can go a long way toward making humanoids seem more human. And since we get suspicious when someone doesn't look us in the eye, robots will definitely meet our gaze.
These won't be just vapid stares. A robot butler, on encountering a new face, will scan it, comparing the skin tone and prominent features to entries in its digital library of faces. If the robot has met the person before, it will know. If not, it will probably ask the stranger his name and then store that information so it can greet him in a more familiar way the next time.
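That recognize-or-remember loop can be sketched in a few lines. This is a hypothetical illustration, not any real robot's software: the feature vectors, the distance threshold, and the stranger's name ("Greg") are all invented stand-ins for the skin-tone and facial-feature comparisons described above.

```python
import math

# Hypothetical face library: names mapped to simple feature vectors
# (stand-ins for skin tone and prominent-feature measurements).
FACE_LIBRARY = {
    "Jack": [0.62, 0.31, 0.78],
    "Rosie": [0.45, 0.52, 0.66],
}
MATCH_THRESHOLD = 0.15  # assumed tolerance for declaring a match


def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def greet(features, stranger_name="Greg"):
    """Greet a familiar face, or learn an unfamiliar one for next time."""
    best_name, best_dist = None, float("inf")
    for known, vec in FACE_LIBRARY.items():
        d = distance(features, vec)
        if d < best_dist:
            best_name, best_dist = known, d
    if best_dist <= MATCH_THRESHOLD:
        return f"Nice to see you again, {best_name}."
    # Stranger: in practice the robot would ask aloud for a name,
    # then store the face so the next greeting is familiar.
    FACE_LIBRARY[stranger_name] = features
    return f"Nice to meet you, {stranger_name}."
```

A near-match to a stored face produces the familiar greeting; anything outside the threshold gets filed away under the newcomer's name.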
Hartwig Holzapfel, a computer scientist and linguist at the University of Karlsruhe in Germany, is already building this basic interactive functionality into a humanoid called Armar-3. The next big challenge, he says, will be creating robots that understand our commands. The translation process will probably start with a speech-recognition system that interprets the words in the request. The text will then be compared with a library of phrases stored in the robot's memory. If the phrase is too ambiguous and there's no obvious match, the robot could ask for clarification. Or it might simply produce a questioning expression. Finally, once the speaker rephrases the order into a recognizable command, the robot identifies a match, which activates a series of algorithms that start it on a path to, say, the fridge.
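The match-or-clarify step of that pipeline might look something like the sketch below. The phrase library and command strings are invented for illustration; a real system like Armar-3's would sit downstream of an actual speech recognizer.

```python
# Hypothetical sketch of the pipeline described above: transcribed
# speech is compared against a library of stored phrases; a request
# with no match triggers a clarification instead of an action.
PHRASE_LIBRARY = {
    "bring me a beer": "fetch(item='beer', location='fridge')",
    "take my bag": "carry(item='bag')",
}


def handle(transcript):
    """Map recognized speech to a stored command, or ask to rephrase."""
    request = transcript.lower().strip()
    for phrase, command in PHRASE_LIBRARY.items():
        if phrase in request:
            return command  # would activate the robot's motion routines
    return "CLARIFY: Could you rephrase that?"
```

This mirrors the scene at the door: Greg's casual "How about a beer?" falls outside the library and draws a furrowed brow, while Jack's rephrased order lines up with a stored phrase and sends Jeeves toward the fridge.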
For some scientists, though, this level of interaction just isn't deep enough. At the Massachusetts Institute of Technology Media Lab, cognitive scientist Deb Roy and his team are training their robot, Trisk, to attach significance to words by grounding definitions in experience. Instead of simply programming the meaning of the word "weight" into Trisk's brain, for example, they have the robot lift objects to experience their relative heaviness. Roy's work could lead to robots that understand what we're saying, not because a definition is programmed into their CPUs but because they can match words to their own experience.
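The idea of grounding a word in measurement rather than definition can be sketched as follows. This is a toy illustration in the spirit of Roy's approach, not Trisk's actual software: the objects, effort numbers, and the median-based notion of "heavy" are all assumptions.

```python
# Hypothetical sketch of grounded meaning: instead of storing a
# dictionary definition of "heavy", the robot lifts objects, records
# the effort each required, and anchors the word to its own data.
experience = {}  # object name -> lift effort the robot measured


def lift(obj, measured_effort):
    """Simulate lifting an object and remembering the effort it took."""
    experience[obj] = measured_effort


def is_heavy(obj):
    """'Heavy' means: took more effort than the median object so far."""
    efforts = sorted(experience.values())
    median = efforts[len(efforts) // 2]
    return experience[obj] > median


lift("cup", 0.2)
lift("bag", 3.5)
lift("book", 0.8)
```

After three lifts, the robot calls the bag "heavy" only because it demanded more effort than most of what it has handled, which is exactly the experiential anchoring a canned definition lacks.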