Could Robots Take Over the World?
Could robots take over the world? That's the premise of this summer's I, Robot. And AI researchers aren't scoffing.
Photo: courtesy Twentieth Century Fox
Imagine a world where robots deliver your mail, collect your garbage, clean your house, even play with your children. They are everywhere.
Now imagine that they turn on you.
This is the future presented in the SF thriller I, Robot (opening July 16), which is loosely based on Isaac Asimov's 1950 collection of short stories. In 2035, robots make up 20 percent of the population and pervade nearly every aspect of life. They are implicitly trusted because they are hardwired with what Asimov called the Three Laws of Robotics: rules that force them to protect humans above all else. The seemingly perfect circle of protection breaks down, however, when a maniacal computer brain reinterprets the three laws, reasoning that the best way to protect humans is to rule them. Standing against this strictly logical, rebellious machine is Sonny, a new kind of robot designed to evolve and learn through experience and emotion.
We're still years away from such a scenario, but the two varieties of robot featured in the movie do in fact mirror the reigning programming paradigms being pursued by today's artificial-intelligence and robotics researchers.
The classical approach is to model reasoning and other forms of higher-order thinking by creating logic-based rules systems. Deep Blue, the brute-force program that beat chess champion Garry Kasparov at his own game in 1997, is the best-known example. This computational style is what governs the rebellious robots in the movie. Sonny incorporates another approach: a biologically inspired, bottom-up technique in which intelligence "emerges" as a robot encounters new problems and learns from its experiences. MIT roboticists Rodney Brooks and Cynthia Breazeal are each building robots that learn and interact with the world from "birth."
Over the past 20 years, the biological approach has been gaining momentum, because the classical method has proved limited in its ability to respond to complex real-world scenarios. “The problem is that logic and rules have to be applied to data,” says Roger Clarke, a computer scientist at Australian National University. Creating a data set that enables a robot to navigate a flight of stairs is one thing; making one that governs interactions with people would be devilishly difficult, Clarke explains.
As Asimov predicted, AI researchers are up against a paradox: To be truly useful, robots must be able to make their own decisions, but as soon as you give them autonomy, you give them the ability to disobey. "The more sophisticated an organism becomes, the more difficult it is to legislate rules to govern its behavior," says I, Robot director Alex Proyas. "And if robots are functioning as humans do, then the rules will need to be a lot more sophisticated than Asimov's three laws."
Which is exactly what Carnegie Mellon University machine-learning expert Tom Mitchell has in mind. Mitchell imagines programming "sanity checks" into future robots so that before a command is carried out, it must be run through a very complex set of laws that could override incompatible or malfunctioning commands. Robots could be reprogrammed, and the laws updated, as often as necessary.
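In software terms, a sanity check like the one Mitchell describes is a filter that every command passes through before execution, where any rule in the set may veto or rewrite it. The sketch below is purely illustrative: the command format, the two example "laws," and their names are assumptions invented for this example, not anything from a real robot control system.

```python
# Illustrative sketch of a "sanity check" layer: each law inspects a
# command (a plain dict here, a hypothetical format) and may return a
# replacement command; the first law that fires overrides the original.

def sanity_check(command, laws):
    """Run a command through every law; any law may override it."""
    for law in laws:
        verdict = law(command)
        if verdict is not None:   # this law objected: use its override
            return verdict
    return command                # no law objected: execute as given

# Two toy laws (hypothetical examples of rules that could be updated
# or reprogrammed over time, as the article suggests).
def no_harm(command):
    """Refuse any command that would strike a human."""
    if command.get("action") == "strike" and command.get("target") == "human":
        return {"action": "halt"}
    return None

def speed_limit(command):
    """Clamp movement speed rather than refusing the command outright."""
    if command.get("speed", 0) > 5:
        return dict(command, speed=5)
    return None

laws = [no_harm, speed_limit]
```

A dangerous command such as `{"action": "strike", "target": "human"}` would come back as `{"action": "halt"}`, while a too-fast move command would be clamped to the speed limit instead of refused, showing that an override can either veto or merely amend.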
Not everyone has faith in rules-based governance, however. Stuart Russell, a UC Berkeley AI professor, considers all the solutions proposed so far to be completely inadequate. “What’s to stop some intelligent robots from getting together and rewiring themselves so that the safeguards don’t work?” he asks.
For now, though, these matters remain purely academic, as reality lags far behind science fiction. Today's smartest robots hardly approach the intelligence of our pets. "We can't even build a machine as smart as a dog," MIT Media Lab's Rosalind Picard says. "I don't think this is something we need to be afraid of anytime soon. Besides," she adds, "we can always just pull the plug."