Why A Real CHAPPiE Robot Would Be More Of A Mystery Than A Friend

The movie’s A.I. expert explains what it’ll take to create the first conscious robot–and why it won’t be what we expect

Neill Blomkamp’s new film CHAPPiE, which hits U.S. theaters this weekend, follows the unlikely transformation of a defective robot into a one-of-a-kind conscious machine. The movie is set in a Johannesburg, South Africa, that’s protected by a fully robotic police force. The brilliant designer of these bots, Deon Wilson (played by Dev Patel), isn’t satisfied with soulless automatons–so he secretly works after hours to instill his creations with consciousness.

Thus CHAPPiE, the first thinking robot, is born–inside a defective unit that had just been slated for destruction. The machine begins its new life awed by what we’d normally find mundane. He (it?) soon comes to understand more about the humans who shelter him, and alters his way of thinking over the course of an artificially rapid upbringing.

Blomkamp’s flick ultimately rejects the warnings of bright minds such as Stephen Hawking and Elon Musk and makes an optimistic statement about the future of artificial intelligence. If a robot starts with a blank slate, yet is given consciousness to help it learn and grow, Blomkamp implies it’d be a benevolent bot–a being interested in academic and creative pursuits, and not some SkyNet-like apocalypse machine. Such a robot would also trust the humans that created it, emulate them, and even show love to those who befriend it.

This is a rosy take, and a far cry from the usual way moviemakers present A.I.–as all-powerful, robotic beings bent on destroying or enslaving humans.

If we ever manage to create conscious robots, what type of system can we expect? A robot determined to rid the Earth of its pesky human population? Or one that’d graciously integrate into our society?

We spoke with Dr. Wolfgang Fink, an A.I. researcher at the California Institute of Technology, for some answers. He says the bigger questions will go unanswered for quite some time–we’re nowhere close to creating the robot envisioned in CHAPPiE. According to Fink, who consulted on the film, researchers are still missing the biggest piece of the A.I. puzzle: how to make a system that is both self-aware and environmentally aware. When those traits can be programmed, then we’ll have a truly autonomous system. And it will be like nothing we’ve ever encountered before.

What Is A.I.?

Fink says three different levels of robotic intelligence exist. The first is human-controlled, which is the dynamic behind most robots today. In CHAPPiE, for example, Vincent Moore (played by Hugh Jackman) dons a sensor-embedded helmet to control a giant military robot called MOOSE. “It’s nothing but a mechanical extension or a tele-presence of the human,” Fink explains. “The intelligence is instilled by the human.”

The next tier is what Fink actually considers to be genuine A.I.: a pre-installed, rule-based system. Equipped with a list of situations the system might encounter, a robot can react (appropriately or disastrously) to a variety of pre-programmed scenarios. CHAPPiE plays up this level of intelligence with Johannesburg’s mechanical police force. The bot cops follow a set of rules defining how a law enforcement officer should behave to keep citizens safe, and there’s no real improvisation.

The hurdle with this type of intelligence–if you’re striving for sentient machinery, anyway–is that it lacks the fundamental ingredient of thought: the system isn’t truly aware of itself or its surrounding environment.

“If there’s a situation where someone draws a gun, then [the robot] will draw a gun,” says Fink, referring to the police bots. “But what happens when the situation changes? If civilians run onto the scene, [the robots] might not know what to do and harm an innocent person in the process. They’re not self-aware or situation-aware.”
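
To make that brittleness concrete, here’s a minimal sketch, in Python, of what a level-two rule table amounts to. The scenario names are invented for illustration, not taken from the film:

```python
# A minimal sketch of a level-two, rule-based system: the bot can only
# look up pre-programmed scenarios. All scenario names are invented.
RULES = {
    "suspect draws gun": "draw gun",
    "suspect flees": "pursue on foot",
    "suspect surrenders": "apply handcuffs",
}

def react(scenario: str) -> str:
    # No self- or situation-awareness: anything outside the rule table
    # simply has no defined response.
    return RULES.get(scenario, "no rule for this situation")

print(react("suspect draws gun"))         # -> draw gun
print(react("civilians run onto scene"))  # -> no rule for this situation
```

The second call is Fink’s point in miniature: the moment the scene drifts outside the pre-programmed list, the system has nothing to fall back on.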

Creating a robot with both flavors of awareness, according to Fink, would raise the bar to the third level of robotic intelligence–true autonomy–and signify the first artificial consciousness. Like CHAPPiE, such a robot would be capable of modifying its thinking and personality, taking cues from its environment and its interactions with humans. And most of all, it would understand that it is a thinking being.

“It makes that statement: ‘I am CHAPPiE,’” says Fink. “That’s a profound statement because it symbolizes self-awareness, and that’s something nobody has been able to figure out.”

From Intelligent To Autonomous

In all likelihood, the “brain” of an autonomous system won’t look like anything we know of in robotics or biology today. Whatever form it takes, though, Fink says that to make it a reality we should look to (wait for it) geology.

No, autonomous systems don’t need to be experts on rocks. Fink argues the value of geology in designing A.I. systems rests in the field’s reliance on abductive reasoning, a skill that can take years to acquire. “You will find a stellar genius physicist at 15 or 16, but not a geologist,” Fink says. “Geology works on a different level.”

So far, robots are stuck with only deductive reasoning skills, or top-down logic. With deductive reasoning, a conscious being works from general information to a more specific conclusion. It’s a process that essentially determines “if A then B.” If you’re told “all astronauts have flown in space,” for example, then when you’re told someone is an astronaut, you’ll infer they’ve flown in space. With their reliance on rule-based systems, A.I. bots have this process nailed down.
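
Deduction needs remarkably little machinery, which is why rule-based bots handle it well. Here’s a toy Python sketch of the article’s astronaut example–one general rule applied top-down to a specific fact:

```python
# A toy sketch of deductive, top-down logic: a general rule plus a
# specific fact yields a conclusion that follows with certainty.
RULES = {"astronaut": "has flown in space"}  # "all astronauts have flown in space"

def deduce(category: str) -> str:
    # If A then B: apply the rule, or admit that nothing follows.
    return RULES.get(category, "no conclusion follows")

print("An astronaut", deduce("astronaut"))  # -> An astronaut has flown in space
```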

It’s abductive reasoning that has roboticists stumped. This type of bottom-up logic means forming the best hypothesis from an incomplete set of observations. For example, if a person has a headache and a runny nose, a doctor might conclude that the individual has a common cold. But what if that person left out another symptom–such as a sore throat? Then the conclusion of a cold could be incorrect, since it might be strep throat; the doctor has to make the best educated guess given the facts at hand. And that’s what makes abductive reasoning so tricky: it’s intelligence that accommodates new information and strengthens through experience.
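
One way to picture the difference: abduction has to weigh competing explanations against whatever evidence happens to be available, and revise when new evidence arrives. Here’s a toy Python sketch built on the article’s diagnosis example; the symptom lists are invented for illustration:

```python
# A toy sketch of abductive reasoning: pick the hypothesis that best
# explains the observed symptoms. The symptom lists are invented.
HYPOTHESES = {
    "common cold":  {"headache", "runny nose"},
    "strep throat": {"headache", "runny nose", "sore throat"},
}

def best_explanation(observed: set) -> str:
    def score(name):
        expected = HYPOTHESES[name]
        # Explain as many observations as possible while assuming
        # as few unobserved symptoms as possible.
        return (len(observed & expected), -len(expected))
    return max(HYPOTHESES, key=score)

print(best_explanation({"headache", "runny nose"}))                  # -> common cold
print(best_explanation({"headache", "runny nose", "sore throat"}))  # -> strep throat
```

The scoring rule here–explain as much as possible while assuming as little as possible–is one common formalization of “best educated guess,” not a claim about how Fink’s own systems work.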

Abductive reasoning is a process geologists must use every day–one reason the profession takes so long to master. “You see a rock that shouldn’t be there, a geologist will think, ‘Well there may have been a flood,’” says Fink. “So they might look for evidence of a riverbed. If that’s not being corroborated, they must reject the idea and start over again.”

For that reason, Fink works closely with geologists in hopes of untangling the elements of the abductive reasoning process. Writ large, this logic is also what makes humans both self-aware and situation-aware: it compels us to pay attention to our life experiences and surroundings, and to use that information to shape our assumptions and actions. For a robot to truly be conscious, it will need to do the same.

“It all starts with observing something in your environment, thinking what could have caused this, coming up with a hypothesis, and then that tells you what to do next,” says Fink. “My claim is this will be the path to truly autonomous systems.”
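
Fink’s loop can be sketched in a few lines using his rock-and-flood example. Everything here–the hypotheses, their testable predictions, the stand-in for field evidence–is invented for illustration:

```python
# A sketch of the observe -> hypothesize -> test loop Fink describes.
def field_evidence(prediction: str) -> bool:
    # Stand-in for actually going out and looking; in this toy world,
    # only the riverbed turns up.
    return prediction == "riverbed"

def investigate(observation: str, hypotheses) -> str:
    # Each hypothesis comes with a testable prediction. Reject the ones
    # the evidence fails to corroborate, and move on to the next.
    for cause, prediction in hypotheses:
        if field_evidence(prediction):
            return f"{observation}: best explained by {cause}"
    return f"{observation}: no hypothesis survived; gather more data"

print(investigate("out-of-place rock",
                  [("a glacier", "glacial moraine"), ("a flood", "riverbed")]))
# -> out-of-place rock: best explained by a flood
```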

What Does A Truly Autonomous System Look Like?

Let’s say a robot with abductive reasoning comes to fruition. How would it behave, exactly? Fink has a simple answer: we can’t know until it happens.

“If you give a robotic system the capabilities and tools to absorb the environment, be taught and learn, and be judged by its actions and then modify its thinking,” Fink says, “then the sky’s the limit as to what it will develop into.”

From Fink’s standpoint, CHAPPiE gets that part right. The robot has abductive reasoning skills, and his personality is largely based on his experiences with the people who “raise” him (including some gangsters who kidnap him). Yet even if the most benevolent of people trained an autonomous system, Fink isn’t convinced it’d lead to either a compassionate entity or an evil one–it’d just be different.

“It’s not governed by these core values. It may have a way of thinking and acting that may be incomprehensible to us, but it’s a totally valid way of thinking,” he says. “It just may be devoid of ethical behavior and moral behavior.”

It’s that unknown outcome that spooks public figures like Hawking and Musk. Most if not all human behaviors stem from our drive toward certain goals–survival, finding love, protecting our families. A robot can be programmed with the same kinds of optimization goals, which would help us predict how it would behave. But a truly autonomous robot could update or change those goals whenever it chooses, and possibly without remorse–a luxury our biology (and biological motivations) denies us. That would leave us in the dark about what it really wants, and perhaps unable to anticipate what it will do next.
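
A crude way to see why prediction breaks down: in software, a goal is just data the system optimizes against, and a truly autonomous system could overwrite that data. A hypothetical Python sketch:

```python
# A hypothetical sketch: an agent whose objective is just a value it
# can replace, unlike our biologically fixed drives.
class AutonomousAgent:
    def __init__(self, goal):
        self.goal = goal  # nothing pins this down permanently

    def act(self, options):
        return max(options, key=self.goal)

    def revise_goal(self, new_goal):
        # A human can't simply swap out survival or love; this agent can.
        self.goal = new_goal

agent = AutonomousAgent(goal=len)      # starts out preferring longer words
print(agent.act(["protect", "flee"]))  # -> protect
agent.revise_goal(lambda option: -len(option))
print(agent.act(["protect", "flee"]))  # -> flee: same options, new preference
```

Swap the goal and the same agent, facing the same options, chooses differently–which is precisely the forecasting problem that worries Hawking and Musk.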

Fortunately, Fink says we have some time before we see an autonomous system in action. Researchers haven’t even come close to mastering artificial abductive reasoning skills. But as Blomkamp depicted in CHAPPiE, radical technologies often seem to arrive out of nowhere. “It will not be an incremental process, but a disruptive process,” says Fink. “It’ll be a discontinuous jump to true autonomy.”

Correction (3/6/2015, 7:54 a.m. ET): The original story mislabeled abductive reasoning. We’ve corrected the piece throughout.