A self-driven Volvo SUV owned and operated by Uber Technologies Inc. is flipped on its side after a collision in Tempe, Arizona, U.S., on March 24, 2017. (Fresco News/Mark Beach/Handout via Reuters)

The dynamic between human drivers and autonomous vehicles is complicated, but researchers at IBM have patented a new cognitive system that could help determine if and when a person, or the self-driving system, should take control. It’s based on a variety of indicators, including human fatigue and emotional state, as well as the overall mechanical function of the vehicle.

Onboard sensors would monitor physiological aspects of the human, such as heart rate, gaze direction, and whether their attention is focused, and the cognitive system might conclude that the car is better able to safely navigate a given situation. The system would simultaneously keep close tabs on the technical aspects of the car, looking out for obstacles or errors that might be better navigated by a human. IBM envisions it as a third intelligence, keeping watch over both potential drivers.
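
The patent doesn’t publish an algorithm, but the flow it describes is roughly a scoring problem. Below is a minimal Python sketch of how such a readiness score might be folded together; the signal names, weights, and thresholds are illustrative assumptions, not details from IBM’s filing.

```python
# A minimal sketch of readiness scoring from in-cabin signals.
# Every name, weight, and threshold here is an illustrative
# assumption, not a detail from IBM's patent filing.
from dataclasses import dataclass

@dataclass
class DriverSignals:
    heart_rate_bpm: float   # e.g., from a seat or wearable sensor
    gaze_on_road: bool      # e.g., from an in-cabin camera
    attention_score: float  # 0.0 (zoned out) to 1.0 (fully alert)

def driver_readiness(s: DriverSignals) -> float:
    """Fold the physiological signals into one 0-to-1 readiness score."""
    score = s.attention_score
    if not s.gaze_on_road:
        score *= 0.5        # eyes off the road halves readiness
    if s.heart_rate_bpm > 110:
        score *= 0.6        # elevated heart rate as a crude stress proxy
    return max(0.0, min(1.0, score))
```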

For instance, if the system detects a minor problem, like an issue with tire pressure, it might see that as a good opportunity for the human to take control. However, it would first use its analysis to make sure the driver was ready to take the wheel, using the collected data to make its decision and recommendation. And if the system decides neither the car nor the human is fit to drive, it would attempt to slow down automatically and stop in a safe location. Beyond directly measurable factors, the system could also cross-check other self-driving vehicles’ traffic patterns and accident histories to learn more about its environment.
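
Read as pseudocode, that decision flow has three outcomes: hand control to the human, keep the autonomy in charge, or pull over. Here is one way it could be expressed, continuing the sketch above; the vehicle-health score and both cutoffs are again assumptions made for illustration.

```python
# An illustrative arbitration rule for the three outcomes described
# above. The vehicle_health score and both cutoffs are assumptions.
from enum import Enum

class Controller(Enum):
    HUMAN = "human takes the wheel"
    AUTONOMY = "self-driving system stays in control"
    SAFE_STOP = "slow down and stop in a safe location"

def choose_controller(driver_ready: float, vehicle_health: float) -> Controller:
    """Pick whichever party is fit to drive; if neither is, stop safely."""
    DRIVER_OK = 0.7      # readiness needed before offering a handoff
    VEHICLE_OK = 0.8     # health below this suggests a human should drive
    if vehicle_health >= VEHICLE_OK:
        return Controller.AUTONOMY
    if driver_ready >= DRIVER_OK:
        return Controller.HUMAN   # e.g., a tire-pressure warning, alert driver
    return Controller.SAFE_STOP   # neither the car nor the human is fit
```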

“What we are doing is envisioning a self-driving vehicle that is able to assess the readiness and risk associated with a human taking control of the vehicle, given some anomaly on board,” says James Kozloski, a master inventor with IBM Research who studies computational neuroscience.

Imagine another scenario in which a child runs out into the street in front of the car. The person may slam on the brakes, avoiding the accident, but could then become frazzled or distracted in the aftermath. The IBM system would pick up on that, and suggest the car drive itself. That, Kozloski says, would be the perfect moment for the car to take the wheel and drive slowly until the human has had a chance to calm down.
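
Run through the earlier sketches, that scenario would play out something like this (the numbers are invented for illustration):

```python
# The post-braking scenario: the car itself is healthy, but the
# driver's stress signals push readiness below the handoff cutoff.
frazzled = DriverSignals(heart_rate_bpm=125, gaze_on_road=True,
                         attention_score=0.8)
ready = driver_readiness(frazzled)            # 0.8 * 0.6 = 0.48
print(choose_controller(ready, vehicle_health=0.95))
# -> Controller.AUTONOMY: the car keeps driving until the driver calms down
```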

It’s the computer-to-human transition that’s the tricky one, says Aaron Steinfeld, an associate research professor at Carnegie Mellon University who researches the relationship between people and complicated systems.

In many autonomous vehicles right now, people can take over from the computer at any time by doing something like tapping the brakes or hitting a switch, Steinfeld says, regardless of whether it’s a good idea. Alternatively, he adds, the car can request that a person take over with a sound and a light, although the method varies. Typically, however, such requests are triggered by something like a system error or bad weather. A smarter system could help human drivers trust the AI in situations beyond a mechanical problem or an emergency.

“The reality is that the [professional] drivers are trained to look for those [alerts], and are on the lookout for them,” he says, while the general public is not.

As for safety in general, Steinfeld says that Waymo, Alphabet’s autonomous car company, has a lower crash-per-mile rate than conventional vehicles, although human safety drivers are always present to take over.

One of the big challenges, however, is being able to teach AI to recognize the readiness of a human being. “If you’re in a traffic jam, you kind of zone off, and are barely paying attention—you from the outside look very similar to someone who’s paying attention,” he says. “This is why it’s a hard problem.”