What will it take for humans to trust self-driving cars?

They're coming—but are we ready to let a computer take the wheel?
A pedestrian with a backpack waits to cross the street. "What if they don't see me?" Scott Webb via Unsplash

On March 18, 2018, Elaine Herzberg, 49, was crossing a road in Tempe, Arizona, when a Volvo SUV traveling at 39 miles per hour hit and killed her. Although she was one of thousands of U.S. pedestrians killed by vehicles every year, one distinctive—and highly modern—aspect set her death apart: Nobody was driving that Volvo. A computer was.

A fatality caused by a self-driving car might not be more tragic than any other, but it does reinforce the wariness many of us feel about technology making life-and-death decisions. Twelve months later, a survey by AAA revealed that 71 percent of Americans were too scared to zip around in a totally autonomous ride—an eight-percentage-point increase over a similar poll taken before Herzberg’s death.

Self-driving cars are already cruising our streets, their spinning lasers and other sensors scanning the world around them. Some are from big companies such as Waymo—part of Google’s parent conglomerate Alphabet—or General Motors, while others are the work of outfits you might not have heard of, including Drive.ai or Aptiv. (Uber operated the Volvo involved in Arizona’s fatal crash and took its self-driving cars off the roads for about nine months afterward.) But what makes some of us so wary of these robotic chauffeurs, and how can they earn our trust?

To understand these questions, it first helps to consider what psychologists call the theory of mind. Put simply, it’s the recognition that other people have brains in their heads that are busy thinking, just like ours (usually) are. The theory comes in handy on the road. Before we venture into a crosswalk, we might first make eye contact with a driver and then think, He sees me, so I’m safe, or He doesn’t, so I’m not. It’s a technique we likely use more than we realize, both behind the wheel and on our feet. “We know how other people are going to act because we know how we would act,” explains Azim Shariff, an associate professor of psychology at the University of British Columbia, who has written about this issue in the journal Nature Human Behaviour.

But you can’t make eye contact with an algorithm. Autonomous cars generally have backup humans ready to take control if necessary, but when the car is in self-driving mode, the computer’s in charge. “We’re going to have to learn a theory of the machine mind,” Shariff says. What that means in practice is that self-driving cars will need to provide clear signals—and not just turn signals—to let the public know what that machine mind is planning.

One solution comes from Drive.ai, a company running self-driving vans in Texas. The bright-orange-and-blue vehicles have LED signs on all four sides that respond to the environment with messages. They can tell a pedestrian who wants to cross in front of the car, “Waiting for You.” Or they can warn them: “Going Now/Please Wait.” A related strategy is intended for passengers, not pedestrians: Screens in Waymo vehicles show car occupants a simple, animated version of what the autonomous vehicle is seeing. Those displays can also show what the car is doing, like if it’s pausing to allow a human to cross. “Trust is the willingness to make yourself vulnerable to somebody else,” Shariff says. “We engage in it because we can pretty easily predict what the other person will do.” All of which means that if the cars are predictable and do what they say they will do, people will be more likely to trust them. Sound familiar?

Communicating with the machine mind is important, but that doesn’t mean we want it to mimic exactly how humans think and act while driving. In fact, the promise of traveling by autonomous car is that silicon brains won’t do dumb things such as text and drive, or drink and drive, or rocket down the highway while upset after a breakup. (Cars don’t date.) “I believe that they have the potential to be safer” than regular cars, says Marjory S. Blumenthal, a senior policy analyst at the RAND Corporation think tank who has researched the vehicles. But she says there’s not enough good data yet to know for sure.

One practical way to create a reputation for safety is to start slow. The University of Michigan’s two self-driving shuttles travel at just 12 miles per hour. Huei Peng, a professor of mechanical engineering who oversees the little buses, says the research team behind the project is building trust by not asking too much: The predetermined route is only about a mile long, so they’re not exactly speeding down a highway in the snow. “We’re trying to push the envelope but in a very cautious way,” Peng says. Like other experts, Peng compares self-driving cars to elevators: an initially frightening technology that people eventually got used to.

Ultimately, not everyone will have to trust driverless cars enough to go for a ride, especially not at first. Indeed, the public isn’t homogeneous, says Raj Rajkumar, who directs the Metro21: Smart Cities Institute at Carnegie Mellon University. He sees three categories of potential users: tech skeptics, who know their computers crash and worry about getting into a vehicle controlled by one; early adopters, who are delighted by the promise of new tech; and people who find driving stressful and would rather not do it if they don’t have to. The early adopters will buy in first, followed by the folks who simply dislike driving, and finally the skeptics, he argues. “So it’s a long process.” Trust grows like a self-driving shuttle drives: slowly.

This article was originally published in the Spring 2019 Transportation issue of Popular Science.