The Slack channel at the University of Pennsylvania’s human-machine interaction lab, where I work, is typically a steady drip of lecture reminders and wall-climbing robot videos. But this past spring, news of the first Tesla Autopilot-related fatality turned the feed into a Niagara Falls of critical chatter:

Graduate Research Assistant: It’s a habit of all people launching products to claim things are working to keep people/investors excited before they actually even start it.

Post-doc Research Fellow: My opinion is that such failures are inevitable, at least until the technology improves. Tesla took the plunge first, and therefore is subject to increased scrutiny.

Student Researcher: If a driver was attentively behind the wheel, they wouldn’t have mistaken a tractor trailer for a road sign.

It went on like that for days. All sides of the argument seemed to have merit. So I did my homework, reading about the accident—which claimed the life of an Ohio man driving a Tesla Model S with Autopilot active—in more detail. I wanted to understand how the system, perhaps the most advanced public experiment in human-machine interaction yet, had gone so terribly wrong. The man’s car crashed into a tractor-trailer crossing U.S. Highway 27A in Florida. According to Tesla, the car’s emergency braking system didn’t distinguish the white side of the truck from the brightly lit sky.

From a technical standpoint, that’s where the fault lay. The more important factor, to auto safety experts and to Tesla, is that the driver didn’t notice the impending collision either. So he didn’t brake—and his car ran under the trailer.

As autonomous cars begin to hit the road, it’s time to assess some long-held misconceptions we have about smart machines and robots in our lives. Many of us grew up with the promise of all-knowing partner robots like Knight Rider’s intelligent car sidekick, KITT. (“I expect a full simonize once this is over.”)

Fiction, yes, but our expectations were set—and perhaps cemented even further by set-it-and-forget-it home robotics like the Roomba and the ubiquitous task-mastering dishwasher.

Autopilot is not the KITT scenario we collectively had in mind. Its instructions clearly state that humans must remain part of the automation equation. (Even with smart machines, we still must read instructions!) Tesla publicly proclaimed its Autopilot software to be a beta program (meaning it was still working out the bugs), and cautioned drivers to stay alert and keep their hands on the wheel. But there are disconnects, however subtle, between engineering and marketing. Tesla’s October 2015 blog post announcing the software update was titled “Your Autopilot has arrived,” as if a robot chauffeur were about to pull up and collect us. Overseas, the company’s Chinese marketing translated the new feature as “self-driving”—a fact one driver cited in blaming Tesla after his car sideswiped a parked vehicle.

Early Autopilot adopters fed those fantasies, along with our urge to misuse the tech. Ecstatic YouTube videos began popping up, showing grown (and giggling) adults test-riding the cars with their hands in the air like they were on a roller coaster, and playing checkers and Jenga in traffic. One professionally produced video review, viewed half a million times, offered this not-so-helpful tip in its description: “DISCLAIMER:…The activities performed in this video were produced and edited. Safety was our highest concern. Don’t be stupid. Pay attention to the road.”

So, don’t do what we just did.

Right.

What they’re missing: Shared control is the name of the autonomous-driving game.

We can glean a lot about this type of relationship from fighter-pilot training. Professionals have flown with so-called fly-by-wire, a catchall term for any computer-controlled flight assistance, since the Carter administration. Like Autopilot, fly-by-wire is an assistive technology meant to augment, but not absorb, the pilot’s responsibility to manage the craft. Pilots undergo years of training before taking control of the cockpit, gaining an intimate awareness of what the computer is seeing and how it’s processing the information. They also learn to maintain situation awareness and be ready to react, despite the presence of technology—as opposed to taking a laissez-faire, let-the-plane-do-the-work attitude.

A casual driver cannot possibly go through the deep training that a pilot does, so automakers must find effective workarounds. For starters, they need something beyond the pages of software-release notes that display on-screen when a driver installs an Autopilot software update. They should develop short training programs—not unlike the Saturday courses some states require for a boater’s license—to help people understand how automation works, when it is and isn’t designed to work, and why human drivers need to be ready to step in. “A problem with automated technologies like Autopilot is that when an error occurs, people tend to be out of the loop, and slow to both detect the problem as well as understand how to correct it,” says Mica Endsley, former chief scientist of the U.S. Air Force and an expert in fly-by-wire and man-machine interaction.

Training smarter drivers is part of the solution, but self-driving software needs to reinforce that training. The engineers who design these cars and their software need to understand human behavior and cognition so their systems can communicate more clearly with the people behind the wheel. Thankfully, this human-machine interaction has become a growing research field for automakers and academics alike. At Stanford, interaction-design specialists are studying how to make an autonomous car’s perceptions (what its cameras, radar, and other sensors detect) and its reasoning more transparent to humans. Automakers, they say, should employ colloquial vocal cues (“braking due to obstacle ahead”) and physical changes to the controls (such as shifting the angle of the steering wheel when the driver needs to take over) to make drivers aware of changes on the road—say, a truck about to cut them off—or to keep them from daydreaming themselves into a ditch.

Such handoff signals are still fairly subtle but should become less so. On the Model S, an audio tone and a color change in the Autopilot dashboard icon are all the cues drivers get when they need to take control. Cadillac’s Super Cruise and Volvo’s Pilot Assist vibrate the seat or steering wheel to the same end. But automakers need to be more aggressive in helping us; a recent study from Stanford suggests that a multisensory approach—in which, say, a shimmying steering wheel is combined with a vocal prompt and a flashing light—might be a better way to speed reaction times.

When it comes to new technology, no one wants to go slowly. Not in an age of instant apps and maps and finger-swipe transactions. But drivers should proceed with caution (and attention!) into the world of autonomous autos. Technology that might lull people into relaxing their focus while barreling down the highway requires both better training for the humans and smarter alert systems for the machines.

This past May’s fatal Autopilot accident may have been a worst-case scenario, but it underscores the importance of humans and machines figuring out how to share the driver’s seat.

This article was originally published in the November/December 2016 issue of Popular Science, under the title “Don’t Blame the Robots; Blame Us.”