It happens quickly—more quickly than you, being human, can fully process.

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.

This, roughly speaking, is the problem presented by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University. In a recent opinion piece for Wired, Lin explored one of the most disturbing questions in robot ethics: If a crash is unavoidable, should an autonomous car choose who it slams into?

It might seem like a simple thought experiment, a twist on the classic “trolley problem,” an ethical conundrum that asks whether you’d save five people on a runaway trolley, at the price of killing one person on the tracks. But the more detailed the crash scenarios get, the harder they are to navigate. Assume that the robot has what can only be described as superhuman senses and reaction speed, thanks to its machine reflexes and suite of advanced sensors. In that moment of truth before the collision, should the vehicle target a small car, rather than a big one, to err towards protecting its master? Or should it do the reverse, aiming for the SUV, even if it means reducing the robo-car owner’s chances of survival? And what if it’s a choice between driving into a school bus, or plowing into a tree? Does the robot choose a massacre, or a betrayal?

The key factor, again, is the car’s superhuman status. “With great power comes great responsibility,” says Lin. “If these machines have greater capacity than we do, higher processor speeds, better sensors, that seems to imply a greater responsibility to make better decisions.”

Current autonomous cars, it should be said, are more student driver than Spider-Man, unable to notice a human motorist waving them through an intersection, much less churn through a complex matrix of projected impacts, death tolls, and what Lin calls “moral math” in the moments before a collision. But sensors, processors and software are the rare elements of robotics that tend to advance rapidly (while actuation and power density, for example, limp along with typical analog stubbornness). While the timeframe is unclear, autonomous cars are guaranteed to eventually do what people can’t, either as individual sensor-laden devices, or because they’re communicating with other vehicles and connected infrastructure, and anticipating events as only a hive mind can.

So if we assume that hyper-competence is the manifest destiny of machines, then we’re forced to ask a question that’s bigger than who they should crash into. If robots are going to be superhuman, isn’t it their duty to be superheroes, and use those powers to save as many humans as possible?

* * *

This second hypothetical is bloodier than the first, but less lethal.

A group of soldiers has wandered into the kill box. That’s the GPS-designated area within which an autonomous military ground robot has been given clearance to engage any and all targets. The machine’s sensors measure wind speed, humidity, and barometric pressure. Then it goes to work.

The shots land cleanly, for the most part. All of the targets are down.

But only one of them is in immediate mortal danger—instead of suffering a leg wound, like the rest, he took a round to the abdomen. Even a robot’s aim isn’t perfect.

The machine pulls back, and holds its fire while the targets are evacuated.

No one would call this kind of robot a life-saver. But in a presentation to DARPA and the National Academy of Sciences two years ago, Lin posed the opposite what-if scenario: a killer robot so accurate that essentially every shot it takes is lethal.

According to Lin, such a system would risk violating the Geneva Conventions’ article restricting “arms which cause superfluous injury or unnecessary suffering.” The International Committee of the Red Cross developed more specific guidelines in a later proposal, calling for a ban on weapons with a “field mortality of more than 25% or hospital mortality of more than 5%.” In other words, new systems shouldn’t kill a target outright more than a quarter of the time, or have more than a five percent chance of leading to his or her death in a hospital.

“It’s implicit in war, that we want to give everyone a fair chance,” says Lin. “The other side probably aren’t all volunteers. They could be conscripted. So the laws of war don’t authorize you to kill, but to render enemy combatants unable to fight.” A robot that specializes in shooting people in the head, or that has some other incredibly effective but overwhelmingly lethal capability, one that makes death a certainty because of its superhuman prowess, could certainly be defined as inhumane.

As with the autonomous car crash scenario, everything hinges on that level of technological certainty. A human soldier or police officer isn’t legally or ethically expected to aim for a target’s leg. Accuracy, at any range or skill level, is never a sure thing for mere mortals, much less ones full of adrenaline. Likewise, even the most seasoned professional driver can’t be expected to execute the perfect maneuver, or make the ethically “correct” decision, in the split second preceding a sudden highway collision.

But if it’s possible to build that level of precision into a machine, expectations would invariably change. The makers of robots that do bodily harm (whether by intention or accident) would have to address a range of trolley problems during development, and provide clear decisions for each one. Armed bot designers might have it relatively easy, if they’re able to program systems to cripple targets instead of executing them. But if that’s the clear choice, that robots should actively reduce human deaths, even among the enemy, wouldn’t you also have to accept that your car might kill you instead of two strangers?

* * *

Follow this line of reasoning to its logical conclusion, and things start to get a little sci-fi, and more than a little unsettling. If robots are proven capable of sparing human lives, sacrificing the few for the good of the many, what sort of monster would program them to do otherwise?

And yet, nobody in their right mind would buy an autonomous car that explicitly warns the customer that his or her safety is not its first priority.

That’s the dilemma that makers of robot vehicles could eventually face if they take the moral and ethical high road, and design them to limit human injury or death without discrimination. To say that such an admission would slow the adoption of autonomous cars is an understatement. “Buy our car,” jokes Michael Cahill, a law professor and vice dean at Brooklyn Law School, “but be aware that it might drive over a cliff rather than hit a car with two people.”

Okay, so that was Cahill’s tossed-out hypothetical, not mine. But as difficult as it would be to convince automakers to throw their own customers under the proverbial bus, or to force their hand with regulations, it might be the only option that shields them from widespread litigation. Whatever they choose to do, whether that means killing the couple, killing the driver, or picking a target at random, these are ethical decisions made ahead of time. As such, the companies could be far more vulnerable to lawsuits, says Cahill, as victims and their family members dissect and indict decisions that weren’t made on the spur of the moment, “but far in advance, in the comfort of corporate offices.”

In the absence of a universal standard for built-in, pre-collision ethics, superhuman cars could start to resemble supervillains, aiming for the elderly driver rather than the younger investment banker, whose family could potentially sue for considerably more lost wages. Or, less ghoulishly, the vehicle’s designers could pick targets based solely on make and model of car. “Don’t steer towards the Lexus,” says Cahill. “If you have to hit something, you could program it to hit a cheaper car, since the driver is more likely to have less money.”

The greater-good scenario is looking better and better. In fact, I’d argue that from a legal, moral, and ethical standpoint, it’s the only viable option. It’s terrifying to think that your robot chauffeur might not have your back, and that it would, without a moment’s hesitation, choose to launch you off that cliff. Or, weirder still, that it might concoct a plan with its fellow networked bots, swerving your car into the path of a speeding truck to deflect it away from a school bus. But if the robots develop that degree of power over life and death, shouldn’t they have to wield it responsibly?

“That’s one way to look at it, that the beauty of robots is that they don’t have relationships to anybody. They can make decisions that are better for everyone,” says Cahill. “But if you lived in that world, where robots made all the decisions, you might think it’s a dystopia.”