Sebastian Thrun, Stanford University
He's making a car so smart it drives itself. Someday we may all travel that way.
by John B. Carnett
Sebastian Thrun isn’t watching the road when his driverless Volkswagen SUV veers off course and heads for a 50-foot precipice. He’s in the backseat looking at a laptop that’s tracking the car’s brain, which consists of seven Pentium processors. When he feels the car swerve abruptly to the left, Thrun looks up, pushes aside a bundle of cables blocking his view, and realizes that his car is about to pull a Thelma and Louise.
Thrun, 38, director of the Stanford Artificial Intelligence Laboratory, is field-testing what he hopes will be the world’s first fully autonomous car. Outfitted with lasers, radar, cameras, GPS and, most important, Thrun’s breakthrough road-finding and obstacle-recognition software, it will compete in the second annual DARPA Grand Challenge robotic-vehicle race, to be held in the Southwestern desert on October 8. But it’s not the $2-million purse that motivates Thrun. An unwavering optimist, he envisions robot cars traveling our nation’s highways, cars that drive better than humans, causing fewer fatal accidents.
Optimism will come in handy. The most successful entrant in last year’s race completed just 7.4 miles of the 175-mile course. Despite vast gains in computing power, intelligence still eludes robots. Model-based robots can’t simulate real-world complexity, while reactive robots lack the ability to plan ahead. In 1998, while programming a tour-guide robot to navigate a crowded museum, Thrun had a Zen-like revelation: “A key prerequisite of true intelligence is knowledge of one’s own ignorance,” he thought. Given the inherent unpredictability of the world, robots, like humans, will always make mistakes.
So Thrun pioneered what’s known as probabilistic robotics. He programs his machines to adjust their responses to incoming data based on the probability that the data are correct. In last year’s DARPA race, many derailments occurred when a ‘bot’s sensors provided faulty information, causing it to, for example, mistake a tumbleweed for a rock and stop in its tracks. Thrun’s car didn’t go off the cliff mentioned above, because its software ignored the bad GPS data (which it judged to have a significant probability of error) and responded instead to the more accurate laser readings. (If the car hadn’t made the right choice, Thrun or a colleague would have hit two giant red buttons next to the wheel to disable the AI.)
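The core idea of weighting each sensor by its probability of error can be sketched in a few lines of code. This is an illustrative toy, not Thrun’s actual software: it assumes each reading comes with an error variance, discards a reading that disagrees implausibly with the other sensor (the way the car ignored bad GPS data in favor of its lasers), and otherwise blends the two, trusting the noisier sensor less.

```python
def fuse(gps, gps_var, laser, laser_var, gate=3.0):
    """Fuse a GPS and a laser position reading into one estimate.

    Each reading carries an error variance. A reading is judged
    faulty when it lies more than `gate` combined standard
    deviations from the other sensor's reading -- a crude
    probability-of-error test.
    """
    # How far apart are the two readings, in combined sigmas?
    sigma = (gps_var + laser_var) ** 0.5
    if abs(gps - laser) > gate * sigma:
        # The readings are inconsistent: trust whichever sensor
        # claims the smaller error (for Thrun's car, the laser).
        return laser if laser_var < gps_var else gps
    # Otherwise blend them, weighting each reading inversely to
    # its variance so the noisier sensor counts for less.
    w_gps, w_laser = 1.0 / gps_var, 1.0 / laser_var
    return (w_gps * gps + w_laser * laser) / (w_gps + w_laser)
```

A real system (a Kalman or particle filter) carries these probabilities forward through time rather than judging each reading in isolation, but the principle is the same: the robot acts on data in proportion to how much it believes them.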
By early July, Thrun’s car had navigated 88 miles of last year’s route. It would have logged more, but the pace car got a flat tire after its (human) driver failed to avoid a bump in the road.