Carol Reiley
Carol Reiley has always been a DIYer. At age 8, she designed a humane mousetrap to catch a renegade pet hamster. Photo: Marc Olivier Le Blanc

Many people’s first experience with a driverless car is as a bystander. So part of our mission is transparency: making sure our vehicles can communicate intention to pedestrians. A roof-mounted LED, a screen on the grille, or a laser projected onto the ground could carry that communication through words or even emoji. A blue light can tell people when the car is in self-driving mode; even people who are colorblind can see blue.

On the inside, the car is powered by artificial intelligence, specifically deep learning. We wanted to bypass the need to hard-code detection of specific features, such as lane markings, guardrails, and bicyclists, and avoid creating a near-infinite number of “if, then, else” statements. That approach is impractical when you try to account for the randomness that occurs on the road. This sort of “deep driving” can identify objects and intent, and it can process piles of data. We’re using it for everything from building maps to identifying objects to combining the input from sensors. Deep learning also makes for a smoother ride: by learning from examples, the car eliminates jerkiness for a more natural feel.
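To make the contrast concrete, here is a minimal, hypothetical sketch of the idea in that paragraph: rather than hand-coding an “if, then, else” rule for every road feature, a small neural network learns to recognize objects from labeled examples. This is only an illustration of the general technique, not Reiley’s or drive.ai’s actual system; the network name, class list, and training data below are invented for the example.

```python
# Hypothetical sketch: learn to recognize road features from examples
# instead of hand-coding rules for each one.
import torch
import torch.nn as nn

# Illustrative object classes drawn from the article's examples.
CLASSES = ["lane_marking", "guardrail", "bicyclist", "other"]

class TinyRoadNet(nn.Module):
    """A toy convolutional classifier: camera image in, class scores out."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Training-loop sketch: the model improves from labeled examples, not rules.
model = TinyRoadNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch standing in for labeled camera frames (8 RGB images, 64x64).
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(CLASSES), (8,))

for step in range(3):
    scores = model(images)
    loss = loss_fn(scores, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

The point of the sketch is the design choice: the same small amount of code covers lane markings, guardrails, and bicyclists alike, because the behavior comes from the training examples rather than from an ever-growing pile of hand-written conditions.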

And then there are the roughly two hours of commute time you gain back each day when your car drives itself. We think this will trigger the next big app boom. Think of the car as a computer platform: it will become your third living space. It’s not just about getting from point A to point B. It’ll be like you’re sitting inside your cellphone.

This article was originally published in the January/February 2017 issue of Popular Science, under the title “Computer on Wheels.”