Self-driving cars are already on our roads, and to be safe, courteous drivers, they need to be able to see the world around them. For that, they have perception systems that include sensors like lidar units (spinning lasers that map the surrounding streetscape) and cameras. But just perceiving the world as it is isn’t enough. A good driver, whether it’s a computer or a human, should also be able to predict where a car or person will be in a few moments.

To get better at predicting where a pedestrian will be, and even what the configuration of their limbs might look like, researchers at the University of Michigan gathered real-world data at several intersections in Ann Arbor, Michigan. They parked a car outfitted with both lidar units and cameras near an intersection and let it record the unwitting pedestrians walking about, doing typical human things like using their smartphones and carrying bags.

Consider the complexity of a four-way intersection with stop signs, but no signals to tell people when they can go. There is a “dance that happens” at intersections like that, says Matthew Johnson-Roberson, an associate professor of engineering at the University of Michigan. “It’s just a really hard social situation.” Without traffic lights to tell them, cars and pedestrians alike need to figure out when it’s safe to go.

By gathering real-world data at intersections like these, Johnson-Roberson and his team were able to train an artificial intelligence system called a neural network that could predict pedestrian motion. They also made sure that the AI took into account some ground rules, or constraints, about how humans move in the first place. For example: “Humans can’t fly,” Johnson-Roberson says. “Your weight needs to be distributed on your feet in such a way that it is balanced, and you’re not falling over.”
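The study doesn’t spell out its model in code, but the general idea of pairing a learned predictor with a physical constraint on human motion can be sketched briefly. The snippet below is a minimal illustration, not the team’s actual system: the network size, the time step, and the 1.5-meter-per-second cap on walking speed are all assumed values chosen only to show the concept.

```python
# Minimal sketch (not the authors' code): a small neural network predicts a
# pedestrian's next positions from recent observed positions, and a
# physical-plausibility constraint caps how far a person could actually move
# between frames. All sizes and thresholds below are illustrative assumptions.
import torch
import torch.nn as nn

OBS_STEPS = 8      # observed (x, y) positions fed to the model (assumption)
PRED_STEPS = 12    # future positions to predict (assumption)
DT = 0.1           # seconds between frames (assumption)
MAX_SPEED = 1.5    # m/s, rough cap on human walking speed (assumption)

class ConstrainedPedestrianPredictor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_STEPS * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, PRED_STEPS * 2),
        )

    def forward(self, obs):
        # obs: (batch, OBS_STEPS, 2) observed positions in meters
        batch = obs.shape[0]
        raw = self.net(obs.reshape(batch, -1)).reshape(batch, PRED_STEPS, 2)
        # Treat outputs as per-step displacements and clamp each one so the
        # predicted person never moves faster than a plausible walking speed.
        step = torch.clamp(raw, -MAX_SPEED * DT, MAX_SPEED * DT)
        last_pos = obs[:, -1:, :]                    # (batch, 1, 2)
        return last_pos + torch.cumsum(step, dim=1)  # predicted positions

if __name__ == "__main__":
    model = ConstrainedPedestrianPredictor()
    # Fake observation: one pedestrian walking along x at about 1 m/s.
    t = torch.arange(OBS_STEPS, dtype=torch.float32) * DT
    obs = torch.stack([1.0 * t, torch.zeros_like(t)], dim=-1).unsqueeze(0)
    pred = model(obs)          # (1, PRED_STEPS, 2) future positions
    print(pred.shape)
```

The clamp is the part that encodes a “ground rule”: no matter what the untrained network outputs, the predicted person can’t teleport or fly between frames.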

The end result was a system that is good at predicting where humans will be as they walk. “It was quite accurate actually,” Johnson-Roberson, who is the senior author on a new study on the topic, says. The predictions were “within a body length of where the person actually was, in almost all of the cases we had.” It worked with people who were up to about 148 feet away.

All of this is important because if self-driving cars can be good at estimating where a pedestrian will be in future moments, they can plan actions that steer them safely and courteously around the fragile humans with whom they share the roads.

The research also let the team observe the fascinating, complicated ways people move their bodies. “We had people carrying heavy bags,” Johnson-Roberson says. “And you would see how the gait would change—people would lean to one side and compensate for the weight of their bags.”

The system could even notice when someone was using a smartphone, since it captured not just the motion of pedestrians but also the orientation, or pose, of their bodies, down to which way their heads were turned. “When someone’s on their phone, they are not looking at the cars,” Johnson-Roberson says. That means they’re not doing what a smart pedestrian should: looking around and making eye contact with drivers so they don’t get smooshed.
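Again as a rough illustration rather than anything from the paper: if an upstream perception system already supplies body keypoints, a simple check of which way a pedestrian’s head is facing relative to the car can stand in for the kind of “not looking at traffic” signal described above. The keypoint names, the nose-versus-neck gaze proxy, and the 60-degree threshold here are all assumptions.

```python
# Illustrative sketch (not from the paper) of flagging a pedestrian who isn't
# watching traffic, given 2D keypoints from some upstream pose estimator.
import numpy as np

ATTENTION_THRESHOLD_DEG = 60.0  # assumed cutoff for "looking toward the car"

def is_watching_traffic(keypoints: dict, car_position: np.ndarray) -> bool:
    """keypoints: {'neck': (x, y), 'nose': (x, y)} in ground-plane meters."""
    neck = np.asarray(keypoints["neck"], dtype=float)
    nose = np.asarray(keypoints["nose"], dtype=float)
    facing = nose - neck                        # crude proxy for head direction
    to_car = np.asarray(car_position, dtype=float) - neck
    cos_angle = np.dot(facing, to_car) / (
        np.linalg.norm(facing) * np.linalg.norm(to_car) + 1e-9
    )
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle < ATTENTION_THRESHOLD_DEG

# Example: head tilted down and away from the car, as when texting.
pose = {"neck": (0.0, 0.0), "nose": (0.0, -0.2)}
print(is_watching_traffic(pose, np.array([5.0, 0.0])))  # False -> distracted
```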

In moments like that, as humans walk around cities increasingly distracted by the computers in their hands, the computers driving our cars should recognize that cluelessness and take it into account when making decisions.