This AI can see people through walls. Here’s how.

Besides artificial intelligence, you also need radio waves.

Radio signals coupled with artificial intelligence have allowed researchers to do something fascinating: see skeleton-like representations of people moving on the other side of a wall. And while it sounds like the kind of technology a SWAT team would love to have before kicking down a door, it’s already been used in a surprising way—to monitor the movements of Parkinson’s patients in their homes.

Interest in this type of technology dates back decades, says Dina Katabi, the senior researcher on the project and a professor of electrical engineering and computer science at MIT. “There was a big project by DARPA to try to detect people through walls and use wireless signals,” she says. But before this most recent research, the best these systems could do was reveal a “blob” shape of a person behind a wall.

The technology is now capable of revealing something more precise: it depicts the people in the scene as skeleton-like stick figures, and can show them moving in real time as they do normal activities, like walking or sitting down. It focuses on key points of the body, like the elbows, hips, and feet. When a person—either occluded by a wall or not—takes a step, “you see that skeleton, or stick figure, that you created, takes a step with it,” she says. “If the person sits down, you see that stick figure sitting down.”
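To make the idea of “key points” concrete, here is a minimal sketch, in Python, of how one frame of such a stick figure might be represented. The joint names and coordinates are illustrative assumptions, not the researchers’ actual output format.

```python
# Hypothetical representation of one frame of a detected "stick figure".
# Joint names, coordinates, and structure are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Keypoint:
    name: str   # e.g. "left_elbow"
    x: float    # image-plane coordinates, normalized 0..1
    y: float

# One frame: a list of body key points like those the article mentions.
frame = [
    Keypoint("head", 0.50, 0.10),
    Keypoint("left_elbow", 0.35, 0.40),
    Keypoint("right_elbow", 0.65, 0.40),
    Keypoint("left_hip", 0.42, 0.55),
    Keypoint("right_hip", 0.58, 0.55),
    Keypoint("left_foot", 0.40, 0.95),
    Keypoint("right_foot", 0.60, 0.95),
]

# Drawing line segments between related key points (hip to foot, and so on)
# yields the skeleton-like figure described above; a new frame per instant
# makes it move in real time.
```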

How it works

The radio signal the researchers use is similar to Wi-Fi, but substantially less powerful.

The system works because those radio waves can penetrate objects like a wall, then bounce off a human body—which is mostly water, no friend to radio wave penetration—and travel back through the wall and to the device. “Now the challenge is: How do you interpret it?” Katabi says. That’s where the AI comes into play, specifically a machine learning tool called a neural network.
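The article doesn’t detail the signal processing, but the basic physics of “travel back to the device” is the standard radar range relation: a reflection that returns after a round-trip time t comes from a surface at distance c·t/2. A minimal sketch of that relation (not the team’s actual pipeline):

```python
# Basic radar range relation: distance = (speed of light * round-trip time) / 2.
# Textbook physics for illustration, not the researchers' actual processing,
# which the article does not describe in detail.

C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflecting surface given the round-trip delay."""
    return C * t_seconds / 2.0

# A reflection returning after ~33 nanoseconds corresponds to a body
# roughly 5 meters away (through-wall attenuation aside).
print(range_from_round_trip(33e-9))  # ~4.95 meters
```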

The way that artificial intelligence researchers train a neural network—which can deduce its own rules from data in order to learn—is by feeding it annotated information, a process called supervised learning. Want to teach a self-driving car what a traffic light looks like? Show it images that include traffic lights, and annotate them to show the AI where in the image the light is. Neural networks are commonly used to interpret images, but they can also carry out complex tasks like translating from one language to another, or even generating new text by imitating the data they’re given.
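As a toy illustration of supervised learning (the data and model below are invented for the example): show a model annotated inputs, measure its error against the human-provided labels, and nudge its parameters to shrink that error.

```python
# Toy supervised learning: fit a single weight so the model maps annotated
# inputs to their labels. Invented data, for illustration only.

# Annotated training data: (input feature, human-provided label).
labeled_data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0             # the model's single learnable parameter
learning_rate = 0.05

for epoch in range(200):
    for x, label in labeled_data:
        prediction = w * x
        error = prediction - label
        # Gradient descent: adjust w to shrink the squared error.
        w -= learning_rate * error * x

print(w)  # converges near 2.0, the rule implied by the labels
```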

But in this case, they had a problem. “Nobody can take a wireless signal and label it where the head is, and where the joints are, and stuff like that,” she says. In other words: labeling an image is easy, labeling radio wave data that’s bounced off a person, not so much.

Their solution, just for the training period, was to couple the radio device with a camera, and then label the images the camera captured so the neural network could correlate the wireless data with the movements it represented. This had to be done without a wall, so the camera could actually see. “We used those labels from the camera,” she says, “along with wireless signal, that happened concurrently, and we used them for training.”
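Here is a hedged sketch of that setup, with a toy linear model standing in for the neural network: time-aligned camera annotations serve as the labels, while the concurrent radio frames are the only input the model sees. Every name and number below is an invented placeholder, not the team’s code.

```python
# Cross-modal supervision, sketched with a toy linear model: camera-derived
# key points act as labels for radio frames captured at the same instant.
# All data and the model here are invented placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Pretend each radio frame is an 8-number feature vector and each camera
# annotation is 4 key-point coordinates, recorded concurrently with no
# wall in the way so the camera can see the person.
rf_frames = rng.normal(size=(100, 8))
true_map = rng.normal(size=(8, 4))
camera_keypoints = rf_frames @ true_map   # stand-in for camera labels

W = np.zeros((8, 4))                      # toy model: one weight matrix
lr = 0.1

for _ in range(500):
    pred = rf_frames @ W                  # model sees only the radio input
    grad = rf_frames.T @ (pred - camera_keypoints) / len(rf_frames)
    W -= lr * grad                        # nudge toward the camera's labels

print(np.abs(W - true_map).max())         # tiny: radio alone now predicts pose
```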

After the training, the researchers were surprised to discover that even though the system had only been trained on people who were visible, not occluded, it could still detect people who were hidden. “It was able to see and create the stick figure of the human behind the wall,” she says, “although it never saw such thing during training.”

Not only that, the system can even tell people apart by their gait. With the help of another neural network, it was shown examples of people walking, and then later, in new instances involving the same people, it identified individuals with an accuracy of more than 83 percent, even through walls.
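As a rough illustration of that identification step (invented features and a simple nearest-neighbor rule, not the researchers’ second neural network): summarize each walk as a feature vector, match new walks to the closest known person, and score the accuracy.

```python
# Toy gait identification: represent each walking example as a feature
# vector, assign new examples to the nearest known person, and measure
# accuracy. Invented synthetic data; a nearest-neighbor rule stands in
# for the second neural network the researchers actually used.

import numpy as np

rng = np.random.default_rng(1)

n_people, dim = 5, 16
signatures = rng.normal(size=(n_people, dim))     # each person's "gait"

def noisy_walk(person: int) -> np.ndarray:
    """A new walking example: that person's signature plus noise."""
    return signatures[person] + 0.3 * rng.normal(size=dim)

def identify(example: np.ndarray) -> int:
    """Nearest-neighbor match against the known gait signatures."""
    return int(np.argmin(np.linalg.norm(signatures - example, axis=1)))

trials = [(p, noisy_walk(p)) for p in range(n_people) for _ in range(40)]
correct = sum(identify(walk) == person for person, walk in trials)
print(correct / len(trials))  # high accuracy on this easy synthetic task
```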

How will it be used?

The researchers have already put the system to use in a small study with Parkinson’s patients. By placing the devices in the patients’ homes, they could monitor the patients’ movements in a comfortable setting without using cameras—in that sense, it’s a less invasive way of learning about someone’s body movements than traditional video would be. The study involved seven people and lasted eight weeks.

The results had a “high correlation” with the standard questionnaire used to evaluate the patients, Katabi says. “Also, it revealed additional info about the quality of life of a Parkinson’s patient—the behavior and functional state.” The Michael J. Fox Foundation is funding further research; monitoring patients like this can help avoid “white coat syndrome,” Katabi says—when patients act differently in front of doctors during an occasional visit.

All of this raises privacy issues, but Katabi says it’s not meant to be used on people without their consent.

 
