Artificial intelligence thinks your face is full of data. Could it actually unmask you?

Why humans, and by extension our machines, are so determined to “read” people.
Veoneer's new autonomous vehicle software will read your face as you drive, and make decisions based on your expression. Veoneer


Each January, some 4,500 companies descend upon Las Vegas for the psychological marathon known as the Consumer Electronics Show, or CES.

The 2019 festivities were much like any other. Companies oversold their ideas. Attendees tweeted out the craziest products and Instagrammed the endless miles of convention space. Trend-spotting was the name of the game, and this year’s trends ran the gamut: drones, voice-activated home assistants, something called “8K” television. But the most provocative robots were those that claimed to “read” human faces, revealing our emotions and physical health in a single image.

Some were overwhelming if toothless mashups of meme culture and pseudoscience. One machine interpreted a photo of our 36-year-old technology editor, Stan Horaczek, as “adorable, age 30, and looks like a G-Dragon.” (Two of three isn’t bad.) Another determined he was likely age 47 and “male 98 percent.” Both featured many, many emoji.

But some of the proposals could have profound consequences for our everyday lives. Intel offered an update on its effort to build a wheelchair controlled by facial expressions (turn left with a wink, or right with a kissy-face), which would have clear and positive implications for mobility. Veoneer promoted its “expression recognition” concept for autonomous-vehicle AI, which judges facial expressions to determine whether drivers are engaged, sleepy, or otherwise distracted on the road. And still others expressed an intention to automate part of a doctor’s visit, peering deep into our faces to determine what ails us.
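How might a dashboard camera decide you are nodding off? One common ingredient in research prototypes is the “eye aspect ratio”: track a few landmark points around each eye and flag trouble when the lids stay nearly shut frame after frame. The sketch below is a stripped-down illustration of that idea, with invented numbers and hypothetical helpers (eye_aspect_ratio, looks_drowsy), not Veoneer’s or Intel’s actual code.

```python
# Minimal sketch of one common drowsiness cue, the "eye aspect ratio" (EAR).
# Real driver monitors feed camera frames through a facial-landmark detector and
# fuse many signals (gaze, head pose, blink rate); this is only an illustration.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered corner, upper-left,
    upper-right, corner, lower-right, lower-left. Smaller ratio = more closed."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def looks_drowsy(ear_per_frame, threshold=0.2, min_consecutive=15):
    """Flag drowsiness if the eyes stay nearly closed for many consecutive frames."""
    run = 0
    for ear in ear_per_frame:
        run = run + 1 if ear < threshold else 0
        if run >= min_consecutive:
            return True
    return False

# Toy usage with made-up coordinates and per-frame readings.
open_eye = [(0, 0), (3, 3), (7, 3), (10, 0), (7, -3), (3, -3)]
print(eye_aspect_ratio(open_eye))               # 0.6 -> wide open
print(looks_drowsy([0.31] * 10 + [0.12] * 20))  # True: 20 nearly-shut frames in a row
```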

The wares on display at CES may be shiny and new, but the human desire to turn faces into information has its origins in antiquity. The Greek mathematician Pythagoras selected his students “based on how gifted they looked,” according to Sarah Waldorf of the J. Paul Getty Trust. In the 1400s, the vermillion birthmark on the face of James II of Scotland (alias: “Fiery Face”) was considered an outward manifestation of his smouldering temper. And in colonial Europe, many scientists lent credibility to racist caricatures, which linked human expressions to animal behavior.

“Physiognomy,” the name for the widely held belief that our faces are wrinkled with ulterior meaning, has never really gone away. In The New York Times Magazine, Teju Cole argued that the belief manifests itself in every work of photography: “We tend to interpret portraits as though we were reading something inherent in the person portrayed,” he writes. “We talk about strength and uncertainty; we praise people for their strong jaws and pity them their weak chins. High foreheads are deemed intelligent. We easily link people’s facial features to the content of their character.”

But what can two eyes, a mouth, and a nose actually tell us?

Humans can’t “read” faces, but we do a pretty good job of interpreting other people’s emotions in context. Adrigu via Flickr

“I think it’s possible that technology, at some point, could be developed to read your mood from your face,” says Lisa Feldman Barrett, an expert in the psychology and neuroscience of emotion at Northeastern University. “Not your face alone, however—your face within context.”

Consider the grimace. It’s thought to be a near-universal sign of displeasure. “You saw this in Inside Out,” Barrett says, referencing the 2015 Pixar film about scrambled emotions. “The little anger character looks the same in every person’s brain… This is a stereotype that people believe.” But the most robust evidence suggests something else. “People do scowl when they’re angry—more or less 20 or 30 percent of the time,” she says. “But sometimes they don’t. And they often scowl when they’re not angry, which means scowling is not particularly diagnostic for anger.”

That’s where context comes in. We’re constantly analyzing other people’s “body language,” facial expressions, and even the tone of their voice. As we watch, we take into consideration what just happened, what is currently happening, and what might happen next. We even consider what’s taking place inside our own bodies, Barrett emphasizes, and what we’re feeling, seeing, and thinking. Some people are better at this than others, and certain factors can influence your success in a given interaction. If you know someone well—if, through trial and error, you’ve come to understand the way their particular emotions might manifest—you’re more likely to interpret their scowl accurately.

But none of this is really “reading” someone’s face. “It’s actually a bad analogy,” Barrett says, “because we don’t detect psychological meaning in a person’s movements, we infer it. And we infer it largely based on the context.” At best, you’re working in collaboration with another’s face—creating something new from the data (a curled lip) and your preconceptions, something robots currently can’t observe or understand.

These innate qualities help us empathize, understand others, and communicate our own emotions. But they can lead us astray. “The literature suggests we tend to overestimate our ability to read character from the face,” Brad Duchaine, a professor of brain science at Dartmouth, wrote in an email. “For example, people make consistent judgments about who looks trustworthy and who doesn’t, but these judgments don’t appear to effectively predict trustworthiness in real situations.”

Attempting to glean an individual’s health status from their face is just as complicated. Ian Stephen, a lecturer at Macquarie University in Sydney, Australia, uses a primarily evolutionary paradigm to study how our physiology is reflected in our face. He’s found that face shape can be predictive of things like BMI and blood pressure. His most interesting finding has to do not with countenance, but with coloring: Research participants rated white people with yellower and redder skin pigmentation as healthier. Stephen argues this corresponds to carotenoids (orange pigments we get from eating lots of fruits and vegetables) and oxygenated blood (a warm tone depleted by cardiovascular issues)—two very real markers of health.

Most of these determinations are made subconsciously. In Pride and Prejudice, Mr. Darcy is befuddled by Elizabeth Bennet’s ruddy complexion after she makes a three-mile hike to Netherfield. But Darcy doesn’t link his attraction to oxygenated blood or reproductive fitness—and thank God for that. He merely responds to what he sees. While this may seem quite superficial, Jane Austen’s romantic novel reveals a deeper truth: “Faces that are perceived as attractive are also perceived as healthy,” Stephen says.

Many evolutionary biologists would argue that the occasional co-mingling of physical health and perceived beauty is advantageous. It helps animals pick mates and propagate the species, at least in theory. But it’s hardly foolproof: beauty is, in many, varied, and crucial ways, culturally determined. Americans, for example, value thinness and vilify fatness, but thin people can be unhealthy and fat people can be exceedingly fit. These arbitrary categories already have real ramifications: simply because of their physical appearance, obese people, women, and people of color are discriminated against, from the workplace to the emergency room.

In the eye of many beholders, beauty can block out everything else. In one 2017 Nature study, the authors concluded that “male perceived health was predicted positively by averageness, symmetry, and skin yellowness.” The perceived health of females, meanwhile, “was predicted by femininity.”

A 19th century book about physiognomy describes two emotions visually. On the left, “utter despair.” On the right, “anger mixed with fear.” Wikimedia

Some believe creating machines to translate appearances into more meaningful insights has the potential to overcome human folly. Others worry it will magnify it beyond control. In a recent study, published in the journal Nature Medicine amidst the CES hoopla, researchers affiliated with FDNA, a for-profit genetics company with a federal-sounding name, used artificial intelligence to identify genetic disorders in photos of children’s faces. The program, called DeepGestalt, named for the German psychological movement that sought order from chaos, was trained on a dataset of 17,000 images to identify more than 200 syndromes.
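For a sense of the general recipe behind classifiers like DeepGestalt, the sketch below fine-tunes an off-the-shelf image network to score a cropped face photo against a couple hundred syndrome labels and return a ranked list for a clinician to review. It is an illustration built on standard public tools (PyTorch and torchvision), not FDNA’s actual model, data, or code.

```python
# Illustrative face-phenotype classifier in the DeepGestalt mold: a convolutional
# network with a syndrome-classification head. Architecture and numbers are
# assumptions for demonstration, not FDNA's published system.
import torch
import torch.nn as nn
from torchvision import models

NUM_SYNDROMES = 200  # the study trained on roughly 17,000 images spanning 200+ syndromes

backbone = models.resnet18(weights=None)  # in practice, a network pretrained on faces
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SYNDROMES)

def rank_syndromes(face_crop: torch.Tensor, top_k: int = 10):
    """Return the model's top-k candidate syndromes for one aligned 224x224 face crop."""
    backbone.eval()
    with torch.no_grad():
        logits = backbone(face_crop.unsqueeze(0))        # shape: (1, NUM_SYNDROMES)
        probs = torch.softmax(logits, dim=1).squeeze(0)  # scores sum to 1
    scores, labels = probs.topk(top_k)
    return list(zip(labels.tolist(), scores.tolist()))   # ranked candidates, not a diagnosis

# Toy usage with random pixels; a real pipeline would first detect, crop, and align the face.
print(rank_syndromes(torch.randn(3, 224, 224))[:3])
```

The output is a ranked list of candidates rather than a verdict, which is why a doctor still has to interpret the results.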

Depending on your orientation to technology, DeepGestalt may inspire a swelling hope or a plunging dread. While it still requires a doctor to interpret the results, such machinery could provide a “safer, more convenient” way of diagnosing many ailments, according to Stephen. But its use raises serious ethical questions. “Things that once had been private are potentially much easier to identify,” Stephen says. “Do companies start pulling photos of you off your Facebook profile and do analysis of risk they weren’t able to previously do and deny you coverage or charge you extra?”

Similar face-interpretation algorithms have raised privacy concerns of their own. A 2017 effort to create an “AI gaydar” was denounced by advocacy organizations including GLAAD and the Human Rights Campaign. One critic called it “the algorithmic equivalent of a 13-year-old bully.” In the very narrow confines of the study, the machine distinguished gay men from straight men 31 percentage points better than chance; human judges managed only 11 points better.

The fear of a person or machine that can read our thoughts and feelings, even those we wish to conceal, is a foundational part of technological anxiety, encapsulated in George Orwell’s novel Nineteen Eighty-Four, published 70 years ago this June. But in this domain, imagination still far outpaces our technology.

Barrett thinks highly skilled emotion-reading robots are achievable—for better or worse—but the companies currently operating in this marketplace don’t actually seem equipped to manufacture them. “Computers are getting better and better about the computer vision aspect of things—the detection of movements,” she says. “Unfortunately, [programmers] think detecting a movement is detecting an emotion.” To make a true breakthrough, she says, “what needs to change in some ways is not the technology, but the mindset, the hypotheses.”
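Barrett’s objection is easier to see spelled out. The toy “readout” below, a caricature rather than any vendor’s real product, does exactly what she describes: it detects a facial movement, then simply looks up an emotion label, with no context at all.

```python
# The conflation Barrett describes, in miniature: detect a facial movement, then treat
# the movement as the emotion. The mapping below encodes the stereotype, not a fact.
MOVEMENT_TO_EMOTION = {
    "brow_lowered_lips_pressed": "anger",    # i.e., a scowl
    "lip_corners_pulled_up": "happiness",    # i.e., a smile
    "brows_raised_jaw_dropped": "surprise",
}

def naive_emotion_readout(detected_movement: str) -> str:
    """The flawed leap: movement in, emotion out, no context considered."""
    return MOVEMENT_TO_EMOTION.get(detected_movement, "neutral")

# By the evidence Barrett cites, scowls accompany anger only 20 to 30 percent of the
# time, so this readout mislabels most scowlers; the missing inputs are the situation,
# the person's history, their body, and their voice.
print(naive_emotion_readout("brow_lowered_lips_pressed"))  # "anger" (often wrongly)
```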

In his Times column on physiognomy, Teju Cole describes a 1980s-era black-and-white photo of a young man. “I want to fall back on old ways and say that the gentle arch of the boy’s left eyebrow seems to mark him as an ironic sort, or that the symmetry of his features makes him both trusting and trustworthy,” he writes. “But really, that would be projecting.”

So if you placed an order for an emotion- or health-reading mirror at CES, there’s probably still time to cancel it—and wait for a newer model.