Why Microsoft is rolling back its AI-powered facial analysis tech

Plus, here's what Facebook and Zoom have been doing in this problematic field.
Photo by mauro mora on Unsplash

Microsoft announced on Tuesday that it will remove certain facial analysis tools from its Azure AI services in accordance with its new Responsible AI Standard. The ability to automatically “infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup,” as Microsoft’s chief responsible AI officer, Natasha Crampton, explains in the announcement, is no longer available to new users as of this week and will be phased out for existing users over the next year.

AI-powered facial recognition has been criticized by groups like the Electronic Frontier Foundation (EFF) for years. While law enforcement use is often the most worrying, studies have shown that these tools simply aren’t accurate at identifying attributes like gender, especially for women and people with darker skin. For example, MIT’s Media Lab found that facial recognition tools from IBM, Face++, and Microsoft disproportionately misclassified the gender of darker-skinned faces and female faces. The worst-performing tool misclassified the gender of darker-skinned female faces 34.7 percent of the time, while the gender of lighter-skinned male faces was misclassified just 0 to 0.8 percent of the time.

What’s more, in the press release Microsoft acknowledges that facial expressions and emotions are not universal across cultures. Crampton writes, “Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.”

This is all part of Microsoft’s Responsible AI Standard V2, which it has just released to the public. The document is an attempt to set guiding principles (grouped under Accountability, Transparency, Fairness, Reliability and Safety, Privacy and Security, and Inclusiveness) for its product development teams, while recognizing that society’s laws and norms simply haven’t caught up to the unique risks and challenges that artificial intelligence poses. (Meanwhile, the EU, in typically heavy-handed fashion, looks set to be the first major regulator to bring in strict rules for how AI can be used in a wide variety of settings.)

Of course, Microsoft isn’t the only company that has been criticized for its facial recognition programs (and it doesn’t currently provide its facial recognition services to law enforcement). Late last year, after more than a decade of use, two hefty fines, and plenty of criticism, Facebook shut down the facial-recognition feature that recognized friends in your photos and suggested “tagging” them.

Zoom is also facing criticism at the moment for its AI-powered mood and engagement recognition features. More than 25 human rights groups signed a letter last month calling on Zoom to pull the features, describing them as manipulative, discriminatory, and pseudoscientific. According to Zoom’s help documents, Zoom IQ for Sales would track metrics like “talk-listen ratio,” “talking speed,” “filler words,” “longest spiel,” “patience,” and “engaging questions,” and offer sentiment and engagement analysis for each caller. Zoom didn’t respond to PopSci’s request for comment, nor has it publicly responded to the letter.

It’s important to note that Microsoft’s facial recognition tools aren’t going away entirely. The company will still offer them to customers like Uber for tasks such as verifying that someone signing up for a service holds a valid ID. However, it is applying the lessons it learned implementing “appropriate use controls” for its Custom Neural Voice (which can create a synthetic voice that sounds nearly identical to the original speaker) to ensure the remaining tools can’t be abused. It plans to limit their use to managed customers and partners, narrow the allowed use cases to “pre-defined acceptable ones,” and use technical controls to keep everything above board.

Whether this is enough to offset the general criticism and legitimate concerns about facial recognition tools remains to be seen. While the tools can be helpful for purposes such as automatically blurring faces in security camera footage, they are also incredibly easy to abuse. And beyond concerns about private enterprises, the potential for governments and federal agencies to overstep with facial recognition is nearly limitless.
