Facebook archived more than a billion user faces. Now it’s deleting them.
Facebook's Face Recognition system will be shut down in the coming weeks. Here's how that could impact you.
On Tuesday, Facebook announced that it was ending its Face Recognition system on the app and rolling back the technology in the coming weeks. In a press release, Jerome Pesenti, VP of artificial intelligence at Meta (the new name for Facebook’s parent company), said that the shutting down of the Face Recognition system and the imminent deletion of Facebook’s library of facial recognition templates is “a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.”
Soon, Facebook will no longer automatically recognize people’s faces in Memories, photos or videos uploaded to the app, or give suggestions for tagging who’s in a photo or video. It will also not be able to notify users if they appear in other photos or videos across the site. However, users can still manually tag friends in photos.
This change also means that the Automatic Alt Text (AAT) technology that creates image descriptions for people who are blind or visually impaired will no longer name people in photos. The company noted that AAT is currently used to identify people in about 4 percent of photos. Other AAT functions that are not related to facial identification will operate as normal.
According to the company, more than a third of Facebook’s daily active users opt into the Face Recognition setting, so removing the system will mean that more than a billion people’s individual facial recognition templates will be deleted. Users who opted out of this setting do not have a stored face recognition template, and will not be impacted.
“We still see facial recognition technology as a powerful tool, for example, for people needing to verify their identity, or to prevent fraud and impersonation,” Pesenti said. “But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole.”
Adam Schwartz, senior staff attorney at the Electronic Frontier Foundation, said that this move reflects a growing awareness around the country and the world that face recognition technology is dangerous, harmful, privacy-invading, and biased.
“There are cities in the United States that are banning their police from using it. There’s a strong law on the books in Illinois that bans companies from using it unless they first get permission from consumers,” Schwartz says. “There are efforts to pass a law just like that in Congress and around the country.”
Facebook was sued under the Illinois Biometric Information Privacy Act and agreed to settle a case for $650 million earlier this year for using faceprints and other biometric identifiers without permission.
In 2019, Facebook paid the Federal Trade Commission a $5 billion fine for making misleading statements about who was going to be face printed.
“Facebook misrepresented users’ ability to control the use of facial recognition technology with their accounts,” the FTC wrote in a statement two years ago. “According to the complaint, Facebook’s data policy, updated in April 2018, was deceptive to tens of millions of users who have Facebook’s facial recognition setting called ‘Tag Suggestions’ because that setting was turned on by default, and the updated data policy suggested that users would need to opt-in to having facial recognition enabled for their accounts.”
In response to these events, Facebook changed its faceprint system to one requiring opt-in consent, Schwartz says. But even with permission, he notes, it's still a dangerous technology. “These images could be diverted to other uses, they can be stolen by data thieves, they can be seized by the police with a warrant,” he says.
Following Facebook’s announcement yesterday, The New York Times reported that “although Facebook plans to delete more than one billion facial recognition templates by December, it will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace,” which can also discern human faces in photos.
Facial recognition is a common technology. “Unfortunately it’s not hard to get,” Schwartz says. It typically consists of sophisticated computer algorithms that take images of two faces, compute a mathematical representation of each face, and then compare the two representations to see if they’re similar enough to count as a match.
After that, it would need human confirmation, which is why Facebook only suggests a tag or why police are supposed to look at the computer’s suggestions of a match and decide whether there actually is a match, Schwartz explains.
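The comparison step Schwartz describes can be sketched in a few lines. This is a toy illustration, not Facebook's system: the embedding vectors below are made-up numbers standing in for the mathematical representations a real face-recognition model would produce, and the similarity threshold is an arbitrary choice for the example.

```python
import math

def cosine_similarity(a, b):
    """Compare two face embeddings (numeric vectors) by the angle
    between them; values near 1.0 mean the vectors are very similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_possible_match(emb1, emb2, threshold=0.9):
    # A real system would surface this only as a *suggestion* for a
    # human to confirm, as the article describes.
    return cosine_similarity(emb1, emb2) >= threshold

# Toy embeddings: two similar vectors (same face in two photos)
# and one dissimilar vector (a different face).
face_a = [0.12, 0.80, 0.35, 0.41]
face_b = [0.11, 0.78, 0.37, 0.40]
face_c = [0.90, 0.05, 0.60, 0.02]

print(is_possible_match(face_a, face_b))  # True: vectors nearly align
print(is_possible_match(face_a, face_c))  # False: vectors diverge
```

Production systems use far higher-dimensional embeddings learned by neural networks, but the pipeline is the same shape: embed, compare, threshold, then hand the candidate match to a person.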
Even if Facebook obliterated every one of the algorithms that power its face recognition, “it could get another one,” Schwartz says. But since the company is no longer screening uploaded images and is destroying its library of a billion faceprints, “if they were to restart their program, they would have to start again from zero on recreating their database of faceprints.”
Pesenti said in the release that Facebook still thinks that facial recognition technology could be useful in a narrow set of cases, like helping people gain access to a locked account or verifying the user’s identity to access a financial product or other types of personal data.
“Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices,” and sends no face data to an external server, Pesenti wrote. “We believe this has the potential to enable positive use cases in the future that maintain privacy, control and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs.”