
Ten federal agencies are planning to expand their use of facial recognition technology by 2023, according to a new report from the Government Accountability Office, or GAO. The news comes on the heels of numerous complaints by privacy advocates about this type of tech. 

The report, published on August 24, surveyed 24 federal agencies and found that most of them were using facial recognition technology. Fourteen agencies were using it to unlock agency-issued smartphones, two were using it to verify the identities of people visiting government websites, and six were using it to generate leads in criminal investigations. Ten agencies said they were planning to expand its use. 

“This technology is dangerous. It leads to people being falsely arrested, it invades our privacy, it deters people from going to protests,” says Adam Schwartz, senior attorney at the Electronic Frontier Foundation. “The government should not be using it at all, so it is pretty sad to read that they’re actually expanding their use of it.” 

Why does this expansion matter?

You might use facial recognition technology to do something as mundane as unlocking your phone. But while the federal government can employ it for similar reasons, it can also use it to generate leads in criminal investigations and to monitor locations and check for individuals on watchlists.

Using this software can lead to an increase in false arrests and citizen surveillance, Schwartz says. For example, at least three Black men have been falsely arrested because facial recognition software mistook them for someone else. This technology has also been used to identify people who were present at Black Lives Matter protests.

“We are always trying to tell people how they can practice surveillance self-defense, like using encryption or strong passwords, but with face surveillance, there’s much less self defense to be done,” Schwartz says. “Most of us show our face when we move about the public.”

How did these facial recognition companies get all their data?

Some of the software the government uses has accumulated its gallery of people’s faces by scraping publicly available sources, such as TikTok or Instagram, for photos. The Department of Homeland Security and the Department of Justice both reported using Clearview, a company currently being sued for scraping social media photos to fill its database.

“There are rules and guidelines in place with regards to the use of personal information by the government,” says Ashkan Soltani, former chief technologist of the Federal Trade Commission, “but they’re essentially laundering some of that by relying on third-party commercial entities that built their databases and trained their models based on publicly-available data.” 

[Related: A Texas town approved an AI-powered sentry tower for border security]

This data, like pictures of your family’s Thanksgiving dinner that made their way to Facebook, or DMV photos, is not subject to the same oversight it might be if the government were collecting it directly. 

This is dangerous, not just from a privacy perspective, but from a public safety perspective, experts say. “Some of these systems have high false positive rates where they identify the wrong individual or misrepresent certain communities,” says Soltani. 

Soltani’s main concern is that this expands the power of the government. “Oftentimes we strike a balance of power with regards to surveillance and government influence, and that’s essentially dictated by manpower, hours, and resources,” he says. “But if you translate that to automated technology that can follow you around with no added cost to law enforcement, then the balance of power is upset, and the invasion of privacy is greater.” 

How effective is this technology?

This type of system is designed to let people do their jobs more efficiently, although its effectiveness is debatable. U.S. Customs and Border Protection says that since 2018, it has used this technology to prevent “more than 850 people” from illegally entering the country, a number that seems remarkably low, roughly 0.001 percent, considering that more than 88 million people have been scanned by these systems. 

“We know that there’s high error rates, both with people being falsely identified as a suspect, or failing to identify an actual suspect,” says Schwartz. “When we see data about so-called success, it seems like a pretty minuscule rate of finding fraud. The costs far outweigh any supposed benefits.” 

[Related: Why the new FTC chair is causing such a stir]

The system can also be abused. Uighurs, a persecuted Muslim minority in China, have been subjected to surveillance based on facial recognition. 

One federal bill, sponsored by Senators Bernie Sanders and Elizabeth Warren, among others, would ban all government use of facial recognition technology. After public pressure campaigns, Amazon announced that it would no longer sell Rekognition, its facial recognition software, to police, citing the lack of rules governing how the technology was being used. And 20 cities and three states across America have banned facial recognition technology. 

“It is so dismaying to see the government moving in exactly the wrong direction,” says Schwartz. 

In an email, GAO director Candice Wright says that the GAO expects to start work soon on examining privacy issues related to federal law enforcement’s use of facial recognition technology.
