On Wednesday night, Florida Rep. Matt Gaetz (R) took to the floor of the House to claim members of the mob that descended on the Capitol building earlier that day were “Antifa.” As proof, he simply asserted that there was “some pretty compelling evidence from a facial recognition company.” His statement likely traced back to a loosely sourced and now-deleted Washington Times article containing a claim that was later thoroughly debunked. Even though the article has been removed, however, the claim that facial recognition has proved the identity of these individuals echoes across social media and internet comment sections, all the while ignoring the fact that this isn’t even how facial recognition is meant to work in the first place. It’s meant to be a starting point for investigations, if it’s used at all.
“The main thing to realize is that facial recognition is not perfect,” says Marios Savvides, a professor of artificial intelligence and director of Carnegie Mellon’s CyLab Biometrics Center. We have seen oversimplified versions of facial recognition in pop culture in which a computer program spits out a definitive match, but that’s not the case in reality.
In real-world facial recognition situations, researchers feed the algorithm pictures or frames from a video and the computer then builds a template of the person’s face that it can check against a database of individuals. “Based on the degree of match on those templates, it comes up with a ranked order list of individuals,” Savvides explains. “There’s a top match that might be 89 percent, then another at 85 percent and down the line.” It doesn’t provide law enforcement—or whoever is performing the research—with a definitive match and they don’t treat it as such.
Depending on the scope of the case or research, the list of possible matches can vary in size. “It could be 20, 50, or the top 100 matches,” says Savvides. “It depends on variables like the severity of the crime. For a high-profile case like the Boston Marathon case, they would search in the hundreds.”
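The ranked-list process Savvides describes can be sketched in a few lines of Python. This is an illustrative toy, not any real system: the "embeddings" are random vectors standing in for the face templates an actual recognition model would produce, and the names and similarity measure (cosine similarity) are assumptions for the example.

```python
import numpy as np

def rank_candidates(probe, gallery, k=20):
    """Return the top-k gallery identities ranked by cosine similarity.

    probe: 1-D face template (embedding) for the unknown face.
    gallery: dict mapping identity name -> 1-D template.
    The output is a ranked list of leads with scores, not a positive ID.
    """
    probe = probe / np.linalg.norm(probe)
    scores = []
    for name, emb in gallery.items():
        emb = emb / np.linalg.norm(emb)
        scores.append((name, float(probe @ emb)))
    # Highest similarity first: an 89% match, then 85%, and so on down the list.
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:k]

# Toy gallery of 100 random 128-dimensional "templates" (hypothetical data)
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(100)}
# A noisy probe built from person_42's template, mimicking a blurry photo
probe = gallery["person_42"] + rng.normal(scale=0.1, size=128)
top = rank_candidates(probe, gallery, k=5)
```

Note that `k` is exactly the knob Savvides mentions: for a high-profile case, investigators might look at the top 100 candidates rather than the top 20, widening the pool of leads at the cost of more manual review.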
Even relying on facial recognition as a starting point for law enforcement can still be troublesome. Last year, Detroit police arrested Robert Julian-Borchak Williams, making him the “first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm,” according to the New York Times. In that case, Detroit police got a match after facial recognition tech analyzed a picture against the Statewide Network of Agency Photos (SNAP), which is overseen by a collective of investigators from various agencies.
An official FAQ about the SNAP program explicitly states that facial recognition isn’t a form of positive ID and even lays out the potential for false positives.
Those issues with accuracy also explain why you don’t hear about facial recognition data coming up in courtroom scenarios. “To my knowledge, it has never been introduced as evidence in a court anywhere in the country,” says Farhang Heydari, executive director of the Policing Project and adjunct professor at NYU Law. “Right now, facial recognition is considered too unreliable to be used as evidence anywhere.”
Those reliability issues aren’t consistent across populations, either, which further complicates the matter. A number of studies conducted by organizations such as MIT and the National Institute of Standards and Technology have found that facial recognition systems can misidentify African-American and Asian faces at rates up to 100 times higher than Caucasian faces.
Facial recognition has improved dramatically in recent years, especially during the last 12 months, according to Savvides, who credits the COVID-19 pandemic’s mask mandates with motivating researchers to overcome the issues that come with occluded facial features. But it still has a long way to go before it gains anywhere near the kind of legal credibility enjoyed by other techniques like fingerprints and DNA evidence.
While you likely won’t see facial recognition showing up as hard evidence from prosecutors anytime soon, experts like Heydari do believe it should come up in court more often than it currently does. “Most defense attorneys never know that facial recognition is actually being used in their case and I don’t think that’s right,” he says. “Regardless of whether you’re in favor of facial recognition generally, I think defendants have a constitutional right to know what technology is being used in their investigation.” That disclosure would make it clearer whether any of facial recognition’s known issues could have affected the investigation.
As for the Washington Times article that started all of this, the situation is much more clear cut. The publication has deleted the article, and the AI company in question, XRVision, has publicly stated through its lawyer that it never provided anything close to what the article claimed. But while that kind of thorough debunking carries weight with law enforcement and the legal system, it likely won’t stand up in the courts of Facebook comment threads.