FCC wants to make AI-generated robocalls illegal

New AI voice-cloning tools are making already frustrating robocalls more dangerous.
The new policy proposal, if accepted, would make AI-generated robocalls easier to investigate and prosecute.

The US’ top communications regulator believes AI-generated robocalls like the one recently impersonating President Joe Biden in New Hampshire should be considered illegal under existing law. That legal designation would make it easier to charge voice cloning scammers with fraud and could act as a deterrent to push back against a rising tide of scams carried out using generative AI tools.

In a proposal released this week, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel said the FCC should recognize AI-generated voice calls as falling under the Telephone Consumer Protection Act (TCPA). The TCPA already places restrictions on automated marketing calls, also known as robocalls, though it’s still unclear whether AI-generated content neatly falls under that category. An FCC vote in favor of Rosenworcel’s proposal would clear up that ambiguity and make AI-generated robocalls illegal without the need for any new legislation. That vote, according to an FCC spokesperson speaking with TechCrunch, will occur at the Commission’s discretion. 

“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” Rosenworcel said in a statement. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”

An FCC spokesperson told PopSci that clarifying that AI-generated calls are robocalls under existing laws would make it easier for state and federal investigators to take enforcement actions.

Why are AI-generated robocalls an issue? 

Increasingly convincing and easy-to-use AI voice cloning tools are making already frustrating robocalls more dangerous. Scammers can now use these tools to make it seem as if the person on the other end of the line is a famous celebrity, politician, or even a direct relative. That added layer of familiarity can make targets more comfortable and more susceptible to handing over sensitive information. Scams like these are becoming more common: one out of every 10 respondents surveyed by security software firm McAfee last year said they were personally targeted by a voice scam, and 77% of those targeted reported losing money. 

Rosenworcel isn’t the only one who wants to outlaw the practice. Earlier this month, attorneys general from 26 states formed a coalition and sent a letter to the FCC urging the agency to restrict generative AI’s use by telemarketers. The AG letter says telemarketers looking to impersonate humans should fall under the TCPA’s “artificial” designation, which would require them to obtain written consent from consumers before targeting them with calls. 

“Technology is advancing and expanding, seemingly, by the minute, and we must ensure these new developments are not used to prey upon, deceive, or manipulate consumers,” Pennsylvania Attorney General Michelle Henry said in a statement.  

FCC’s long battle against robocallers 

The FCC has spent years pushing back against more traditional, non-AI-generated robocalls with varying degrees of success. Last year, the agency issued a record-breaking $300 million fine against a large robocalling operation that was reportedly responsible for billions of dollars’ worth of automobile warranty scams. Prior to that, the agency levied a $5 million fine against a pair of operatives who carried out over 1,100 unlawful robocalls as part of an effort to suppress Black voter turnout in the 2020 presidential election. 

[ Related: FCC slaps voter suppression robocall scammers with a record-breaking fine. ] 

Still, rooting out all robocalls remains an exceedingly difficult challenge. Many robocall operations originate from outside the US, which makes them difficult to prosecute. US carriers, meanwhile, are limited in what phone numbers they can reasonably block. Evolving robocalling techniques, like “spoofing” phone numbers to make them appear to come from your area code, also make enforcement more difficult. 

Rising anxieties around potential election interference and sophisticated scams exacerbated by voice clones could motivate the FCC to act quickly this time. And unlike other proposals attempting to penalize AI deepfakes on the web, this policy change could occur without corralling divided members of Congress together to agree on a new bill.