
Shortly after CNN’s Town Hall with Donald Trump last week, the former president’s son tweeted a clearly manipulated 9-second video clip featuring an AI-generated vocal imitation of CNN anchor Anderson Cooper offering a vulgar compliment on the former president’s town hall performance. “I’m told this is real…,” wrote Donald Trump, Jr. “[I]t seems real and it’s surprisingly honest and accurate for CNN… but who knows these days.”

Despite a Twitter Community Note flagging the video as fake, one commenter replied, “Real or not, it’s the truth just the same.”

Two days later, Trump re-upped the same altered clip to Truth Social, the alternative social media platform favored by his supporters. And while many replies on both Twitter and Truth Social suggest users are largely aware of the clumsy parody, experts warn that Trump’s repeated embrace of AI-generated content could sow confusion and chaos ahead of his reelection bid in next year’s presidential campaign.

[Related: “This fictitious news show is entirely produced by AI and deepfakes” ]

“Manipulating reality for profit and politics not only erodes a healthy society, but it also shows that Trump has incredible disrespect for his own base, forget about others,” Patrick Lin, a professor of philosophy and director of California Polytechnic State University’s Ethics and Emerging Sciences Group, told PopSci. “It’s beyond ironic that he would promote so much fake news, while in the same breath accuse those who are reporting real facts of doing the same,” said Lin.

And there’s no indication the momentum behind AI-generated content will slow. According to Bloomberg on Wednesday, multiple deepfake production studios have collectively raised billions of dollars in investments over the past year.

Barely a month after Trump posted an AI-generated image of himself kneeling in prayer, the Republican National Committee released a 30-second ad featuring AI-created images of a dystopian America should President Biden be reelected.

“We’re not prepared for this,” A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox, told AP News over the weekend regarding the rise of audio and video deepfakes. “When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

According to Lin, the spread of AI-manipulated footage by a former president, even if done jokingly, is a major cause for concern and “should be a wake-up call that we need regulation of AI right now.” To him, recent high-profile stories focused on AI’s theoretical existential threats to humanity are a distraction from the “clear and present dangers” of today’s generative AI, ranging “from discrimination to disinformation.”

Correction 05/19/23: A previous version of this article misattributed A.J. Nash’s comments to an interview with PBS, instead of with AP News.