This fictitious news show is entirely produced by AI and deepfakes

'Wolf News' videos feature grammatical errors and weird AI-generated anchors.
If something feels off, it's because it is. Graphika


Graphika, a research firm specializing in misinformation, issued a startling report on Tuesday revealing just how far controversial deepfake technologies have come. Its findings detail what appears to be the first instance of a state-aligned influence operation using entirely AI-generated "news" footage to spread propaganda. Despite the comparatively ham-fisted final products and their seemingly low online impact, the AI television anchors of a fictitious outlet, Wolf News, promoted critiques of American inaction on gun violence last year, and praised China's geopolitical responsibilities and influence at an upcoming international summit.

As detailed in supplementary reporting published Tuesday by The New York Times, the two Wolf News anchors can be traced back to the "Jason" and "Anna" avatars offered by Synthesia, a five-year-old British startup that sells deepfake software to clients for as little as $30 a month. Synthesia currently offers at least 85 characters modeled on real human actors across a spectrum of ages, genders, ethnicities, voice tones, and clothing. Customers can also generate avatars of themselves, as well as of anyone who grants consent.

[Related: A history of deepfakes, misinformation, and video editing.]

Synthesia's products are largely intended and marketed as cost- and time-saving tools for projects such as a company's in-house human resources training videos. Past clients advertised on the company website include Amazon and Novo Nordisk. Synthesia's examples, as well as the propaganda clips highlighted by The New York Times, aren't exactly high quality: the avatars speak in largely monotonous tones, with stilted facial expressions, delayed audio, and unrealistic movements such as blinking too slowly.

Usually, this isn't an issue, as clients are willing to accept those shortcomings in exchange for drastically cheaper operating costs on often mundane projects. Still, experts at Graphika caution that the technology is improving quickly, and will soon produce misinformation that is much harder to distinguish from real video.

[Related: ‘Historical’ chatbots aren’t just inaccurate—they are dangerous.]

Synthesia's terms of service clearly prohibit generating "political, sexual, personal, criminal and discriminatory content." Although the company has a four-person team tasked with monitoring clients' deepfake content for violations, subtler problems like misinformation or propaganda remain harder to flag than hate speech or explicit content. Victor Riparbelli, Synthesia's co-founder and CEO, told The New York Times that the company takes full responsibility for the security lapse, and that the subscribers behind Wolf News have since been banned for violating its policies.

Although the digital propaganda uncovered by Graphika appears to have reached few people online, the firm cautions that it is only a matter of time until bad actors leverage better technology to produce extremely convincing videos and images for their influence operations. In a blog post published to Synthesia's website last November, Riparbelli put the onus on governments to enact comprehensive legislation regulating the "synthetic media" and deepfake industries.
