Deepfake videos may be convincing enough to create false memories

In a new study, deepfaked movie clips altered around half of participants' recollection of the film.
Deepfakes are unfortunately pretty good at making us misremember the past. Deposit Photos

Deepfake technology has already proven itself a troublingly effective means of spreading misinformation, but a new study indicates the generative AI programs’ impacts can be more complicated than initially feared. According to findings published earlier this month in PLOS One, deepfake clips can alter a viewer’s memories of the past, as well as their perception of events.

To test the forgeries’ efficacy, researchers at University College Cork in Ireland asked nearly 440 people to watch deepfaked clips from nonexistent remakes of films, such as The Matrix starring Will Smith, Indiana Jones starring Chris Pratt, The Shining starring Brad Pitt and Angelina Jolie, and Captain Marvel with Charlize Theron replacing Brie Larson. From there, the participants watched clips from real remakes of movies like Charlie and the Chocolate Factory, Total Recall, and Carrie. Meanwhile, some volunteers were also provided with text descriptions of the nonexistent remakes.

[Related: This fictitious news show is entirely produced by AI and deepfakes.]

Upon review, nearly 50 percent of participants claimed to remember the deepfaked remakes coming out in theaters. Of those, many believed these imaginary movies were actually better than the originals. But as disconcerting as those numbers may be, using deepfakes to misrepresent the past did not appear to be any more effective than simply having participants read textual recaps of the imaginary movies.

Speaking with The Daily Beast on Friday, misinformation researcher and study lead author Gillian Murphy said they did not find the results “especially concerning,” given that they don’t indicate a “uniquely powerful threat” posed by deepfakes compared to existing methods of misinformation. That said, they conceded deepfakes could be better at spreading misinformation if they manage to go viral, or remain memorable over a long period of time.

A key component of these bad-faith deepfakes’ potential success is what’s known as motivated reasoning—the tendency for people to unintentionally allow preconceived notions and biases to influence their perceptions of reality. A person shown supposed evidence in support of their existing beliefs is more likely to take that evidence at face value without much scrutiny. As such, you are more likely to believe a deepfake that favors your socio-political leanings, whereas you may be more skeptical of one that appears to “disprove” your argument.

[Related: Deepfakes may use new technology, but they’re based on an old idea.]

Motivated reasoning is bad enough on its own, but deepfakes could easily exacerbate this commonplace logical fallacy if people aren’t aware of such issues. Improving the public’s media literacy and critical reasoning skills is key to ensuring people remember a Will Smith-starring Matrix as an interesting Hollywood “What If?” instead of fact. As for whether or not such a project would have been better than the original—like many deepfakes, it all comes down to how you look at it.
