
“Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace.” 

Those were the words Richard Nixon read on television in 1969 as he broke the terrible news to the nation that the Apollo 11 mission had failed and astronauts Neil Armstrong and Buzz Aldrin had perished while attempting the first lunar landing.

But only in an alternate reality. Nixon never had to utter those lines because Apollo 11 was a historic success, and Armstrong, Aldrin, and command module pilot Michael Collins made it safely back to Earth. But a speech was prepared for then-President Nixon in case they did not. The short film In Event of Moon Disaster shows us how that scenario would have unfolded with an incredibly convincing deepfake of Nixon delivering the disastrous news.

[Related: 7 easy ways you can tell for yourself that the moon landing really happened]

A deepfake is a combination of “deep,” meaning deep learning, and “fake,” as in fabricated. Together it’s a label for an audio or video clip that uses artificial intelligence to portray a scenario that never really happened. Usually, that consists of a person saying or doing something they never did, often without the consent of those portrayed, says Halsey Burgund, one of the directors of In Event of Moon Disaster.

While deepfakes are a recent development, they build upon a long and established line of distorted media that still exists as low-tech, impactful misinformation today. Although deepfake technology is evolving quickly, there are efforts to slow its dissemination. And while there are many malicious uses of deepfakes, there are some beneficial applications in areas like human rights and accessibility. An ongoing exhibit at the Museum of the Moving Image in New York City, Deepfake: Unstable Evidence on Screen, explores these themes with In Event of Moon Disaster as its centerpiece.

In Event of Moon Disaster is a deepfake of Richard Nixon telling the nation that Apollo 11 failed.

The difference between deepfakes and other misinformation

To make a deepfake of a person, creators have to train a computer by feeding it lots of video, audio, or images of the “target,” the person whose image and voice they want to manipulate, and the “source,” the actor modeling the words or actions they want the target to appear to say or do. To pull this off, the computer uses artificial neural networks, which are designed to work somewhat like a human brain solving a problem: they look at evidence, find patterns, and then apply those patterns to new information. Neural networks were first conceptualized in 1943 and can now be used for everything from writing a recipe to translating convoluted journal articles. Deep learning, and deepfake creation along with it, stacks many layers of these networks, enough that the system can largely train and correct itself.
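To make that training loop concrete, here is a heavily simplified sketch, in Python with PyTorch, of the shared-encoder, two-decoder setup commonly used for face-swap deepfakes. Every size, name, and design choice below is an illustrative assumption, not the actual pipeline behind In Event of Moon Disaster.

```python
# A minimal sketch of the shared-encoder / two-decoder face-swap idea.
# All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # tiny 64x64 RGB face crops, flattened, for illustration

def mlp(sizes):
    # Small fully connected stack; real deepfake models use convolutional nets.
    layers = []
    for a, b in zip(sizes, sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop the trailing activation

encoder = mlp([IMG, 1024, 256])         # shared: learns faces in general
decoder_source = mlp([256, 1024, IMG])  # rebuilds the source actor's face
decoder_target = mlp([256, 1024, IMG])  # rebuilds the target person's face

params = (list(encoder.parameters()) + list(decoder_source.parameters())
          + list(decoder_target.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(source_batch, target_batch):
    # Each decoder learns to reconstruct its own person from the shared code;
    # the loss is how the system "corrects itself" pass after pass.
    optimizer.zero_grad()
    loss = (loss_fn(decoder_source(encoder(source_batch)), source_batch)
            + loss_fn(decoder_target(encoder(target_batch)), target_batch))
    loss.backward()
    optimizer.step()
    return loss.item()

def swap(source_frame):
    # The trick after training: encode a frame of the source actor, then decode
    # it with the *target's* decoder, so the target appears to perform the
    # source's expressions and mouth movements.
    with torch.no_grad():
        return decoder_target(encoder(source_frame))
```

The key design point is that the encoder is shared while the decoders are person-specific, which is what lets the source's performance be replayed on the target's face.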

While deepfake technology might seem harmful in itself, it’s aided by how quickly social media users spread the information, often without pausing to question its source.

“Deepfakes as a production technology presents a lot of concern,” Barbara Miller, co-curator of the exhibit and deputy director for curatorial affairs at the Museum of the Moving Image, says. “I think it’s impossible to think about that concern without looking at the lightning speed that information circulates.”

But the effective spread of misinformation predates deepfakes and even social media. The exhibit showcases deepfakes in the context of the long history of “unstable nonfiction media,” Miller adds, so visitors aren’t left with the assumption that the rise of AI-driven manipulation is the source of all distrust in media. 

“These are techniques that have always existed for as long as the media itself has existed,” Burgund says. 


In the 1890s, the Edison Manufacturing Company was eager to flex the capabilities of motion pictures by capturing the Spanish-American War on camera. However, cameras in the 19th century were a whole lot clunkier than those today, making it difficult to film combat close up. So, the company scattered staged footage of American soldiers swiftly defeating enemy regiments among the real footage of marching soldiers and weaponry. The cuts stoked patriotism among American viewers, who weren’t told the difference between the real and fake scenes. 

Even today, you do not need AI to create effective and impactful disinformation. “The tried and true methods of manipulation that have been used forever are still effective,” Burgund says. Simply putting the wrong caption on a photo, without editing the image at all, can create misinformation, he explains.

Take the 2020 presidential election, for example. In the months leading up to it, Miller says there was worry that deepfakes could throw a wrench in the democratic process. However, the technology didn’t really make a big splash during the election, at least when compared to cruder forms of manipulation that were able to spread misinformation successfully.  

Using basic video editing skills, almost anyone can slice and dice footage to change its meaning or tone. These are called “cheapfakes” or “shallowfakes” (the spliced Spanish-American War videos were one of the earliest instances). The intro to In Event of Moon Disaster uses these techniques on archival footage to make it seem like Apollo 11 crashed. The directors interspersed footage of the lunar lander returning between quick cuts of the astronauts and set it to a soundtrack of accelerating beeps and static to create the anxiety-inducing illusion that the mission went awry. Because these techniques require minimal expertise and little more than a laptop, they are far more pervasive than deepfakes.

“[Shallowfakes is] where we see the widest range of damage,” says Joshua Glick, co-curator of the exhibit and an assistant professor of English, film, and media studies at Hendrix College.

In fact, some of the best-known videos suspected of being deepfakes are actually cheapfakes. In 2019, Rudolph Giuliani, then-President Donald Trump’s lawyer, tweeted a video of Nancy Pelosi in which she appeared to slur her words, leading some of her critics to assert that she was drunk. The video was found to have been edited and slowed down, but it did not use any deepfake technology.

Burgund and his co-director, Francesca Panetta, think that confirmation bias is really what aids the dissemination of deepfakes or cheapfakes, even when they’re clearly poorly made. “If the deepfake is portraying something that you want to believe, then it hardly has to look real at all,” Burgund says.

Slowing the spread of deepfakes

While it currently requires some technical know-how to create a deepfake like Burgund and Panetta’s, Matthew Wright, the director of research for Rochester Institute of Technology’s Global Cybersecurity Institute and a professor of computing security, says the technology is quickly spreading to the masses, and deepfake apps and software are already widely available.

“This is democratizing a potentially dangerous technology,” Wright says. 

There are efforts to slow or counteract the spread of deepfakes, however. While the usual impulse among tech researchers is to share methods and tools with the public, Wright says some of the experts developing new deepfake techniques have vowed to keep their results more private. There are also projects such as the Content Authenticity Initiative, a consortium of companies and organizations including Adobe, Twitter, and The New York Times, which aims to track the origins of media by watermarking files so their provenance survives even after they are edited. This is not a perfect solution, Wright says, because there are ways to bypass those checks. Still, if every video coming out of the White House, say, were digitally watermarked, it could slow or prevent the manipulation of those videos.
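The article doesn’t spell out how such provenance tracking works under the hood, so here is a deliberately simplified sketch of the general idea: bind a hash of the media file to a signed record of its source, so any later edit breaks the check. The key, field names, and workflow below are assumptions for illustration, not the Content Authenticity Initiative’s actual specification.

```python
# Toy provenance check: a signed record ties a file's hash to its stated source.
# The key and record format are hypothetical, purely for illustration.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the publisher

def sign_media(media_bytes: bytes, source: str) -> dict:
    # Create a provenance record tying this exact file to its stated source.
    record = {"source": source, "sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(media_bytes: bytes, record: dict) -> bool:
    # Recompute the hash and signature; both must match the published record.
    expected = {"source": record["source"],
                "sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(expected, sort_keys=True).encode()
    good_sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good_sig, record.get("signature", ""))

video = b"...raw video bytes..."
record = sign_media(video, source="whitehouse.gov")
print(verify_media(video, record))           # True: untouched file passes
print(verify_media(video + b"x", record))    # False: any edit breaks the check
```

A real system would use public-key signatures and carry a full edit history in the file’s metadata rather than a shared secret, but the tamper-evidence principle is the same.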

Wright is working on creating deepfake detection tools that could be used by journalists and regular internet users. (Microsoft launched a similar product in 2020.) Wright says he and his colleagues are very careful about not sharing all of the source code because it’s possible someone could create a deepfake to deceive these detectors if they had access to it. But if there’s a diversity of authenticators, there’s less of a chance of that happening.
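That “diversity of authenticators” point is essentially an ensemble argument: fooling one detector is easier than fooling several independent ones at once. A minimal sketch of the voting idea follows; the detector functions are hypothetical stand-ins, not Wright’s tools or Microsoft’s product.

```python
# Majority-vote ensemble of hypothetical deepfake detectors.
from typing import Callable, List

Detector = Callable[[bytes], float]  # returns the probability a clip is fake

def ensemble_verdict(clip: bytes, detectors: List[Detector],
                     threshold: float = 0.5) -> bool:
    # Flag the clip only if a majority of independent detectors agree.
    votes = [d(clip) > threshold for d in detectors]
    return sum(votes) > len(votes) / 2

# Toy stand-ins with different "blind spots" to mimic independent tools.
detectors = [
    lambda clip: 0.9 if b"blend-artifact" in clip else 0.1,
    lambda clip: 0.8 if len(clip) > 100 else 0.2,
    lambda clip: 0.7,  # an overly suspicious detector, always leaning "fake"
]

suspicious_clip = b"blend-artifact" + bytes(200)
print(ensemble_verdict(suspicious_clip, detectors))  # True: a majority agree
```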

“As long as multiple detection tools are actually being used against these videos, then I think overall our chances of catching [deepfakes] are pretty good,” Wright says. 

Welcome to Chechnya used deepfake technology to mask the faces of its vulnerable subjects.

The values of deepfake technology

You may have encountered the benefits of deepfakes in entertainment, like in the most recent Star Wars films, or in satire, like this Star Trek throwback with Jeff Bezos and Elon Musk’s faces subbed in. However, the technology also has utility in human rights and disability accessibility.

The Museum of the Moving Image exhibit features clips from Welcome to Chechnya, an award-winning documentary by David France that uses deepfake technology to conceal the true faces of LGBTQ activists facing persecution in the Russian republic. This allows the viewer to see the emotion of the subjects while still protecting their identities.

The technology has also been used to improve accessibility for those who have lost their voice to an illness, injury, or disability, such as Lou Gehrig’s disease, Burgund says. VocaliD, for instance, uses AI to recreate a user’s voice from old recordings for text-to-speech technology, or to help them pick a voice that best fits their personality from a bank of options.

[Related: Deepfakes could help us relive history—or rewrite it]

While Panetta and Burgund want the viewers of their deepfake to interrogate the origins of the media they encounter, they don’t want the audience to be alarmed to the point of creating a zero-trust society.

“This is not about trying to scare people into not believing anything they see,” Panetta says, “because that is as problematic as the misinformation itself.”

Just as trust in media can be weaponized, so can distrust.

As the exhibit points out, even the theoretical existence of deepfakes results in a “liar’s dividend,” where one can insinuate a real video is a deepfake to sow seeds of doubt.

In 2018, Gabonese President Ali Bongo Ondimba gave a New Year’s address after suffering a stroke and being out of the public eye as a result. His political rivals said he looked unnatural and pushed the idea that the video was a deepfake. While experts agreed the video seemed off, no one could say for sure whether it was a deepfake, and some attributed the peculiarity of Bongo’s appearance to his poor health. A week later, citing the oddness of the video, his opponents attempted a coup, but it was unsuccessful.

Wright says that he and his colleagues have started to see more of these cry-wolf situations in the political sphere than actual deepfakes circulating and causing damage. “There can be deepfakes, but they’re not that commonly used,” he says. “What you need to do is understand the source.”

For anyone who’s inundated with information while scrolling through social media and the news, it’s important to pause and ask: “How did this information reach me? Who is disseminating it? And can I trust this source?” Doing that can determine whether a deepfake (or cheapfake) becomes potent misinformation or just another video on the internet.

Deepfake: Unstable Evidence on Screen will be on display at the Museum of the Moving Image in Queens, New York through May 15, 2022.