There’s a glaring issue with the AI moratorium letter

The statement raises some valid concerns, but critics argue its signatories missed the point.
Longtermists believe it is morally imperative humans do whatever is necessary to achieve a techno-utopia. Credit: Deposit Photos


An open letter signed on Wednesday by over 1,100 notable public figures, including Elon Musk and Apple co-founder Steve Wozniak, implores researchers to institute a six-month moratorium on developing artificial intelligence systems more powerful than GPT-4.

“[R]ecent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” reads a portion of the missive published on Wednesday by the Future of Life Institute, an organization attempting to “steer transformative technology towards benefitting life and away from extreme large-scale risks.” During the proposed six-month pause, the FLI suggests that unnamed, independent outside experts develop and implement a “rigorously audited” shared set of safety protocols, alongside potential governmental intervention.

[Related: The next version of ChatGPT is live—here’s what’s new.]

But since the letter’s publication, many experts have highlighted that a number of the campaign’s supporters and orchestrators subscribe to an increasingly popular and controversial techno-utopian philosophy known as “longtermism” that critics claim has historical roots in the eugenics movement.

Championed by Silicon Valley’s heaviest hitters, including Musk and Peter Thiel, longtermism mixes utilitarian morality with science fiction concepts like transhumanism and probability theory. Critics now worry the longtermist outlook alluded to in FLI’s letter is a diversion from the real problems with large language models (LLMs), and that it reveals the co-signers’ misunderstandings of the so-called “artificial intelligence” systems themselves.

Broadly speaking, longtermists believe it is morally imperative to ensure humanity’s survival by whatever means necessary in order to maximize the wellbeing of future lives. While some may find this reasonable enough, proponents of longtermism—alongside similar overlapping viewpoints like effective altruism and transhumanism—are primarily motivated by the hope that humans will colonize space and attain virtually unimaginable technological advancements. To accomplish this destiny, longtermists have long advocated for the creation of a friendly, allied artificial general intelligence (AGI) to boost humanity’s progress.

[Related: OpenAI’s newest ChatGPT update can still spread conspiracy theories.]

Many longtermists endorsing FLI’s letter believe rogue AI systems pose one of the most immediate “existential risks” to future humans. As generative language programs like OpenAI’s ChatGPT and Google Bard dominate news cycles, observers are voicing concerns about the demonstrable ramifications for labor, misinformation, and overall sociopolitical stability. Some backing FLI’s missive, however, believe researchers are on the cusp of unwittingly creating dangerous, sentient AI systems akin to those seen in popular sci-fi movie franchises like The Matrix and The Terminator.

“AGI is widely seen as the savior in [longtermist] narrative, as the vehicle that’s going to get us from ‘here’ to ‘there,’” Émile P. Torres, a philosopher and historian focused on existential risk, tells PopSci. But to Torres, longtermist supporters created the very problem they are now worried about in FLI’s open letter. “They hyped-up AGI as this messianic thing that’s going to save humanity, billionaires bought into this, companies started developing what they think are the precursors to AGI, and then suddenly they’re freaking out that progress is moving too quickly,” they say.

Meanwhile, Emily M. Bender, a professor of linguistics at the University of Washington and a longtime LLM researcher, highlighted similar misunderstandings among longtermists about how these programs actually work. “Yes, AI labs are locked in an out-of-control race, but no one has developed a ‘digital mind’ and they aren’t in the process of doing that,” argues Bender on Twitter.

[Related: Microsoft lays off entire AI ethics team while going all out on ChatGPT.]

In 2021, Bender co-published a widely read research paper (the first citation in FLI’s letter) highlighting their concerns with LLMs, none of which centered on “too powerful AI.” This is because LLMs cannot, by their nature, possess self-awareness—they are neural networks trained on vast text troves to identify patterns and generate probabilistic text of their own. Instead, Bender is concerned about LLMs’ roles in “concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”


Torres seconds Bender’s stance. “The ‘open letter’ says nothing about social justice. It doesn’t acknowledge the harm that companies like OpenAI have already caused in the world,” they say, citing recent reports of poverty-level wages paid to Kenyan contractors who reviewed graphic content to improve ChatGPT’s user experience.

Like many of the open letter’s signatories, Bender and their allies agree that current generative text and image technologies need regulation, scrutiny, and careful consideration, but because of these systems’ immediate consequences for living humans, not for our supposedly space-bound descendants.

 
