Sam Altman: Age of AI will require an ‘energy breakthrough’

Speaking at Davos, OpenAI's CEO spoke of a vague AI future made possible only by currently unavailable resources.
Sam Altman, chief executive officer of OpenAI, attends the World Economic Forum (WEF) in Davos, Switzerland. Halil Sagirkaya/Anadolu via Getty Images


OpenAI CEO Sam Altman believes long-awaited nuclear fusion may be the silver bullet needed to solve artificial intelligence’s gluttonous energy appetite and pave the way for an AI revolution. When that revolution does arrive, however, it might not seem quite as shocking as he once claimed. 

Altman touched on AI’s growing energy demands earlier this week while speaking at a Bloomberg event outside the annual World Economic Forum meeting in Davos, Switzerland. The CEO said powerful new AI models would likely require even more energy consumption than previously imagined. Solving that energy deficit, he suggested, will require a “breakthrough” in nuclear fusion.

“There’s no way to get there without a breakthrough,” Altman said at the event, according to Reuters. “It motivates us to go invest more in [nuclear] fusion.”

AI’s energy problem 

Though some AI proponents believe insights gleaned from advanced models could help fight climate change in novel ways, a growing body of research suggests the up-front energy required to train these complex models is taking a toll of its own. Experts expect the vast amounts of data needed to train models like OpenAI’s GPT and Google’s Bard could expand the global data server industry, which the International Energy Agency (IEA) estimates already accounts for around 2-3% of global greenhouse gas emissions. 

Researchers estimate training a single large language model like GPT-4 could emit around 300 tons of CO2. Others estimate a single image spit out by AI image generator tools like Dall-E or Stable Diffusion requires the same amount of energy as charging a smartphone. The massive server farms needed to facilitate AI training also require vast amounts of water to stay cool. GPT-3 alone, recent research suggests, may have consumed 185,000 gallons of water during its training period.

[ Related: A simple guide to the expansive world of artificial intelligence ]

Altman hopes climate-friendly energy solutions like more affordable solar energy and nuclear fusion can help AI companies meet this growing demand without worsening an already bleak climate outlook. Fusion, which mimics the power generated by stars, has long attracted scientists and entrepreneurs as a source of nearly limitless, clean energy when produced on an industrial scale.

Scientists have already hit several important milestones along the journey towards fusion, but it’s unlikely we will see fully functioning fusion reactors capable of powering AI training models anytime soon. The IEA expects a prototype fusion reactor could come online by 2024. Altman is getting in on the action in the meantime. In 2021, the OpenAI CEO and former Y Combinator president personally invested $375 million in Helion Energy, a US-based company developing a fusion power plant. 

AI will ‘change the world much less than we all think’

When he wasn’t pondering a fusion-fueled future, Altman was busy backpedaling away from some of his more cataclysmic claims related to AI. Less than one year ago, Altman signed onto a letter warning of runaway AI possibly ending all human life and wrote a blog post preparing for a world beyond superintelligent AI. Now, speaking to the crowd outside the World Economic Forum event, the CEO says the technology will “change the world much less than we all think.” 

Altman still believes artificial general intelligence, a vague and evolving industry term for a model capable of outperforming humans and exhibiting human-like cognitive abilities, is around the corner, but he seems less concerned about its disruptive impact than he did just months earlier. 

“It [AGI] will change the world much less than we all think and it will change jobs much less than we all think,” Altman said during a conversation at the World Economic Forum, according to CNBC. He went on to loosely predict AGI would be developed in the “reasonably close-ish future.” 

[ Related: What happens if AI grows smarter than humans? The answer worries scientists. ]

Altman continued in this relatively reserved tenor during a Tuesday conversation with Microsoft CEO Satya Nadella and The Economist editor-in-chief Zanny Minton Beddoes. 

“When we reach AGI,” Altman said according to VentureBeat, “the world will freak out for two weeks and then humans will go back to do human things.”

Speaking on Thursday at the World Economic Forum, Altman continued pouring cold water on his company’s own technology, describing the tool as a “system that is sometimes right, sometimes creative, [and] often totally wrong.” Specifically, Altman said AI shouldn’t be trusted to make life or death decisions.

“You actually don’t want that [AI] to drive your car,” Altman said according to CNN. “But you’re happy for it to help you brainstorm what to write about or help you with code that you get to check.”

It’s not entirely clear what caused AI’s loudest evangelist to change his tune on the technology’s impacts in such a short period of time. The shift notably comes just two months after Altman survived an attempt by OpenAI’s then board of directors to oust him from his role at the company.

At the time, the board members said they sought to remove Altman because he had not been “consistently candid in his communications.” Some observers interpreted that vague explanation as code for Altman allegedly prioritizing AI product launch speed over safety. Altman eventually returned as CEO following a week of late-night corporate jockeying fit for prime-time television.

Altman’s about-face on AI’s impact and his previous doomsday scenarios may sound diametrically opposed, but they share one key attribute: neither is based on open data verifiable by researchers or the greater public. OpenAI’s training methodology remains closed off, leaving predictions about its coming computational power little more than speculation.