Is ChatGPT groundbreaking? These experts say no.

Meta's chief AI scientist claims that Google, Meta, and half a dozen startups are working on very similar models.
In this photo illustration, a silhouetted woman holds a smartphone with the OpenAI logo displayed on the screen. Photo illustration by Rafael Henrique/SOPA Images/LightRocket via Getty Images.


ChatGPT, OpenAI’s AI-powered chatbot, has been impressing the public, but AI researchers aren’t convinced it’s breaking any new ground. In an online lecture for the Collective[i] Forecast, Yann LeCun, Meta’s chief AI scientist and a Turing Award recipient, said that “in terms of underlying techniques, ChatGPT is not particularly innovative,” and that Google, Meta, and “half a dozen startups” have very similar large language models, according to ZDNet.

While this might read as a Meta researcher upset that his company isn’t in the limelight, he actually makes a good point. But where are these AI tools from Google, Meta, and the other major tech companies? Well, according to LeCun, it’s not that they can’t release them; it’s that they won’t.

Before we dive into the nitty-gritty of what LeCun is getting at, here’s a quick refresher on the conversation around ChatGPT, which was released to the public late last year. It’s a chatbot interface for OpenAI’s commercially available Generative Pre-trained Transformer 3 (GPT-3) large language model, which was released in 2020. GPT-3 was trained on 410 billion “tokens” (roughly, small chunks of words and text) and is capable of writing human-like text, including jokes and computer code. While ChatGPT is the easiest way for most people to interact with GPT, there are more than 300 other tools based on the model, the majority of them aimed at businesses.
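To make the idea of a token more concrete, here’s a minimal Python sketch using OpenAI’s open-source tiktoken tokenizer. This is our own illustration, not something from LeCun’s talk or OpenAI’s reporting, and the exact way a sentence splits into tokens depends on which encoding you load, so the fragments mentioned in the comments are illustrative assumptions.

    import tiktoken  # OpenAI's open-source tokenizer library

    # Load a byte-pair-encoding tokenizer; "cl100k_base" is one of the
    # encodings tiktoken ships with (GPT-3 itself used an earlier one).
    enc = tiktoken.get_encoding("cl100k_base")

    text = "ChatGPT writes human-like text."
    token_ids = enc.encode(text)                    # a short list of integer IDs
    pieces = [enc.decode([t]) for t in token_ids]   # the text chunk each ID stands for

    print(len(token_ids), "tokens")
    print(pieces)  # fragments such as 'Chat', 'G', 'PT', ' writes', ...
                   # (the exact split varies by encoding)

A model like GPT-3 never sees whole sentences, only long sequences of these IDs, which is why "410 billion tokens" is the unit researchers use to describe the size of its training data.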

From the start, the response to ChatGPT has been divided. Some commenters have been very impressed by its ability to spit out coherent answers to a wide range of questions, while others have pointed out that it’s just as capable of spinning total fabrications that merely adhere to English syntax. Whatever ChatGPT says sounds plausible, even when it’s nonsense. (AI researchers call this “hallucination.”)

For all the think pieces being written (including this one), it’s worth pointing out that OpenAI is an as-yet-unprofitable startup. Its DALL-E 2 image generator and GPT models have attracted a lot of press coverage, but it has not yet managed to turn selling access to them into a successful business. OpenAI is in the middle of another fundraising round and is set to be valued at around $29 billion after taking $10 billion in funding from Microsoft (on top of the $3 billion Microsoft has invested previously). It’s in a position to move fast and break things in a way that, as LeCun points out, more established players aren’t.

Google and Meta, meanwhile, have moved more slowly. Both companies have large teams of AI researchers (though fewer after the recent layoffs) and have published very impressive demonstrations, even as some public-access projects have devolved into chaos. For example, last year Meta’s BlenderBot chatbot started spewing racist comments and fake news, and even bashed its parent company, within a few days of its public launch. It’s still available, but it’s kept more constrained than ChatGPT. While OpenAI and other AI startups like Stability AI have been able to ride out their models’ open bigotry, Meta has understandably had to pull back. Its caution comes from the fact that it’s significantly more exposed to regulatory bodies, government investigations, and bad press.

With that said, both companies have released some incredibly impressive AI demos that we’ve covered here on PopSci. Google has shown off a robot that can program itself, an AI-powered story writer, an AI-powered chatbot that one researcher tried to argue was sentient, an AI doctor that can diagnose patients based on their symptoms, and an AI that can convert a single image into a 30-second video. Meta, meanwhile, has AIs that can win at Go, predict the 3D structure of proteins, verify Wikipedia’s accuracy, and generate videos from a written prompt. These projects represent just a small fraction of what their researchers are doing, and because the public can’t be trusted, we haven’t gotten to try most of them yet.

Now, though, OpenAI’s success might push Google and Meta to offer more publicly accessible AI demonstrations and even integrate full-on AI features into their services. According to The New York Times, Google views AI as the first real threat to its search business, has declared a “code red,” and has even corralled founders Larry Page and Sergey Brin into advising on its AI strategy. It’s expected to release upwards of 20 AI-adjacent products over the next year, and we will presumably see more from Meta too. Though given how long some Google products last after launch, we’ll see if any stick around.

 
