No, the AI chatbots (still) aren’t sentient

Experts say that personification and projections of sentience onto Microsoft and Google chatbots distract from the real issues.
Chatbots simply cannot develop personalities—they don’t even understand what “personality” is. Deposit Photos


Since testers began interacting with Microsoft’s ChatGPT-enabled Bing AI assistant last week, they’ve been getting some surreal responses. But the chatbot is not really freaking out. It doesn’t want to hack everything. It is not in love with you. Critics warn that this increasing focus on the chatbots’ supposed hidden personalities, agendas, and desires promotes ghosts in the machines that don’t exist. What’s more, experts warn that the continued anthropomorphization of generative AI chatbots is a distraction from more serious and immediate dangers of the developing technology.

“What we’re getting… from some of the world’s largest journalistic institutions has been something I would liken to slowing down on the highway to get a better look at a wreck,” says Jared Holt, a researcher at the Institute for Strategic Dialogue, an independent think tank focused on extremism and disinformation. To Holt, companies like Microsoft and Google are overhyping their products’ potentials despite serious flaws in their programs.

[Related: Just because an AI can hold a conversation does not make it smart.]

Within a week of their respective debuts, Google’s Bard and Microsoft’s ChatGPT-powered Bing AI assistant were shown to generate incomprehensible and inaccurate responses. These issues alone should have prompted a pause in the products’ rollouts, especially in an online ecosystem already rife with misinformation and unreliable sourcing.

Though human-programmed limits should technically prohibit the chatbots from generating hateful content, they can be easily bypassed. “I’ll put it this way: If a handful of bored Redditors can figure out how to make your chatbot spew out vitriolic rhetoric, perhaps that technology is not ready to enter every facet of our lives,” Holt says.

Part of this problem resides in how we choose to interpret the technology. “It is tempting in our attention economy for journalists to endorse the idea that an overarching, multi-purpose intelligence might be behind these tools,” Jenna Burrell, the Director of Research at Data & Society, tells PopSci. As Burrell wrote in an essay last week, “When you think of ChatGPT, don’t think of Shakespeare, think of autocomplete. Viewed in this light, ChatGPT doesn’t know anything at all.”

[Related: A simple guide to the expansive world of artificial intelligence.]

ChatGPT and Bard simply cannot develop personalities—they don’t even understand what “personality” is, other than a string of letters used in pattern recognition across vast troves of online text. They calculate the likeliest next word in a sentence, plug it in, and repeat ad nauseam. It’s a “statistical learning machine,” more than a new pen pal, says Brendan Dolan-Gavitt, an assistant professor in NYU Tandon’s Computer Science and Engineering Department. “At the moment, we don’t really have any indication that the AI has an ‘inner experience,’ or a personality, or something like that,” he says.
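To make the “autocomplete, not Shakespeare” analogy concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT or Bard is actually built: it just counts which word most often follows each word in a small sample text, then extends a prompt one “most likely” word at a time. Real models learn those predictions over tens of thousands of tokens with billions of parameters, but the basic loop is the same idea: predict the likeliest next token, append it, repeat.

```python
# Toy "autocomplete" sketch: count which word most often follows each word
# in a tiny sample text, then greedily extend a prompt one word at a time.
# A vastly simplified stand-in for what large language models do, not
# their actual architecture.
from collections import Counter, defaultdict

sample_text = (
    "the chatbot predicts the next word the chatbot repeats the process "
    "and the process produces fluent text"
).split()

# Build a table of word -> counts of the words that follow it.
followers = defaultdict(Counter)
for current, following in zip(sample_text, sample_text[1:]):
    followers[current][following] += 1

def autocomplete(prompt: str, length: int = 8) -> str:
    """Greedily append the most common follower of the last word."""
    words = prompt.split()
    for _ in range(length):
        candidates = followers.get(words[-1])
        if not candidates:
            break  # this word was never followed by anything in the sample
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Prints something like: "the chatbot predicts the chatbot predicts ..."
print(autocomplete("the"))
```

The toy version already shows the failure mode critics describe: the output is fluent-looking but has no understanding behind it, only frequency statistics about which words tend to follow which.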

Bing’s convincing imitation of self-awareness, however, could pose “probably a bit of danger,” with some people becoming emotionally attached to a chatbot whose inner workings they misunderstand. Last year, a blog post by Google engineer Blake Lemoine went viral and gained national coverage after claiming that the company’s LaMDA generative text model (which Bard now employs) was already sentient. The allegation immediately drew skepticism from others in the AI community, who pointed out that the text model was merely imitating sentience. But as that imitation improves, Burrell agrees it “will continue to confuse people who read machine consciousness, motivation, and emotion into these replies.” Because of this, she contends chatbots should be viewed less as “artificial intelligence” and more as tools using “word sequence predictions” to offer human-like replies.

Anthropomorphizing chatbots—whether consciously or not—does a disservice to understanding both the technologies’ abilities and their limits. Chatbots are tools, built on massive stores of prior human labor. Undeniably, they are getting better at responding to textual inputs. However, they still have troubling shortcomings, from giving users inaccurate financial guidance to spitting out dangerous advice on handling hazardous chemicals.

[Related: Microsoft’s take on AI-powered search struggles with accuracy.]

“This technology should be scrutinized forward and backwards,” says Holt. “The people selling it claim it can change the world forever. To me, that’s more than enough reason to apply hard scrutiny.”

Dolan-Gavitt thinks one reason Bing’s recent responses read like the “rogue AI” subplot of a science fiction story is that Bing itself is just as familiar with the trope. “I think a lot of it could be down to the fact that there are plenty of examples of science fiction stories like that it has been trained on, of AI systems that become conscious,” he says. “That’s a very, very common trope, so it has a lot to draw on there.”

On Thursday, ChatGPT’s designers at OpenAI published a blog post attempting to explain their processes and plans to address criticisms. “Sometimes we will make mistakes. When we do, we will learn from them and iterate on our models and systems,” the update reads. “We appreciate the ChatGPT user community as well as the wider public’s vigilance in holding us accountable.”

 
