Microsoft changes Bing chatbot restrictions after much AI-generated weirdness

Meanwhile, an online paper trail indicates Microsoft knew of Bing's chat problems as far back as November 2022.

It’s been a wild ride for anyone following the ongoing Microsoft Bing chatbot saga. After a highly publicized debut on February 7, both it and its immediate competitor, Google’s Bard, were almost immediately overshadowed by users’ countless displays of oddball responses, misleading statements, ethically questionable behavior, and flat-out incorrect information. After barely a week of closed testing, however, it appears Microsoft is already reining in its “new day for search” by quietly instituting a handful of interaction limits with major consequences for Bing’s earliest testers.

As highlighted by multiple sources, including many devoted Bing subreddit users, Microsoft appears to have rolled out three updates to the waitlisted, ChatGPT-integrated search engine on Friday: a 50-message daily limit per user, a cap of five exchanges per individual conversation, and a ban on discussions about Bing AI itself.

[Related: No, the AI chatbots (still) aren’t sentient.]

Although Microsoft’s new red tape might seem minor at first glance, these simple changes massively restrict what users can coax out of the Bing bot. The five-exchange conversation cap, for example, drastically curtails users’ ability to bypass the guardrails meant to keep the chatbot away from thorny topics like hate speech and harassment. Previously, such workarounds could be accomplished through a crafty series of commands, questions, and prompts, but that will now prove much harder to pull off in five or fewer moves.

Similarly, barring Bing from talking about “itself” should restrict its ability to generate inadvertently emotionally manipulative answers that users could misconstrue as the early stages of AI sentience (spoiler: it’s not).

Many users lament the introduction of a restricted Bing, arguing that the bot’s eccentricities are what made it so interesting and versatile in the first place. “It’s funny how the AI is meant to provide answers but people instead just want [to] feel connection,” Peter Yang, a product manager for Roblox, commented over the weekend. But if one thing has been repeatedly demonstrated, it’s that dedicated tinkerers consistently find ways to jailbreak the latest technologies.

[Related: Just because an AI can hold a conversation does not make it smart.]

In a February 15 blog update, Microsoft conceded that people using Bing for “social entertainment” were a “great example of where new technology is finding product-market-fit for something we didn’t fully envision.”

However, a recent online paper trail indicates the company had advance notice of issues within a ChatGPT-enabled Bing as far back as November 2022. As highlighted by tech blogger René Walter and subsequently by Gary Marcus, an NYU professor of psychology and neural science, Microsoft publicly tested a version of Bing AI in India over four months ago and received similarly troubling complaints that are still available online.

 
