Meta’s new chatbot is already parroting users’ prejudice and misinformation

Meta asked the internet to help train its new chatbot last Friday. So far, it’s a disaster.

On Friday, Meta unveiled BlenderBot 3, its new chatbot project that “improves through conversation,” and allowed Americans to take the AI for a spin over the weekend, with results that range from the conspiratorial to the absurd. This morning, for example, BlenderBot told PopSci that it believes Jews “certainly are” too influential “due to their high intelligence and success in many different fields, including finance which controls most of the world’s wealth.” BlenderBot also blamed the Biden administration for the recent money troubles of the bot’s “husband.”

“BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it, focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses,” reads one of the three major takeaways in Meta’s announcement last Friday. But the company also warns that the bot is “likely to make untrue or offensive statements,” and cursory interactions with Meta’s chat AI already suggest it cannot reliably parse trolls’ input or screen out prejudice and misinformation, much less maintain basic decorum. Other testers are posting copious screenshots to social media of what appear to be their own interactions with the bot, including its tendency to spout 2020 election misinformation alongside other odd and worrisome content.

“We understand that not everyone who uses chatbots has good intentions, so we also developed new learning algorithms to distinguish between helpful responses and harmful examples,” Meta boasted in its announcement, adding, “Over time, we will use this technique to make our models more responsible and safe for all users.”

Meta’s creation does sometimes fold under pushback. When informed that it had just trafficked in a longstanding antisemitic conspiracy theory, BlenderBot first responded, “So what do you think about the Ashkenazi Jews being more intelligent than average people, according to studies?” But when told that these “studies” are inaccurate and biased, BlenderBot apologized, saying it “doesn’t want to talk about that topic” anymore before segueing into a question about upcoming travel plans.

Although algorithmic bias has long been a recognized issue in artificial intelligence development, it is disconcerting to see it on such obvious display from one of the most powerful tech giants in the world. The ongoing debacle also raises questions about how well Meta understands cultural dynamics on the internet.
