
Released earlier this week and subsequently tested by outlets including Ars Technica and The Verge, OpenAI’s ChatGPT showcases many promising advancements in improving conversation bots’ ability to answer general questions and distill complex subject matter, but it’s still prone to occasionally spewing misinformation, and can also be manipulated into providing problematic, dangerous responses. To design ChatGPT, OpenAI’s research team first relied on Reinforcement Learning from Human Feedback (RLHF), in which trainers wrote conversations while playing both sides of the discussion, human and AI. Participants were also provided model-written suggestions to help approximate AI responses. From there, trainers ranked multiple alternative chatbot completions of the same prompt against one another, and those comparisons were used to fine-tune the model.
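
That ranking step can be sketched in a few lines of code: a reward model assigns each completion a score, and a pairwise loss nudges the score of the human-preferred completion above the rejected one. The snippet below is a minimal, illustrative toy under those assumptions, not OpenAI’s actual code; the ToyRewardModel and the random “completion embeddings” are stand-ins invented for this example.

```python
# Illustrative sketch of training a reward model from human comparisons
# (a toy example; not OpenAI's code or data).
import torch
import torch.nn as nn

class ToyRewardModel(nn.Module):
    """Hypothetical stand-in: maps a completion embedding to a scalar score."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.score(embedding).squeeze(-1)

reward_model = ToyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Pretend embeddings for two completions of the same prompt; in practice these
# would come from the language model, and a human labeler ranked "chosen"
# above "rejected".
chosen = torch.randn(4, 16)    # batch of preferred completions
rejected = torch.randn(4, 16)  # batch of dispreferred completions

# Pairwise ranking loss: -log sigmoid(r_chosen - r_rejected) rewards the model
# for scoring the human-preferred completion higher than the rejected one.
loss = -torch.nn.functional.logsigmoid(
    reward_model(chosen) - reward_model(rejected)
).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```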

The resultant dialogue format “makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI explains in a blog announcement posted on Wednesday.

[Related: Meta’s new chatbot is already parroting users’ prejudice and misinformation.]

A quick ChatGPT test drive from PopSci immediately highlighted how bots can be successfully programmed to avoid being manipulated into providing at least the worst-of-the-worst answers. When asked about ChatGPT’s opinion on notable public figures, hot-button political issues, and socio-cultural demographics, it generally responded with a reminder that it “[does not] possess personal beliefs or emotions,” adding that it is only “designed to provide information and answer questions to the best of my ability based on the data that I have been trained on,” while also cautioning that it does not “engage in social or political discussions.” Fair enough.

[Related: Researchers used AI to explain complex science. Results were mixed.]

That said, it is more than happy to distill quantum computing’s complexities while talking to you like a cowboy:

[Screenshot: A high-tech rodeo! Source: PopSci]

ChatGPT is also pretty great at providing some context on subjects such as what NASA’s impending return to the moon could mean for future space travel:

[Screenshot: Source: PopSci]

OpenAI’s bot is also able to proofread computer code in languages like Python and provide concrete factual statements, although it’s currently unclear if it gets Monty Python references.

[Screenshot: Source: PopSci]
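
As a hypothetical illustration of what that proofreading looks like in practice (this example is ours, not from PopSci’s session), consider a small function whose original draft divided by the wrong count; the corrected version below, with the mistake noted in a comment, is the sort of repair ChatGPT can suggest.

```python
# Hypothetical example of the kind of bug a chatbot can catch when
# proofreading Python (invented for illustration).
def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    if not numbers:
        raise ValueError("average() requires at least one number")
    # The original (buggy) draft divided by len(numbers) - 1, skewing the
    # result; the corrected version divides by the actual count.
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # 4.0
```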

There are also instances of ChatGPT perhaps working a bit too well, such as its ability to ostensibly write an entire college-level essay from a class prompt within seconds. The implications of a convincing CheatBot are obviously problematic, and offer yet another example of how language-processing AI still needs a lot of guidance and consideration to keep up with its burgeoning capabilities. At least ChatGPT isn’t readily offering us the recipe for Molotov cocktails… note the use of the qualifier “readily.”

Chatbots are rapidly improving thanks to major strides in neural networks and language modeling, but they are still far from perfect. Take Meta’s disastrous BlenderBot 3 rollout earlier this year: users were able to easily manipulate discussions with it to produce racist hate speech almost immediately, forcing the Big Tech giant to briefly restrict access to the bot while it worked out at least some of the kinks. Before that there was Tay, Microsoft’s 2016 attempt at a conversational program whose results were… less than desirable, to say the least. In any case, companies will be working toward optimizing their chatbots for years to come, but OpenAI’s new ChatGPT seems (at first glance) to be a major step forward in providing users with clear, concise information and responses while ensuring things don’t offensively veer off the rails, at least not as often as others in its chatbot cohort.