
A free mental health service offering online communities a peer-to-peer chat support network is facing scrutiny after its co-founder revealed the company briefly experimented with using an AI chatbot to generate responses, without informing recipients. Although the co-founder has since attempted to downplay the experiment and highlight its shortcomings, critics and users alike are expressing deep concerns regarding medical ethics, privacy, and the buzzy, controversial world of AI chatbot software.

As highlighted on Tuesday by New Scientist, Koko was co-founded roughly seven years ago by MIT graduate Rob Morris, whose official website bills the service as a novel approach to making online mental health support “accessible to everyone.” One of its main services allows clients such as social network platforms to install keyword-flagging software that can then connect users to psychology resources, including human chat portals. Koko is touted as particularly useful for younger social media users.
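
For illustration only, here is a minimal sketch of what keyword-based flagging of that kind might look like. The keyword list, resource names, and function are hypothetical and are not drawn from Koko’s actual software.

```python
# Hypothetical sketch of keyword flagging, not Koko's actual code:
# scan a post for crisis-related terms and, if any match, surface
# support resources to the person who wrote it.

CRISIS_KEYWORDS = {"hopeless", "worthless", "self-harm", "can't go on"}

SUPPORT_RESOURCES = [
    "Peer-to-peer support chat",
    "Guided self-help exercises",
]

def flag_post(text: str) -> list:
    """Return suggested resources if the post contains any flagged keyword."""
    lowered = text.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return SUPPORT_RESOURCES
    return []

if __name__ == "__main__":
    print(flag_post("Lately everything feels hopeless."))
```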


Last Friday, however, Morris tweeted that approximately 4,000 users were “provided mental health support… using GPT-3,” the popular large language model developed by OpenAI. Users weren’t chatting directly with GPT-3; instead, a “co-pilot” system had human support workers review the AI’s suggested responses and use them as they deemed relevant. As New Scientist also notes, it does not appear that Koko users received any up-front alert letting them know their mental health support was potentially generated, at least in part, by a chatbot.
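
As a rough illustration of that human-in-the-loop arrangement, the sketch below shows how a supporter might review and approve a machine-drafted reply before it is sent. The draft_reply stub and the review flow here are assumptions made for illustration; they do not represent Koko’s implementation or OpenAI’s API.

```python
# Hypothetical sketch of a "co-pilot" review flow, not Koko's implementation:
# a language model drafts a reply, and a human supporter decides whether to
# send it, edit it, or discard it before anything reaches the person in need.

from typing import Optional

def draft_reply(message: str) -> str:
    """Stand-in for a call to a language model such as GPT-3."""
    return "I'm sorry you're going through this. You're not alone, and reaching out was a good step."

def copilot_review(message: str) -> Optional[str]:
    """Show the AI draft to a human supporter and return whatever reply they approve."""
    draft = draft_reply(message)
    print(f"Incoming message: {message}")
    print(f"AI-suggested reply: {draft}")
    choice = input("Send as-is (s), edit (e), or discard (d)? ").strip().lower()
    if choice == "s":
        return draft
    if choice == "e":
        return input("Type your edited reply: ")
    return None  # Discarded; the supporter writes their own reply instead.

if __name__ == "__main__":
    approved = copilot_review("I've been feeling really low this week.")
    print("Sent:", approved if approved else "(no AI-assisted reply)")
```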

In his Twitter thread, Morris explained that, while audiences rated AI co-authored responses “significantly higher” than human-only answers, the company quickly pulled the program; once people were made aware of the messages’ artificial origins, he said, “it didn’t work.”

“Simulated empathy feels weird, empty,” wrote Morris. Still, he expressed optimism about AI’s potential roles within mental healthcare, citing previous projects like Woebot, which alerts users from the outset that they will be conversing with a chatbot.


Morris’ descriptions of the Koko endeavor prompted near-immediate online backlash, leading him to issue multiple clarifications regarding “misconceptions” surrounding the experiment. “We were not pairing people up to chat with GPT-3, without their knowledge. (in retrospect, I could have worded my first tweet to better reflect this),” he wrote last Saturday, adding that the feature was “opt-in” while it was available.

“It’s obvious that AI content creation isn’t going away, but right now it’s moving so fast that people aren’t thinking critically about the best ways to use it,” Caitlin Seeley, campaign director for the digital rights advocacy group Fight for the Future, told PopSci in an email. “Transparency must be a part of AI use—people should know if what they’re reading or looking at was created by a human or a computer, and we should have more insight into how AI programs are being trained.”


Seeley added that services like Koko need to be “thoughtful” about what they purport to provide, as well as remain critical of AI’s role in those offerings. “There are still a lot of questions about how AI can be used in an ethical way, but any company considering it must ask these questions before they start using AI.”

Morris appears to have heard the critics, although it remains unclear what comes next for the company or its plans for chat AI. “We share an interest in making sure that any uses of AI are handled delicately, with deep concern for privacy, transparency, and risk mitigation,” Morris wrote on Koko’s blog over the weekend, adding that the company’s clinical advisory board is meeting to discuss guidelines for future experiments, “specifically regarding IRB approval.”