Building ChatGPT’s AI content filters devastated workers’ mental health, according to new report

Ensuring the popular chatbot remained inoffensive came at a cost.
Sama employees were paid as little as $1.32 an hour to review toxic content. Deposit Photos


Content moderation is a notoriously nasty job, and the burgeoning labor-outsourcing industry surrounding it routinely faces heated scrutiny over the ethics of subjecting human workers to the internet’s darkest corners. On Wednesday, Time published a new investigative deep dive into Sama, a company that recently provided OpenAI with laborers tasked solely with reading some of the worst content the internet has to offer.

Although the endeavor’s overall goal was to develop helpful and necessary internal AI filters for the popular, buzzworthy ChatGPT program, former Sama employees say they now suffer from PTSD after their tenures sifting through thousands of horrid online text excerpts describing sexual assault, incest, bestiality, child abuse, torture, and murder, according to the new report. What’s more, the report states that these employees, largely based in Kenya, were paid less than $2 an hour.

[Related: Popular youth mental health service faces backlash after experimenting with AI-chatbot advice.]

OpenAI’s ChatGPT quickly became one of last year’s most talked-about technological breakthroughs for its ability to near-instantaneously generate creative text from virtually any human prompt. While similar programs already exist, they have frequently been prone to spewing hateful and downright abusive content because they cannot internally identify toxic material amid the troves of internet writing used as generative reference points.

With well over 1 million users already, ChatGPT has been largely free of such issues (although many other worries remain), thanks in large part to an additional built-in AI filtering system meant to omit much of the internet’s awfulness. But despite their utility, current AI programs aren’t self-aware enough to notice inappropriate material on their own; they first require humans to train them to flag all sorts of contextual keywords and subject matter.
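
To see why that human labor is unavoidable, it helps to spell out the mechanics: a content filter is ultimately a classifier trained on examples that people have already labeled as acceptable or toxic. The sketch below is purely illustrative and is not OpenAI’s or Sama’s actual pipeline; it uses the open-source scikit-learn library and a handful of made-up placeholder passages to show, in miniature, how human annotations become an automated filter.

```python
# Illustrative sketch only -- not OpenAI's or Sama's actual system.
# Shows how human-labeled examples can train a simple text filter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: in practice, workers label thousands of
# real passages; these toy strings merely stand in for that labor.
texts = [
    "a friendly description of a garden",
    "graphic description of violence",
    "a recipe for vegetable soup",
    "threatening and abusive language",
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = flag as toxic

# Bag-of-words features feeding a logistic regression classifier.
filter_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
filter_model.fit(texts, labels)

# New text gets a probability of being toxic; anything above a chosen
# threshold would be blocked or routed for human review.
prob_toxic = filter_model.predict_proba(["a violent threat"])[0][1]
print(f"Estimated probability of toxicity: {prob_toxic:.2f}")
```

The key point is in the `labels` list: every one of those judgments has to come from a person who read the passage first, which is exactly the work the Sama contractors were hired to do at a vastly larger and grimmer scale.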

Billed on its homepage as “the next era of AI development,” Sama, a US-based data-labeling company that employs workers in Kenya, India, and Uganda for Silicon Valley businesses, claims to have helped over 50,000 people around the world rise above poverty via its employment opportunities. According to Time’s research, sourced from hundreds of pages of internal documents, contracts, and worker pay stubs, however, the work amounted to self-described “torture” for dozens of those workers, at take-home hourly rates of anywhere between $1.32 and $2.

[Related: OpenAI’s new chatbot offers solid conversations and fewer hot takes.]

Workers allege to Time that they worked far past their assigned hours, sifting through 150 to 250 disturbing text passages per day and flagging the content for ChatGPT’s AI filter training. Although wellness counselor services were reportedly available, Sama’s employees nevertheless experienced lingering emotional and mental tolls that exceeded those services’ capabilities. In a statement provided to Time, Sama disputed the workload figure, saying its contractors were only expected to review around 70 texts per shift.

“These companies present AI and automation to us as though it eliminates workers, but in reality that’s rarely the case,” Paris Marx, a tech culture critic and author of Road to Nowhere: What Silicon Valley Gets Wrong About Transportation, explains to PopSci. “… It’s the story of the Facebook content moderators all over again—some of which were also hired in Kenya by Sama.”

Marx argues that avoiding this kind of mental and physical exploitation would require a massive cultural reworking within the tech industry, something that currently feels very unlikely. “This is the model of AI development that these companies have chosen,” they write, “[and] changing it would require completely upending the goals and foundational assumptions of what they’re doing.”

Sama initially entered into content moderation contracts with OpenAI worth $200,000 for the project, but reportedly cut ties early to focus instead on “computer vision data annotation solutions.” OpenAI is currently in talks with investors to raise funding at a $29 billion valuation, $10 billion of which could come from Microsoft. Reuters previously reported that OpenAI expects $200 million in revenue this year and upwards of $1 billion in 2024. As the latest exposé reveals yet again, those profits frequently come at major behind-the-scenes costs for everyday laborers.

 
