Google’s AI contractors say they are underpaid, overworked, and ‘scared’

A new Bloomberg report sheds further light on the steep human toll of training generative AI programs.
Contractors are allegedly paid as little as $14 an hour to review copious AI responses. Deposit Photos


Thousands of outsourced contract workers are reportedly paid as little as $14 an hour to review Google Bard’s wide-ranging responses at breakneck speed to improve the AI program’s accuracy and consistency. The labor conditions, which have allegedly grown only more frantic as Big Tech companies continue their “AI arms race,” were reported on Wednesday by Bloomberg, which interviewed multiple workers at two Google-contracted companies, Appen Ltd. and Accenture Plc.

The workers, who spoke on condition of anonymity out of fear of company retaliation, also provided internal training documents that showcase Google’s complicated instructions for handling and assessing Bard responses. In one task, workers receive a user question and an AI-generated response, along with a few AI-generated target sentences and their sources. Google’s own document, however, cautioned that these answers may often “either misrepresent the information or will provide additional information not found in the [e]vidence.” According to Bloomberg, workers sometimes had as little as three minutes to complete their assessment.

[Related: Google stole data from millions of people to train AI, lawsuit says]

In some instances, Google expected workers to grade Bard’s answers “based on your current knowledge or quick web search,” the guidelines say. “You do not need to perform a rigorous fact check.” Some answers allegedly involved “high-stakes” subjects that workers are not necessarily equipped to assess quickly. One example within Google’s internal training documents asks contractors to determine the helpfulness and veracity of Bard’s dosage recommendations for the blood pressure medication lisinopril.

In the Bloomberg report, one contractor described workers as “scared, stressed, underpaid,” stating that the contractors often didn’t “know what’s going on.” This was especially prevalent as Google continued ramping up its AI product integrations in an effort to keep up with competitors such as OpenAI and Meta. “[T]hat culture of fear is not conducive to getting the quality and the teamwork that you want out of all of us,” they added.

[Related: Building ChatGPT’s AI content filters devastated workers’ mental health, according to new report.]

Google is not alone in facing allegations of unfair contractor conditions. In January, details emerged regarding working standards for outsourced OpenAI content moderators largely based in Kenya. Often paid less than $2 per hour, workers were exposed to copious amounts of toxic text inputs, including depictions of murder, bestiality, sexual assault, incest, torture, and child abuse.

Meanwhile, the very information Google contractors are expected to quickly parse and assess is also under legal scrutiny. The company has been hit with multiple class action lawsuits in recent weeks, alleging copyright infringement and the possibly illegal data scraping of millions of internet users’ online activities.
