
Back in March, Zoom released what appeared to be a standard update to its Terms of Service. Over the last few days, however, the legal fine print has gone viral thanks to Alex Ivanovs of Stack Diary and other eagle-eyed readers perturbed by the video conferencing company’s stance on harvesting user data for AI and algorithm training. In particular, the ToS seemed to suggest that users’ “data, content, files, documents, or other materials,” along with autogenerated transcripts, visual displays, and datasets, could be used for Zoom’s machine learning and artificial intelligence training purposes. On August 7, the company issued an addendum attempting to clarify how it uses customer data for internal training. Privacy advocates, however, remain concerned, arguing that the current terms are still invasive, overreaching, and potentially contradictory.

According to Zoom’s current, updated policies, users still grant the company a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license… to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process” their vaguely defined “customer content.” As Motherboard highlighted on Monday, another portion of the ToS states that users grant the company the right to use this content for Zoom’s “machine learning, artificial intelligence, training, [and] testing.”

[Related: The Opt Out: 4 privacy concerns in the age of AI]

In response to the subsequent online backlash, Zoom Chief Product Officer Smita Hashim explained via a company blog post on August 7 that the newest update now ensures Zoom “will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.” Some security advocates, however, are skeptical about the clarifications.

“We are not convinced by Zoom’s hurried response to the backlash from its update,” writes Caitlin Seeley George, campaigns and managing director of the privacy nonprofit Fight for the Future, in an emailed statement. “The company claims that it will not use audio or video data from calls for training AI without user consent, but this still does not line up with the Terms of Service.” In Monday’s company update, for example, Zoom’s chief product officer states that customers “create and own their own video, audio, and chat content,” but maintains that Zoom retains “permission to use this customer content to provide value-added services based on this content.”

[Related: Being loud and fast may make you a more effective Zoom communicator]

According to Hashim, account owners and administrators can opt out of Zoom’s generative AI features, such as Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose, via their personal settings. That said, visual examples provided in the blog post show that video conference attendees’ only apparent options in these circumstances are to either accept the data policy or leave the meeting.

“[It] is definitely problematic—both the lack of opt out and the lack of clarity,” Seeley George further commented to PopSci.

Seeley George and Fight for the Future also highlight that this isn’t the first time Zoom has found itself under scrutiny for allegedly misleading customers about its privacy practices. In January 2021, the Federal Trade Commission approved a final settlement order over previous allegations that the company misled users about video meeting security, along with “compromis[ing] the security of some Mac users.” From at least 2016 until the FTC’s complaint, Zoom touted “end-to-end, 256-bit encryption” while actually offering lower levels of security.

Neither Zoom’s ToS page nor Hashim’s blog update currently links to any direct steps for opting out of content harvesting. Zoom press representatives had not responded to PopSci’s request for clarification as of this writing.