
Coded language is nothing new—but the scale at which it can be deployed via social media is essentially unprecedented. Observers often compare online content moderation to games of Whack-a-Mole, in which platforms can barely stem the influx of targeted misinformation, conspiracy theories, propaganda imagery, and hate speech.

As a report from Bloomberg detailed last week, antivaxxers in particular are becoming increasingly reliant on coded language, often using emojis to convey prohibited misinformation and propaganda on social media platforms such as Facebook. What’s more, a former Facebook executive who oversaw public policy says it is becoming clear that current AI moderation programs aren’t up to the challenge, and there’s reason to believe they may never be.

“All these systems that these platforms continue to build are frankly still very much in their infancy of being able to do some of the stuff that they would like them to be able to do,” Katie Harbath tells PopSci. Harbath is CEO of the tech policy strategy group Anchor Change and a nonresident senior fellow at the Atlantic Council’s Digital Forensic Research Lab, and previously spent ten years as Facebook’s head of public policy.

[Related: The complex realm of misinformation intervention.]

Bloomberg notes that antivaxxers, meeting and conversing within groups and pages with vague names like “Died Suddenly,” continue to peddle patently false COVID-19 misinformation using phrases like “eaten the cake” to refer to getting vaccinated. “Sometimes, users claim that loved ones have taken four or five ‘slices’ of the Pfizer or Moderna vaccines, using emoji for pizza, cupcakes and various fruits to express their point,” adds the report.

Context is key for flagging and removing prohibited content, and the introduction of emoji code languages makes that task even harder for AI monitoring programs. AI content moderation uses machine learning algorithms to identify, flag, and, if needed, remove content deemed problematic, typically sexually explicit or violent images and text. Coded language and emojis, however, remain an Achilles’ heel. “[M]achines can still miss some important nuances, like misinformation, bias, or hate speech. So achieving one hundred percent clear, safe, and user-friendly content on the Internet seems almost impossible,” explains a rundown from data annotation service Label Your Data.
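To see why, consider a minimal sketch of a keyword-based filter, the simplest form of automated moderation. Everything here (the blocklist, the example posts) is hypothetical and not any platform’s actual system; it simply illustrates how coded wording slips past literal pattern matching.

```python
# Minimal sketch of why keyword filters miss emoji-coded posts.
# The blocklist and example messages are hypothetical illustrations,
# not any platform's actual moderation rules.

BLOCKED_PHRASES = ["vaccine injury", "died from the vaccine"]

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocked phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(flag_post("My aunt died from the vaccine"))         # True: literal phrasing is caught
print(flag_post("My aunt ate the cake and got sick 🍕"))   # False: coded wording sails through
```

A machine-learning classifier is more flexible than a literal blocklist, but it faces the same underlying problem: it can only catch euphemisms it has seen labeled examples of, and the code words keep changing.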

Harbath says the challenge of tackling both emojis and coded language is twofold, for AI systems and human overseers alike. “One, you have to retrain your moderators to be able to try to understand that context, and to figure out if they are trying to use this emoji,” she says. “That can be challenging, based on how much [context and material] the content moderators do or do not get.” These moderators, Harbath says, often see only a single post or message at a time, depriving them of potentially vital context for enforcement decisions.

[Related: It’s possible to inoculate yourself against misinformation.]

Trying to get ahead of these groups presents its own challenges as well. Harbath explains that updating or broadening a moderation system’s classifiers and nomenclature can also drive up false positives, creating new headaches and potentially doing more harm than good. “Most people use emojis in a relatively benign way,” she says. “… It’s a constant fight that all the platforms have to deal with.”
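A toy example makes that tradeoff concrete. Again, the rules and posts below are hypothetical illustrations: widening the filter to catch emoji codes also sweeps in benign uses of the same symbols.

```python
# Toy illustration of the false-positive tradeoff when a filter is
# broadened to catch emoji codes. All rules and posts are hypothetical.

NARROW_RULES = {"died suddenly"}
BROAD_RULES = NARROW_RULES | {"🍕", "🧁", "cake"}  # add suspected code tokens

posts = [
    "Grandpa died suddenly after his second slice 🍕",  # coded misinformation
    "Pizza night with the kids! 🍕",                    # benign
    "Baked a cake for the office party",                # benign
]

def flagged(post: str, rules: set[str]) -> bool:
    """Return True if the post contains any rule token (case-insensitive)."""
    lowered = post.lower()
    return any(rule in lowered for rule in rules)

for name, rules in [("narrow", NARROW_RULES), ("broad", BROAD_RULES)]:
    hits = sum(flagged(p, rules) for p in posts)
    print(f"{name} filter flags {hits} of {len(posts)} posts")

# The narrow filter flags 1 post; the broad filter flags all 3,
# including the two benign ones — the false positives Harbath describes.
```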

Facebook’s parent company, Meta, chose to cite its successes in this realm when reached for comment on the issue. “Attempts to evade detection or enforcement are a sign that we are effectively enforcing our policies against COVID misinformation,” Aaron Simpson, a policy communications manager at Facebook, writes via email. Simpson also notes that, since the pandemic’s onset, Facebook has removed “more than 27 million pieces of content” for violating policies regarding COVID-19 misinformation across both Facebook and Instagram. Facebook alone counts approximately 241 million American users as of this year.

Despite these many issues, there are still silver linings. AI programs continue to improve, and coded language, by its very nature, is generally only understood by people already “in the know,” which limits its usefulness for recruitment or for spreading propaganda to new audiences. It may be small consolation, but people like Harbath are wary of the alternatives, urging more digital literacy programs rather than tactics like outright banning emojis.

“You could go wholesale on banning that stuff altogether, but then they would just go [back] to coded words,” says Harbath. “You’re pretty much getting to the point of, like, ‘Just shut the internet down.'”