‘Hate is addictive’: TikTok’s new policies might do little for LGBTQ users’ safety

While LGBTQ advocates applaud the moves, they call for more impactful changes across social media platforms.
TikTok has updated its community guidelines to ban deadnaming and other forms of harassment against LGBTQ users. Solen Feyissa / Unsplash

LGBTQ people are almost four times more likely to experience violent victimization, according to the Williams Institute at UCLA School of Law. And even on social media—where many find affirming communities—they are still exposed to abuse. About 64 percent of members of the community have reportedly experienced online harassment in the past year, according to the Anti-Defamation League.

But on February 8, one major platform—TikTok—tried to make the space a little bit safer for LGBTQ users. It updated its community guidelines to explicitly ban deadnaming, misgendering, misogyny, and the promotion of so-called conversion therapy on its platform.

“Though these ideologies have long been prohibited on TikTok, we’ve heard from creators and civil society organizations that it’s important to be explicit in our Community Guidelines,” said Cormac Keenan, TikTok’s head of trust and safety, in a press release.  

Misgendering is the act of referring to a transgender person using the wrong gender, and deadnaming is referring to or revealing the former name of a transgender person without their consent. “Conversion therapy” is a scientifically discredited and harmful practice of trying to “convert” LGBTQ people to be heterosexual or cisgender. 

While LGBTQ users and advocacy groups applauded the updated community guidelines, it’s unclear how they will be enforced or how much safer they will make LGBTQ users on TikTok. The updated guidelines are also just one change out of many that LGBTQ advocates would like to see social media companies put in place.

Preventing real-world harm

Malicious misgendering and deadnaming can cause real-world harm, says Jenni Olson, the social media safety program director of GLAAD, an LGBTQ advocacy organization. For example, transgender and nonbinary youth who reported living with people who did not respect their pronouns attempted suicide at nearly double the rate of those who reported living with all people who respected their pronouns, according to a 2021 survey by the Trevor Project.  

The promotion of so-called conversion therapy can be just as dangerous. LGBTQ youth who were subjected to such practices were more than twice as likely to attempt suicide as those who were not, also according to the Trevor Project.

Homophobia and transphobia, like other forms of bigotry, run rampant on social media in part because of how platforms are designed, says Jeffrey Marsh, a trans, nonbinary author and TikTok creator with more than 500,000 followers.

“Hate is addictive,” Marsh says. If a platform cares most about engagement, they add, “hate is a very engaging activity.”

[Related: Tips for dealing with trolls on social media]

Marsh has had many of their videos go viral on the “wrong side of TikTok,” meaning the videos appear more often on the feeds of homophobic and transphobic users because those users engage with them fervently and viciously. Marsh has even received death threats, but they believe sharing even the most vile comments is an opportunity to educate others. 

“It’s part of my mission to help others understand LGBTQ people even if they’re a hateful bigot,” Marsh says. “But if I was someone with a different mental health profile, it might be a very dangerous position to be in.”

LGBTQ users and advocates applaud the changes

The updated community guidelines came after GLAAD and more than 75 other groups signed an open letter in November tasking TikTok and other social media platforms with better protecting users from homophobia, transphobia, misogyny, and racism. 

In May, GLAAD published its inaugural Social Media Safety Index, which evaluates platforms based on how they protect, or fail to protect, LGBTQ users. It also makes recommendations for how they can improve. Olson says GLAAD meets regularly with all of the platforms to offer guidance. The explicit ban of deadnaming, misgendering, and the promotion of conversion therapy was one of the recommendations of the Index, but GLAAD points out that in 2018 Twitter explicitly banned misgendering and deadnaming, and Pinterest explicitly banned the promotion of conversion therapy and “denial of an individual’s gender identity or sexual orientation.”

[Related: How to use science to talk to kids about gender]

“It is really powerful for a company to make a statement to say, ‘These are the expectations that we have for people who are using our products,’” Olson says. 

Seeing the platform they love explicitly ban these behaviors could be very affirming to LGBTQ youth, Marsh says, given TikTok’s popularity. Marsh has been creating content for years across social media platforms, including Vine, one of the predecessors of TikTok, and feels that TikTok is the first platform to really center marginalized voices. 

“It’s great that they have seen the value in calling out this specific kind of hatred. To me, that is a form of respect,” Marsh says. “At the same time, I’m not sure if the day-to-day lives of LGBTQ creators will change that much.”

Enforcement is hit-or-miss

While TikTok stated that it uses a “combination of technology and people to identify and remove violations of our Community Guidelines,” the company did not offer many details on how it will enforce the specific updated guidelines. Both Olson and Marsh are dubious of how, or if, the platform will hold users accountable.

“There’s the policy and then there’s the enforcement,” Olson says. “And the enforcement of all these things is very uneven to say the least.”

According to TikTok, its algorithms review videos before they are uploaded and flag or remove videos that appear to violate community guidelines. This applies mostly to the categories in which the technology can be the most accurate, such as the safety of minors, adult nudity and sexual activities, and violent and graphic content. 

For more nuanced categories, such as hateful behavior, harassment, and bullying, TikTok relies more on human moderators, Eric Han, TikTok’s head of US safety, said in a July press release.

Marsh thinks TikTok will rely mainly on user reports to enforce the updated community guidelines. 

Jamie Favazza, TikTok’s director of policy and safety communications, says via email: “We strive to proactively enforce our policies leveraging both technology and people, and we also value and review the reports we receive from our community.”

Transparency lacking specifics and accountability

TikTok publishes quarterly Community Guidelines Enforcement Reports that list the number of videos they have taken down, organized by general guidelines. From July to September 2021, TikTok removed more than 91 million videos, or about 1 percent of all videos uploaded to the platform, a little more than a third of which were removed automatically. Of all videos removed, about 1.5 percent were removed for “hateful behavior,” and about 5 percent were removed for “harassment and bullying.”

[Related: The early internet was a haven for trans youth]

But Olson points out that these numbers aren’t identified by specific forms of harassment, like homophobia and transphobia.

“They say how much they took down,” Olson says. “We have no idea how much they missed.” 

Though Olson applauds the updates to TikTok’s community guidelines and wishes other social media platforms would follow suit, she would like to see more regulatory oversight of platforms to ensure they are upholding their policies and keeping users safe. 

“Any other industry in America is regulated so that there are consequences for the public health and safety impact of their products,” Olson says. “We shouldn’t be having to absorb the public health and safety impact when there are things that can be done to mitigate that harm.” 

She specifically references the Social Media NUDGE Act, a bipartisan bill introduced in Congress earlier this month that aims to study and reduce harmful content on social media platforms. It plans to do this by adding levels of “friction,” such as Twitter’s policy of asking users if they’ve read an article before posting it. Olson also says that GLAAD is currently working on the 2022 Social Media Safety Index, which will include scorecards for social media platforms.

Meanwhile, Marsh acknowledges that it is difficult for social media companies to fully eradicate homophobia and transphobia from their platforms because they reflect the homophobia and transphobia in society.

“LGBTQ hate was not invented by TikTok. It was not invented by the people who use TikTok. It is a systemic global problem,” Marsh says. “It’s not the easiest thing in the world to rid your platform of LGBTQ hate, because as a society, it is still a very addictive hatred.”