A look inside TikTok’s seemingly all-knowing algorithm

A leaked internal document offers clues on what information TikTok uses to curate your video stream.
[Image: TikTok on a phone screen. Credit: Solen Feyissa / Unsplash]

Since it first launched in 2016, TikTok has amassed more than 1 billion monthly users. The key to its success lies in its addictively scrollable and endless stream of videos seemingly targeted right at you. For quite some time, people have been wondering what data the ByteDance-owned micro-video-sharing app gathers on its users to learn how to feed that content machine. Now, we’re getting closer to figuring it out.

On Sunday, a reporter from The New York Times obtained an internal document from TikTok’s engineering team in Beijing that explained how likes, comments, watch time, and shares feed into a recommendation algorithm, which then rates how appealing a video is to a given user. A screenshot the Times saw also implied that a team of content moderators can see the videos you send to friends or upload privately, hinting at an even deeper level of personalization.

This simplified breakdown of the algorithm offered “a revealing glimpse both of the app’s mathematical core and insight into the company’s understanding of human nature — our tendencies toward boredom, our sensitivity to cultural cues — that help explain why it’s so hard to put down,” wrote the Times. But it also highlighted how the algorithm can steer you down a rabbit hole of toxic content that “could induce self-harm.”

The new details build on an investigation by The Wall Street Journal earlier this year that used 100 automated “bot” accounts to chart how an individual’s TikTok experience migrates from a wide variety of popular, mainstream videos to more targeted, interest-specific content. For example, a bot that the WSJ programmed to have a general interest in politics was ultimately served videos about election conspiracies and QAnon. A TikTok spokesperson pushed back against the WSJ report, saying the experiment “isn’t representative of real user behavior because humans have a diverse set of interests.”

[Related: Why Spotify’s music recommendations always seem so spot on]

According to the document the Times saw, the equation for rating videos based on user activity generally accounts for a combination of likes, comments, play, and time spent on a clip. Somewhere in there is a formula that also calculates how interesting individual creators are to viewers. “The recommender system gives scores to all the videos based on this equation, and returns to users videos with the highest scores,” the Times reported. 
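
The Times did not publish the equation itself, but the description points to a weighted combination of predicted engagement signals. The sketch below is purely illustrative: the weights, signal names, and structure are assumptions drawn from that description, not TikTok’s actual code.

```python
# Illustrative only: weights, signal names, and structure are assumptions
# based on the Times' description, not TikTok's actual formula.

def score_video(predicted_signals, weights):
    """Combine predicted engagement signals into one ranking score."""
    return sum(weights[name] * predicted_signals[name] for name in weights)

# Hypothetical predictions for one user across two candidate videos.
candidates = {
    "video_a": {"like": 0.12, "comment": 0.03, "play": 0.85, "watch_time": 34.0},
    "video_b": {"like": 0.02, "comment": 0.01, "play": 0.40, "watch_time": 6.0},
}
weights = {"like": 1.0, "comment": 2.0, "play": 0.5, "watch_time": 0.1}

# "Returns to users videos with the highest scores."
ranked = sorted(candidates, key=lambda v: score_video(candidates[v], weights), reverse=True)
print(ranked)  # ['video_a', 'video_b']
```

In a real system, the predicted signals would themselves come from machine-learned models, and the creator-interest formula the document mentions would presumably feed in as another term.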

The inventory problem

The ultimate goal is to present a content lineup that maximizes the time users spend on the app and keeps them coming back. But TikTok engineers are aware that if they feed only one type of video to a user, that person will grow bored and leave the app. To solve this, they proposed two addendums to the algorithm: show more videos from a creator the system thinks you like, and put a daily limit on videos that carry the same tags. They also considered diversifying recommendations in the “For You” tab, interspersing content you might like with content you might not ordinarily see.
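
Here is one way those two tweaks could look in practice. This is a rough sketch under my own assumptions: the tag cap, the creator boost, and all field names are invented for illustration, not taken from the leaked document.

```python
# Rough sketch of the two proposed tweaks: boost creators the user has
# engaged with, and cap how many same-tag videos are served per day.
# The threshold and field names are invented, not from the leaked document.
from collections import defaultdict

DAILY_TAG_LIMIT = 8  # hypothetical cap per tag, per user, per day

def build_feed(ranked_videos, liked_creators, served_tag_counts):
    feed = []
    for video in ranked_videos:
        tag = video["tag"]
        if served_tag_counts[tag] >= DAILY_TAG_LIMIT:
            continue  # too many videos with this tag already served today
        if video["creator"] in liked_creators:
            feed.insert(0, video)  # surface more from creators the user likes
        else:
            feed.append(video)
        served_tag_counts[tag] += 1
    return feed

ranked = [
    {"id": 1, "tag": "cooking", "creator": "chef_a"},
    {"id": 2, "tag": "politics", "creator": "pundit_b"},
    {"id": 3, "tag": "cooking", "creator": "chef_c"},
]
print(build_feed(ranked, liked_creators={"chef_c"}, served_tag_counts=defaultdict(int)))
```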

“The basic idea is that they want to have eyeballs on the page. You want to get people to use your product,” says Joshua Tucker, co-director of NYU’s Center for Social Media and Politics. 

“I think the genius of TikTok is the interface where you can go to your followers, or you can say, TikTok, show me what you think I’d like to see,” says Tucker. By doing that, TikTok has solved what Tucker calls “the inventory issue,” which plagued platforms like Facebook and, initially, Twitter as well. Machine learning now allows apps to crunch huge quantities of data and make inferences about personal preferences rather than presenting every user with the same basic options for content. Beyond your preferences, platforms also want to learn how your interactions might change depending on your network. For example, will you look at content because your friends are looking at it?

[Related: Social media really is making us more morally outraged]

Facebook is limited because, other than ads, it will only surface posts from friends and pages you follow. And for the longest time, Twitter only showed you tweets from users you followed. “If you notice the newest thing on Twitter, these ‘Topics,’ that’s solving the inventory problem,” says Tucker. “It gives you a way of getting more inventory, which means they can try more things out to see what you’d like. TikTok did that from the beginning with ‘For You.’”

TikTok’s human content moderators deal with the controversial content that the computer algorithms have a hard time sorting. They can remove content, limit who views a video, and prevent videos from being recommended or going into the feed, according to a leaked document obtained by Netzpolitik. As TikTok increasingly automates its reviewing systems, creators can appeal to human content moderators if they believe their videos were removed in error.

How well does TikTok know you?

Maybe the reason TikTok can figure out your tastes so quickly is that it has much more data on what you might like, Tucker suggests.

Last June, TikTok provided a rare inside view of its recommendation algorithm at work. In a blog post, the company wrote that data relating to user interactions (liking, commenting, or following an account), video information (captions, sounds, and hashtags), and account settings (language preference, location) are all weighed by the system to calculate a user’s interests.
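
Those three buckets of signals could be combined in any number of ways; the toy function below is just one guess at what “weighing” them might mean, with invented features and weights.

```python
# Toy illustration of combining the three signal categories TikTok's blog
# post names: user interactions, video information, and account settings.
# Every feature and weight here is invented for the sake of example.

def interest_score(user, video):
    # User interactions: does the viewer already follow this creator?
    interaction = 1.0 if video["creator"] in user["followed"] else 0.0
    # Video information: overlap between the video's hashtags and ones the user has liked.
    overlap = set(video["hashtags"]) & set(user["liked_hashtags"])
    content = len(overlap) / max(len(video["hashtags"]), 1)
    # Account settings: language and location are treated as weaker signals here.
    settings = 1.0 if video["language"] == user["language"] else 0.3
    return 0.5 * interaction + 0.4 * content + 0.1 * settings

user = {"followed": {"chef_a"}, "liked_hashtags": {"pasta", "baking"}, "language": "en"}
video = {"creator": "chef_a", "hashtags": ["pasta", "dinner"], "language": "en"}
print(interest_score(user, video))  # 0.5 + 0.4 * 0.5 + 0.1 = 0.8
```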

[Related: Why YouTube is hiding dislikes on videos]

While TikTok claims that it uses likes, comments, and shares as metrics to measure your engagement with specific content, the WSJ found that the most important element the app analyzed was the watch time on a video—whether you immediately clicked away, paused, or rewatched. The algorithm sees what you’re reacting to, and can quickly pinpoint “the piece of content that you’re vulnerable to, that will make you click, that will make you watch, but it doesn’t mean that you really like it and that it’s the content you enjoy the most,” data scientist Guillaume Chaslot told the WSJ upon reviewing its experiment. As a user’s stream becomes more niche, they’re more likely to encounter harmful content that is less vetted by moderators, according to the WSJ. This becomes a concern as TikTok’s user base tends to skew younger than that of other social media platforms like Facebook or YouTube.
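
To make that concrete, here is a toy version of a watch-time-dominated signal. The numbers are made up; the point is only that skipping, finishing, and rewatching a clip can be worth far more to a ranking system than an explicit like.

```python
# Toy example only: invented weights showing how watch behavior (completion,
# rewatches) could dominate explicit signals like likes, comments, or shares.

def engagement_signal(watch_seconds, video_length, rewatches, liked, commented, shared):
    completion = min(watch_seconds / video_length, 1.0)
    return 3.0 * completion + 2.0 * rewatches + 0.5 * liked + 0.5 * commented + 0.5 * shared

# Swiping away after one second of a 30-second clip: a weak signal.
print(engagement_signal(1, 30, 0, liked=False, commented=False, shared=False))   # 0.1
# Watching the full clip twice with no like at all: a much stronger one.
print(engagement_signal(60, 30, 1, liked=False, commented=False, shared=False))  # 5.0
```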

Continued scrutiny 

Recommendation algorithms have come under increased scrutiny ever since Facebook whistleblower Frances Haugen testified before Congress that platforms which prioritize engagement over safety risk amplifying dangerous misinformation. Lawmakers have responded by discussing possible regulatory changes that would hold platforms employing these algorithms responsible for harms that could come from recommended content.

In researching YouTube’s recommendation algorithms, Tucker has been interested in the question of whether it’s really the algorithm that’s pointing you towards specific content, or if it’s about people’s individual choices. “Either way, if there’s content on these platforms that is contributing to glorifying suicide to children, that content shouldn’t be there regardless of how you get to it,” Tucker says. 

[Related: Congress is coming for big tech—here’s how and why]

TikTok says it has been vigilant about deleting content that it finds in violation of its rules (self-harm content included). It has said in the past that it uses a combination of computers and humans to vet content. But mistakes happen, and sometimes videos get incorrectly flagged or slip through the filters.

In September, TikTok said in a news release that it was putting out new “well-being guides” to support users who share their personal experiences through the platform and suggest responsible engagement tips. It also announced that it would expand search interventions so that it could better provide crisis support resources when users looked for distressing content. 

“It’s a real challenge for these companies because they are so big,” says Tucker. “TikTok grew too fast. And this happened with Facebook: it grew too fast so it wasn’t aware of harms that were occurring in different languages, for example.”
