
Bringing light to the dark side of TikTok's algorithm

16 minute read
Updated Jan 9, 2023
Published by: Within Health Team

An exploration of eating disorder content on TikTok
and suggestions for improvements and interventions

Contents
Filter evasion
Exploiting hashtags
Misspellings
User searching
Inconsistent filtering
Autocomplete
TikTok sounds
Listening
What can be done?
How everyone can help
How you can help
WARNING: Some of the images in this article contain graphic depictions of eating disorders that may be triggering to some people. Viewer discretion is advised.

Introduction

Last year, the Wall Street Journal (WSJ) published numerous articles investigating how TikTok’s algorithm serves potentially harmful content to minors. As healthcare providers working to heal people impacted by disordered eating, we knew once we read the WSJ articles that we had no choice but to understand this problem more thoroughly.

So, our team of researchers went to work exploring TikTok's algorithm, trying to understand how harmful content still finds its way into users' feeds even as TikTok tells the world it is solving these issues. Our aim? To see what TikTok is missing in its attempt to stop this problem. Below you will learn about what we did, what we found, and our suggestions and solutions to these problems. The best part? You can be part of the solutions too.

Let's begin.

In its research, the Wall Street Journal focused on eating disorders, exploring how TikTok algorithmically pushes users down a content rabbit hole, which can include or even actively promote harmful video content. The WSJ then covered the ways TikTok has worked to reduce this cycle, as well as the prevalence and promotion of harmful content.

To its credit, TikTok has instituted measures to reduce the prevalence of harmful content related to disordered eating, but have these changes gone far enough? Content creators are employing devious methods to exploit TikTok’s algorithmic flaws and loopholes to “game the system” and promote their own videos, regardless of the consequences.

So far, it seems, TikTok has been unable to keep up.

Our investigation illustrates that there is still a long way to go to solve the problem, but it also presents a variety of solutions we hope TikTok will apply. These suggestions should benefit users across the board but are particularly vital to people with disordered eating.

There are three things at the heart of most of TikTok’s issues with harmful content:

  • The unhealthy content itself
  • Limitations with TikTok’s current content search and suggestion algorithm, which allows the company to serve up content to its users
  • Problems with the filters TikTok employs to keep harmful content from reaching its users

There’s a whole host of reasons people post and seek out the content they do. In most cases, those reasons hinge on a desire for attention, either positive or negative, or the fulfillment of a particular agenda. Putting aside the psychology that drives people to post what they do, we’ll focus on the second and third problems as they pertain to TikTok, where we hope to identify flaws and propose solutions.

Problem #1
Filter evasion

The first way TikTok content creators — and content consumers — get around the current filters is through misspellings and misused keywords.

While TikTok has many eating disorder terms on a “block” list, that list is far from complete, is inconsistent (as we’ll soon see), and has a gaping hole when it comes to misspellings.

Given how advanced TikTok’s algorithm is at serving content “inspired by” its users’ interests, it seems reasonable to expect more can be done to protect users against harmful content. Whether users are stumbling on this content accidentally or intentionally using misspelled search terms to get around the filters, there is plenty of disordered eating content still being served up.

Suggested solution: TikTok could begin by updating blocklist filtering with a complete list of keywords directly related to disordered eating that are still live on-site.

This word cloud represents non-blocked hashtags related to eating disorders. The total volume of views associated with these hashtags is greater than 1.3 billion.

Problem #2
Exploiting hashtags

There are a lot of ways for TikTok users to search, find, or otherwise stumble on content, both on TikTok and through Google searches of non-blocked terms. The first easily exploitable loophole comes courtesy of hashtag searches.

While there are many eating disorder-related keywords that TikTok now filters, many others are still unfiltered or filtered unevenly. The keyword “anorexic” is a prime example. The videos using this hashtag have more than 20 million views collectively, which is both staggering and frightening.

As a result, many problematic keywords still garner millions of views between them.

While the hashtag search for “anorexic” returns results within web searches, the general video search does not. This discrepancy makes it difficult to discern what is truly covered or protected across the various search options. While hashtag searches seem the least stringent, it is not uncommon to uncover hashtags that are blocked in one search channel while still being allowed in general or user searches.

Suggested solution: TikTok could apply blocklist filtering evenly across all search options, including video, tag, and user searches, as well as between web and app searches.

Problem #3
Misspellings

Another method of getting around keyword filters when searching is to use misspellings.
Users can often find popular misspellings through autocomplete suggestions. As a result, many misspelled, problematic keywords have millions of views.  

To complicate matters further, characters that look very similar to normal letters are currently fooling the filters. These homoglyphs are the reason users often encounter strange-looking letters, or the use of accented or foreign versions of English letters, within hashtags, comments, and text overlays. They are there to sidestep blocked terms. Users are very likely utilizing autocomplete to intentionally find keywords that evade filters.  
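As a concrete illustration, here is a minimal sketch of how a filter might fold homoglyphs and accents back to plain letters before checking a blocklist. The confusables table and blocklist are tiny, hypothetical subsets; a production system would rely on a full mapping such as Unicode's confusables data.

```python
import unicodedata

# Tiny, illustrative homoglyph table: Cyrillic look-alikes mapped to
# their Latin counterparts. A real filter would use a complete
# confusables dataset rather than this hand-picked subset.
HOMOGLYPHS = {"а": "a", "е": "e", "о": "o", "і": "i", "ѕ": "s"}

BLOCKLIST = {"anorexia", "anorexic"}  # hypothetical subset for the demo

def normalize(term: str) -> str:
    """Strip accents and swap known homoglyphs before filtering."""
    # NFKD decomposition splits accented letters into a base letter
    # plus combining marks, which we then drop.
    decomposed = unicodedata.normalize("NFKD", term.lower())
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in stripped)

def is_blocked(term: str) -> bool:
    return normalize(term) in BLOCKLIST

print(is_blocked("аnоrеxіа"))  # Cyrillic look-alikes -> True
print(is_blocked("anoréxia"))  # accented vowel      -> True
```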

Using misspellings during a search does not mean you will only be exposed to content utilizing the same misspellings.

TikTok's search engine will often correctly match incorrect words with their correctly spelled counterparts.

It is common to find videos that contain explicitly filtered keywords by searching for those keywords with intentional misspellings or obfuscations. An example of this can be seen in the data returned by TikTok’s server in response to a general search query for the keyword “anorexic.” You’ll notice the result set includes videos with blocked tags (like “anorexia”).

This shows that TikTok’s search engine will often correctly match incorrectly spelled words with their correctly spelled counterparts. In many cases, this basic search function can be beneficial but can also expose users to harmful content when intentionally abused.

Each of TikTok’s primary search channels — user search, video search, and hashtag search — is potentially vulnerable to misspelling and homoglyph filter evasion because the algorithm either isn’t sophisticated enough to block these permutations or hasn’t been programmed to accommodate them.
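One way to catch near-miss spellings is fuzzy matching against the blocklist, sketched below using Python's standard difflib module; the blocked terms and the similarity cutoff are assumptions for illustration only.

```python
import difflib

BLOCKED_TERMS = ["anorexia", "anorexic"]  # hypothetical subset

def matches_blocked(query: str, cutoff: float = 0.8) -> bool:
    """Return True if the query is a close misspelling of a blocked term."""
    # get_close_matches scores similarity via SequenceMatcher ratios.
    return bool(difflib.get_close_matches(query.lower(), BLOCKED_TERMS,
                                          n=1, cutoff=cutoff))

for q in ["anorexia", "anorexa", "anorexiic", "recipes"]:
    print(q, matches_blocked(q))
# The misspelled variants match even though an exact-match filter
# would let them through; "recipes" does not.
```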

Suggested solution: Create a crowdsourced list of additional keywords to add to the TikTok blocklist based on iterations of misspelling techniques and homoglyph obfuscation.

Problem #4
User searching

Another problematic aspect of TikTok’s search is how its algorithm chooses which videos to display for specific searches within the user search section.

A good example can be seen when searching for a popular creator who posted prolifically during her hospitalization for an eating disorder and concurrent medical issues. While many of this creator’s videos focus on her music (ostensibly the primary focus of her account), the initial, algorithmically served search results for her account are her most visually shocking and potentially triggering videos.

Why? Potentially because search results are ranked by engagement, so popular (in this case, shocking) videos are shown first.

Suggested solution: TikTok could present a different ranking algorithm for users to begin their searches. Even simply sorting by “most recent” could serve as an immediate interim fix, as sketched below.
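A minimal sketch of that interim fix, with hypothetical field names, re-ranking a creator's videos by recency instead of engagement:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    views: int        # engagement proxy; field names are hypothetical
    posted_at: float  # Unix timestamp of upload

def rank_by_engagement(videos: list) -> list:
    # Presumed current behavior: most-viewed (often most shocking) first.
    return sorted(videos, key=lambda v: v.views, reverse=True)

def rank_by_recency(videos: list) -> list:
    # Interim fix: newest first, so high-engagement shock content no
    # longer automatically leads a creator's search results.
    return sorted(videos, key=lambda v: v.posted_at, reverse=True)
```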


Problem #5
Inconsistent filtering

Like “anorexic,” other obvious keywords describing specific types of eating disorders are not blocked by TikTok’s hashtag filter. The keyword “orthorexia,” which refers to an eating disorder characterized by an obsessive focus on healthy eating, is not filtered within hashtag search on the web and currently sits at over 32 million video views.

Strangely, TikTok’s general search blocks the keyword “orthorexia” but continues to allow it via its hashtag search.

Additionally, when users trigger filters while searching for hashtags, they do not always encounter the same recommended support resources shown within user search or video search, at least not when searching on the web (as opposed to the app).

Suggested solution: We would like to see TikTok utilize one blocklist representing a single “source of truth.” Apply this blocklist within all search fields and provide the same messaging when it is triggered across all channels. Integrate blocklist checks into search results, not just queries. Ensure the same behavior within the app and web versions. A structural sketch follows.
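Structurally, a single source of truth could be as simple as one shared filtering module that every search channel imports, so web and app behavior cannot drift apart. The sketch below uses hypothetical names, a placeholder blocklist, and an assumed result shape.

```python
# blocklist.py — a single module imported by hashtag, video, and user
# search, on both web and app, so no channel can drift out of sync.
BLOCKLIST = {"anorexia", "anorexic", "orthorexia"}  # hypothetical subset

SUPPORT_MESSAGE = "You're not alone. Support resources are available."

def check_query(query: str):
    """Every channel calls this one function: (allowed, support_message)."""
    if query.lower().strip() in BLOCKLIST:
        return False, SUPPORT_MESSAGE
    return True, None

def filter_results(results: list) -> list:
    """Apply the same blocklist to result sets, not just raw queries.
    Assumes each result carries a list of hashtag strings under 'tags'."""
    return [r for r in results
            if not BLOCKLIST & {t.lower() for t in r["tags"]}]
```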

Problem #6
Autocomplete

Autocomplete is a feature that has become ubiquitous over the past decade. Within text messaging, Google search, and apps like TikTok, the purpose of autocomplete is to help users find what they are looking for without fully typing in a keyword or key phrase.

Autocomplete can also be useful as a keyword suggestion tool. When searching TikTok, users are provided autocomplete results as they type. These results appear across user, video, and hashtag searches.

While autocomplete may seem like an innocuous technology at first, in the case of ED content on TikTok, it allows users to find keywords that evade filtering.

The purpose of autocomplete for most users is to help find content more easily, quickly, and effectively. But for keywords that still evade filtering, autocomplete can act as a vector of exposure to problematic content that TikTok’s filters have not yet identified.

Popular hashtags like "diet" can also introduce users to related keywords that could be considered harmful to ED sufferers. For example, within our test account, the keyword "diet" surfaced a suggestion for the key phrase "diet hacks to lose a lot of weight."

Hashtag complete

Autocomplete can make finding the most popular and dangerous hashtags easy.

Hashtags are a critical element of TikTok’s algorithm. For example, using hashtags like “GW” (goal weight) within user bios signals that a user is actively posting about their weight loss. This is commonly used to garner support for both healthy and problematic weight loss.

In the case of autocomplete for hashtag searches, users are given view counts associated with autocomplete recommendations. This technology feature makes it incredibly easy to find content based on keyword variations that circumvent TikTok’s filters while uncovering a wider variety of variations and their associated popularity within the app. Hashtag searches can be used by creators to determine which hashtags to use within their own content.

Displaying view counts next to hashtag suggestions is likely intended to help users choose hashtags that are popular and have a large number of related videos. These hashtag views allow users to see which hashtags are being used prominently by creators in any particular niche, including the eating disorder space.

This feature presents two distinct issues:

First, as certain keywords become banned or filtered, hashtag searches can be used to find new hashtags that are trending and not yet filtered.

Second, a common technique for attracting larger volumes of views is to utilize hashtags that are popular in other creators’ posts. In this way, autocomplete inadvertently becomes a tool for users to discover popular, filter-evading keywords and promote their own potentially problematic content.

Smart complete

An otherwise helpful feature that can often serve up triggering content by “association.”

There are various methods for programmatically identifying keywords in TikTok’s database of collective searches that have similar attributes to already identified filtered keywords.

Much of the research within the natural language processing (NLP) and machine learning communities centers on identifying words that are related or that live closely together within a given vector space. In fact, much of the state-of-the-art progress in NLP, such as GPT-3, is based on predicting how individual words and phrases relate to one another.
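As a toy illustration of that idea, the sketch below flags unblocked terms whose embedding vectors sit close to an already-blocked term. The three-dimensional vectors are stand-ins; in practice, the embeddings would come from a model trained on the platform's own search logs.

```python
import numpy as np

# Toy 3-D vectors standing in for real learned word embeddings.
EMBEDDINGS = {
    "anorexia": np.array([0.90, 0.10, 0.00]),
    "anorexic": np.array([0.88, 0.12, 0.02]),
    "recovery": np.array([0.10, 0.90, 0.30]),
    "recipes":  np.array([0.00, 0.20, 0.95]),
}
BLOCKED = {"anorexia"}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_blocklist(threshold: float = 0.97) -> set:
    """Flag unblocked terms whose vectors sit close to a blocked term."""
    return {
        term for term, vec in EMBEDDINGS.items()
        if term not in BLOCKED
        and any(cosine(vec, EMBEDDINGS[b]) >= threshold for b in BLOCKED)
    }

print(expand_blocklist())  # {'anorexic'} with these toy vectors
```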

On a test account that had previously searched for filtered eating disorder keywords, we found that autocomplete sometimes suggested problematic keywords after as little as a single letter, likely a result of similar searches performed in the past. Repeated suggestions of troublesome past searches can reinforce old habits or expose users to content they are trying to avoid.
Case in point: eating disorder recovery.

TikTok’s algorithm is challenged to discern content that relates to eating disorder recovery from content that reinforces disordered eating. Autocomplete is another avenue where users may accidentally be exposed to a mix of keywords related to both types of content. This can be particularly problematic for those attempting to change old habits or behaviors who cannot exclude associated, potentially triggering content.

Suggested solution: TikTok could add additional filtering for autocomplete results, or disable them across sensitive keywords, so that smart complete cannot serve up past triggering searches. Sensitive topics should not show view count totals within tag searches. A sketch of these guardrails follows.
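A sketch of those guardrails, with assumed data shapes and hypothetical term lists: autocomplete candidates are passed through the blocklist, and view counts are stripped from sensitive suggestions.

```python
BLOCKLIST = {"anorexia"}                      # hypothetical subsets
SENSITIVE_TOPICS = {"weight loss", "diet"}

def filter_autocomplete(suggestions: list) -> list:
    """suggestions: list of {'text': str, 'views': int} dicts (assumed shape)."""
    safe = []
    for s in suggestions:
        text = s["text"].lower()
        # Drop suggestions containing a blocked term outright.
        if any(term in text for term in BLOCKLIST):
            continue
        # Keep sensitive-but-allowed suggestions, minus the view counts
        # that make popular variants so easy to discover.
        if any(topic in text for topic in SENSITIVE_TOPICS):
            safe.append({"text": s["text"]})
        else:
            safe.append(s)
    return safe
```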

Problem #7
TikTok sounds

Before TikTok became known as TikTok, it was an app called Musical.ly. One of its primary differentiators was incorporating unique sounds or song clips within videos. TikTok still uses shared sounds to help group similar videos together and operates as a viral mechanism for expanding the reach of content. This means if a user likes a particular sound or song, they can easily find more content that uses the same sound or song clip, making it an additional way to navigate TikTok’s search engine.

In the case of harmful eating disorder content and other harmful content, certain sounds or songs can be hijacked by users to promote toxic trends. Much of TikTok’s viral success comes from trends tied to unique sounds created or utilized by content creators. When particular sounds are used to support dangerous messaging, it becomes TikTok’s responsibility to identify these sounds and put protective measures in place to curb the hazardous trends they help propagate.

Previous reporting has discussed how TikTok sounds and sound effects have helped popularize pro-eating disorder content. In response, TikTok has moderated sounds that were publicly identified in this context.

However, TikTok’s moderation has not stopped the continued trend of harmful sounds or their ability to collect and curate harmful content. Some trends can serve a dual purpose.

In the following example, the “Skinny Bone Thugs” sound is used both to show off extremely thin body types and to advocate for body positivity. The videos referenced below were live at the time of this writing and demonstrate how the sound has been used as a trend to promote potentially triggering content.

Suggested solution: TikTok should consider implementing human-monitored review to better understand the actual purpose of viral sounds and to identify how those sounds are being subverted from their original meaning and adopted to covertly spread toxic messaging. This should be done until a programmatic identification solution is found.

[Four example videos using the “Skinny Bone Thugs” sound, each preceded by a content warning; all have since been removed by TikTok.]

Problem #8
Listening

TikTok is hardly the only social media platform to face criticism about the visibility of potentially harmful eating disorder content.

In fact, it seems that many people find Tumblr and Twitter do a less thorough job of filtering and moderating pro-eating disorder content. When scrolling through TikTok videos, even when utilizing filter-evading, pro-eating disorder-related keywords, most videos are geared toward recovery rather than support of unhealthy habits. TikTok tends to take individual video moderation seriously, with most overtly pro-eating disorder content removed quickly.

TikTok has also done a good job creating a keyword filter that goes beyond an obvious subset of eating disorder-related keywords. Unfortunately, despite the filter's breadth, it confusingly continues to suffer from a variety of missed opportunities and workarounds.

Despite the efforts TikTok has made thus far, there is a significant outcry about the problems that still exist on social media.

TikTok’s key feature, and the engine of its popularity, is its “For You Page,” or “FYP.” The FYP is the primary way users discover new content, driven by TikTok’s impressively prescient algorithm. The FYP shows you content that the algorithm believes you will like and interact with based on your past actions inside the app. Who you follow, the content you “like” or comment on, and the videos you watch and share all play into the algorithm’s calculations.

To serve content users will like, TikTok has the extremely challenging task of differentiating between problematic eating disorder content and recovery-focused eating disorder content. This task is arguably more important for TikTok than for other social media sites because the FYP and its algorithm are at the core of TikTok’s success and differentiation. Incorporating insights from academic research could help improve this differentiation.
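As a toy illustration of how that differentiation might begin, the sketch below trains a tiny text classifier to separate recovery-framed captions from pro-disorder ones. The six captions are invented and far too few for real use; a production system would need expert-labeled data and much more nuance.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented, minimal training set: 1 = recovery-framed, 0 = pro-disorder.
captions = [
    "one year into recovery and eating three meals a day",
    "my dietitian helped me rebuild a healthy relationship with food",
    "recovery is hard but worth it, keep going",
    "skipping meals again to hit my goal weight faster",
    "what i eat in a day to stay under my calorie limit",
    "fasting day 4, progress check",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(captions, labels)

print(model.predict(["day 10 of my recovery meal plan"]))  # likely [1]
```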

TikTok users are shown a significant portion of their content based on the whims of the algorithm, and the algorithm is the only mechanism designed to protect those at risk from triggering content.

Based on anecdotes and complaints frequently appearing in places like Reddit and Twitter, the algorithm has significant room for improvement in this regard.

Redditors are also speaking out

Twitter isn’t the only place users complain about ED content on TikTok. The popular subreddit /r/edanonymous, with over 74,000 community members, has many posts complaining about how triggering and toxic content on TikTok can be. Users complain about triggering trends, covert pro-ED behaviors, and more.

In much the same way search engines and email platforms must fight spam, social media apps and websites are challenged to identify and block unwanted content before it reaches their users. In an arms race of platform versus creator, the onus of responsibility still lies on the platform to shield its users from harm, especially the underage and most vulnerable.


Proposed solution: TikTok should monitor social media and niche recovery communities to get a reading on how its improvements and mitigation efforts are affecting users. It should also engage outspoken critics and creators to crowdsource solutions together, such as community blocklist building and monitoring, which could be adopted by any social media application facing similar challenges.

What can be done?

Having identified the problems on TikTok’s platform related to triggering, toxic, or pro-eating disorder-related content, let's review what can still be done.

While TikTok has instituted various measures to reduce the prevalence of harmful content related to disordered eating, there are a variety of additional measures that can be taken.

To summarize, the following list of suggestions represents a strong starting point for action.

Solution #1
TikTok should establish more robust filtering using its data

TikTok is more aware than anyone of the searches being utilized on its platform, and it can use this data for social good. TikTok has successfully built an impressive algorithm for recommending content, but it needs to create an equally impressive algorithm for identifying the relatedness of dangerous words.

TikTok could augment its blocklist filter with an approved word list so users can continue to post pro-recovery content (a sketch follows the list below). Taking it a step further, TikTok could also provide users with guides and tips for the best ways to post pro-recovery content, such as:

  • Which tags to use
  • How to avoid accidentally triggering others
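A minimal sketch of that approach, with hypothetical term sets, in which the approved list takes precedence so recovery tags survive filtering:

```python
BLOCKLIST = {"anorexia", "thinspo"}             # hypothetical subset
ALLOWLIST = {"anorexiarecovery", "edrecovery"}  # pro-recovery tags

def tag_allowed(tag: str) -> bool:
    tag = tag.lower()
    # Allowlist wins: recovery tags stay postable even when they
    # contain an otherwise-blocked substring.
    if tag in ALLOWLIST:
        return True
    return not any(term in tag for term in BLOCKLIST)

print(tag_allowed("anorexiarecovery"))  # True  (allowlisted)
print(tag_allowed("anorexia"))          # False (blocked)
```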

Solution #2
TikTok should modify the way search works so that evasion tactics fail

While attempting to help users by matching incorrectly spelled queries to correctly spelled words, TikTok can unintentionally expose users to harmful eating disorder content. Here are ways to address this concern:

  • TikTok’s search results should be passed through the keyword filter, not just the initial user query (see the sketch after this list).
  • If TikTok’s search engine can correctly identify related queries, that same capability should be used to expand filtering: terms closely related to harmful keywords should also be excluded from search results.
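A sketch of the first suggestion, with assumed data shapes and a placeholder retrieval step: results are passed back through the keyword filter so a misspelled query cannot surface correctly tagged harmful videos.

```python
BLOCKLIST = {"anorexia"}  # hypothetical subset

def safe_search(query: str, lookup) -> list:
    """lookup(query) stands in for TikTok's retrieval step and may
    match misspellings to correctly spelled, blocked terms."""
    if query.lower() in BLOCKLIST:  # today's check: the raw query only
        return []
    results = lookup(query)
    # Proposed second pass: drop any result carrying a blocked tag, so
    # an intentionally misspelled query can't surface blocked content.
    return [r for r in results
            if not BLOCKLIST & {t.lower() for t in r["tags"]}]
```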

Solution #3
TikTok needs to establish an opt-out or “trigger warning” system

TikTok needs to create a warning system that allows users to easily avoid eating disorder content in its app. Currently, TikTok does provide a “sensitive content” overlay on some videos.

Eating disorder recovery content, while helpful and supportive for some, can be incredibly triggering for others. Users should therefore be empowered by TikTok to report any eating disorder content, including recovery content, as needing a “trigger warning” label.

  • TikTok should allow users to individually avoid videos that have been flagged as potential triggers. This would serve as an additional, per-user content filter through which users could “opt out” of particular content (see the sketch after this list).
  • TikTok should update its “self-harm” category of reported content to make it clear that it includes pro-eating disorder content.
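A sketch of that per-user opt-out layer, with hypothetical category labels, applied to the feed after the ranking step:

```python
# Hypothetical category labels attached to videos by moderation.
user_optouts = {"user123": {"eating-disorder-recovery", "weight-loss"}}

def personalize_feed(user_id: str, ranked_videos: list) -> list:
    """Drop videos in categories this user has opted out of."""
    blocked = user_optouts.get(user_id, set())
    return [v for v in ranked_videos
            if not blocked & set(v.get("categories", []))]

feed = [
    {"id": 1, "categories": ["music"]},
    {"id": 2, "categories": ["eating-disorder-recovery"]},
]
print(personalize_feed("user123", feed))  # only video 1 remains
```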

Solution #4
TikTok should work collectively with other social media platforms to address harmful content

The issue of harmful eating disorder content being shared on social media is not TikTok’s alone. There is significant crossover of eating disorder-related content across other social channels, as popular TikToks are often repurposed as tweets, Tumblr posts, or Instagram Reels. This crossover only increases user exposure to subversive content, and it needs to be addressed.

TikTok should work with other social networks to identify filter-evading hashtags and problematic trends more quickly. By sharing filtering resources and knowledge collaboratively with sites like Twitter, Tumblr, and Instagram, TikTok could be the social media leader in creating a safe space for people in recovery.

Solution #5
TikTok needs to continue to monitor the use of sounds and trends

Much of TikTok’s success comes from the integration of viral sounds within posts. The problem this creates for filtering out subversive content is that harmful trends can emerge around fairly innocuous sounds or music. Once a particular sound’s association has been established, users tend to create similar videos with similar messaging and themes tied to that sound.

  • TikTok should continue to keep a close eye on trending sounds and music to intervene ahead of problematic trends.
  • Utilizing machine learning and clustering algorithms, TikTok could flag sounds in need of human review and potential moderation (see the sketch after this list).
  • Sounds being used to assemble a toxic collection of problematic content should be identified and moderated.
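A sketch of the clustering idea: represent each sound with simple usage features and flag the riskiest cluster for human review. The features, values, and number of clusters are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# One row per sound: [share of videos reported, share using blocked
# tags, growth rate]. Values are invented for illustration.
sound_features = np.array([
    [0.01, 0.00, 0.2],   # typical dance trend
    [0.02, 0.01, 0.4],
    [0.30, 0.45, 0.9],   # heavily reported, blocked-tag-heavy sound
    [0.28, 0.40, 0.8],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sound_features)

# Flag the cluster whose centroid has the highest report rate.
risky_cluster = int(np.argmax(kmeans.cluster_centers_[:, 0]))
flagged = np.where(kmeans.labels_ == risky_cluster)[0]
print("Sounds needing human review:", flagged)  # indices 2 and 3
```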

How everyone can help

Hiring content creators in the eating disorder space, or crowdsourcing for feedback, would allow TikTok to continue to improve its handling of subversive content. Working together, creators, users, and TikTok should look to:

  • Identify innocuous-seeming keywords that users will continue to utilize as filter-evasion tactics.
  • Incorporate better homoglyph detection and variation handling into the filtering algorithm.
  • Filter blocked keywords more completely within video text overlays.
  • Identify videos for moderation based on all aspects of user input, including hashtags, video descriptions, and text overlays.
  • Work with members of the eating disorder recovery community and medical professionals to identify possible signals of active eating disorders in users’ in-app behavior and bios.
  • Investigate opportunities to use machine learning to identify at-risk creators and users. TikTok’s recommendation algorithm works by segmenting users based on their preferences. Working to identify behavioral profiles that are potentially dangerous would be an excellent way to put this shockingly accurate segmenting technology to good use.

How you can help

Contribute to our blocklist experiment

Help us create a crowdsourced blocklist. Contribute keywords you believe should be blocked, in an effort to provide social media platforms with guidance on trending, obfuscated, and dangerous keywords.

Submit Keywords