An exploration of eating disorder content on TikTok
and suggestions for improvements and interventions
Last year, the Wall Street Journal (WSJ) published numerous articles investigating how TikTok’s algorithm serves potentially harmful content to minors. As healthcare providers working to heal people impacted by disordered eating, we knew after reading the WSJ articles that we had to understand this problem more thoroughly.
So, our team of researchers went to work exploring TikTok's algorithm, trying to understand how harmful content still finds its way into users' feeds even as TikTok tells the world it is solving these issues. Our aim? To see what TikTok is missing in its attempt to stop this problem. Below you will learn about what we did, what we found, and our suggestions and solutions to these problems. The best part? You can be part of the solutions too.
In its reporting, the Wall Street Journal focused on eating disorders, exploring how TikTok algorithmically pushes users down a content rabbit hole, which can include or even actively promote harmful video content. The WSJ then covered the ways TikTok has worked to reduce this cycle, as well as the prevalence and promotion of harmful content.
To its credit, TikTok has instituted measures to reduce the prevalence of harmful content related to disordered eating, but have these changes gone far enough? Content creators are employing devious methods to exploit TikTok’s algorithmic flaws and loopholes to “game the system” and promote their own videos, regardless of the consequences.
So far, it seems, TikTok has been unable to keep up.
Again, our investigation illustrates that there is still a long way to go to solve the problem but also presents a variety of solutions we hope TikTok will apply. These suggestions should theoretically benefit users across the board but are particularly vital to people with disordered eating.
There are three things at the heart of most of TikTok’s issues with harmful content:
There's a whole host of reasons content creators post the content they do. In most cases, those reasons hinge on a desire for attention, either positive or negative, or the fulfillment of a particular agenda. Setting aside the psychology that drives people to post what they do, we'll focus on problems two and three as they pertain to TikTok, where we hope to identify flaws and propose solutions.
The first way TikTok content creators — and content consumers — get around the current filters is through misspellings and misused keywords.
While TikTok has many eating disorder terms on a “block” list, that list is far from complete, is inconsistent (as we’ll soon see), and has a gaping hole when it comes to misspellings.
Given how advanced TikTok's algorithm is at serving content "inspired by" its users' interests, it seems reasonable to expect that more can be done to protect users against harmful content. Whether those users are stumbling on this content accidentally or intentionally using misspelled search terms to get around the filters, there is plenty of disordered eating content still being served up.
Suggested solution: TikTok could begin by updating blocklist filtering with a complete list of keywords directly related to disordered eating that are still live on-site.
This word cloud represents non-blocked hashtags related to eating disorders. The total volume of views associated with these hashtags exceeds 1.3 billion.
As a result, many misspelled, problematic keywords still garner millions of views between them.
While the hashtag search for “anorexic” returns results within web searches, the general search for videos does not. This discrepancy makes it difficult to discern what is truly being covered or protected across the various search options. While hashtag searches seem the least stringent, it is not uncommon to uncover hashtags that are blocked in hashtag search while still being allowed in general or user searches.
Suggested solution: TikTok could apply blocklist filtering evenly across all search options, including video, tag, and user searches, as well as between web and app searches.
It is common to find videos containing explicitly filtered keywords by searching for those keywords with intentional misspellings or other obfuscations. An example of this can be seen in the data returned by TikTok’s server in response to a general search query for the keyword “anorexic.” You’ll notice the result set includes videos carrying blocked tags (like “anorexia”).
This shows that TikTok’s search engine will often match incorrectly spelled words to their correctly spelled counterparts. In many cases this basic search behavior is beneficial, but it can also expose users to harmful content when intentionally abused.
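To make that fuzzy matching concrete, here is a minimal Python sketch (the blocked terms and the similarity threshold are illustrative assumptions on our part, not TikTok's actual implementation) showing how an edit-distance-style comparison can resolve common misspellings back to a blocked term:

```python
import difflib

# Illustrative blocked terms; TikTok's real list is not public.
BLOCKLIST = ("anorexia", "anorexic", "thinspo")

def matches_blocklist(query: str, cutoff: float = 0.8) -> bool:
    """Return True for an exact blocked term or a close misspelling.

    difflib scores similarity with SequenceMatcher, so a query like
    "anorexik" still resolves to the blocked term "anorexic".
    """
    q = query.lower().strip()
    if q in BLOCKLIST:
        return True
    return bool(difflib.get_close_matches(q, BLOCKLIST, n=1, cutoff=cutoff))
```

The same mechanism that helpfully corrects a typo in a benign search is what lets an intentional misspelling retrieve filtered content, which is why the blocklist check has to be at least as forgiving as the search matcher itself.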
Each of TikTok’s primary search channels — user search, video search, and hashtag search — is potentially vulnerable to misspelling and homoglyph filter evasion because the algorithm either isn’t sophisticated enough to block these permutations or hasn’t been programmed to account for them.
Suggested solution: Create a crowdsourced list of additional keywords to add to the TikTok blocklist based on iterations of misspelling techniques and homoglyph obfuscation.
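As a sketch of what homoglyph-aware filtering could look like, a normalizer can fold look-alike characters into their plain equivalents before the blocklist lookup. The confusables map below is a tiny illustrative subset (a production system would draw on the full Unicode confusables data), and the blocked term is our own example:

```python
import unicodedata

# Tiny illustrative confusables map; a real system would use the
# full Unicode confusables tables (UTS #39).
HOMOGLYPHS = {
    "а": "a",  # Cyrillic a
    "е": "e",  # Cyrillic e
    "о": "o",  # Cyrillic o
    "0": "o",
    "1": "l",
    "3": "e",
    "@": "a",
    "$": "s",
}

def normalize(term: str) -> str:
    """Fold a search term to a canonical form before blocklist lookup."""
    # NFKC collapses fullwidth and stylized variants (e.g. "ａ" -> "a").
    folded = unicodedata.normalize("NFKC", term).lower()
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in folded)

BLOCKLIST = {"anorexia"}  # illustrative

def is_blocked(term: str) -> bool:
    return normalize(term) in BLOCKLIST
```

With normalization in place, a crowdsourced list of observed evasions can feed the confusables map rather than the blocklist itself, so one mapping catches every term it applies to.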
Additionally, when a hashtag search trips a filter, users do not always see the same recommended support resources shown within user search or video search, at least not when searching on the web (as opposed to the app).
Suggested solution: We would like to see TikTok utilize one blocklist representing a single “source of truth.” Apply this blocklist within all search fields and provide the same messaging when triggered across all channels. Integrate blocklist filtering into search results, not just queries. Ensure identical behavior in the app and on the web.
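One way to realize a single source of truth is a shared checkpoint that every search surface calls before returning anything. The sketch below is hypothetical (the function names, message text, and result fields are our own, not TikTok internals), but it shows both halves of the suggestion: the same check on every channel, and filtering applied to result sets as well as queries:

```python
# One shared "source of truth" consumed by every search surface:
# video, hashtag, and user search, on web and in-app alike.
BLOCKLIST = {"anorexia", "anorexic"}  # illustrative terms

SUPPORT_MESSAGE = "If you or someone you know is struggling, help is available."

def check_query(query: str) -> dict:
    """Single checkpoint every search channel calls before returning results."""
    blocked = query.lower().strip() in BLOCKLIST
    # The same support resources appear regardless of channel.
    return {"blocked": blocked, "message": SUPPORT_MESSAGE if blocked else None}

def search(channel: str, query: str, backend) -> dict:
    """Uniform wrapper: video, hashtag, user, web, and app all route here."""
    verdict = check_query(query)
    if verdict["blocked"]:
        return {"results": [], "message": verdict["message"]}
    # Filter result sets too, not just queries: drop items whose tags
    # contain blocked terms, even when the query itself was benign.
    results = [r for r in backend(channel, query)
               if not any(t in BLOCKLIST for t in r.get("tags", []))]
    return {"results": results, "message": None}
```

Because every channel routes through the same two functions, updating the blocklist or the support messaging in one place changes behavior everywhere at once, which is exactly the consistency the current inconsistent filtering lacks.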
Twitter isn’t the only place users complain about ED content on TikTok. The popular subreddit /r/edanonymous, with over 74,000 community members, has many posts complaining about how triggering and toxic content on TikTok can be, describing triggering trends, covert pro-ED behaviors, and more.
In much the same way search engines and email platforms must fight spam, social media apps and websites are challenged to identify and block unwanted content before it reaches their users. In an arms race of platform versus creator, the onus of responsibility still lies on the platform to shield its users from harm, especially the underage and most vulnerable.
While TikTok has instituted various measures to reduce the prevalence of harmful content related to disordered eating, there are a variety of additional measures that can be taken.
Suggested solution: TikTok should monitor social media and niche recovery communities to gauge how its improvements and mitigation efforts are affecting users. Engaging outspoken critics and creators to crowdsource solutions together, such as community blocklist building and monitoring, is an approach any social media application facing similar challenges could adopt.
Having identified the problems on TikTok’s platform related to triggering, toxic, or pro-eating disorder-related content, let's review what can still be done.
Hiring content creators in the eating disorder space, or crowdsourcing for feedback, would allow TikTok to continue to improve its handling of subversive content. Working together, creators, users, and TikTok should look to: