Contents in the digital age: The struggle against moderation and algorithms

“Social media icons by Egbert .EGD @ #isthemessage in Kunstenlab” by HarcoRutgers is licensed under CC BY-SA 2.0.

Content Moderation in the Digital Age

In the contemporary digital landscape, social media platforms have emerged as the primary tools for human interaction, information dissemination, and self-expression (Gillespie, 2018). These platforms have quickly developed into spaces where individuals of all backgrounds and viewpoints come together to express their opinions, both positive and negative. This new digital frontier has enabled unparalleled connection, but it has also created a breeding ground for chaos and conflict.

Social media platforms have put in place a vital system known as content moderation to address the challenges posed by the unrestricted dissemination of user-generated information. Content moderation is the systematic review of content on the internet, a task that can occur both before and after content is posted (Roberts, 2019). Although this process is largely hidden from public view, it exerts a significant influence over the quality of discourse on social media.

TikTok, one of the most widely used social media platforms, has become an icon of the current digital era, where users post short video clips to engage and connect with a global audience (Clark, 2023). TikTok has experienced remarkable growth in popularity; however, it faces a number of challenges, particularly in the area of content moderation. This article explores the complex world of TikTok content moderation, examining the techniques employed, the loopholes that remain, and the effects moderation has on users.

“TikTok” by Solen Feyissa is licensed under CC BY-SA 2.0.

TikTok’s Content Moderation Methods

Content regulation is a key aspect of managing digital platforms, and TikTok’s distinctive user base presents unique governance difficulties. With more than 60% of its users belonging to Generation Z (born between 1997 and 2012), TikTok has a remarkably young user base (Muliadi, 2020). TikTok has come under considerable scrutiny from international regulators due to its youth-centric user profile and popularity with young people.

To address these concerns and uphold the safety of its users, especially those under 18, TikTok has implemented specific guidelines and content moderation measures. Users must be at least 13 years old to create an account on the platform, with additional age restrictions in some regions. TikTok offers a separate experience for users under 13, with enhanced safety measures tailored to their age group (TikTok, 2023).

TikTok’s content guidelines strictly prohibit content that could harm or exploit young people. This includes measures against Child Sexual Abuse Material (CSAM), abuse, bullying, dangerous activities, and exposure to mature themes or substances. To create a safe online environment, TikTok enforces limitations on certain features and employs strict privacy settings.

In addition to these measures, TikTok provides tools and educational resources aimed at enhancing youth safety on the platform. These initiatives underscore TikTok’s commitment to protecting its younger user base and ensuring responsible content dissemination (TikTok, 2023).

Loopholes and Exploitations on TikTok

TikTok’s algorithm supports content moderation by using machine learning to filter and flag inappropriate content: it considers user age to serve age-appropriate content, detects restricted keywords, and analyzes user behavior to identify violators of community guidelines (TikTok, 2023). It plays a crucial role in making the platform safer for users by automating content checks and assisting human moderators.
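The mechanics of such automated checks can be illustrated with a deliberately simplified sketch. This is not TikTok’s actual system; the keywords, labels, and age threshold below are all hypothetical, and real moderation pipelines rely on machine-learning classifiers rather than plain keyword lists:

```python
# Illustrative sketch of automated moderation (hypothetical, not TikTok's
# real system): flag posts containing restricted keywords, and hide
# mature-themed posts from under-18 viewers. Keyword lists are invented.

RESTRICTED_KEYWORDS = {"csam", "doxxing", "self-harm"}   # hypothetical
MATURE_KEYWORDS = {"gambling", "alcohol"}                # hypothetical

def moderate(caption: str, viewer_age: int) -> str:
    """Return a moderation decision for a single post caption."""
    words = {w.strip(".,!?").lower() for w in caption.split()}
    if words & RESTRICTED_KEYWORDS:
        return "remove"        # outright guideline violation
    if words & MATURE_KEYWORDS and viewer_age < 18:
        return "age_restrict"  # hidden from under-18 viewers
    return "allow"

print(moderate("Fun dance challenge!", 15))     # allow
print(moderate("Join my gambling stream", 15))  # age_restrict
```

Even this toy version hints at why keyword-based filtering is easy to evade: any misspelling or coded phrase slips past the list, which is part of why human moderators remain essential.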

“Woskerski Mural – Street Art : London” by Loco Steve is licensed under CC BY-SA 2.0.

While this automated system plays a crucial role in making the platform safer, it is not without its vulnerabilities. Users have discovered ways to exploit these content moderation mechanisms, allowing inappropriate content, including pornography, to infiltrate the platform. For instance, certain creators from platforms like OnlyFans have found a workaround to bypass TikTok’s guidelines and promote explicit content. They achieve this by using TikTok’s artificial intelligence art filter, originally designed to generate landscape paintings from users’ photos. This filter has been misused to create AI-generated paintings of explicit content, a trend that gained attention in late October 2022 (Sung, 2022). The exploit enables creators to disseminate explicit material while evading video removal.

TikTok’s algorithm tailors content recommendations based on a user’s past interactions, followed accounts, location, language preferences, and content creation history. This means that users who engage with inappropriate content inadvertently receive recommendations for similar explicit content on their “For You” page. This is particularly concerning given that a significant share of TikTok’s user base consists of young people.
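The feedback loop described above can be sketched with a toy recommender. TikTok’s real ranking system is proprietary and vastly more complex; the categories, video IDs, and counting logic here are purely illustrative:

```python
from collections import Counter

# Toy engagement-driven recommender (hypothetical, not TikTok's real
# system): the feed simply favors whichever content categories the user
# has interacted with most, which reinforces past viewing habits.

def recommend(history: list[str], catalog: dict[str, list[str]], k: int = 3) -> list[str]:
    """Pick k videos from the categories the user engaged with most."""
    prefs = Counter(history)  # interaction counts per category
    picks: list[str] = []
    for category, _count in prefs.most_common():
        picks.extend(catalog.get(category, []))
    return picks[:k]

catalog = {"dance": ["d1", "d2"], "explicit": ["x1", "x2"]}
# A user who engaged with borderline content keeps receiving more of it:
print(recommend(["explicit", "explicit", "dance"], catalog))
# → ['x1', 'x2', 'd1']
```

The sketch makes the concern concrete: because the ranking signal is past engagement, a single lapse in moderation can compound, feeding similar material back to the same user.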

TikTok’s vulnerability to hosting graphic, violent, and pornographic content has resulted in regulatory actions in several countries. India, Indonesia, and Pakistan have temporarily or permanently banned TikTok for hosting such content (Zeng & Kaye, 2022). These actions highlight how serious the problem is and how difficult it is for the platform to regulate content adequately when users continually evolve tactics to bypass TikTok’s rules.

The Impact on TikTok Users

TikTok’s diverse user base can experience severe and long-lasting consequences from poor content moderation. The exposure of users, especially young ones, to explicit or dangerous content is one of the most significant problems. TikTok’s algorithm-driven content recommendation system, designed to maximize user engagement, occasionally directs users to inappropriate or distressing content. This exposure can degrade the user experience, especially for younger members of TikTok’s community.

Unsettling reports and instances of TikTok users suffering from problems with content moderation are regrettably not uncommon. Cyberbullying, harassment, and hate speech regularly make headlines. Due to inadequate monitoring, users have reported receiving upsetting comments, personal attacks, and even doxxing (Echelon, 2022). These distressing experiences can have a significant negative impact on mental health, lowering self-esteem and, in severe situations, inciting self-harming behaviors.

Furthermore, the consequences of insufficient moderation include the spread of false information and the growth of harmful trends. TikTok’s large user base may unknowingly take part in dangerous challenges or follow false advice, endangering their physical and mental health. The negative effects of inadequate content moderation affect TikTok users individually, but they also call into question the platform’s responsibility to maintain a safe and supportive online environment.

TikTok’s Ethical Dilemmas and Accountability

TikTok faces a number of ethical challenges in its content moderation decisions. Complex judgments are required given the platform’s massive user base and the volume and variety of content posted daily. An ongoing ethical challenge is finding the appropriate balance between protecting freedom of expression and upholding a safe and courteous environment.

To address these concerns, TikTok has had to weigh the implications of implementing stricter content moderation policies. While tougher rules may reduce the dissemination of harmful content, they may also degrade the user experience: increased removal of content under stricter regulation may drive users away from the app. Maintaining the balance between preserving community safety and enabling content creativity is difficult.

TikTok has made efforts to respond to content moderation challenges and user concerns. The platform continually refines its guidelines and policies, engages with users, and seeks feedback to improve its moderation processes. For example, it has published transparency reports, revealing the number of content removals and the reasons behind them (TikTok, 2022). Additionally, TikTok has established external advisory committees to provide independent insights into its content policies.

The challenges in moderating video content, as compared to text-rich or static image-centric platforms, have also been noted. The technological threshold for automating the tasks of detecting and removing videos is considerably higher than for text or images (Gray & Suzor, 2020). As a result, human moderators play a crucial role in evaluating the ‘appropriateness’ of videos on digital platforms, including TikTok (Shead, 2020).


Conclusion
Like other social media platforms, TikTok has difficulty finding the right balance between content moderation and freedom of expression. With a young user population, TikTok has put in place specific regulations to safeguard its users, yet vulnerabilities persist, allowing inappropriate content to circulate. These difficulties with content filtering have real-world consequences, especially for young users who can be exposed to inappropriate or harmful content, cyberbullying, and false information. TikTok continually refines its regulations and collects user feedback, while also acknowledging the vital role played by human moderators in preserving a safe online community.


References
Clark, L. S. (2023). Book review: TikTok: Creativity and culture in short video. New Media & Society, 25(9), 2541–2543.

Echelon, U. (2022, September 14). TikTok is Poisoning Society [Video]. YouTube.

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Sung, M. (2022). Some OnlyFans creators have found a loophole to put their nudes on TikTok. NBC News.

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Smith, J. (2021). Legal and Regulatory Actions Against TikTok: A Global Perspective. Journal of Social Media Ethics, 3(2), 112-129.

Zeng, J., & Kaye, D. B. V. (2022). From content moderation to visibility moderation: A case study of platform governance on TikTok. Policy & Internet, 14, 79–95.