Enhancing TikTok’s Content Moderation Landscape: A Comprehensive Review

In an era dominated by social media, TikTok stands out for its explosive worldwide impact, boasting millions of active users and a staggering volume of daily content uploads. Such rapid growth brings a pressing need for strong content moderation standards. The platform hosts a vast landscape of creativity and expression, but also real pitfalls in the absence of effective moderation. This paper examines the complicated realm of TikTok’s content moderation, assessing its current state, highlighting problems, comparing it with industry standards, and proposing improvements.

Background on TikTok Content Moderation

TikTok’s content moderation rests on a complicated framework that tries to balance creativity with safety. A brief overview of its existing guidelines and policies shows that TikTok’s challenge lies in administering a diverse and constantly evolving stream of content. The platform’s algorithms, though innovative, grapple with the nuanced job of deciphering context in real time, a task made harder by the dynamic nature of user-generated content (Su, 2023). A key example of these standards in action is TikTok’s use of machine learning algorithms to detect and categorize content (Bimo, 2022), as illustrated in the figure below.

Machine learning algorithm explanation by Microsoft Azure under CC BY 4.0

For example, the platform automatically flags explicit or unsuitable material, a key facet of maintaining a safe environment. The algorithm uses image and audio recognition, coupled with natural language processing, to quickly analyze and group content, ensuring compliance with TikTok’s community guidelines (DEV.to, 2022).
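
To make the automated screening described above concrete, the following minimal sketch in Python shows how a text-only guideline check might be structured. It is purely illustrative: TikTok’s actual models are proprietary and multimodal, and the policy categories, keyword lists, and function names here (such as screen_caption) are invented for the example.

# Hypothetical sketch of automated text screening against community guidelines.
# TikTok's real systems are proprietary multimodal models; this toy version only
# scores a caption against illustrative keyword lists to show the general idea.

from dataclasses import dataclass

# Illustrative policy categories and example trigger terms (not TikTok's real lists).
POLICY_KEYWORDS = {
    "violence": {"gore", "beheading", "shooting"},
    "self_harm": {"suicide", "self-harm", "cutting"},
    "adult": {"explicit", "nsfw"},
}

@dataclass
class ModerationResult:
    flagged: bool
    categories: list
    action: str  # "allow", "review", or "remove"

def screen_caption(caption: str) -> ModerationResult:
    """Flag a caption if it contains terms from any policy category."""
    tokens = set(caption.lower().split())
    hits = [cat for cat, words in POLICY_KEYWORDS.items() if tokens & words]
    if not hits:
        return ModerationResult(False, [], "allow")
    # Keyword matches lack context, so matching items are routed to review
    # rather than removed outright.
    return ModerationResult(True, hits, "review")

if __name__ == "__main__":
    print(screen_caption("new makeup tutorial"))             # allowed
    print(screen_caption("graphic shooting footage nsfw"))   # queued for review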

In the space of user interactions, TikTok uses sentiment analysis algorithms to gauge user comments and locate potentially harmful or offensive language (Cheng, 2023). This dynamic approach to comment moderation not only protects users but also contributes to fostering positive interactions within the community.
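
A hypothetical illustration of that comment-level sentiment check follows. Real deployments rely on trained sentiment models; this sketch substitutes a tiny hand-written polarity lexicon, and the word lists, threshold, and helper names (comment_sentiment, flag_harmful) are assumptions made purely for demonstration.

# Minimal, hypothetical comment-sentiment gauge. Production systems would use a
# trained model; here a tiny lexicon assigns each comment a polarity score and
# flags strongly negative or abusive comments for review.

NEGATIVE = {"hate": -2, "ugly": -1, "stupid": -2, "kill": -3, "worst": -1}
POSITIVE = {"love": 2, "great": 1, "awesome": 2, "beautiful": 1}

def comment_sentiment(comment: str) -> int:
    """Sum word-level polarity scores for a single comment."""
    score = 0
    for word in comment.lower().split():
        score += NEGATIVE.get(word, 0) + POSITIVE.get(word, 0)
    return score

def flag_harmful(comments: list[str], threshold: int = -2) -> list[str]:
    """Return comments whose sentiment falls at or below the review threshold."""
    return [c for c in comments if comment_sentiment(c) <= threshold]

if __name__ == "__main__":
    sample = ["love this dance", "you are stupid and ugly", "great edit"]
    print(flag_harmful(sample))  # ['you are stupid and ugly']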

Moreover, TikTok’s content moderation extends to real-time video analysis. The platform uses computer vision algorithms to detect and prevent the upload of content that violates its guidelines, such as graphic violence or self-harm (Bruinsma, 2022), as shown in the figure below.

Abusive content moderation by Digmedia under CC BY 4.0

This real-time analysis contributes significantly to the platform’s capacity to address potential issues quickly, exemplifying a proactive approach to content moderation.
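
The sketch below illustrates the general shape of such an upload-time screening loop under stated assumptions: the classify_frame stub stands in for a trained computer vision model, frames are simulated as labels rather than pixel data, and the sampling interval and policy labels are invented for the example.

# Illustrative upload-time video screening loop. A real pipeline would decode the
# video and run a trained vision model per frame; classify_frame below is a stub
# standing in for that model, and "frames" are simulated as labeled strings.

from typing import Iterable

BLOCKED_LABELS = {"graphic_violence", "self_harm"}  # illustrative policy labels

def classify_frame(frame) -> str:
    """Stub for a computer-vision classifier; returns a policy label per frame."""
    # In this sketch the frame already is a label string; a real system would
    # run a neural network on pixel data here.
    return frame

def screen_upload(frames: Iterable, sample_every: int = 30) -> str:
    """Sample frames at an interval and block the upload on any violating frame."""
    for i, frame in enumerate(frames):
        if i % sample_every:
            continue  # only analyze every Nth frame to keep screening near real time
        if classify_frame(frame) in BLOCKED_LABELS:
            return "blocked"
    return "published"

if __name__ == "__main__":
    simulated_video = ["ok"] * 60 + ["graphic_violence"] + ["ok"] * 30
    print(screen_upload(simulated_video, sample_every=30))  # blocked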

Current Issues in Content Moderation

Despite TikTok’s efforts, controversies persist. Instances of inappropriate content slipping through the cracks have sparked public outrage, and the platform has also faced backlash for wrongfully flagging harmless content, raising concerns about the reliability of its moderation systems. Controversial removals of content addressing social issues, for example, highlight the need for a more nuanced approach to moderation that aligns with user expectations.

A well-known example involves the removal of content addressing societal issues. In this case, a video advocating for a marginalized community’s rights was mistakenly flagged and removed by TikTok’s automated systems (Literat, 2023). The incident sparked a wave of criticism and underscored the platform’s struggle to discern context in socially significant content. The fallout emphasized the need for a more refined and culturally conscious approach to content moderation, urging TikTok to recalibrate its algorithms to navigate discussions of sensitive topics more effectively.

Conversely, instances of inappropriate content slipping past the moderation process have also fueled public outrage. In one widely reported event, a graphic and explicit video evaded TikTok’s content filters and circulated on the platform for an extended period (Murphy, 2022). The figure below depicts graphic content on prominent social media platforms, particularly content harmful to children’s health.

Media violence by study.com under CC BY 4.0

Users, particularly parents concerned about child safety on TikTok, expressed deep dismay over the incident, prompting a reevaluation of the platform’s content-detection algorithms (Newton, 2021). The episode underscored the importance of continuously improving the efficacy of content filters to protect users, especially those most vulnerable to explicit material.

Moreover, TikTok has faced backlash when content intended as comedy was erroneously flagged as violating community guidelines. The algorithm’s inability to grasp the subtleties of humor and satire led to the temporary removal of material that posed no genuine threat (Lee, 2023). Users and creators voiced their frustration, stressing the delicate balance between automation and the nuanced interpretation that content moderation requires.

In the space of misinformation, TikTok has also grappled with the rapid dissemination of false health-related content. Videos promoting pseudoscientific remedies or vaccine misinformation gained substantial traction on the platform, raising concerns about TikTok’s capacity to curb the spread of potentially harmful information (Mathew, 2022). Misinformation on social media platforms is depicted in the following figure.

Misinformation on social media by Ipi.media under CC BY 4.0

These incidents prompted calls for stronger fact-checking mechanisms and greater vigilance against misinformation to safeguard users from harmful advice.
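
As a rough illustration of how such a fact-checking hook might slot into a moderation pipeline, the sketch below matches captions against a small store of previously debunked claims and attaches a warning label. Real fact-checking depends on partner organizations and far richer claim matching; the claim store, labels, and function name here are hypothetical.

# Hypothetical sketch of a fact-checking hook: captions are matched against a
# small store of previously debunked claims and tagged with a warning label.
# The claims and notes below are illustrative stand-ins, not a real database.

from typing import Optional

DEBUNKED_CLAIMS = {
    "vaccines cause autism": "Debunked by public-health authorities.",
    "drinking bleach cures covid": "Dangerous and false.",
}

def label_misinformation(caption: str) -> Optional[str]:
    """Return a warning label if the caption repeats a known debunked claim."""
    text = caption.lower()
    for claim, note in DEBUNKED_CLAIMS.items():
        if claim in text:
            return f"Warning: {note}"
    return None

if __name__ == "__main__":
    print(label_misinformation("They say vaccines cause autism!"))  # warning text
    print(label_misinformation("My morning routine"))               # None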

Comparison with Industry Standards

A comparative analysis reveals significant gaps in TikTok’s content moderation standards when measured against industry benchmarks. Platforms such as Facebook and YouTube have refined their moderation practices, leveraging both automated systems and human moderators to ensure more accurate content evaluation (Ling, 2022). Lessons from these platforms highlight the importance of a multi-faceted approach to content moderation, one that combines technology with human oversight. Facebook, with its extensive user base, uses a layered approach: it integrates artificial intelligence to locate potentially violating content, but crucially, this is complemented by an expansive team of human moderators.

This combination ensures a nuanced comprehension of context that purely automated systems may miss. The synergy, aligned with industry best practices, stands in stark contrast to TikTok’s heavy reliance on automated systems.

YouTube, another giant in the social media landscape, exemplifies an effective fusion of technology and human judgment in content moderation. It uses machine learning algorithms to flag content for review, yet the ultimate decision rests with human moderators (Gillespie, 2020). This dual-layered system dramatically reduces the likelihood of false positives and negatives, demonstrating a meticulous approach that TikTok could learn from in refining its own content moderation strategy.
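
A minimal sketch of that dual-layered routing logic is shown below. The scores, thresholds, and queue are illustrative assumptions rather than the actual configuration of either platform: a highly confident automated verdict is acted on directly, while borderline items are deferred to human reviewers.

# Sketch of the dual-layer pattern described above: an automated scorer flags
# content, but borderline items go to a human review queue, and humans make the
# final call. Thresholds and scores are illustrative, not any platform's values.

from collections import deque

REMOVE_THRESHOLD = 0.95   # auto-remove only when the model is very confident
REVIEW_THRESHOLD = 0.60   # above this but below auto-remove goes to humans

human_review_queue = deque()

def triage(item_id: str, violation_score: float) -> str:
    """Route an item based on the automated model's violation score."""
    if violation_score >= REMOVE_THRESHOLD:
        return "removed_automatically"
    if violation_score >= REVIEW_THRESHOLD:
        human_review_queue.append(item_id)  # final decision deferred to a person
        return "queued_for_human_review"
    return "published"

if __name__ == "__main__":
    print(triage("video_1", 0.98))   # removed_automatically
    print(triage("video_2", 0.72))   # queued_for_human_review
    print(triage("video_3", 0.10))   # published
    print(list(human_review_queue))  # ['video_2']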

TikTok, on the other hand, has been criticized for its limited transparency and inconsistent moderation outcomes. In one scenario, for example, TikTok’s automated system flagged a video that was later determined to be harmless upon human review (CRS Reports, 2023). Such opacity raises questions about how effectively TikTok’s algorithmic processes distinguish acceptable from inappropriate material.

Additionally, Facebook and YouTube have both made efforts to empower users through moderation tools. Facebook offers a user-friendly reporting system through which users can appeal decisions, providing an extra layer of community-driven content oversight. TikTok, while providing reporting mechanisms, lacks the robust user-centered features that contribute to a more inclusive and collaborative moderation environment.
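
The following sketch illustrates, under invented names and thresholds, how such user reporting and appeals could interact with moderation status: repeated reports hide a video pending review, and a creator appeal reopens the case for a human decision.

# Hypothetical sketch of user-facing reporting and appeals. When reports on a
# video cross a threshold it is hidden pending review, and the creator can file
# an appeal that reopens the case for a human moderator. All names are invented.

from collections import defaultdict

REPORT_THRESHOLD = 3

reports = defaultdict(int)                 # video_id -> number of user reports
status = defaultdict(lambda: "visible")    # video_id -> moderation status

def report(video_id: str) -> str:
    """Record a user report and hide the video once enough reports accumulate."""
    reports[video_id] += 1
    if reports[video_id] >= REPORT_THRESHOLD and status[video_id] == "visible":
        status[video_id] = "hidden_pending_review"
    return status[video_id]

def appeal(video_id: str) -> str:
    """A creator appeal moves a hidden video into human review."""
    if status[video_id] == "hidden_pending_review":
        status[video_id] = "under_appeal_review"
    return status[video_id]

if __name__ == "__main__":
    for _ in range(3):
        report("dance_clip")
    print(status["dance_clip"])   # hidden_pending_review
    print(appeal("dance_clip"))   # under_appeal_review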

Proposed Improvements

To strengthen TikTok’s content moderation, adopting industry best practices is imperative. Implementing a transparent and accountable appeals process for flagged content, integrating advanced AI systems for comprehensive context analysis, and cultivating collaboration with third-party organizations specializing in content moderation are all potential pathways for improvement (Gillespie, 2018). By learning from the successes and failures of its industry peers, TikTok could evolve its moderation system to align more closely with user expectations. First, TikTok could benefit from a user-driven content rating system: Facebook, for instance, has showcased user-rated content features through which the community proactively engages in assessing content appropriateness.

Such an approach not only distributes the responsibility of moderation but also fosters a strong sense of shared governance within the TikTok community.
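
A small sketch of how such community ratings might be aggregated into a moderation signal is given below. The minimum vote count, escalation threshold, and function name are illustrative assumptions rather than any platform’s real policy.

# Minimal sketch of a community content-rating signal. Users rate whether a
# video seems appropriate; a low share of positive ratings across enough votes
# escalates it for moderator attention. Thresholds are purely illustrative.

from statistics import mean

MIN_RATINGS = 5        # do not act on a handful of votes
ESCALATE_BELOW = 0.4   # share of "appropriate" votes that triggers escalation

def community_verdict(votes: list[bool]) -> str:
    """Aggregate appropriate/inappropriate votes into a moderation signal."""
    if len(votes) < MIN_RATINGS:
        return "insufficient_ratings"
    appropriate_share = mean(1.0 if v else 0.0 for v in votes)
    return "escalate_to_moderators" if appropriate_share < ESCALATE_BELOW else "keep"

if __name__ == "__main__":
    print(community_verdict([True, True]))                        # insufficient_ratings
    print(community_verdict([False, False, True, False, False]))  # escalate_to_moderators
    print(community_verdict([True] * 6))                          # keep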

Furthermore, proactive engagement with content creators could be instrumental. YouTube, for example, runs regular workshops and training sessions for creators, teaching them about community guidelines and best practices (YouTube, 2023). TikTok could emulate this strategy, giving guidance to its diverse creator base to build a better understanding of content boundaries and foster a culture of responsible content creation.

In conclusion, the evolution of TikTok’s content moderation is pivotal to the platform’s continued success. Balancing innovation with safety calls for continuous improvement. As the platform navigates this dynamic landscape, the call to action is clear: forge a path that not only learns from industry standards but also sets a new standard for responsible and productive content moderation in the digital era. TikTok’s journey toward moderation excellence is not only the platform’s responsibility but also a shared commitment to cultivating a safer and more creative online community.

Reference List

Bhandari, A. and Bimo, S. (2022). Why’s Everyone on TikTok Now? The Algorithmized Self and The Future of Self-Making on Social Media. Social Media + Society, 8(1), pp.1–11. https://doi.org/10.1177/20563051221086241.

Cheng, Z. and Li, Y. (2023). Like, Comment, and Share on TikTok: Exploring the Effect of Sentiment and Second-Person View on the User Engagement with TikTok News Videos. Social Science Computer Review. https://doi.org/10.1177/08944393231178603.

CRS Reports (2023). TikTok: Technology Overview and Issues. [online] Available at: https://crsreports.congress.gov/product/pdf/R/R46543.

DEV.to (2022). How does TikTok use machine learning? [online] DEV Community. Available at: https://dev.to/mage_ai/how-does-tiktok-use-machine-learning-5b7i.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720943234.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Grandinetti, J. and Bruinsma, J. (2022). The Affective Algorithms of Conspiracy TikTok. Journal of Broadcasting & Electronic Media, pp.1–20. https://doi.org/10.1080/08838151.2022.2140806.

Lee, J. and Abidin, C. (2023). Introduction to the Special Issue of ‘TikTok and Social Movements’. Social Media + Society, 9(1). https://doi.org/10.1177/20563051231157452.

Literat, I. and Kligler-Vilenchik, N. (2023). TikTok as a Key Platform for Youth Political Expression: Reflecting on the Opportunities and Stakes Involved. Social Media + Society, 9(1). https://doi.org/10.1177/20563051231157595.

Muhammed T, S. and Mathew, S.K. (2022). The disaster of misinformation: a review of research in social media. International Journal of Data Science and Analytics, 13(4). https://doi.org/10.1007/s41060-022-00311-6.

McCashin, D. and Murphy, C.M. (2022). Using TikTok for public and youth mental health – A systematic review and content analysis. Clinical Child Psychology and Psychiatry, 28(1). https://doi.org/10.1177/13591045221106608.

Newton, C. (2021). The child safety problem on platforms is worse than we knew. [online] The Verge. Available at: https://www.theverge.com/2021/5/12/22432863/child-safety-platforms-thorn-report-snap-facebook-youtube-tiktok.

Singhal, M., Ling, C., Paudel, P., Thota, P., Kumarswamy, N., Stringhini, G. and Nilizadeh, S. (2022). SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice. arXiv:2206.14855 [cs]. [online] Available at: https://arxiv.org/abs/2206.14855.

Su, C. and Valdovinos Kaye, B. (2023). Borderline practices on Douyin/TikTok: Content transfer and algorithmic manipulation. Media, Culture & Society. https://doi.org/10.1177/01634437231168308.

YouTube (2023). Content Creator Events and Workshops. [online] Google for Creators. Available at: https://creators.google/en-us/content-creator-community/creator-events/.