Navigating Verbal Harassment on TikTok: Safeguarding User Well-being while Upholding Free Expression

“TikTok on iPhone” by Nordskov Media is marked with CC0 1.0.

In the dynamic landscape of social media, TikTok has emerged as a global powerhouse with over 1 billion users, fostering a diverse and vibrant online community. This surge in popularity, however, has been accompanied by a concerning rise in verbal harassment on the platform, spotlighting the need for a careful examination of TikTok’s policies and community standards. The prevalence of hate speech, derogatory comments, and personal attacks directed at users, particularly those from marginalized communities, has sparked a crucial debate: TikTok must strike a delicate balance between user safety and the preservation of free speech principles as it works to curb verbal harassment. As the effects of online interactions spill over into tangible consequences in individuals’ lives, the urgency of addressing this issue intensifies. This essay explores the multifaceted dimensions of verbal harassment on TikTok, examining the potential repercussions for mental health and the complexities of reporting mechanisms. It advocates proactive measures by TikTok, emphasizing the establishment and stringent enforcement of transparent community guidelines, improved reporting mechanisms, carefully balanced moderation algorithms, and innovative approaches such as content warning labels, to foster a safer online environment without compromising the essence of free expression.

The increasing incidence of verbal harassment on TikTok necessitates a thorough examination of the platform’s policies and community standards. With over 1 billion users, the platform’s vast and diverse user base also makes it a frequent venue for verbal harassment: users from marginalized communities regularly encounter hate speech, derogatory comments, and personal attacks. As Weimann and Masri (2023) note, while the platform can be a source of entertainment and connection, “it is being used as a platform for spreading hate and extremist content.” The authors also point to the alarming infiltration of far-right extremism on TikTok, attributing the platform’s vulnerability to its young, impressionable user base and its relatively recent inception, which may explain its lag in implementing the protective measures seen on more established platforms.

“Online Hate Speech” by iPredator is marked with CC0 1.0.

The repercussions of verbal harassment extend well beyond the digital realm, producing tangible and often traumatizing consequences in the lives of those targeted. Research has consistently linked online harassment to serious mental health problems, including heightened anxiety, stress, and depression, with some victims experiencing suicidal thoughts (Raypole, 2019). Given the pervasiveness of this form of harassment and the gravity of its repercussions, the need for immediate, concrete intervention is evident. TikTok must proactively implement measures to combat verbal harassment, prioritizing the mental and emotional well-being of its users while remaining committed to the core values of free speech and expression. Yet reporting such harassment remains difficult: victims often struggle to have their experiences believed and must navigate legal limitations, making the process neither straightforward nor efficient (Raypole, 2019).

To ensure a safer and more inclusive environment on platforms like TikTok, several crucial steps must be undertaken:

1. Community Guidelines: One of the foundational actions TikTok can take is to formulate comprehensive community guidelines that are transparent and clear. These guidelines should take a firm stance against hate speech, cyberbullying, and other forms of verbal harassment. For users to be genuinely engaged and aware, the guidelines need to be not only accessible but also detailed with illustrative examples that help clarify ambiguities. As pointed out by Milosevic (2016), the mere existence of guidelines is not enough. Their effective enforcement plays a pivotal role in ensuring that digital spaces are secure and respectful environments.

2. Enhanced Reporting Mechanisms: The reporting mechanism has long been a pain point for users. It is imperative that users have a straightforward and effective way to report content that violates community standards. Research from PEN America and Meedan (2023) underscores the frustrations users encounter when trying to report offensive content on platforms. Refining this system matters because users who actively report misbehavior or harmful content tend to reach more satisfactory resolutions. Beyond streamlining the reporting process, platforms should offer anonymous reporting options to protect users who fear repercussions, and they should make the system more transparent by giving clear, timely feedback on the status and resolution of reported issues. By consistently refining these mechanisms, platforms can build a safer and more accountable online environment for all users; a minimal sketch of such a reporting record appears after this list.

3. Moderation Algorithms: In a significant development, Oracle has announced its intention to audit TikTok’s algorithms and content moderation models (Dhapola, 2022). TikTok, known for its short-form video content, has been under the spotlight over various geopolitical and privacy-related concerns. Its use of machine learning algorithms for content moderation presents both advantages and challenges. Although these algorithms can rapidly analyze large volumes of data, their decisions can be excessively rigid or lenient: an algorithm that is too strict might erroneously remove valid content, while a more permissive one could let harmful content slip through. Striking a balance is therefore crucial. One effective approach is to pair these algorithms with human moderators, who can supply the nuanced understanding and context that machines sometimes overlook; a sketch of such a tiered triage appears after this list. This dual system should operate under transparent guidelines that reflect TikTok’s twin commitments: protecting its users and preserving their right to free expression. The Oracle audit aims to shed light on TikTok’s internal processes and underscores the importance of scrutinizing the algorithms and moderation mechanisms of social media platforms, especially those operating at global scale.

4. Content Warning Labels: Total content removal can sometimes be counterproductive, especially when the line between harmful content and freedom of expression blurs. A viable alternative is the use of content warning labels. By tagging content with notices about potential triggers or misleading information, TikTok gives its users the agency to choose what they want to engage with. As Hutchinson (2021) notes, such a system is not new and has been effective in alerting users to unverified or controversial information. This approach can serve as a middle ground, respecting both user protection and freedom of speech: it encourages responsible content creation while safeguarding the principles of free expression that underpin social media platforms. A sketch of this labeling approach also appears after this list.
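To make the second recommendation concrete, here is a minimal Python sketch of how a platform might model a harassment report, with an optional anonymous mode and a status field that supports transparent feedback. Every name here (HarassmentReport, ReportStatus, update_status) is a hypothetical illustration, not TikTok’s actual reporting system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional
import uuid


class ReportStatus(Enum):
    """Stages a reporter can be shown, so resolution is never a black box."""
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    RESOLVED = "resolved"
    DISMISSED = "dismissed"


@dataclass
class HarassmentReport:
    content_id: str                    # the video or comment being reported
    reason: str                        # e.g. "hate_speech", "personal_attack"
    reporter_id: Optional[str] = None  # None supports anonymous reporting
    status: ReportStatus = ReportStatus.RECEIVED
    resolution_note: str = ""          # feedback shown back to the reporter
    report_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


def update_status(report: HarassmentReport,
                  status: ReportStatus, note: str = "") -> None:
    """Record each status change so the reporter gets clear, timely feedback."""
    report.status = status
    report.resolution_note = note
```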
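For the third recommendation, the sketch below shows the tiered triage described above, assuming a classifier that outputs a harassment probability between 0 and 1. The thresholds and names are illustrative assumptions, not TikTok’s actual moderation logic: confident cases are handled automatically, while the ambiguous middle band is routed to human moderators.

```python
from enum import Enum


class Decision(Enum):
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"
    ALLOW = "allow"


# Illustrative thresholds. Lowering REMOVE_ABOVE makes the system stricter
# (more valid content erroneously removed); raising REVIEW_ABOVE makes it
# more permissive (more harmful content slips through).
REMOVE_ABOVE = 0.95   # auto-remove only when the model is highly confident
REVIEW_ABOVE = 0.60   # ambiguous scores go to human moderators


def triage(harassment_score: float) -> Decision:
    """Route content by a classifier's harassment probability (0.0 to 1.0)."""
    if harassment_score >= REMOVE_ABOVE:
        return Decision.REMOVE
    if harassment_score >= REVIEW_ABOVE:
        return Decision.HUMAN_REVIEW   # humans supply nuance and context
    return Decision.ALLOW
```

Making the two thresholds explicit turns the strictness-versus-leniency trade-off into an auditable policy choice rather than a hidden side effect of the model.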
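Finally, the fourth recommendation can be sketched the same way: instead of removing borderline content, attach a user-facing warning label. The flag names and label text below are assumptions for illustration only.

```python
from typing import Optional

# Hypothetical mapping from moderation flags to user-facing warning labels.
WARNING_LABELS = {
    "unverified_claim": "Caution: this video contains unverified information.",
    "sensitive_topic": "Viewer discretion: this video discusses sensitive topics.",
}


def label_instead_of_remove(flag: Optional[str]) -> Optional[str]:
    """For borderline content, return a warning label rather than removing
    the post, leaving viewers the agency to decide whether to engage."""
    if flag is None:
        return None  # unflagged content is shown as-is
    return WARNING_LABELS.get(flag, "Caution: this content may be sensitive.")
```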

In conclusion, although TikTok offers a tremendous platform for creativity and interaction, it must address the pervasive verbal harassment within its community. By adopting transparent guidelines, improving reporting mechanisms, pairing moderation algorithms with human review, and implementing content warning labels, TikTok can strike a balance between user safety and freedom of expression.

Bibliography

Weimann, G., & Masri, N. (2023). Research Note: Spreading Hate on TikTok. Studies in Conflict & Terrorism, 46(5), 752–765. https://doi.org/10.1080/1057610X.2020.1780027

Raypole, C. (2019, May 29). Trolls and Toxicity: Surviving Online Harassment. GoodTherapy. https://www.goodtherapy.org/blog/trolls-and-toxicity-surviving-online-harassment-0529195

PEN America & Meedan. (2023, June 29). Shouting into the Void. https://pen.org/report/shouting-into-the-void/

Milosevic, T. (2016). Social media companies’ cyberbullying policies. International Journal of Communication, 10, 5164–5185.

Hutchinson, A. (2021, February 3). TikTok Adds New Video Warning Labels to Stop the Spread of Misinformation. Social Media Today. https://www.socialmediatoday.com/news/tiktok-adds-new-video-warning-labels-to-stop-the-spread-of-misinformation/594481/

Dhapola, S. (2022, August 18). Explained: Why Tiktok’s algorithms, content moderation models are being audited. The Indian Express. https://indianexpress.com/article/explained/explained-sci-tech/tiktok-algorithms-content-moderation-model-audit-explained-8095749/