“A global regulatory institution and platform oversight would help curb harassment.”

"File:Online harassment of women journalists.png" by UNESCO is licensed under CC BY-SA 4.0.

File:Online harassment of women journalists.png” by UNESCO is licensed under CC BY-SA 4.0.

Introduction

The advent of the Internet has enabled people to connect and share information worldwide. Individuals can access websites across a wide range of devices and platforms, and whether on social media, via email, through online news sources, or within applications, diverse viewpoints emerge on any given issue. These differing perspectives can harden into opposition, and in some cases that opposition escalates into hatred, with harassment emerging as one of the harmful behaviors born of such animosity. Harassment takes many forms, including cyberbullying, hate speech, disclosure of personal information, the spread of false information and rumors, and sexual harassment. Social media platforms in particular have established community guidelines and content management policies to combat harassment; these policies routinely prohibit hate speech, harassment, and other harmful content. Despite these efforts, however, they are not always effective in preventing malicious incidents. Platform oversight carries the responsibility of reviewing and managing the content each platform hosts. The question remains: how can harassment be prevented or stopped more effectively?

Video: “How to Beat Cyberbullies” by watchwellcast (2013).

“Harassment is a crime” by StacieBee is licensed under CC BY-NC-SA 2.0.

The occurrence of harassment

Online platforms serve as outlets for entertainment and emotional release, places where people vent the frustrations and displeasures of real life. Applying real-world moral standards to online spaces is a challenge. In real life, someone who insults, threatens, or humiliates others faces condemnation and, in severe cases, even the risk of imprisonment. Online, however, the low cost and high anonymity of speech mean there are often few consequences for bad behavior. The Internet's broad coverage and unrestricted access also allow information, including harmful content, to spread rapidly and extensively. Unlike face-to-face interaction, online content can circulate globally within moments, making harmful information and harassment far easier to propagate at scale. Harassment through social media is therefore very common, and since we cannot physically prevent everyone from speaking or typing, the question arises: how can we effectively curb inappropriate behavior?

“TikTok app” by Solen Feyissa is licensed under CC BY-SA 2.0.

Online platform regulation

Typically, online platforms establish clear content policies and community guidelines that explicitly prohibit certain behaviors and content, such as threats of violence, malicious harassment, hate speech, and content that endangers minors. Platforms also employ content moderation teams alongside automated tools, such as machine learning classifiers, that detect and flag violating content; these tools help platforms recognize and remove harmful material while reducing the burden of manual review. When users violate rules or policies, platforms respond by removing the offending content, suspending or banning the accounts responsible, and, where laws have been broken, cooperating with law enforcement. At the same time, platforms face a significant challenge in balancing freedom of speech against the prevention of abuse, since users are highly sensitive to restrictions on expression. Regulatory approaches vary with platform type, user demographics, and content types, and are shaped by the legal, cultural, and value differences of different regions and countries. Protecting user safety and maintaining a respectful online environment nonetheless remain the central objectives of regulation. Building a comprehensive regulatory system for online platforms is a complex task, and a useful starting point may be establishing a globally agreed regulatory body and creating universal rules for online platforms.
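As a concrete illustration of how rules and automated tools can work together, the following is a minimal sketch in Python of a moderation pipeline that removes explicit rule violations outright and routes borderline content to human review. The pattern lists, thresholds, and scoring function are all invented for illustration; a real platform would use far larger, localized rule sets and a trained classifier rather than the crude stand-in here.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical policy rules mapping a prohibited category to example patterns.
# Real platforms maintain far larger, continuously updated, localized rule sets.
PROHIBITED_PATTERNS = {
    "violent_threat": [r"\bi will hurt you\b"],
    "harassment": [r"\bnobody likes you\b", r"\bkill yourself\b"],
}

@dataclass
class ModerationResult:
    action: str              # "allow", "flag_for_review", or "remove"
    category: Optional[str]  # which policy category was triggered, if any

def classifier_score(text: str) -> float:
    """Stand-in for a trained model's probability that content is harmful.
    Here it is a crude proxy (exclamation density), purely for illustration."""
    words = max(len(text.split()), 1)
    return min(1.0, text.count("!") / words)

def moderate(text: str, remove_threshold: float = 0.9,
             review_threshold: float = 0.6) -> ModerationResult:
    lowered = text.lower()
    # 1) Content matching an explicit prohibition is removed outright.
    for category, patterns in PROHIBITED_PATTERNS.items():
        if any(re.search(p, lowered) for p in patterns):
            return ModerationResult("remove", category)
    # 2) Otherwise the model score routes borderline content to human review.
    score = classifier_score(text)
    if score >= remove_threshold:
        return ModerationResult("remove", "ml_high_confidence")
    if score >= review_threshold:
        return ModerationResult("flag_for_review", "ml_borderline")
    return ModerationResult("allow", None)

if __name__ == "__main__":
    print(moderate("Great video, thanks for sharing!"))   # allow
    print(moderate("Nobody likes you. Kill yourself."))   # remove (harassment)
```

The two-tier design mirrors how many platforms describe their process: automation handles the unambiguous cases, while human moderators adjudicate the gray area.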

Different countries often have different regulations, policies, and cultural norms, so the same app can be regulated in quite different ways. These differences make it harder for regulatory bodies to collaborate and to share best practices and advanced content-moderation tools, even though such tools play a crucial role in improving moderation accuracy and efficiently identifying misuse. Consider TikTok, whose platform content is regulated differently in the United States, China, and other countries. Some countries, including India, Indonesia, and Pakistan, have imposed temporary or permanent bans on the app. These bans may stem from cultural sensitivities: explicit images, violence, and adult content distributed through short videos are seen as especially concerning there. Moderating video content also carries higher technical demands and is more time-consuming than moderating text or images, which raises the threshold for automated detection and removal of video and makes policy enforcement more effective in some regions than others. Cultural attitudes vary as well: users in the United States, where popular norms are more permissive, may notice that TikTok's content moderation is less stringent than that of Douyin, the Chinese version of the app, whose built-in automated filters for sensitive words are considerably stricter; TikTok's content is correspondingly more open. Yet neither approach, stricter moderation or greater user freedom, has proven highly effective at preventing or stopping harassment. A complete ban on the app, meanwhile, would not only forfeit its benefits but also hinder the cross-border communication and information flow that have been fundamental components of the 'digital dividend' since the mid-1990s, and could weaken opportunities for a more open and democratic political discourse.
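To make the contrast between stricter and looser moderation concrete, the short sketch below shows how the same model score can produce different outcomes under different deployment policies. The region labels and threshold values are hypothetical; platforms do not publish the thresholds they actually use.

```python
# Hypothetical per-deployment policies: the same harm score is thresholded
# differently depending on how strict the local regime is.
REGION_POLICY = {
    "strict":  {"remove": 0.5, "review": 0.3},  # e.g., strict sensitive-word regimes
    "lenient": {"remove": 0.9, "review": 0.7},  # more content is left up by default
}

def decide(score: float, policy_name: str) -> str:
    policy = REGION_POLICY[policy_name]
    if score >= policy["remove"]:
        return "remove"
    if score >= policy["review"]:
        return "flag_for_review"
    return "allow"

# The same borderline post (score 0.6) is removed under the strict policy
# but allowed, without even a review, under the lenient one.
for name in REGION_POLICY:
    print(name, decide(0.6, name))
```

The point of the sketch is that "how strict is moderation?" is often a policy parameter layered on top of shared technology, which is exactly where regional law and culture enter.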

Why are an international regulatory body and consistent guidance and standards essential for addressing harassment, abuse, and harmful content?

Global rules can explicitly define unacceptable behavior, encompassing actions like harassment, malicious conduct, and the spread of false information. These rules can help platforms create clearer content policies and improve their ability to identify and address harassment. Moreover, collaboration among nations can ensure that those who engage in harassment cannot evade legal responsibility by crossing jurisdictions, fortifying the fight against cross-border harassment.

Shared technological advancements, particularly in the field of AI, can play a pivotal role in this endeavor. AI can perform a preliminary screen of images or text for illegal or inappropriate content, reducing the human cost of keeping platforms safe. Furthermore, coordinated data sharing and the exchange of tools for detecting harassment can enhance the accuracy and efficiency of identification, refining the precision of AI tools.
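As a rough sketch of what a first-pass text screen can look like, the example below trains a TF-IDF plus logistic-regression classifier with scikit-learn. This is a far simpler baseline than the deep-learning approaches surveyed by Yenala et al. (2018); the training texts and labels are invented toy data, and a real system would be trained on large, carefully audited datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy labeled examples, invented for illustration only.
texts = [
    "you are worthless and everyone hates you",  # abusive
    "i will find you and make you regret this",  # threatening
    "thanks for the helpful explanation",        # benign
    "great stream today, see you next week",     # benign
]
labels = [1, 1, 0, 0]  # 1 = inappropriate, 0 = acceptable

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

# predict_proba returns [P(acceptable), P(inappropriate)] for each input.
for post in ["hope you have a great day", "everyone hates you, just leave"]:
    p_bad = model.predict_proba([post])[0][1]
    print(f"{p_bad:.2f}  {post}")
```

In practice such a score would only triage content for human moderators rather than decide removals on its own, which is why shared data and tooling across borders matter: the classifier is only as good as the examples it learns from.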

Furthermore, global rules can present a unified stance to the international community, urging countries to strengthen legal measures against online violence and hate speech, which in turn raises the legal consequences for those responsible. As highlighted by Waldron (2014) and Brown (2015), protection against incitement to racial hatred or violence aims to shield individuals from the harm inflicted by inflammatory speech such as hate speech, which can lead to acts of discrimination, overt hostility, harassment, persecution, and physical violence based on religion, race or ethnicity, sexual orientation, and other characteristics.

Conclusion

Much like the WTO, which has met a mixed reception since its inception, a new multinational cooperative body is sure to encounter numerous challenges. Potential obstacles include the novelty of platform business models, significant threats to freedom of speech posed by certain forms of government intervention, a lack of meaningful policy experiments and precedents to guide decisions, and concerns about stifling future innovation. Furthermore, despite arguments that policy should extend beyond domestic media and that a shift towards global media policy could better facilitate platform regulation, the likelihood of global regulatory coordination remains low. There may be consensus that 'internet governance requires institutional structures transcending borders', but there is no viable agreement on who is best suited to assume such a role. A global regulatory body would also wield immense power, so effective oversight and mechanisms for checks and balances would be needed to prevent potential abuses of that power.

Future collaboration will likely involve multiple stakeholders, including governments from various countries, private institutions, individuals, nonprofit organizations, and more. It is crucial to emphasize, however, that addressing harassment is a complex, multifaceted task: rule-making alone cannot permanently eliminate harassment. What is needed is a comprehensive approach that combines user education, technological innovation, societal engagement, and government cooperation.

This work is licensed under a Creative Commons Attribution 4.0 International License.

References:

Criddle, C. (2023). TikTok accused of mishandling sexual harassment allegations. FT.com.

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Picard, R. (2017). Essential principles for contemporary media and communications policymaking. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/our-research/essential-principles-contemporary-media-and-communications-policymaking

Yenala, H., Jhanwar, A., Chinnakotla, M. K., et al. (2018). Deep learning for detecting inappropriate content in text. International Journal of Data Science and Analytics, 6, 273–286. https://doi.org/10.1007/s41060-017-0088-4

Zeng, J., & Kaye, D. B. V. (2022). From content moderation to visibility moderation: A case study of platform governance on TikTok. Policy & Internet, 14, 79–95. https://doi.org/10.1002/poi3.287