Balancing Content Moderation and Free Speech

Introduction

In the digital age, the way people communicate is no longer limited to face-to-face dialogue. Thousands of apps, millions of videos, and billions of search results make up a vast digital ecosystem that offers unprecedented opportunities for social interaction, expression of speech, and access to and discussion of scholarly resources across the globe (Gillespie, 2018). However, the content users post on social media platforms is not always truthful, positive, or valuable. Many users publish inaccurate information, inappropriate opinions, pornographic or explicit images, and engage in cyberbullying and harassment, all of which can mislead and harm the public. Platform administrators have therefore developed content review policies and terms of service to remove or moderate content posted by users who violate the rules. However, these content moderation policies have sparked controversy in the community over free speech and bias. As a result, an important and complex question emerges: how can content on social media platforms be effectively moderated while protecting freedom of expression? This paper proposes that, under a content moderation policy, the public can provide feedback and raise demands through the platform to ensure the legitimacy and fairness of the content review method, and can also report inappropriate content posted by other users. Together, these mechanisms promote the dissemination of civilised and meaningful content on the platform while safeguarding the right to free speech.

The concept of content moderation and why it matters

Content moderation is critical to maintaining a positive online experience for the public by prohibiting certain content from being posted on platforms, such as hate speech, obscenity, cyberbullying, intimidation of vulnerable groups, and misinformation. For example, research on the COVID-19 period shows that some users posted misinformation that disrupted public order (Ahmed & Rasul, 2022). Such misinformation included exaggerated infection or death figures and inaccurate information about preventive measures, medications, and vaccines. Even where it caused no direct harm to individual citizens, it produced a degree of public panic, fear, anxiety, and social disruption. Seen from the perspective of the pandemic, if platforms do not moderate and remove such content, they not only weaken the accurate information released by governments but also pose a potential threat to social stability and public health.

The video provides valuable insights into the concept of content moderation. The speaker highlights two key factors: content moderation policies and standards, and artificial intelligence. These factors lie at the heart of current controversies over content moderation, such as threats to the right to free speech and the emergence of bias. Because internet platforms deal with enormous volumes of content and user requests, they have developed artificial intelligence to automate decision-making about content removal, such as algorithms and Content ID technology (Oliva, 2020). The moderation standards and policies of each platform determine not only how human reviewers or automated systems moderate, remove, or suspend posted content, but also how content is categorized, which posts are highlighted, and what special rights the platform grants its financial partners to post advertisements (Gillespie, 2018). This has led to some content being removed, blocked, or restricted because it does not align with the values set by the platform, even when it does not violate the law.
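
To make the idea of automated decision-making more concrete, the minimal sketch below shows a hypothetical rule-based filter in Python. It is only an illustration of the general approach, not how Content ID or any real platform system works; the term lists, function name, and decision labels are all invented for the example.

```python
# A minimal, hypothetical sketch of rule-based automated moderation.
# Real systems use fingerprinting and machine-learning models; the term
# lists, thresholds, and names below are illustrative only.

BLOCKED_TERMS = {"example slur", "known scam link"}   # auto-remove on match
SENSITIVE_TERMS = {"vaccine", "election"}             # route to a human reviewer

def moderate_post(text: str) -> str:
    """Return 'remove', 'human_review', or 'publish' for a single post."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "remove"            # clear policy violation
    if any(term in lowered for term in SENSITIVE_TERMS):
        return "human_review"      # ambiguous: defer to a human moderator
    return "publish"               # nothing matched

if __name__ == "__main__":
    for post in ["New vaccine data released today",
                 "Click this known scam link now",
                 "Happy birthday!"]:
        print(post, "->", moderate_post(post))
```

Even this toy version shows where the controversy arises: whatever ends up on the blocked or sensitive lists reflects the platform's own values, so lawful speech can still be removed or down-ranked.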

Free speech as a human right

“Social Media Engagement” by Mohamed Hassan is marked with CC0 1.0.

According to international human rights standards, Article 19 of the International Covenant on Civil and Political Rights states: “Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice” (Australian Human Rights Commission, 1976). However, enforcing content moderation policies and terms often contradicts this human rights standard. For example, even when users share conservative images, express polite speech, or defend the collective rights of specific races or genders on platforms, they still risk being blocked or penalized by the platforms (Oliva, 2020), which fuels a growing problem of prejudice and discrimination based on race, color, and gender and exacerbates the marginalization of groups in society. This poses complex ethical and legal challenges for platform management and content moderation.

Freedom of speech is also subject to limitations: it must respect the rights and reputations of others, as well as the need to protect national security, public order, public health, or morals (Australian Human Rights Commission, 1976). Governments have therefore indicated that members of the public are free to speak on any social platform as long as their content does not touch on these areas. Nor do platforms have the right to impose restrictions on specific groups or on content that does not violate the law.

“The free high-resolution photo of computer, technology, play, social, internet, communication, security, website, social media, children, fun, media, police, facebook, multimedia, www, communicate, social networking, social network, internet page, networking, networks, enlighten, enlightenment, human action” by Mohamed Hassan is marked with CC0 1.0.

However, both content moderation policies and human rights standards have been developed with a degree of self-interest, and neither pays enough attention to protecting the public or to the impact on public groups. International human rights standards may appear to protect the public's freedom of speech, but their primary purpose seems to be to ensure that the content discussed does not put the state's interests at risk. Similarly, as mentioned above, platforms' moderation policies remove content that does not align with the platform's values or interests, even if it does not violate the law. We must therefore consider how to better protect the public's freedom of speech while shielding communities from harmful misinformation and remaining compliant with legal standards.

How to balance the two

The first step is transparency in the policy formulation process. Policymakers can publish the criteria used to judge content, and the reasons behind those criteria, on the social platforms themselves for public access. This practice improves public understanding of, and support for, the content moderation policy, and it allows the public to provide feedback and raise demands through these platforms, promoting the legitimacy and fairness of the moderation methodology (Kozyreva et al., 2023). For example, when a platform makes a biased or mistaken judgment, users can flag the issue through the platform's feedback channels; when sensitive matters such as culture or race are pointed out, policymakers can use this feedback to understand cultural differences and adjust policies to avoid offense or discrimination.

“Free Images: collaboration, cooperation, teamwork, work, strategy, development, concept, gear, team, workflow, business, technical, cog, support, hands, success, partnership, agreement, solution, union, businessman, help, unity, african, caucasian” by Mohamed Hassan is marked with CC0 1.0.

In addition, monitoring and moderation among users themselves are essential to ensuring that platforms deliver positive content while protecting the human right to freedom of speech (Kozyreva et al., 2023), because the public is usually the first to notice harmful information. When someone posts inappropriate comments or images, other online users can report them, as in the COVID-19 misinformation case mentioned above, where the public helped counter false claims by reporting them and re-sharing accurate government releases to weaken their influence. Typically, if a user is reported multiple times, his or her account may be frozen and the offending content removed. This mechanism not only respects the standards of human rights and freedom of expression but also helps eliminate the negative impact of harmful information on the public.
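
A minimal sketch of such a threshold-based reporting mechanism is shown below, assuming an invented report threshold and data structures; real platforms do not publish these details, so every value and name here is hypothetical.

```python
# Hypothetical sketch of a report-count threshold: once a user has been
# reported enough times, the account is frozen. Threshold and structures
# are invented for illustration only.

from collections import defaultdict

REPORT_THRESHOLD = 3                 # illustrative value, not a real platform rule

report_counts = defaultdict(int)     # user id -> number of reports received
frozen_accounts = set()              # user ids whose accounts are frozen

def report_user(user_id: str) -> None:
    """Record one report against a user; freeze the account at the threshold."""
    report_counts[user_id] += 1
    if report_counts[user_id] >= REPORT_THRESHOLD:
        frozen_accounts.add(user_id)  # freeze the account; posts would be removed

if __name__ == "__main__":
    for _ in range(3):
        report_user("user_42")
    print("user_42 frozen:", "user_42" in frozen_accounts)   # -> True
```

A raw report count like this could itself be abused through coordinated mass reporting, which is one more reason the human review and feedback channels discussed above remain important for protecting legitimate speech.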

Bibliography

Ahmed, S., & Rasul, M. E. (2022). Social Media News Use and COVID-19 Misinformation Engagement: Survey Study. Journal of Medical Internet Research, 24(9). https://doi.org/10.2196/38944

Oliva, T. D. (2020). Content Moderation Technologies: Applying Human Rights Standards to Protect Freedom of Expression. Human Rights Law Review, 20(4), 607–640. https://doi.org/10.1093/hrlr/ngaa032

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press. https://doi.org/10.12987/9780300235029

Australian Human Rights Commission. (1976, March 23). International Covenant on Civil and Political Rights – Human rights at your fingertips. Humanrights.gov.au. https://humanrights.gov.au/our-work/commission-general/international-covenant-civil-and-political-rights-human-rights-your

Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (2023). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences, 120(7). https://doi.org/10.1073/pnas.2210666120

Alfino, M. (2023). Free Speech Issues in the Regulation of Social Media Platforms. Journal of Information Ethics, 32(1). https://doi.org/10.2307/JIE.32.1.10