Free speech and content moderation on social media

As of August 2023, there were 4.88 billion active social media users worldwide, accounting for 60.6% of the global population. The rise of social media has given people a platform to express their opinions and participate in discussions on a wide range of topics. As a powerful communication tool, it expands access to both information and expression. However, while social media gives users the power to express themselves, the proliferation of misinformation, hate speech, self-harm content, fake news and other harmful content has drawn attention to the need for content regulation. Responsible content moderation requires a balance between preventing harmful content from damaging society and safeguarding citizens’ right to free speech, so as to create a healthier online environment. This article argues that while the right to free speech should exist on social media, effective content moderation on these platforms is also indispensable.

Why is it crucial to have the right to free speech on social media platforms?

Free speech is the right to express ideas, beliefs, thoughts, and opinions without restriction by the government or other authorities. It guarantees people’s rights to freely express and receive information and is protected by the constitutions and laws of many countries. Freedom of expression is also a basic right of citizens and a cornerstone of democratic society (De Gregorio, 2020). On social media, freedom of expression is a basic right for all users, who can voice both positive and negative opinions. The benefits of free speech on social media can be seen across society. Firstly, freedom of speech plays a vital role in the functioning of a democratic society. Social media provides a platform that enables citizens to criticize government decisions, participate in political discussions, and hold public officials accountable. Without free speech, citizens would be unable to express their concerns, scrutinize public authority, or promote positive social change. This freedom of expression enables citizens to participate actively in civic life, which is crucial to promoting democracy. Secondly, free speech empowers marginalized voices. Free speech on social media gives socially marginalized people an opportunity to share their life experiences with a wider audience, raising awareness of social injustice and allowing their stories to contribute to social change. Thirdly, free speech on social media facilitates rapid access to information. Social media has changed the traditional ways the public obtains information and news. It provides a platform for citizen journalism, where any user can share eyewitness accounts and report news events in real time. This challenges established news outlets, offers different perspectives, and allows news to spread faster. Moreover, free speech on social media promotes global connectivity and mutual understanding. Social media connects people around the world across borders, allowing people from different cultures to communicate and learn from each other. Free speech on social media thus promotes cross-cultural understanding and a more interconnected international society.

The negative impact of free speech

However, free speech on social media is a double-edged sword. While it brings benefits to society, it also brings threatening, offensive, illegal, violent and pornographic posts. Firstly, free speech can enable cyberbullying and harassment. Cyberbullying refers to behavior that intimidates, harasses, or threatens individuals on social media platforms. The anonymity of social media allows people to hide behind fake accounts and harm others without accountability, which can seriously damage victims’ mental health. For instance, a teenager named Amanda Todd exposed her breasts while video chatting with an anonymous stranger. The stranger screenshotted the video and used it to harass and blackmail her, sending the image to her family and friends; the severe online and offline bullying that followed eventually led to her suicide (Dean, 2012). Secondly, free speech can lead to the spread of misinformation. Free speech on social media platforms allows anyone to fabricate lies and spread false information quickly, and the resulting flood of false news can cause confusion or harmful behavior. For instance, during the COVID-19 pandemic, false claims about the virus’s origin and potential treatments spread widely on media platforms, causing panic; one rumor even suggested that injecting disinfectant could treat COVID-19. Thirdly, free speech can lead to hate speech. Hate speech on social media platforms refers to the use of defamatory or derogatory language against a specific ethnicity, religion, race, gender or sexual orientation. Hate speech not only perpetuates stereotypes and discrimination but can also turn social platforms into venues for spreading hateful messages and inciting violence. For instance, three Black players on England’s national football team were subjected to racial slurs and abuse on social media platforms following the team’s defeat in the Euro 2020 final (Sullivan, 2021).


Content moderation methods to reduce the negative impact of free speech

Content moderation on social platforms is an important way to reduce the negative impact of free speech. Moderation refers to the evaluation, classification, screening, approval, deletion, or hiding of content posted by users on the platform according to relevant rules and policies (De Gregorio, 2020). It functions to support positive communication and minimize the spread of offensive remarks and anti-social behavior (De Gregorio, 2020). Here are some common content moderation methods used by various social media platforms. Firstly, social media content is moderated using artificial intelligence and machine learning. These systems can automatically detect and filter violating content, identifying and flagging posts that breach platform guidelines. For example, YouTube’s AI algorithms identify harmful content by analyzing the text and audio in comments and videos and flag it for human review. Automated systems detect about 94% of violating content, and 75% of it is removed before it receives 10 views (O’Connor, 2021). Secondly, user reporting is an important way to moderate content on social media platforms. Platforms allow users to report content they deem inappropriate and take steps to remove or restrict reported content. When reporting, users usually need to describe the harmful content and select the category that best matches it; reporting options may include hate speech, violence, harassment, pornography or copyright infringement. For instance, Instagram relies on user reporting as one of its content moderation methods. If reported content is found to be in clear violation, the platform may delete it, issue a warning, or restrict the functions of the reported account. Instagram also allows users to provide feedback on the outcome of their reports; if the reported user considers the result unfair, they can appeal and have the moderation decision reconsidered. Thirdly, human moderation plays a vital role in social media content moderation. Platforms employ human moderation teams to manually evaluate user-uploaded content and make decisions based on platform guidelines. In addition to reviewing user-reported content, moderators search for tags, keywords, or specific types of content to catch material missed by user reports or automated systems. For example, because users upload a vast amount of content every day, YouTube employs thousands of human moderators to review the comments and videos that users upload.
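In practice, the three methods above are usually combined into a single pipeline: an automated classifier scores each post, high-confidence violations are removed outright, and uncertain or user-reported content is escalated to human moderators. The sketch below is a simplified illustration of that flow; the thresholds and the toy blocklist "classifier" are hypothetical stand-ins for the trained models and tuned policies real platforms use.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; real platforms tune
# these against large labeled datasets and policy requirements.
AUTO_REMOVE_THRESHOLD = 0.95   # near-certain violation: remove automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases go to a human moderator
REPORTS_FOR_REVIEW = 3         # enough user reports also trigger review

@dataclass
class Post:
    text: str
    report_count: int = 0  # how many users have reported this post

def classify(post: Post) -> float:
    """Toy stand-in for an ML model: returns a violation probability.

    Here we just check a tiny placeholder blocklist; a real system
    would run a trained text/audio classifier."""
    blocklist = {"badword1", "badword2"}
    words = set(post.text.lower().split())
    return 0.99 if words & blocklist else 0.05

def moderate(post: Post) -> str:
    """Route a post through the combined moderation pipeline."""
    score = classify(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"                  # automated removal
    if score >= HUMAN_REVIEW_THRESHOLD or post.report_count >= REPORTS_FOR_REVIEW:
        return "queued_for_human_review"  # escalate uncertain or reported content
    return "published"
```

The design point this sketch captures is that automation handles the clear-cut, high-volume cases, while user reports and human judgment act as a safety net for everything the model is unsure about.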

In conclusion, balancing free speech and content moderation is a goal that all social media platforms are trying to achieve. Although freedom of speech is a basic right of every citizen, it is necessary to maintain a harmonious atmosphere on social media platforms, combat harmful content, and protect the user experience. Combining the common moderation methods discussed above, namely automated AI review, user reporting, and human moderation, can address most of the negative effects of free speech on social media, reducing hate speech, misinformation, cyberbullying and harassment. Only through cooperation between social media platforms and their users can a healthy online environment be created that maintains social stability and harmony.

Reference list:

A&E Television Networks. (n.d.). Freedom of speech – origins, First Amendment & limits. History.com. https://www.history.com/topics/united-states-constitution/freedom-of-speech

Admin. (2023, April 16). Legal implications of social media & free speech: Balancing rights & responsibilities. The Law Communicants.

De Gregorio, G. (2020). Democratising online content moderation: A constitutional framework. Computer Law & Security Review, 36, 105374. https://doi.org/10.1016/j.clsr.2019.105374

Dean, M. (2012, October 18). The story of Amanda Todd. The New Yorker. https://www.newyorker.com/culture/culture-desk/the-story-of-amanda-todd

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Goldman, E. (2021). Content moderation remedies. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3810580

Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (2023). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences, 120(7). https://doi.org/10.1073/pnas.2210666120

Sullivan, B. (2021, July 12). Three black soccer players are facing racist abuse after England’s Euro 2020 defeat. NPR. https://www.npr.org/2021/07/12/1015239599/prince-william-and-boris-johnson-denounce-the-racist-abuse-of-englands-soccer-te

Thebusinessanecdote.com. (2022, December 29). Is free speech truly free?: The impact of free speech on social media users. Business Anecdote. https://www.thebusinessanecdote.com/post/is-free-speech-truly-free-the-impact-of-free-speech-on-social-media-users

Wang, S. (2023). Factors related to user perceptions of artificial intelligence (ai)-based content moderation on social media. Computers in Human Behavior, 149, 107971. https://doi.org/10.1016/j.chb.2023.107971
