Content Moderation on Social Media Platforms: Benefits, Challenges, and Responsibilities in the Digital Age

Image: Social Media Icons With Paint Splash Effect, by Lewis Ogden (2018)

Content moderation has become a prominent process that many platform companies apply before material is published online. However, the continued appearance of inappropriate content has led some to question whether the process is still necessary. Despite these growing worries, content moderation remains essential to protecting Internet users’ experiences, and it requires collaboration between governments, platform companies, and Internet users to minimise its challenges and maintain a free and safe Internet space.

Why is content moderation essential for Internet participation?

Content moderation plays a crucial role in improving the quality of public discourse and users’ experience. Because platforms are designed and orchestrated environments (Gillespie, 2018), they use content moderation to select the kinds of information presented online, thereby shaping users’ understanding of specific topics.

  • Content moderation can prevent vulnerable audiences from encountering toxic and offensive content through its rules and policies.

Many Internet users have formed a daily routine of going online, especially to social media sites, to seek information, communication, and entertainment, reflecting the perception of the Internet as “a space of flows” (Dutton, 2009). However, this creates a relatively high likelihood of viewing inappropriate content, because many users lack the ability to recognise illegal and harmful information. Aware of this, many media platforms have published regulations and policies to ban or remove improper content, such as sexually explicit material, misleading information, and violent or terrorist content. Despite the frequent introduction of new policies, the amount of such content remains unpredictable and hard to control. In 2019, millions of Facebook and Instagram posts violating the rules, including adult nudity, sexual activity, suicide, and self-injury content, were removed, raising concerns among users about the quality of the content they view every day (Wong, 2019). Without a content moderation process in place beforehand, the digital environment may grow more chaotic and unhealthy, distorting individuals’ knowledge and perceptions, especially those of children and teenagers, who are easily influenced by other points of view.

  • Content moderation can enhance individuals’ experience on social media.

The Internet has been viewed as a place open to everyone to join, express their thoughts, and create content, strengthening the network of Internet users regardless of race, nationality, or other factors. Further, thanks to the algorithms that run these platforms, users can easily find recent information that matches their interests. This freedom to share, combined with algorithmic support in finding content, enhances users’ experiences, as demonstrated in their interactions with the platforms and with other users. Content moderation, in other words, can be seen as a torch that lights the optimal ways for the online community to function.

Additionally, content moderation promotes equal treatment on social media sites regardless of users’ occupation or status. As engagement with social media platforms grew, many politicians gradually moved online, building a modern and friendly image in citizens’ eyes and boosting mutual interaction. However, this can threaten audiences’ freedom of speech and expression, since politicians can take advantage of their power to control the kind of content shared online. Here, content moderation is used to limit such political abuses and keep the digital sphere balanced. Take the banning of Donald Trump from social media as an example. Trump is a politician well known for using social media to broadcast his opinions. After losing his bid for re-election in 2020, he posted a series of provocative tweets venting his anger online, provoking mixed reactions and inciting riots among his supporters, culminating in the attack on the US Capitol in January 2021. As a result, several tech companies banned his online presence, with the permanent suspension of his Twitter account (Collins & Zadrozny, 2021) and a two-year suspension of his Facebook account (Bond, 2021), to reduce his negative effects on the real world. These penalties show that content violating the rules and threatening the security of the Internet is punished even when it comes from authorities, demonstrating the platform companies’ equal treatment of powerful and ordinary users alike. Hence, the moderation of content is important to building a safe digital environment for everyone.

Challenges posed by the application of content moderation

Even though content moderation is important for maintaining the freedom and security of social platforms, there are many challenges that need to be tackled.

  • The inability to review and sort content accurately at a large scale

The application of AI and machine systems can itself cause content moderation to fail. Under pressure to restrict all user-generated content that violates platform standards while still respecting users’ rights, companies use algorithms to speed up the moderation process (Flew et al., 2019). However, human-created content can carry multiple layers of meaning and unpredictable implications, making it difficult for software and algorithms to sort out whether content is acceptable or unacceptable (Roberts, 2019). The livestream of the Christchurch terrorist attack in 2019 illustrates the consequences of failed and slow content moderation, as well as the trauma of the many Facebook users who watched live footage of the crime (Smith, 2019). Beyond the sadness felt by most Facebook users, some individuals expressed anger that far less controversial content is often deleted quickly, calling into question the efficiency of content moderation on the platform.

  • The limitation of platform access and freedom of expression due to government intervention

Extreme government intervention in content moderation can narrow the range of topics and content on which citizens can express themselves. According to Samples (2019), there are four types of users on social media, and all groups share the same rights and responsibilities online. In other words, governments can implement laws to moderate inappropriate behaviour and offensive content online, while Internet users retain the right to select the types of information they want to view without being strictly controlled. The new censorship rules released in China to reduce the spread of protests further worsened the situation in the country (He, 2022). In addition, the lack of clarity about what kinds of content count as illegal or harmful further demonstrates the expansion of government power and a disrespect for freedom of expression, leaving content regulation unable to fulfil its mission.

Video: New Internet censorship rules worsened people’s anger in China (CNN, 2022)

Besides this, geopolitical tensions between countries, evidenced by increasing regulations and restrictions on specific social media platforms, can weaken connections between Internet users around the world and promote the concept of the “Splinternet”. The US government’s attempts to ban TikTok show how government bias limits users’ access to information and leaves them with a bad impression of content moderation as a tool for asserting government power (Gray, 2021). Moreover, the long-term effects of the Internet’s fragmentation could harm the development of online globalisation (Lemley, 2021) and exacerbate the difficult position of content moderation as the number of newly enacted laws rises.

The necessary role of Internet users in the content moderation process

Hence, Internet users should also take part in the content moderation procedure, collaborating with governments and platform companies to enhance the quality of content appearing online. They are, after all, both the central audience of content moderation on social media platforms and the consumers of the content generated there, making them well aware of what types of information and material are appropriate for all users. Moreover, Internet users’ involvement can balance the roles of platform companies and governments, promoting a healthier online environment and a freer space for everyone, closer to the initial idea of the Internet as a space without content limitations (Abbate, 2017). At the same time, Internet regulations and media policies should be updated frequently to improve the moderation procedure and so maintain the freedom and safety of the Internet.

In conclusion, content moderation improves Internet users’ experiences by enforcing rules and norms that remove offensive information and improper conduct while treating everyone equally, regardless of social status. However, the difficulty of moderating content at a large scale and extreme government intervention in free expression and platform access threaten its function, making user involvement necessary. Once the collaboration between Internet audiences, governments, and platform companies succeeds, content moderation can create and maintain a safe Internet space for everyone.

Reference list

Abbate, J. (2017). What and where is the Internet? (Re)defining Internet histories. Internet Histories, 1(1–2), 8–14. https://doi.org/10.1080/24701475.2017.1305836

Bond, S. (2021, June 4). Trump suspended from Facebook for 2 years. NPR. https://www.npr.org/2021/06/04/1003284948/trump-suspended-from-facebook-for-2-years

CNN. (2022, December 1). Chinese police use extreme censorship tactics to prevent spread of protests [Video]. YouTube. https://youtu.be/wUoSbETkS2Q?si=m271U9MAaG03en_B

Collins, B., & Zadrozny, B. (2021, January 9). Twitter permanently suspends President Donald Trump. NBC News. https://www.nbcnews.com/tech/tech-news/twitter-permanently-bans-president-donald-trump-n1253588

Dutton, W. H. (2009). The Fifth Estate emerging through the network of networks. Prometheus, 27(1), 1–15. https://doi.org/10.1080/08109020802657453

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Gray, J. E. (2021). The geopolitics of ‘platforms’: the TikTok challenge. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1557

He, L. (2022, November 30). China to punish internet users for ‘liking’ posts in crackdown after zero-Covid protests. CNN Business. https://edition.cnn.com/2022/11/30/media/china-new-internet-rule-punish-liking-posts-intl-hnk/index.html

Lemley, M. A. (2021). The Splinternet. Duke Law Journal, 70(6), 1397–. https://link.gale.com/apps/doc/A666103930/AONE?u=usyd&sid=bookmark-AONE&xid=86275a95

Ogden, L. (2018). Social Media Icons With Paint Splash Effect [Photograph]. Flickr. https://www.flickr.com/photos/bitsfrombytes/43617178595

Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33–72). Yale University Press. http://ebookcentral.proquest.com/lib/usyd/detail.action?docID=5783696

Samples, J. (2019, April 9). Why the Government Should Not Regulate Content Moderation of Social Media. CATO Institute. https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media

Smith, A. (2019, March 15). New Zealand Terrorist Attack Live Streamed on Facebook. PCMag. https://www.pcmag.com/news/new-zealand-terrorist-attack-live-streamed-on-facebook

Wong, Q. (2019, November 13). Millions of Facebook, Instagram posts removed for violating rules. CNET. https://www.cnet.com/tech/mobile/millions-of-facebook-instagram-posts-removed-for-violating-rules/