Issues, Controversies, and Government Involvement in Social Media Content Moderation

Group 10


Video: Content moderators: the gatekeepers of social media | Gianluca Demartini | TEDxUQ

Source: YouTube

Content moderation is the practice of monitoring activity in social media communities and applying a pre-determined set of rules and guidelines to determine whether that activity is permissible. It means that user-generated content on social channels such as Facebook, Instagram, and Twitter is moderated according to the rules and regulations of social media moderation. According to Roberts (2019), “The activity of reviewing user-generated content may take place before the material is submitted for inclusion or distribution on a site, or it may take place after material has already been uploaded” (p. 33). The international legal order recognizes the need for states to protect, respect, and promote human rights. While this concept is well developed in other areas, such as the extractive and garment sectors, online platforms receive notably less attention because of the distinctive issues that content moderation raises in the digital age. In this regard, we will discuss what role legislative action should play in platform content moderation.

What issues arise for digital platform content moderation?

Figure 1. “Top 12 Most Popular Social Media Sites In 2021” by Jeremy Collins is licensed under CC BY 2.0

One major issue for digital platforms is the failure to practice content moderation adequately. Digital platforms fail to respect their users’ freedom of expression, and they fail to adequately handle harmful and unlawful content posted on them, such as harassment and hate speech (Gillespie, 2018). While online platforms play a significant role in freedom of expression in the digital world, most fail to fulfill their content moderation responsibilities. Billions of people rely on search engines such as Google daily to seek and receive information and ideas. More importantly, people build relationships, share ideas, and network with others on social media platforms. People also use social media to organize dissent and mobilize, as in the Arab Spring and #MeToo. Therefore, when digital platforms fail to monitor public discourse closely, issues like harassment and hate speech arise, because online platforms act as daily lifelines for the billions of people who use them.

Another critical issue digital platforms face in enforcing content moderation is the absence of actionable legal guidance. As a result, digital platforms engage in overzealous content moderation to comply with governments’ strict legislative responses. The trend reveals that there is no consensus over digital platforms’ specific roles in content moderation or how they should carry it out. For instance, Facebook deleted a photo of a victim of the Vietnam War in 2016 (Kleinman, 2016). Soon after, YouTube pulled down several videos containing evidence of atrocities and war crimes in Syria (Browne, 2017). Online platforms lack specific procedures for content moderation despite the UN’s Guiding Principles, which provide critical frameworks for defining their responsibilities. In other words, digital platforms lack crucial knowledge of how to develop and implement terms of service when it comes to content moderation.

Why does massive controversy arise from platform content moderation?

Figure 2. “At long last, the first issue of the ‘Journal of Controversial Ideas’!” by Michael Cook is licensed under CC BY 2.0

Apart from the Facebook and YouTube cases of war victims discussed above, several other attempts by digital platforms to exercise content moderation have resulted in controversy. For instance, in 2017, Facebook deleted posts documenting violence targeted at the Rohingya Muslims in Myanmar (Wong et al., 2017). In another example, Facebook removed content and suspended profiles and groups linked with queer people, censoring LGBTI individuals for describing themselves as dyke or queer. Facebook was widely criticized for such moves despite its attempt to enforce content moderation. The controversies arise from how digital platforms implement their terms of service and how transparent they are about the amount of content they remove. Tech companies face challenges regulating online content while respecting people’s right to freedom of expression. In that light, Reddit finds itself in the middle of a controversy because the platform’s “design, algorithm, and platform politics implicitly support these kinds of cultures” (Massanari, 2017, p. 329). The platform fosters anti-feminist and misogynistic activism, which infringes on others’ right to freedom of expression. It implies that various actors in the business and human rights community need to do more work.

Current arrangements for moderating content are flawed, which makes the practice itself controversial. Digital platforms increasingly recognize their global footprint and respond as best they know how: they introduce new features and interventions to moderate harmful content and comply with formal and informal demands from states. Digital platforms are gradually moving toward an industrial approach to content moderation, contracting large numbers of moderators and deploying identification software. The current arrangement is a large-scale and complex system that struggles with sensitivity to context, culture, and language. It therefore raises controversy over how to moderate the speech of the many groups mentioned earlier. For instance, controversy arises over the resources platforms devote to moderating the speech of the LGBTI community versus the rest of the public. If digital platforms prioritize specific users, the outcome when the influential few engage in hate speech or incitement remains unclear. Therefore, the content that is censored or moderated becomes as significant as the content left on the platforms, causing massive controversy.

Additionally, the fact that most digital platforms are still in their infancy raises controversy over content moderation, because online speech is a relatively recent development. This is especially true because social and legal norms concerning speech continually evolve over time and across cultural contexts. Digital platforms grapple with issues that other media fields took decades to resolve, or that remain unresolved. “Things aren’t great, Internet. Actually, scratch that: they’re awful. You were supposed to be the blossoming of a million voices. We were all going to democratize access to information together. But some of your users have taken that freedom as a license to victimize others. This is not fine” (Gillespie, 2018, p. 9). Many influential digital platforms like Facebook are grounded in the American free speech tradition and hence do not enforce strict content moderation. However, inadequate preparation for applying that approach across varied, and sometimes conflicting, legal, social, and political environments raises tensions. There is currently no approach to content moderation that avoids controversy on digital platforms.

What should governments do to restrict social media?

Figure 3. “Why Facebook Is Banned in China & How to Access It” by Kristina Zucchi is licensed under CC BY 2.0


Consequently, it is logical to ask whether governments should play a more significant role in enforcing content moderation restrictions on social media. Some tech companies take proactive steps to enhance their content moderation even when explicit, actionable guidance is lacking: they provide more detailed terms of service and explain how they implement them, increasing transparency. However, while such efforts are promising, governments must play a crucial role in enforcing content moderation to address the issues and controversies discussed above. In that light, governments “need to ensure that national legal and regulatory frameworks support, rather than undermine, online platforms in respecting the right to freedom of expression” (Wingfield, 2018, para. 7). For instance, governments can include issues about the tech sector and freedom of expression in their national action plans to align with the UN Guiding Principles. Moreover, governments can consider processes that ensure digital platforms respect human rights in their content moderation, matching those at the global level.


In summary, governments can examine the roles that digital platforms play in disseminating (mis)information. Governments can enact laws that affect interactive computer services, including social media operators, in content moderation (Gallo & Cho, 2021). When governments introduce such legislation, they are likely to consider the public and private sectors’ roles in addressing (mis)information in social media communities. However, governments must be careful, because such incentives, policies, or regulations might disproportionately affect one category of information, limiting the initiative. Therefore, as governments take such measures, they must consider what implications their legislative actions hold for content moderation. Any effort to address content moderation issues can have unintended economic, legal, and social consequences that governments may fail to foresee.




Browne, M. (2017, August 22). YouTube removes videos showing atrocities in Syria. The New York Times.

Gallo, J. A., & Cho, C. Y. (2021). Social media: Misinformation and content moderation issues for Congress (CRS Report R46662). Congressional Research Service.

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet (pp. 1-23). Yale University Press.

Kleinman, Z. (2016, September 9). Fury over Facebook ‘Napalm girl’ censorship. BBC News.

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.

Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the Screen (pp. 33-72). Yale University Press.

Wingfield, R. (2018, August 1). Risks and responsibilities of content moderation in online platforms. OpenGlobalRights.

Wong, J. C., Safi, M., & Rahman, S. A. (2017, September 20). Facebook bans Rohingya group’s posts as minority faces ‘ethnic cleansing’. The Guardian.