
The increased openness of the internet supports freedom of communication and expression. However, it also enables the circulation of problematic content, such as explicit, harmful, and offensive material, on digital platforms. To protect users from this material, digital content moderation is crucial. As defined by Gillespie et al. (2020), content moderation is the practice of detecting content or behaviour deemed unacceptable by platforms or other information intermediaries. Despite its importance, the process of censoring and removing inappropriate content remains contested, and governments face the challenge of regulating content without infringing upon freedom of expression. According to Gillespie (2018), debates about acceptable and prohibited content trace back to century-old arguments over the proper boundaries of public expression. This essay therefore examines the circulation of problematic content on the internet and offers suggestions on how governments should deal with this issue.
Content regulation comes with challenges
Regulating content on the internet renders the content or the user invisible on a platform. This creates debate over whether removal amounts to “censorship” and an infringement of free speech under the US Constitution. While clearly illegal or abusive content is regulated, there are no settled definitions or boundaries for speech that lies outside this category, which makes it difficult for platforms to determine what should and should not be regulated.
Decisions about what should be deemed “problematic” or “harmful” are very complex, and our societies haven’t yet agreed on the types of content they expect social media platforms to either curate, or moderate (Wardle, 2019).

In 2016, Facebook faced criticism after it removed a post featuring the “Napalm Girl” photograph. Taken in 1972, the photograph shows a naked, injured nine-year-old girl running towards soldiers for help during the Vietnam War. It was posted by Norwegian newspaper editor Tom Egeland, and when he attempted to re-post it, Facebook suspended his account. In banning the photograph, Facebook cited a violation of its “community guidelines,” reducing an iconic image of war to pornography (Ibrahim, 2017). This example reveals the controversies of content moderation, as people hold different views about what is appropriate to post on social media.

A more recent example of the controversy surrounding content moderation is the banning of Donald Trump from Facebook and Twitter in response to the January 6, 2021 assault on the US Capitol by Trump supporters. Both companies acted after Trump used their platforms to spread misinformation about election fraud. Twitter moved first, announcing on January 8 that, after a close review of Trump’s recent tweets, it had decided to permanently suspend his account due to the risk of further incitement of violence. Facebook stated that it would block Trump’s “Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete” (Ketchell, 2021). These decisions raised questions about whether online platforms have the right to censor individuals. Many saw the ban as a form of censorship and a violation of Trump’s rights, and Twitter and Facebook were also criticised for banning Trump while failing to act on other harmful content.
Methods of moderation

Social media platforms employ a variety of moderation methods: “platforms vary, in ways that matter both for the influence they can assert over users and for how they should be governed” (Gillespie, 2018, p. 256).
Platforms such as Twitter, Facebook, and Instagram use a combination of automated and manual moderation. Policymakers and tech industry players frequently point to artificial intelligence (AI) as a solution to the complex challenges of governing online content (Llansó, 2020). Moderators identify material that is inappropriate under general site guidelines or the law by engaging computational tools, including automated text searches for banned words, “skin filters” that estimate whether a large proportion of an image or video shows bare flesh, and tools designed to match, flag, or remove copyrighted material (Roberts, 2019). However, machine-automated detection presents real technological challenges and is computationally and financially infeasible to implement across the board and at scale in many content moderation environments (Roberts, 2019).
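To make the automated text-search approach concrete, the sketch below shows a minimal keyword filter of the kind Roberts (2019) describes. It is an illustrative assumption rather than any platform’s actual system: the banned-term list and the flag_post function are hypothetical, and production systems combine far richer, context-sensitive signals.

```python
import re

# Hypothetical banned-term list; real platforms maintain much larger,
# context-aware lexicons that are regularly updated.
BANNED_TERMS = {"badwordone", "badwordtwo"}

def flag_post(text: str) -> bool:
    """Return True if the post contains a banned term and should be
    queued for human review rather than published automatically."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(token in BANNED_TERMS for token in tokens)

if __name__ == "__main__":
    print(flag_post("an ordinary post about the weather"))  # False
    print(flag_post("a post containing badwordone"))        # True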
The limits of automated moderation mean that humans still play a central role in moderating digital material, intervening and applying their own judgement. Platforms have implemented manual moderation mechanisms to discourage anti-social behaviour such as trolling and harassment. These include human moderators who remove abusive posts, voting mechanisms through which registered users up-vote or down-vote each submission, and tools for flagging offensive content (Jhaver et al., 2018). Platforms such as Twitter also give users the ability to mute, block, or report other users. Because this work is dispersed across many sites and arrangements, commercial content moderation workers are difficult to locate and identify, and their tasks typically fall to workers who are low-status and low-paid relative to others in the tech industry.
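As a rough illustration of the community voting and flagging mechanisms described above, the sketch below queues a submission for human review once it accumulates enough flags or down-votes. The class, thresholds, and method names are hypothetical assumptions for illustration, not the design of any particular platform.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    """A user post together with community feedback signals (illustrative)."""
    text: str
    upvotes: int = 0
    downvotes: int = 0
    flags: set = field(default_factory=set)  # ids of users who flagged the post

    def vote(self, up: bool) -> None:
        """Record a registered user's up-vote or down-vote."""
        if up:
            self.upvotes += 1
        else:
            self.downvotes += 1

    def flag(self, user_id: str) -> None:
        """Record that a user flagged the post as offensive."""
        self.flags.add(user_id)

    def needs_review(self, flag_threshold: int = 3, score_floor: int = -5) -> bool:
        """Send the post to a human moderator once it is heavily flagged
        or heavily down-voted (thresholds are arbitrary placeholders)."""
        score = self.upvotes - self.downvotes
        return len(self.flags) >= flag_threshold or score <= score_floor

# Example: three independent flags push a post into the review queue.
post = Submission("a possibly abusive reply")
for user in ("u1", "u2", "u3"):
    post.flag(user)
print(post.needs_review())  # True
```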
Whilst regulation initially focused on illicit content such as sexually explicit and graphically violent images, it has expanded to categories such as hate speech, self-harm, and extremism. Countries such as Germany and France, for example, have laws prohibiting the promotion of Nazism, anti-Semitism, and white supremacy. According to Gillespie (2018), the French law produced one of the earliest online content cases, in which Yahoo was compelled to prevent French users from accessing online auctions of Nazi memorabilia. Platforms also cooperate with national governments to remove content: Facebook works with Pakistan to censor online blasphemy, with Vietnam to remove anti-government content, and with Thailand to remove criticism of the royal family (Gillespie, 2018).
Content moderation: the government’s responsibility?
The backlash platforms have faced over their moderation decisions has pressured governments to enforce their own regulatory policies, pushing for increased national control and a greater voice in the governance of the internet. Many believe that government regulation can uphold fairness, balance, and other public values. It is also argued that platforms lack the legitimacy to govern speech on their forums, whereas the government has the authority to suppress speech related to violence (Samples, 2019). Government regulation has increased significantly, with Australia, Singapore, China, and several European states among those that now regulate social media.
However, many argue against government intervention. A primary argument is that it constitutes a form of censorship: government regulation can limit freedom of expression by suppressing dissent or disfavoured speech. This can be seen in the US, where conservative voters argue that their views are being censored by social media platforms. Trump has also criticised methods of regulation, complaining that platforms have political agendas; in 2019, for example, he claimed that Google searches were biased against Republicans and conservatives (Samples, 2019). Government censorship is also evident in China, where the government monitors and restricts certain media, removing political criticism and automatically blocking websites and keyword searches (Gillespie, 2018).

Another argument against government regulation is that it can stifle innovation and entrench monopolies (Kumar, 2019). This concern about regulation’s effect on innovation has been voiced by the head of the Federal Communications Commission (FCC), Ajit Pai (O’Hara & Hall, 2018). Governments may also lack the technical capacity that platforms have to detect and remove social media content.
Looking forward
It is evident that the question of who is responsible for the regulation of social media is highly complex, and there is no single answer. While governments should undoubtedly play a large role in regulating content, platforms should not wait for governments to impose controls; they must take responsibility and become more aggressive at self-regulation. Self-regulation refers to the steps companies or industry associations take to monitor and govern their own conduct, ranging from self-monitoring for regulatory violations to proactive corporate social responsibility (CSR) initiatives (Cusumano et al., 2021). Platforms must also look more closely at how they regulate content and at how they define harmful content. Overall, governments and platforms will need to work together more closely to ensure that regulation is achieved without destroying freedom of speech.
References:
Cusumano, M. A., Gawer, A., & Yoffie, D. B. (2021, January 15). Social Media Companies Should Self-Regulate. Now. Harvard Business Review. https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
Gillespie, T., Aufderheide, P., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S. T., Sinnreich, A., & Myers West, S. (2020). Expanding the debate about content moderation: scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512
Gillespie, T. (2018). Regulation of and by Platforms. In A. Marwick., T. Poell., & J. Burgess (Eds.), The SAGE Handbook of Social Media (pp. 254-278). SAGE Publications.
Ibrahim, Y. (2017). Facebook and the Napalm Girl: Reframing the Iconic as Pornographic. Social Media + Society, 3(4). https://doi.org/10.1177/2056305117743140
Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online Harassment and Content Moderation: The Case of Blocklists. ACM Transactions on Computer-Human Interaction, 25(2), 1-33. https://doi.org/10.1145/3185593
Ketchell, M. (2021, January 9). Twitter permanently suspends Trump after U.S. Capitol siege, citing risk of further violence. The Conversation. https://theconversation.com/twitter-permanently-suspends-trump-after-u-s-capitol-siege-citing-risk-of-further-violence-152924
Kumar, R. (2019, November 14). Government should not regulate social media. Observer Research Foundation. https://www.orfonline.org/expert-speak/government-should-not-regulate-social-media-57786/
Llansó, E. J. (2020). No amount of “AI” in content moderation will solve filtering’s prior-restraint problem. Big Data & Society, 7(1). https://doi.org/10.1177/2053951720920686
O’Hara, K., & Hall, W. (2018). Four Internets: The Geopolitics of Digital Governance. Centre for International Governance Innovation.
Roberts, S. T. (2019). Understanding Commercial Content Moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33-72). Yale University Press.
Samples, J. (2019, April 9). Why the Government Should Not Regulate Content Moderation of Social Media. Cato Institute.
Wardle, C. (2019, June 27). Challenges of Content Moderation: Define “Harmful Content.” Institut Montaigne. https://www.institutmontaigne.org/en/analysis/challenges-content-moderation-define-harmful-content