Photo-illustration by Sam Whitney. Source: Getty Images. All rights reserved.

Content moderation is essential to a safe and enjoyable social media experience. But do moderators on digital platforms have too much control over what is shared online? Have they become arbiters of truth, restraining free speech? Is it time for governments around the world to step in?

What is content moderation?

“Social Media Icons on an iPhone 7 Screen” by Stacey MacNaught is licensed under CC BY 2.0.

Content moderation is the practice of reviewing content posted by users on internet websites, social media, and other online outlets to determine whether it should be removed (Roberts, 2019). It usually involves a combination of human labour and machine-automated detection, in which algorithms flag potentially infringing content for review (Roberts, 2019).
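The hybrid pipeline Roberts describes can be sketched in a few lines. Everything below is a hypothetical illustration: the blocklist terms, thresholds, and outcome labels are placeholders, and production systems rely on trained classifiers and large human review teams rather than keyword matching.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an automated
# filter scores each post, clear violations are removed outright, and
# ambiguous posts are escalated to a human moderator's review queue.
# The blocklist and thresholds are illustrative, not a real system.

BLOCKLIST = {"spam-link.example", "buy followers"}  # placeholder terms


def moderate(post: str) -> str:
    """Classify a post as 'removed', 'needs_human_review', or 'approved'."""
    text = post.lower()
    hits = sum(term in text for term in BLOCKLIST)
    if hits >= 2:
        return "removed"              # confident match: auto-remove
    if hits == 1:
        return "needs_human_review"   # uncertain: queue for a human
    return "approved"
```

The design point is the middle branch: automated detection handles clear-cut cases at scale, while borderline content is deferred to the human labour that Roberts identifies as the other half of the practice.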

The purpose of content moderation is to comply with laws, positively contribute to user experience by removing offensive material, and keep the company’s image intact (Roberts, 2019). This purpose is more important than ever as the scale of user-generated content continuously grows (Roberts, 2019).

Whilst digital platforms, such as Twitter, post community guidelines that prioritise freedom of information sharing (Twitter, 2021), those running these platforms are responsible for restricting and removing content to ensure a safe experience for users (Gillespie, 2018). 

“Having in many ways taken custody of the web, [platforms] now find themselves its custodians. The challenge for platforms… is exactly when, how, and why to intervene.” (Gillespie, 2018, p. 5).


Is content moderation doing more harm than good?

Content moderation is necessary. However, to preserve the liberating nature of the internet, a balance must be struck between content moderation and the rights of the individual to freely voice their opinion.


The importance of free speech online

An integral aspect of the internet is that individuals can express their beliefs without fear (Barlow, 1996). But, the rules imposed by digital platforms are essential to responding to contemporary concerns, such as online sexual predation and terrorism (Gillespie, 2018).

However, there is a fine line between platforms using moderation to protect users against harmful content and using it to facilitate censorship. This is why many platforms are reluctant to over-moderate.

Facebook’s CEO Mark Zuckerberg stated that Facebook “should not be the arbiters of truth. That would be a bad position for us to be in and not what we should be doing” (Rabin-Havt, 2020).

Facebook, Twitter and Instagram’s banning of Trump

Whilst digital platforms should not be the arbiters of truth, content moderation practices, such as the banning of former US President Donald Trump, have led to controversy. 

In the wake of violent protests in Washington, D.C. in January 2021, following Trump’s online allegations of voter fraud in the US election, Twitter, Facebook, Instagram and Snapchat blocked Trump from posting (“Twitter, Facebook, Instagram and Snapchat”, 2021). After reviewing his recent tweets and their context, Twitter permanently suspended Trump’s account, @realDonaldTrump, determining that suspension was necessary due to the risk of further incitement of violence (Twitter Inc, 2021). Facebook’s Oversight Board also upheld the decision to ban Trump’s Facebook and Instagram accounts in May 2021 (“Donald Trump’s Facebook ban upheld”, 2021). Given Trump’s status as an “irrepressible, agenda-setting force” on social media with influence over a wide audience (Smith, 2021, p. 1), these actions seemed necessary.

However, Trump subsequently filed a lawsuit against Facebook, Twitter and Google, claiming he was subject to censorship, infringing his right to freedom of speech (“Trump sues Twitter”, 2021).

“Internet is freedom of speech” by BEE FREE – PGrandicelli [the social bee] is licensed under CC BY-NC-SA 2.0.
Whilst banning an account for an indefinite period suggests an attempt to control speech and distort users’ reality, content banned on one platform will almost always reappear on another (Diaz & Hecht-Felella, 2021).

Also, though users may have a right to see obscenity online when it accurately represents history, the line should be drawn where posts have the power to incite violence (Gillespie, 2018). 

Freedom of expression is integral to feeling heard online, but the words of influential figures can have adverse real-life consequences when they are permitted to circulate on platforms. 


All digital platforms apply different moderation practices

Another issue with content moderation by digital platforms is that no blanket rule exists; each platform applies its own level of moderation. Consequently, many platforms are criticised for their lack of moderation.

Australian Prime Minister Scott Morrison criticised social media platforms last week for allowing trolls to run rampant online, describing social media as a “coward’s palace” for cyber bullies (McLeod, 2021).

Broadly, platforms are criticised for taking an economic approach to content moderation by prioritising interaction with their site over safety (Gillespie, 2018). This is especially problematic on platforms with growing user bases as policing large volumes of content becomes more difficult (Gillespie, 2018).


“reddit sticker – 2” by Eva Blue is licensed under CC BY 2.0.

Reddit is a social media platform where users submit posts and others either ‘upvote’ or ‘downvote’ the content based on whether they enjoy it (Boyd, 2018). When a post attains many upvotes, it becomes visible to more users (Boyd, 2018). This algorithm, in combination with Reddit’s lack of moderation, provides the basis for a toxic online space in which hateful content spreads rapidly.

A major failure of Reddit’s content moderation was seen with ‘The Fappening’ in August 2014 (Massanari, 2015). Illegally acquired nude photographs of celebrities, including Jennifer Lawrence, were distributed on the subreddit /r/thefappening, where users discussed the images and the celebrities themselves (Massanari, 2015). The posts were not removed from Reddit for more than a week, likely because the subreddit attracted an exponential rise of some 100,000 new subscribers (Massanari, 2015). It was only when Reddit became aware that photographs of underage gymnast McKayla Maroney had been posted that the subreddit was banned for violating the prohibition of sexualised images of minors (Massanari, 2015).


Is it time for governments to play a greater role in content moderation?


Arguments for greater government involvement:

  • The government ensures moderation is in the public interest
  • Legislation keeps platforms accountable and punishes them when needed
  • Some governments are already doing it

One of the government’s main roles is to regulate in accordance with the public interest. Social media platforms have power in curating, deleting and censoring content (Samples, 2020). The government has the responsibility to at least provide a system of checks and balances so that these platforms cannot abuse their power (Samples, 2020).

Greater involvement by governments should be considered, as both the United Kingdom and France have already engaged in legal reform on this issue. The French courts recognised the need to block access to certain content in the 2000 case of LICRA v Yahoo!, in which the Court decided that French users would be blocked from accessing Nazi memorabilia on Yahoo!. Further, in 2020, France passed a new law requiring social media companies to remove content such as hate speech within one hour or face fines of up to £1m (Cuthbertson, 2020). Also in 2020, the UK government announced it would enact legislation to penalise social media platforms that do not remove or block harmful content (Dawson, 2020).


Arguments against greater government involvement:

  • Laws are a slippery slope towards censorship
  • Greater involvement puts private companies in charge of policing speech
  • Governments will inadvertently further enable the monopolisation of social media platforms

A primary argument against greater government involvement is the fear of minority voices being censored (Samples, 2020). The government in power may adjust laws to ensure its agenda is freely expressed online, neglecting other voices (Crews, 2020). Consequently, biased content moderation is bound to occur, effectively censoring large parts of the global community (Samples, 2020).

If governments intervene in the same way France has, there is a fear private companies will take on too great of a role in policing speech (Cuthbertson, 2020). The new French law has been criticised for being unrealistic and incentivising platforms to over-remove content to ensure they are not infringing the law (Cuthbertson, 2020).

Lastly, as most social media platforms cannot match the financial resources of global giants Twitter, Google and Facebook, the penalties imposed may cause smaller platforms to shut down (Dawson, 2020). Smaller platforms have fewer resources for content moderation, and if they do breach the law, the penalties will further entrench their status below the social media giants (Dawson, 2020). Effectively, this would also limit free speech, as fewer platforms would be available online.

“woman, icon, networks, internet, social, social network, logo, marketing” by Piqsels is licensed under CC0.

Content moderation by digital platforms is integral to a safe online environment for users. Whilst digital platforms have been criticised for inadequate moderation, it could be a mistake to allow governments to intervene heavily in such a complex issue.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


Reference List:

Barlow, J. P. (2019). A declaration of the independence of cyberspace. Duke Law & Technology Review, 18(1), 5-7. (Original work published 1996)

Boyd, J. (2018, February 28). What is Reddit? Brandwatch. 

Crews, C. W. (2020, June 28). Regulating social media content moderation will backfire and make big tech more powerful. Forbes.

Cuthbertson, A. (2020, May 15). France orders social media firms to delete certain content within one hour or face fines. The Independent. 

Dawson, M. (2020, February 6). Why government involvement in content moderation could be problematic. Impakter. 

Diaz, A., & Hecht-Felella, L. (2021). Double standards in social media content moderation. Brennan Center for Justice.

Donald Trump’s Facebook ban upheld but review is called for before former president lashes out. (2021, May 5). ABC News. 

Gillespie, T. (2018). All platforms moderate. In T. Gillespie (Ed.), Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven: Yale University Press. 

Massanari, A. (2015). #Gamergate and the fappening: How Reddit’s algorithm, governance and culture support toxic technocultures. New Media & Society, 19(3), 329-346.

McLeod, C. (2021, October 7). Prime Minister Scott Morrison says social media a ‘coward’s palace’ for cyberbullies. The Australian.

Rabin-Havt, A. (2020). Mark Zuckerberg is right: Facebook should not be the arbiter of truth. Jacobin Magazine. 

Roberts, S. T. (2019). Understanding commercial content moderation. In S. T. Roberts (Ed.), Behind the screen (pp. 33-72). Yale University Press.  

Samples, J. (2020). Why the government should not regulate content moderation of social media. Cato Institute Policy Analysis, 865(1), 1-32. 

Smith, D. (2021, July 8). Trump says he will sue social media giants over ‘censorship’. The Guardian. 

Trump sues Twitter, Google and Facebook alleging ‘censorship’. (2021, July 7). BBC News. 

Twitter, Facebook, Instagram and Snapchat lock US President Donald Trump’s accounts in wake of ‘violent’ protests in US Capitol. (2021, January 7). ABC News. 

Twitter Inc. (2021). The Twitter rules.

Twitter Inc. (2021, January 8). Permanent suspension of @realDonaldTrump. [Blog post]. 

United Nations Office on Drugs and Crime. (2000, November 20). UEJF and Licra v Yahoo! Inc and Yahoo France. Case Law Database. 

About Claudia Sobral
Claudia is a second-year Law and Arts student from the University of Sydney, majoring in Media Studies.