The Role of Content Moderation in the Regulation of Digital Platforms: Its Strengths & Limitations

"Social Media" by Adem AY, https://unsplash.com/photos/Tk9m_HP4rgQ. Licensed under CC BY-SA 2.0. View license at https://creativecommons.org/licenses/by-sa/2.0/

Over the past decade, the world has witnessed a proliferation of social media platforms. Driven by the great freedom that the Internet promised, people created social media in the hope of carving out a space for personal expression, social connection, and online participation (Gillespie, 2018, p.5). Whilst seemingly utopian at first glance, social media platforms also pose significant perils; harmful and abusive content such as racism and sexism is widely posted, shared, and circulated on social media (Gillespie, 2018, p.5). As social media continues to play an integral role in our daily lives, it is crucial to regulate these platforms and implement content moderation in order to mitigate the damaging effects of harmful content and, in turn, protect users online.

What is Content Moderation?

Content moderation refers to the practice of screening user-generated content against a set of rules and guidelines in order to determine whether the content is inappropriate and whether it needs to be restricted or removed from the platform (Roberts, 2019). Despite the protection of safe harbour policies such as ‘Section 230’, which specifies that digital platforms cannot be held liable for their users’ content and behaviour (Gillespie, 2017, p.257), researcher Tarleton Gillespie highlights why it remains crucial for digital platforms to moderate their content:

“Platforms must, in some form or another, moderate: both to protect one user from another, or one group from its antagonists, and to remove the offensive, vile, or illegal—as well as to present their best face to new users, to their advertisers and partners, and to the public at large.” (Gillespie, 2018, p.5).

The Paradox of Content Moderation

Although content moderation does indeed help to eliminate illegal and unethical content and foster a better user experience, it is also important to acknowledge that the very essence of content moderation goes against the internet’s founding principles of freedom and openness (Castells, 2001). Therein lies the paradox of content moderation: on one hand, content moderation helps to reduce the potentially harmful and toxic content present on social media; on the other, the practice of regulating and restricting content contradicts the idea of the internet as a free speech zone (Roberts, 2019, p.71). Digital platforms must therefore strive for a balance between these two opposing demands in order to grant users both a safe online experience and the freedom of opinion and expression.

Some platforms, however, have controversially opted to conceal the appearance of moderating content in an attempt to market themselves as platforms that wholly embody the notion of free speech (Gillespie, 2018). This is exemplified in 2010, when former Twitter CEO Dick Costolo described Twitter as “the free speech wing of the free speech party” (Christensen, 2021, para.4). In reality, Costolo’s statement ironically diverges from the truth, as Twitter is notorious for having some of the most rigorous content moderation rules and community standards (Christensen, 2021). This need to downplay content moderation practices in order to uphold a brand image that celebrates free speech reveals that moderating content does, to a certain extent, come at the cost of freedom of speech.

Content Moderation Practices & Their Controversies

“Data” by Markus Spiske, https://unsplash.com/photos/hvSr_CVecVI. Licensed under CC BY-SA 2.0. View license at https://creativecommons.org/licenses/by-sa/2.0/

Content moderation is conducted through two main methods: artificial intelligence (AI) moderation and manual, human-led moderation. While AI moderation relies on computer-based automated flagging systems, manual content moderation involves humans deciding what stays up and what needs to be taken down. Most social media companies today employ a combination of AI and human content moderation to strengthen their effectiveness and optimise results.
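To illustrate how the two methods are typically combined, the short Python sketch below shows one plausible routing rule: an automated classifier scores each post, posts scored above a high threshold are removed automatically, uncertain cases are queued for human review, and the rest stay up. The thresholds, labels and function names are illustrative assumptions rather than any platform’s documented system.

```python
# Illustrative sketch of a hybrid AI/human moderation pipeline.
# Thresholds, labels, and the scoring step are hypothetical assumptions,
# not any platform's actual policy or system.

AUTO_REMOVE_THRESHOLD = 0.95   # classifier is near-certain the post violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases are escalated to human moderators


def route_post(violation_score: float) -> str:
    """Decide what happens to a post given an automated classifier's score (0.0-1.0)."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # handled automatically; no human needs to see it
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # queued for a human content moderator to judge
    return "keep"              # stays up (users may still report it later)


if __name__ == "__main__":
    examples = [("holiday photo", 0.05), ("borderline insult", 0.72), ("explicit threat", 0.98)]
    for label, score in examples:
        print(f"{label}: {route_post(score)}")
```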

AI moderation has proven to be particularly helpful in the age of big data. Every minute, Instagram users share 65,000 photos, YouTube users upload more than 500 hours of video, and Twitter users post 575,000 tweets (Domo, 2021). With such significant amounts of user-generated content being produced across social media platforms, AI moderation is crucial for filtering through and removing harmful and illegal content (Gorwa, Binns, & Katzenbach, 2020). AI uses computational flagging tools to check for harmful text, pictures or videos. In her chapter ‘Understanding Commercial Content Moderation’, social media theorist Sarah Roberts provides examples of such computational tools, including automated text searches for banned words and ‘skin filters’ that allow AI to identify nudity and potentially pornographic material (Roberts, 2019, p.37). Although using AI to moderate content offers a myriad of benefits, such as efficiency, speed and low cost, automated content moderation is also susceptible to mistakes and errors (Roberts, 2019). For instance, inconsistent definitions make it difficult for automated systems to determine whether content is truly harmful: what constitutes ‘fake news’? Are war photographs shared on social media educational or violent? (Gillespie, 2018, p.10). Moreover, the inability of AI systems to understand context leads to inaccurate flagging. Therefore, instead of policing content solely through AI moderation, human intervention is needed to screen content appropriately (Roberts, 2019, p.35).
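As a minimal sketch of the kind of banned-word search Roberts describes (with an invented word list, not any real platform’s), the example below flags a genuinely threatening post and an innocuous figure of speech in exactly the same way, illustrating why context-blind automated flagging still needs human review.

```python
import re

# Hypothetical banned-word list, invented purely for illustration.
BANNED_WORDS = {"kill", "nude"}


def flag_banned_words(post: str) -> list[str]:
    """Return any banned words found in a post, ignoring case and punctuation."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return [token for token in tokens if token in BANNED_WORDS]


posts = [
    "I will kill you if I see you at school tomorrow",  # genuinely threatening
    "That spin class is going to kill me, I love it",   # harmless figure of speech
]

# Both posts are flagged for the word "kill" even though only the first is harmful.
for post in posts:
    print(flag_banned_words(post), "->", post)
```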

Manual content moderation is carried out by teams of people known as content moderators. Content moderators are tasked with monitoring content and deciding whether flagged or reported material needs to be removed. This involves spending hours reviewing thousands of disturbing posts, including images and videos of animal abuse, self-harm, and graphic violence (Gilbert, 2021, para.2). As Gillespie notes, such content depicts

“the worst humanity has to offer” (Gillespie, 2018, p.11).

VICE. (2021, July 22). The Horrors of Being a Facebook Moderator: Informer [Video file]. Retrieved October 18, 2021, from https://www.youtube.com/watch?v=cHGbWn6iwHw&t=306s&ab_channel=VICE

The traumatic psychological effects of undertaking such work, coupled with the fact that content moderators are severely underpaid and exploited, have caused widespread controversy and criticism in recent years. In 2021, VICE conducted an interview with an ex-Facebook moderator who shared his experiences and insights into the realities of being a content moderator. He explains that repeatedly “looking at all the worst things people post online” left him with PTSD, and describes how Facebook’s wellness support team was “totally inadequate” in dealing with the distressing psychological consequences of content moderation (VICE, 2021). Dubbed the ‘dirty work of Silicon Valley’ (Roberts, 2016), content moderation is often outsourced by big tech companies to India and the Philippines, where they can exploit lower wages and legally distance themselves from moderators through contract labour arrangements (Gillespie, 2017, p.266). In response to such controversies, Facebook announced plans to automate most of its moderation in order to alleviate the burden that content moderators have to bear (Gorwa, Binns, & Katzenbach, 2020). Nevertheless, during a Washington Post interview, Facebook CEO Mark Zuckerberg stated that inevitably “there will always be people” involved in content moderation (Dwoskin, Whalen & Cabato, 2019, para.52). With human content moderation here to stay, it is imperative for tech companies to provide greater mental health support and improved working conditions for their content moderators.

Governments’ Role in Enforcing Content Moderation

Governmental approaches to moderating content on social media platforms differ greatly from country to country. In 2017, Germany enacted its Network Enforcement Act, also known as ‘NetzDG’, which requires social media companies to remove online hate speech within a 24-hour window (Claussen, 2018). While the legislation was established with the aim of countering illicit content online, civil rights activists have argued that the practice of restricting and removing content inherently violates freedom of speech (Dias Oliva, 2020). This reflects the aforementioned paradox of content moderation. Similarly, the UK has proposed laws to regulate online content and, in turn, improve internet safety (DCMS, 2021). Published in May 2021, the UK’s Online Safety Bill outlines a duty of care whereby digital platforms are required to protect users from harmful and illegal content (DCMS, 2021). A major difference, however, between Germany’s Network Enforcement Act and the UK’s Online Safety Bill is that while the former dismisses concerns about the encroachment on freedom of speech, the latter defends free expression: by having human moderators assess complex cases and permitting appeals so that unfairly flagged content can be reinstated, the Online Safety Bill seeks to preserve freedom of expression (Lomas, 2021). In this manner, the Online Safety Bill epitomises how governments can ensure both a safe online environment and freedom of speech. Ultimately, it is in the public interest for governments to play a greater role in enforcing content moderation regulations on social media.

In conclusion, content moderation practices are essential to the regulation of digital platforms. Despite its limitations, content moderation remains necessary as it prevents exposure to harmful, illegal and unethical content and provides safer online experiences for users.

Reference List (APA 6th)

Castells, M. (2001). The Culture of the Internet. In The Internet Galaxy: Reflections on the Internet, Business and Society (pp. 36-64). New York: Oxford University Press.

Christensen, G. (2021, February 2). Whatever happened to the free speech party? Spectator Australia. Retrieved October 20, 2021, from https://www.spectator.com.au/2021/02/whatever-happened-to-the-free-speech-party/

Claussen, V. (2018). Fighting hate speech and fake news: The Network Enforcement Act (NetzDG) in Germany in the context of European legislation. Rivista di Diritto dei Media, 3(1), 1-27.

Department for Digital, Culture, Media & Sport (DCMS). (2021). Draft Online Safety Bill. Retrieved October 18, 2021, from https://www.gov.uk/government/publications/draft-online-safety-bill

Dias Oliva, T. (2020). Content Moderation Technologies: Applying Human Rights Standards to Protect Freedom of Expression. Human Rights Law Review, 20(4), 607-640. https://doi.org/10.1093/hrlr/ngaa032

Domo. (2021). Data Never Sleeps 9.0. Retrieved from https://www.domo.com/learn/infographic/data-never-sleeps-9

Dwoskin, E., Whalen, J., & Cabato, R. (2019, July 25). Content Moderators at YouTube, Facebook and Twitter See the Worst of the Web – and Suffer Silently. The Washington Post. Retrieved October 17, 2021, from https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/

Gillespie, T. (2017). Regulation Of and By Platforms. In J. Burgess, A. E. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254-278). London: SAGE Publications.

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1-23). New Haven, CT: Yale University Press. https://doi.org/10.12987/9780300235029

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Lomas, N. (2021, May 12). UK publishes draft Online Safety Bill. TechCrunch. Retrieved October 18, 2021, from https://techcrunch.com/2021/05/12/uk-publishes-draft-online-safety-bill/

Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. The University of Western Ontario Media Studies Publications, 105(1), 1-11.

Roberts, S. T. (2019). Understanding Commercial Content Moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33-72). New Haven, CT: Yale University Press. https://doi.org/10.12987/9780300245318

VICE. (2021, July 22). The Horrors of Being a Facebook Moderator: Informer [Video file]. Retrieved October 18, 2021, from https://www.youtube.com/watch?v=cHGbWn6iwHw&t=306s&ab_channel=VICE
