The complexity of content moderation

“Sidewalk express – Internet point” by nicolasnova is licensed under CC BY 2.0.

An introduction

The infrastructure of the internet allows users to upload and consume content that follows their interests; however, its ‘unfiltered’ nature can have real consequences for individuals. Moderating online content is crucial to keeping the internet a safe place, yet managing it is plainly difficult. Moderation conducted by people is hard to implement, not only because of its human cost but also because innate biases lead to less consistent decisions. Furthermore, governments’ powers can enable a non-democratic, almost totalitarian hold on the content that goes online. Content moderation is a difficult and complex matter, so what is the best possible solution for a safe internet?

Algorithms for Moderation

Tech companies bear responsibility for their platforms and the content posted on them; however, growing concerns have been expressed over how that responsibility is exercised. The internet has done indescribable damage to online communities, providing a channel for racism, sexism, homophobia and other harmful content to be publicly posted (Gill, 2021). This is reflected in YouTube’s automated moderation system, which removed almost 3.88 million videos in the first quarter of 2022; only 338,000 of those removals followed human review. YouTube isn’t the only platform that relies on automated content moderation: other websites such as Tumblr, Facebook and Twitter employ similar algorithmic moderation systems.

“YouTube video Brandweer Nederweert” by mauritsonline is licensed under CC BY 2.0.


So, how does this reflect on the users of these platforms? 48% of the US population believe that content moderation systems have failed them, and 74% report feeling a sense of injustice at the hands of online moderation. Given the heavy reliance on automated moderation, much of this dissatisfaction can be attributed to the algorithms used to monitor content. An example is the anti-NSFW rule Tumblr introduced in 2018, enforced through an automated flagging system that deleted rule-breaking content. The system proved faulty and dysfunctional, deleting safe-for-work content and provoking user outrage (Pilipets & Paasonen, 2020).
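To make this failure mode concrete, the sketch below shows a minimal, hypothetical score-threshold flagging pipeline. The classifier scores, the threshold value and the post data are all invented for illustration and are not taken from Tumblr or any named platform; the point is simply that fully automated removal above a single cutoff inevitably sweeps up some safe-for-work posts alongside genuine violations.

```python
from dataclasses import dataclass

# Minimal sketch of a threshold-based automated flagging pipeline.
# All names, scores and the threshold are hypothetical illustrations,
# not the implementation used by any real platform.

@dataclass
class Post:
    post_id: str
    description: str
    nsfw_score: float  # assumed output of an upstream classifier, 0.0 to 1.0


FLAG_THRESHOLD = 0.6  # lower cutoffs catch more violations but also more false positives


def moderate(posts: list[Post]) -> tuple[list[Post], list[Post]]:
    """Split posts into (removed, kept) purely on classifier score, with no human review."""
    removed = [p for p in posts if p.nsfw_score >= FLAG_THRESHOLD]
    kept = [p for p in posts if p.nsfw_score < FLAG_THRESHOLD]
    return removed, kept


if __name__ == "__main__":
    queue = [
        Post("a1", "explicit photo", 0.92),                   # genuine violation
        Post("a2", "art photo of a classical statue", 0.71),  # safe post scored above the cutoff
        Post("a3", "cat pictures", 0.05),                     # correctly kept
    ]
    removed, _ = moderate(queue)
    print([p.post_id for p in removed])  # ['a1', 'a2'] -> the harmless post is removed too
```

Routing borderline scores to human review instead of deleting them outright would reduce such false positives, but, as the next section shows, that shifts the burden onto human moderators.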

The human solution to moderation

In that case, why not employ human moderators instead, to keep consistent, empathetic control over content on these social media platforms? This raises another issue: the human cost. Content uploaded to major websites such as YouTube, Twitter and Facebook can include gratuitous imagery such as shock content, child pornography and hate-filled messages (Arsht & Etcovitch, 2018). Overseas workers are employed to regulate this content with little regard for the toll that exposure to it takes on their wellbeing. The documentary “The Cleaners” follows moderators subcontracted in the Philippines, who describe the discomfort, stress and disgust the job causes. Furthermore, a lawsuit from a former Microsoft employee states that the company failed to warn him of the psychological risks of the job (Levin, 2017). There is an identifiable pattern in the consequences of human moderation: while it may be more trustworthy in the eyes of users, its human cost is severe.

Even setting aside the human cost, there are still complications with human moderators that lead to dissatisfaction with the management of online platforms. The GamerGate incident reflects the ineffectiveness of human moderation. Gamergate involved the sustained harassment of women in the games industry, including developers and journalists, on the basis of their sex, conducted primarily on Reddit and 4chan. The heavy focus on “geek/nerd” culture meant that other voices were shut out in favour of like-minded individuals on the platform (Holmes & Meyerhoff, 2005). As a result, negative discussion and harassment on Reddit were encouraged within certain subreddits (e.g. r/KotakuInAction), and with inaction from Reddit’s administrators, this continued unchecked (Massanari, 2017).

The government and content moderation

Beyond the tech companies sit governments, which hold overwhelming authority over online platforms alongside the platforms’ owners themselves, and which in that regard also possess their own powers of moderation. However, these powers can be used unethically to maintain an agenda. During the 2020 US elections, fears of foreign influence led to the censorship of foreign news outlets and determinative control over content. Disinformation was constantly circulating on social media, with the University of Wisconsin-Madison reporting that Russian trolls were spreading Instagram posts designed to sow division among Americans (Seitz & Ortutay, 2021). The response was to silence Russians who spoke about the election online, shown through the US Department of Justice’s indictment of 13 Russian nationals under special counsel Robert S. Mueller III (Samples, 2019).

The US government’s decision is indicative of online censorship under the guise of democracy: it rests on selectivity over which foreign sources are permitted to speak and which are not, a line the government has never clearly drawn. Nor is this an isolated instance; government censorship on these platforms is becoming more prevalent. China, for example, employs low-paid commentators to monitor blogs and chat rooms and spin issues in the government’s favour, while major websites such as Facebook, Google and Instagram are banned to prevent foreign information from reaching citizens (Warf, 2010). This enables the relentless spread of the government’s agenda and removes almost any possibility of speaking out against it. With this level of influence over the people, the government holds unprecedented jurisdiction over what can and cannot go on the internet.

So, what can be done about platform moderation? There are complications with both algorithmic and human control over platforms, as well as with governments maintaining control of content. It is a complex matter without a concrete solution: each system has evident faults, and no single party can effectively manage content on its own. For now, the parties responsible for moderating their respective platforms should retain jurisdiction over them, since that remains a solution that works, even if inconsistently.

References:
Arsht, A., & Etcovitch, D. (2018, March 2). The Human Cost of Online Content Moderation. Harvard Journal of Law & Technology. https://jolt.law.harvard.edu/digest/the-human-cost-of-online-content-moderation

Ashton, N. A., & Cruft, R. (2022, April 27). Social media regulation: why we must ensure it is democratic and inclusive. The Conversation. https://theconversation.com/social-media-regulation-why-we-must-ensure-it-is-democratic-and-inclusive-179819

Cusumano, M. A., Gawer, A., & Yoffie, D. (2021). Can Self-Regulation Save Digital Platforms? SSRN Electronic Journal, 30(5). https://doi.org/10.2139/ssrn.3900137

Holmes, J., & Meyerhoff, M. (2005). The handbook of language and gender. Blackwell Pub.

Levin, S. (2017, January 12). Moderators who had to view child abuse content sue Microsoft, claiming PTSD. The Guardian. https://www.theguardian.com/technology/2017/jan/11/microsoft-employees-child-abuse-lawsuit-ptsd

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Pilipets, E., & Paasonen, S. (2020). Nipples, memes, and algorithmic failure: NSFW critique of Tumblr censorship. New Media & Society, 24(6). https://doi.org/10.1177/1461444820979280

PricewaterhouseCoopers. (2020). The quest for truth: Content moderation. PwC. https://www.pwc.com/us/en/industries/tmt/library/content-moderation-quest-for-truth-and-trust.html

Samples, J. (2019, April 9). Why the Government Should Not Regulate Content Moderation of Social Media. Cato Institute. https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media

Seitz, A., & Ortutay, B. (2021, April 20). Report: Russian social accounts sow election discord – again. AP NEWS. https://apnews.com/article/madison-social-media-election-2020-russia-elections-0db953743c56cd6fd6e4ef73e02f120c

Warf, B. (2010). Geographies of global Internet censorship. GeoJournal, 76(1), 1–23. https://doi.org/10.1007/s10708-010-9393-3