Into the grey area: Automated content moderation in controversy

Illustration of AI moderators. Lumina Datamatics. All Rights Reserved.

These days, encountering a blocked post on a content-sharing platform is no longer a rare incident. This web essay offers a critical analysis of automated content moderation, arguing that the system operates with both remarkable benefits and serious problems.

Section one examines the genesis of automated content moderation along with its transformative benefits. Section two explains the problems that come with the moderation system. Section three conducts a mini case study addressing the social controversy surrounding moderation. The essay concludes that while automated content moderation helps to maintain a healthy and ethical communication sphere, it may also become an amplifier of inequality within society.


A brief history of automated content moderation

Automated content moderation refers to the system of identifying and blocking destructive user-generated content on online platforms via machine moderators. To indicate its technical reliance on computing science, automated content moderation is also termed “algorithmic content moderation” (Gorwa, Binns and Katzenbach, 2020). It derives from the broader definition of content moderation, which is “governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse”, according to Grimmelmann (2015, p.47). In other words, automated content moderation is the gatekeeper of a healthy public sphere for online communication.

The origin of automated content moderation is closely related to the historical development of content-sharing platforms, especially social media. At the earlier stage of online communities, content moderation could be managed by human moderators since the volume of communication was modest (Gillespie, 2020). However, the human moderation system was challenged when social media platforms attracted massive numbers of participants and the number of posts soared. In fact, more than half of the global population has already become social media users (Dean, 2020).

For human moderators, matching the velocity and quantity of user-generated content became an impossible task. Consequently, the need arose for moderation at scale, or “platform moderation” as Roberts (2019) describes it. The solution introduced by social media companies was automated content moderation.

A graph to illustrate the massive content production happening in one internet minute
A simulator of what is going on in one internet minute. Influencer Marketing Hub. All Rights Reserved.

Ever since the birth of automated content moderation, it has been constantly evolving thanks to technical improvements. Initially, automated moderation was merely a manually encoded filter which required engineers to hard-code a blacklist of words (Binns et al., 2017). Over time, the moderation system became smarter and more comprehensive with its updated machine learning capacity. This means that automated moderation can learn from manual practices, so it is able to detect and delete certain content following a human moderator’s action (Gorwa, Binns and Katzenbach, 2020).
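To make the early blacklist approach concrete, the minimal sketch below checks each post against a hand-coded list of banned words. The word list, function name and example posts are purely illustrative assumptions, not any platform’s actual filter.

```python
# A minimal sketch of an early, manually encoded keyword filter.
# The blacklist and the example posts are illustrative assumptions.

BLACKLIST = {"scamlink", "badword", "slurexample"}

def is_blocked(post: str) -> bool:
    """Return True if the post contains any blacklisted word."""
    words = {word.strip(".,!?").lower() for word in post.split()}
    return not words.isdisjoint(BLACKLIST)

for post in ["Check out this scamlink now!", "Lovely weather today."]:
    print(post, "->", "blocked" if is_blocked(post) else "allowed")
```

A clever misspelling such as “sc@mlink” would slip straight past a filter like this, which is exactly the weakness that pushed platforms towards machine learning moderation.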

The latest trend in automated moderation is the adoption of artificial intelligence (AI). It is the ideal tool for social platform giants since its cost is reported to be ten times lower than that of human moderators. However, contemporary moderation systems tend to be an integration of automated and human moderators, maintaining both the flexibility of human inspection and the speed and coverage of algorithmic moderation (Gillespie, 2020).

 

A graph showing how AI and human moderators work together
A common co-working process for automated and human moderation. Clarifai. All Rights Reserved.
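
As a rough sketch of such a co-working pipeline, the snippet below routes each post according to a classifier confidence score: clear-cut cases are handled automatically, while grey-area cases are escalated to a human reviewer. The toy scoring function and both thresholds are assumptions for illustration only.

```python
# A rough sketch of a hybrid moderation pipeline: the automated model handles
# clear-cut cases, while uncertain ones are escalated to a human reviewer.
# The toy scoring function and the thresholds are illustrative assumptions.

def score_content(post: str) -> float:
    """Toy stand-in for a trained classifier returning P(post is harmful)."""
    flagged_terms = {"scamlink", "threat"}
    hits = sum(term in post.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)

def route(post: str, block_at: float = 0.7, allow_below: float = 0.2) -> str:
    score = score_content(post)
    if score >= block_at:
        return "auto-remove"          # high confidence: the machine acts alone
    if score < allow_below:
        return "auto-approve"         # low risk: published without review
    return "escalate to human"        # grey area: human judgement required

for post in ["Nice photo!", "Buy my scamlink", "This scamlink is a threat"]:
    print(post, "->", route(post))
```

In practice the score would come from a trained model rather than a keyword count, but the division of labour is the same: machines handle volume, humans handle ambiguity.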

 

Problems arise: Simply inferior in performance? A pawn of structural inequalities?

While the automated moderation system presents unique advantages to content-sharing platforms, it also brings alarming problems. The most obvious flaw is its unsatisfactory accuracy, which can have poisonous results such as the growth of online terrorism. In fact, social media has become a crucial promotional tool for ISIS, with over a thousand posts of their “propaganda events” being released in one month (Brooking and Singer, 2016). With clever wording, terrorist content can sometimes escape automated moderation, and online terrorists even take pride in having their posts taken down, since it is proof of successfully creating fear and intimidation (Ganesh and Bright, 2020).

Aside from online terrorism with blunt intentions, subtler posts carrying extremist sentiment can be a difficult task for automated moderation as well. For instance, Russia’s extreme nationalist users tweet anti-America content that may infuriate U.S. citizens, even though the tweets themselves contain no aggressive phrases, only a strongly negative sentiment (Brooking and Singer, 2016). Because of this ever-changing disguise, human moderation may be the better option for combating such content, which in turn demands much greater capital investment.

A statistic emphasizing the increasing adoption of human moderators
Human content moderators have been experiencing a jump in both salary and numbers. StatistaCharts. All Rights Reserved.

The bluntness of automated content moderation also points to a deeper issue within it: the human bias embedded in the algorithm. It should be noticed that the privilege of owning and ruling the moderation system is always exclusive to platform corporations. Thus, automated content moderation is put into a grey area, where the system merely carries the values and beliefs of its creators instead of those of its diverse users (Gillespie, 2018). Take nudity as an example: Facebook engineers assumed that they and their users share the same cultural context, which means intolerance of public nudity. Therefore, a picture of half-naked Aboriginal women at their traditional ceremony was deleted by Facebook’s AI moderators, even though within the Aboriginal context such nudity is ethical.

Moreover, the biased algorithmic moderation system empowers users from mainstream society while users from social minorities fall victim to it. It should be noticed that a large number of content-sharing platforms are privately owned and hence, like any other capitalist market, they follow and cater to the majority’s interests (Gillespie, 2018). This means that long-standing injustices in social power relations, including structural racism and patriarchy, can be reinforced through automated content moderation.

Censorship always impacts those with the least access to power, first and hardest.

Nora Pelizzari

A mini case study: The struggle of marginalized groups vs. the president’s attempt to alter the law

A recent complaint about unfair content moderation has been pushed forward by queer and plus-sized users of Instagram (Joseph, 2019). They criticize the image-sharing platform for ‘shadow banning’ their content, which means that the content is not deleted but becomes invisible in other users’ daily feeds. The reason given by Instagram’s moderators was that their posts were “inappropriate” or “sexually suggestive”, while nearly identical content from heterosexual and medium-bodied users is not moderated.

Since the machine learning ability of AI moderation quickly mimics a human moderator’s practice (Gorwa, Binns and Katzenbach, 2020), these users are afraid of getting more posts automatically banned in the future. Meanwhile, because the reasons for moderation are so blurred, they can hardly find an effective approach to stop further banning. Many believe that platform corporations are harming democracy with their exclusive control over user-generated content (Binns et al., 2017).

 

 

This news video presents Trump signing an executive order in the hope of holding tech companies legally accountable for content moderation. His motivation is believed to be Twitter’s recent decision to mark one of his posts as ‘fact checked’, meaning the content could be potentially misleading (BBC News, 2020). But here lies the paradox: while Trump describes platform companies as an “editor with a viewpoint”, is he not himself trying to become another administrator of social media? By asking for a legal amendment, the president is utilizing his social superiority to intervene in the public communication sphere, an example of how the powerful may control the moderation system for their own benefit.

Conclusion

In short, it should be recognized that automated content moderation, as a governance mechanism for online content-sharing platforms, fulfils its responsibility to an extent. Its working speed and coverage of content are undoubtedly advanced. However, automated content moderation also provides an opportunity for modern terrorism and extremism, as well as reinforcing structural inequality. Furthermore, practicing biased judgments is in the nature of automated content moderation, because the capacity to alter its algorithms is reserved to the platform companies.

Looking back at the earlier days of the online community, when it was expected to be an embodiment of fairness and freedom (Barlow, 1996), the present situation poses a stark contrast. Nevertheless, one thing is for sure: letting content-sharing platforms, which attract more than half of the world’s population, remain under the monopolistic control of a small group of Silicon Valley elites cannot be a bright future for the public.


Resources:

Ables, K. (2019). The Photographers Fighting Instagram’s Censorship of Nude Bodies. Artsy. Retrieved from https://www.artsy.net/article/artsy-editorial-dominic-chamberss-magical-realist-paintings-capture-sanctity-black-leisure

Barlow, J. P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. Retrieved from https://www.eff.org/cyberspace-independence

BBC News. (2020). Twitter tags Trump tweet with fact-checking warning. Retrieved from https://www.bbc.com/news/technology-52815552

Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2017). Like trainer, like bot? Inheritance of bias in algorithmic content moderation. arXiv.org. DOI: 10.1007/978-3-319-67256-4_32

Brooking, E., & Singer, P. (2016). War Goes Viral. The Atlantic Monthly, 318(4).

Dean, B. (2020). Social Network Usage & Growth Statistics: How Many People Use Social Media in 2020? Backlinko. Retrieved from https://backlinko.com/social-media-users#social-media-usage-stats

Gandhi, A. (2018). Content Moderation in 2020: Human vs AI. Nanonets. Retrieved from https://nanonets.com/blog/nsfw-content-moderation-in-2020-humans-vs-ai/

Ganesh, B., & Bright, J. (2020). Countering Extremists on Social Media: Challenges for Strategic Communication and Content Moderation. Policy & Internet, 12(1). DOI: 10.1002/poi3.236

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). DOI: 10.1177/2053951720943234

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). DOI: 10.1177/2053951719897945

Grimmelmann, J. (2015). The virtues of moderation. Yale Journal of Law & Technology, 17(1).

Joseph, C. (2019). Instagram’s murky ‘shadow bans’ just serve to censor marginalised communities. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2019/nov/08/instagram-shadow-bans-marginalised-communities-queer-plus-sized-bodies-sexually-suggestive

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.

Handan Yan