Humans vs Machines: Automated Content Moderation

Will automated content moderation ever fully replace human moderators?

Media Command Center
"Dell's Social Media Listening and Command Center" by Geoff Livingston is licensed under CC BY-SA 2.0

Automated content moderation is a prevalent and vital resource across online participatory media platforms. It helps platforms cope with the scale of digital media that facilitates the instantaneous sharing of public and private life, and it tackles issues such as terrorism, violence and hate speech. Automated content moderation can be defined as the “systems that classify user generated content based on either matching or prediction, leading to a decision and governance outcome” (Gorwa, Binns, & Katzenbach, 2020, p. 3). To better understand how automation works, watch the short video by Accenture that illustrates how automated content moderation acts like a cleaner, but for the internet.
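To make that definition concrete, the sketch below (my own illustration, not any platform’s actual system) shows both routes it names: matching content against fingerprints of known violations, and predicting how likely new content is to violate policy, with each route leading to a governance outcome. The fingerprint set, scoring function and thresholds are all hypothetical.

```python
# Minimal sketch of "matching or prediction, leading to a decision and
# governance outcome". All values below are hypothetical placeholders.

KNOWN_BANNED_FINGERPRINTS = {"3f2a9c...", "9bc17d..."}  # fingerprints of already-confirmed violations

def predicted_harm_score(text: str) -> float:
    """Stand-in for a trained classifier; real systems use machine-learning models."""
    return 0.9 if "banned phrase" in text.lower() else 0.1

def moderate(fingerprint: str, text: str) -> str:
    # Matching: is this content already known to violate policy?
    if fingerprint in KNOWN_BANNED_FINGERPRINTS:
        return "remove"
    # Prediction: how likely is this new, unseen content to violate policy?
    score = predicted_harm_score(text)
    if score > 0.8:
        return "remove"
    if score > 0.5:
        return "escalate to human reviewer"
    return "allow"

print(moderate("3f2a9c...", "any caption"))            # remove (matched known content)
print(moderate("unknown", "a banned phrase appears"))  # remove (high predicted score)
print(moderate("unknown", "holiday photos"))           # allow
```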

This essay will explore the genesis of automated content moderation by drawing on the changes that have happened within participatory media. There is still no universal formula; instead, individual private companies, such as Facebook, have the power to dictate and control this innovation for their platforms. Moreover, automated moderation has significant political, economic and social effects on our society, benefiting some stakeholders while inhibiting others.

 

The Genesis of this Internet Innovation

Automated content moderation originated from the need to manage the scale of content distributed within participatory media. The rise of social media has enabled the expression of free speech through user-produced content and has provided a near-limitless platform for public discourse, communication and news sharing (Gillespie, 2018). At the scale at which digital media now operates, purely human or community-based moderation is unfeasible. Read the tweet below to see how important businesses believe AI is for handling this scale.

Previously, content moderation was carried out by a human labour force. The history of social media shows the growth of participatory media from the late 1990s onwards, with sites such as Friendster, MySpace and Twitter requiring a resource that could moderate at scale (Morrison Foerster, 2011). The infographic below gives a basic sense of the scale of what happens on the internet daily.

Infographic from Accenture report (2017). All Rights Reserved.

Early forms of automated moderation involved manually encoded rules that evaluated expressions by identifying words on blacklists, focusing on violent and hostile content (Binns, Veale, Van Kleek, & Shadbolt, 2017); a short sketch of this approach follows below. Further, the infiltration of digital media into everyday life called for consistent and fair moderation, as the consequences of online use now extend into the offline world (Gillespie, 2020). This can be seen in the Gamergate harassment campaign, the social media campaigning around the 2016 US Presidential Election, and the live streaming and sharing of the 2019 Christchurch terrorist attack (Gillespie, 2020). These events catalysed the need for robust automated content moderation, as they reveal the sheer intensity with which digital media can affect lives today through its rapid, automatic and global reach.
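The following toy example (a sketch of my own, assuming a simple word-list filter rather than any documented historical system) illustrates both the mechanics of early blacklist moderation and its blindness to context: an exact word match fires regardless of what the sentence actually means, while small variations slip through.

```python
import re

# Hypothetical blacklist; real lists were far larger and manually curated.
BLACKLIST = {"attack", "kill"}

def flag_expression(text: str) -> bool:
    """Flag a post if any word in it appears on the blacklist."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLACKLIST for word in words)

print(flag_expression("we will attack the server room at dawn"))  # True: genuine threat caught
print(flag_expression("this traffic will kill my battery"))       # True: harmless idiom flagged anyway
print(flag_expression("great attacking play in the final"))       # False: variant spelling slips through
```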

 

Power, Control & Business

Each individual private platform controls the key business operations behind algorithmic moderation, and thus controls much of the economic and cultural power that stems from such services. Gillespie’s opinion tweet below illuminates this situation and its continued relevance today.

The immense influence that social networks hold today has forced public bodies, such as governments, to collaborate with private media companies and enforce controls so that automated content moderation techniques ultimately keep society safe and benefit economic, social and political conditions. For example, the Australian government passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019 in response to the Christchurch shootings, legislation that allows it to punish online platforms and service providers such as Facebook if they fail to take down such material (Parliament of Australia, 2019).

As Common (2020) identifies, the climate of participatory digital media today has blurred the traditional boundaries between public and private responsibility, which has pressured governments to intervene. While private tech companies control much of the automated content moderation, they still outsource work to other companies, where human workers filter and review the decisions made by artificial intelligence. For example, Facebook works with the consultancy Accenture, which argues in its report that the future of moderation will involve a collaborative response between automated systems and human moderators.

 

Political Effects

Automated content moderation can have far-reaching political effects by ultimately controlling free speech and the visibility and spread of content. Specifically, platforms have collaborated to increase industry commitment to preventing terrorism and hate speech, as seen in the Global Internet Forum to Counter Terrorism founded by Facebook, Twitter, Microsoft and YouTube (GIFCT, n.d.). Through the forum, member companies share their “best automated practices for developing their automated systems and operate a secretive ‘hash database’ of terrorist content, where digital fingerprints of illicit content (images, video, audio and text) are shared” (Gorwa, Binns, & Katzenbach, 2020, p. 2).
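The sketch below is a heavily simplified, hypothetical version of such a shared hash database. It uses an exact cryptographic hash (SHA-256) purely for illustration; the real GIFCT database is not public, and industry systems rely on perceptual fingerprints that can also match re-encoded or slightly altered copies.

```python
import hashlib

# Hypothetical shared database of fingerprints contributed by member platforms.
shared_hash_database = set()

def fingerprint(content: bytes) -> str:
    """Digital fingerprint of a piece of content (exact hash used for simplicity)."""
    return hashlib.sha256(content).hexdigest()

def register_illicit_content(content: bytes) -> None:
    """One platform confirms content as illicit and shares its fingerprint."""
    shared_hash_database.add(fingerprint(content))

def should_block(upload: bytes) -> bool:
    """Another platform checks a new upload against the shared fingerprints."""
    return fingerprint(upload) in shared_hash_database

register_illicit_content(b"<bytes of a known violent video>")
print(should_block(b"<bytes of a known violent video>"))  # True: identical copy is caught
print(should_block(b"<bytes of a re-encoded copy>"))      # False: exact hashing misses altered copies
```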

Pivoting to other political effects, it is vital to recognise that these algorithms, largely controlled by private institutions, can be coded with bias and over-relied on because of their efficiency (Common, 2020), often failing to recognise the contextual cues of speech and thereby discriminating against certain content. Heavy reliance on automation has exposed these faults. The tweet below discusses the impacts, many of them political, of today’s over-reliance on automation.

Moreover, algorithms are not objective: social inequality and discrimination can be encoded into biased data, which has led critics to question in particular the rationalities of political campaigning and corporate surveillance (Katzenbach & Ulbricht, 2019). Algorithmic transparency is fundamentally the concept that governs the political effects moderation can have, as it is key to democratic legitimacy and warrants deeper exploration of how private platforms’ automation techniques influence public opinion (Katzenbach & Ulbricht, 2019).

 

Economic Effects

The major economic benefit of automated content moderation is that it gives companies a cost-efficient resource for managing content at scale alongside human moderators. Without automated content regulation, human moderators would be unable to cope with the amount of content circulating and to successfully screen it against public laws and platform codes of conduct (Katzenbach & Ulbricht, 2019).

Solutions that utilise artificial intelligence carry strong economic interests for companies, as they provide a feasible path to content moderation (Katzenbach & Ulbricht, 2019). Unfortunately, because private companies control much of the business in this field, it is possible that without automation they would not properly invest in consistent moderation at all.

 

Social Effects

In comparison to its political and economic impacts, automated content moderation has profound, transformative socio-cultural effects. The ever-present and unpredictable nature of participatory digital media challenges the feasibility of automated content moderation in protecting users from exposure to abhorrent material, leaving us to question, like the tweet and article below, whether automation is really the answer.

Additionally, users can hold multiple accounts, which ultimately weakens the power of automation as live-streamed suicides migrate across platforms and expose innocent users to some of the most disturbing acts with no oversight (Common, 2020). The failure of automated content moderation in these situations can have a tremendous impact on users’ mental health. However, while it is easy to blame the failure of platforms’ moderation tactics, it is vital to recognise that digital media is still a developing field whose growing scale calls for continual reform of automated moderation techniques.

Automation has also been shown to discriminate between cultural groups; automated hate speech classifiers, for example, have “disproportionately flagged language used by a certain social group, thus making that group’s expression more likely to be removed” or even ignored (Gorwa, Binns, & Katzenbach, 2020, p. 11). As a result, platforms such as Facebook, through its AI Research division, have been working on improving technology like DeepText so that it recognises languages other than English and Portuguese and can properly combat hate speech, following pressure and controversy surrounding the prolonged hosting of Myanmar hate speech on Facebook (Gorwa, Binns, & Katzenbach, 2020).
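One way to make this kind of disparity visible is to compare how often a classifier flags posts from different groups. The sketch below does this with an entirely made-up classifier and data set, purely to show the measurement, not to reproduce any published audit.

```python
from collections import defaultdict

def flag_rate_by_group(labelled_posts, classifier):
    """labelled_posts: iterable of (group, text) pairs; returns the flag rate per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, text in labelled_posts:
        total[group] += 1
        if classifier(text):
            flagged[group] += 1
    return {group: flagged[group] / total[group] for group in total}

# Toy classifier that over-flags a dialect word; posts and groups are hypothetical.
toy_classifier = lambda text: "dialect_word" in text
posts = [
    ("group_a", "dialect_word as a friendly greeting"),
    ("group_a", "see you tomorrow"),
    ("group_b", "see you tomorrow"),
    ("group_b", "good morning everyone"),
]
print(flag_rate_by_group(posts, toy_classifier))  # {'group_a': 0.5, 'group_b': 0.0}
```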

 

What now?

While automated content moderation is vital in tackling the scale of content that is online today, it will require continual modification in order to keep up with trends across the media and communications field. Automated moderation techniques have proven to struggle both to effectively protect users from harmful material and to distinguish equitably between content. With that said, the limitations of this innovation currently outweigh its benefits, and it will certainly require more research and improvement before we can truly understand the impact it will have on future society.

 

 

References

Abdulkader, A., Lakshmiratan, A., & Zhang, J. (2016). Introducing DeepText: Facebook’s text understanding engine. Retrieved from Facebook Engineering: https://engineering.fb.com/2016/06/01/core-data/introducing-deeptext-facebook-s-text-understanding-engine/

Accenture. (2017). Content Moderation: The Future is Bionic.

Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2017). Like trainer, like bot? Inheritance of bias in algorithmic content moderation. 9th International Conference on Social Informatics (Vol. 10540, pp. 405-415). Springer International Publishing. https://doi.org/10.1007/978-3-319-67256-4_32

Common, M. (2020). Fear the Reaper: how content moderation rules are enforced on social media. International Review of Law, Computers & Technology, 34(2), 126-152. doi:10.1080/13600869.2020.1733762.

GIFCT. (n.d.). Global Internet Forum to Counter Terrorism: Evolving an Institution. Retrieved from GIFCT: https://www.gifct.org/about/

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven: Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 1-5. doi: 10.1177/2053951720943234.

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 1-15. doi: 10.1177/2053951719897945.

Katzenbach, C., & Ulbricht, L. (2019). Algorithmic governance. Internet Policy Review Journal on internet regulation, 8(4), 1-18. doi: 10.14763/2019.4.1424.

Meade, A. (2019). Australian media broadcast footage from Christchurch shootings despite police pleas. Retrieved from The Guardian: https://www.theguardian.com/world/2019/mar/15/australian-media-broadcast-footage-from-christchurch-shootings-despite-police-pleas

Morrison Foerster. (2011). A Short History of Social Media. Retrieved from Morrison Foerster: https://media2.mofo.com/documents/A-Short-History-of-Social-Media.pdf

Parliament of Australia. (2019). Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019. Retrieved from Parliament of Australia: https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=s1201

Robertson, A. (2014). What’s happening in Gamergate? Retrieved from The Verge: https://www.theverge.com/2014/10/6/6901013/whats-happening-in-gamergate

Solon, O. (2018). Facebook struggling to end hate speech in Myanmar, investigation finds. Retrieved from The Guardian: https://www.theguardian.com/technology/2018/aug/15/facebook-myanmar-rohingya-hate-speech-investigation

The Verge. (2016). How social platforms influenced the 2016 election. Retrieved from The Verge: https://www.theverge.com/2016/11/14/13626694/election-2016-trending-social-media-facebook-twitter-influence

 

 

 
