Automated Content Moderation: Machines versus social governance

Will automated content moderation ever be fully capable of replacing human moderators?

Although it has only emerged in recent years, automated content moderation has played a key role in shaping the history of media communications. The introduction of automated systems has shifted the way individuals use the internet and its platforms, and it has significantly altered the way information and online content are managed, with the hope of protecting both moderators and internet users. The shift towards automating content moderation is “often justified as a necessary response to scale” (Gillespie, 2020).

This essay will discuss how automated content moderation originated, and why the shift from social to machine moderation has been justified by online platforms as a way to manage the lasting effects that digital media can have on political, economic and social structures. It will also highlight those who have benefited from this transition and those who have not.

How has this innovation shaped the historical trends of communication media?

In the context of online content moderation, the practice can be defined as the “monitoring of submissions and applying a set of rules to a community which define what is acceptable and what is not” (Gillespie, 2018). From this, what is not acceptable is then removed. The key operators in this business are the individual private platforms: they control the operations of algorithmic moderation and therefore control how much economic and cultural power derives from such services.

In the early stages of social media, “the aim of having moderators was to attract users, spark online discussion, and organically guide participation: to build and curate the discussion and communications culture” (Gillespie, 2018). Many of the successful companies that own these platforms did not think about “the lingering effects of speech on their platforms and this speech to represent them” (Klonick, 2018). Consequently, as demand for and use of these platforms increased, it “generated an upsurge in inappropriate posts. From an economic view, the desirability of a platform was dependent on keeping conversations clear of spam and offensive material” (Gillespie, 2018). With the popularity of online discussion platforms expanding beyond reach, companies were facing challenges they had never seen before at such a scale.

Artificial intelligence (AI) systems seemed to be the “perfect response to the growing challenges that content moderators have to deal with on social media platforms”. These machine-governed programs “have the capability to exhibit human-like performance of defined tasks” (Ofcom, 2019) such as “visual perception, speech recognition, decision making and translation between languages” (Ofcom, 2019). A program that can assist in handling the immense scale of data and the severity of the violations, and that meets “the need for human involved judgements without wanting humans to have to make them” (Gillespie, 2020), is largely appealing for companies. It would be a challenge for any team of people, no matter how big, to handle such a vast amount of information.

Attached below is a comparison of some capabilities of automated services against those of human governance.

[Figure: Some differences between social and machine governance. Screenshot of ARIN 2610 Week 6 lecture slide 30, University of Sydney, Canvas.]

The latest advances in AI have mainly “enabled computer systems to make decisions and predict outcomes without being explicitly programmed to perform these tasks” (Ofcom, 2019). However, some concerns do arise as “society has not built up the same level of trust in AI systems as in humans, when making complex decisions” (Ofcom, 2019).

“Many platforms rely on automated tools for information management however, the reliability of these tools to identify content across a range of platforms is limited; and the transparency of its management is often unknown” (Gillespie, 2018).

There is the possibility of bias being introduced into AI systems, as the unconscious biases of the people who create and label the training data become threaded into what the machine learns. Media organisations have previously acknowledged that “moderators could bring their own bias to the evaluation of standards” (Binns et al., 2017). This increases concerns over how these systems could alter the cultural and social spheres of individuals.
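
To make this mechanism concrete, the minimal sketch below uses invented comments, labels and function names for illustration only, in the spirit of Binns et al.’s (2017) “like trainer, like bot” argument; it does not represent any real platform’s data or system. It shows how an identical, naive learning procedure can reach opposite moderation decisions depending purely on whose judgements supplied the training labels.

```python
from collections import Counter

# Hypothetical data, invented for illustration: the same comments labelled
# by two different moderator pools whose judgements differ on borderline,
# partisan speech.
labelled_by_pool_a = [
    ("this policy is idiotic", "abusive"),
    ("you people are ruining the country", "abusive"),
    ("have a nice day", "acceptable"),
]
labelled_by_pool_b = [
    ("this policy is idiotic", "acceptable"),            # read as partisan disagreement
    ("you people are ruining the country", "acceptable"),
    ("have a nice day", "acceptable"),
]

def learn_flagged_words(training_data):
    """Naively 'learn' the words that appear only in comments labelled abusive."""
    abusive, acceptable = Counter(), Counter()
    for text, label in training_data:
        (abusive if label == "abusive" else acceptable).update(text.split())
    return {word for word in abusive if word not in acceptable}

def moderate(comment, flagged_words):
    """Remove the comment if it shares any word with the learnt flag list."""
    return "remove" if set(comment.split()) & flagged_words else "keep"

new_comment = "another idiotic policy announcement"
print(moderate(new_comment, learn_flagged_words(labelled_by_pool_a)))  # -> remove
print(moderate(new_comment, learn_flagged_words(labelled_by_pool_b)))  # -> keep
```

The learning procedure is identical in both cases; only the human judgements baked into the training labels differ, which is exactly where unconscious bias enters the automated system.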

Insight into the history of this internet innovation is important for understanding how it has shaped media communications and how social and cultural factors have influenced this shift. Outlining who benefits from this service and who does not then opens up questions of power.

Who has benefited from the inclusion of automated content moderation?

“Workers say they are burning out as they moderate vast flows of violent content under pressure, with ever-changing guidelines. Moderators battle depression, addiction and PTSD from the endless parade of horrors they consume” (Beckett et al., 2019).

The quote above highlights the challenges moderators face. However, change is being pushed onto platforms to improve the conditions of their workers, and this societal pressure has added to the appeal of AI. When asking who benefits in a social sense from the transition to machine governance, it is the role of the human moderator that shifts most positively. Automated systems stand a good chance of reducing the physical hardship of a role that demands being constantly ‘active’, and they may lessen the emotional toll of reviewing disturbing and harassing content. Gillespie highlights that “the link between platforms, moderation, and AI is quickly becoming self-fulfilling: platforms and moderation has reached a scale where only AI solutions seem viable; AI solutions allow platforms to grow further” (Gillespie, 2018).

Furthermore, even though automated content moderation reduces their workload, “the need for human reviewers is still required to determine the all-important context of an image or videos just not to the same extent” (Leetaru, 2019). This is an important point for the benefits to human moderators: their role has become less demanding yet remains crucial for decisions that require human care (Beckett et al., 2019). Machine governance reduces moderators’ exposure to harmful content and reduces their need to be ‘active’ 24/7.

Who does not reap the benefits?

It is widely known that platforms and their owners are in direct control of content moderation; they are able to exercise power by removing or blocking content considered unacceptable, as they decide what stays and what goes. There are two main reasons why platforms do not gain much benefit from investing in automated content moderation. Firstly, algorithmic systems that involve ‘decision making’ often make incorrect decisions, which can damage a platform’s credibility and reputation and fail to protect its users, causing a copious amount of backlash (Roberts, 2019). Secondly, it is politically and ethically challenging to decide how to manage and neutralise harmful content without deterring users from the platform (Leetaru, 2019).

As authors Binns, Veale, Van Kleek and Shadbolt (2017) share, “censorship and free speech are the issues at the heart of communication politics” and “inductive systems that seek to automatically reduce the visibility of certain contributions are unlikely to escape the scrutiny of those worried about ‘algorithmic bias’” (Binns et al., 2017). What, if anything, should be removed has always been a great debate. Binns furthers this argument by stating that “content considered ‘abusive’ by some might be to others just partisan disagreement” (Binns et al., 2017).

The transparency surrounding content moderation has been widely criticised, and “the mechanics of content moderation is often left unclear by companies; even though it is a central part to understanding what is happening when sites decide to keep or take down content” (Klonick, 2018). The uncertainty projected by these algorithmic systems does not reflect well on the platforms, and the “complicated detection methods of language, context dependent material or content without unknown features is alarming” (Gillespie, 2018). In March 2019, a man killed 51 people in attacks on two mosques in Christchurch, New Zealand, while live-streaming the events on Facebook. The platform’s algorithms failed to detect the video, and “by the time Facebook removed the video, it had been viewed roughly 4000 times” (Flynn, 2019).

[Figure: Screenshot from the Facebook live video, March 2019. © 2019 CBS Interactive Inc. All Rights Reserved.]

The algorithm failed to pick this up, as the video was shot from a first-person viewpoint and did not match any of the training data. Current machine learning techniques aim to “identify new circumstances of harassment” by “comparing the new content to a blacklist of already known examples” (Gillespie, 2020). The New Zealand attack prompted users to pressure Facebook into changing its platform and its approach to social governance. Gillespie adds that “one failure may incur considerable public outrage to overshadow a million previous successes in content moderation” (Gillespie, 2018).
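
To make this limitation concrete, the minimal sketch below mimics the blacklist approach Gillespie describes. It is an illustrative assumption, not Facebook’s actual system: the clip names are invented, and an exact hash stands in for the perceptual hashes real systems use to tolerate re-encoding and small edits. Content is flagged only when its fingerprint matches previously catalogued material, so genuinely novel footage such as a first-person live stream passes through undetected.

```python
import hashlib

# Hypothetical blacklist of fingerprints of already-known violating clips.
# The clip contents here are placeholders for illustration only.
known_violations = {
    hashlib.sha256(b"previously flagged clip A").hexdigest(),
    hashlib.sha256(b"previously flagged clip B").hexdigest(),
}

def is_blacklisted(content: bytes) -> bool:
    """Flag content only if it matches an already-known example."""
    return hashlib.sha256(content).hexdigest() in known_violations

# A re-upload of known material is caught ...
print(is_blacklisted(b"previously flagged clip A"))             # -> True
# ... but never-before-seen footage matches nothing on the blacklist.
print(is_blacklisted(b"a brand-new first-person live stream"))  # -> False
```

Because the Christchurch footage resembled nothing already catalogued, a system built on this comparison logic had nothing to match it against.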

Moreover, automated content moderation poses political challenges for platforms, as there is an expectation that they will draw the line between what is deemed appropriate and what is not. Platforms need to consider the extent to which ‘freedom’ online should be regulated to avoid marginalising others (Ruckenstein, 2019). Gillespie argues that it is the individual platform’s values and objectives that determine what content will be acceptable (Gillespie, 2018). If a platform were to become overly restrictive about what content is and is not allowed, or too harsh in handling violations, backlash would follow. Binns corroborates this point, noting that “over-restrictive moderation can similarly send users elsewhere. Consequently, platforms’ terms of use, content policies and enforcement measures often attempt to bound acceptable discourse” (Binns et al., 2017).

To summarise

With the rise of online platforms and the overwhelming amount of data they host, companies have sought alternatives to human moderation, such as automated content moderation systems. These have been introduced in the hope of providing a more advanced way to manage information in this era. On the question of power and benefit, human moderators gain the most from the transition to machine governance: it reduces their exposure to harmful content and the psychological burdens that come with it. The platforms themselves, by contrast, gain less from this introduction, as the need for human intervention in moderation remains, and the human logic of care is hard for machines to decipher (Ruckenstein, 2019). Not only has automated content moderation shaped the way users can utilise online platforms, it has also altered the way users and moderators can be protected.

 

References

Beckett, J., Martin, F., & Paech, V. (2019). Revenge of the moderators: Facebook’s online workers are sick of being treated like bots. The Conversation. Retrieved from https://theconversation.com/revenge-of-the-moderators-facebooks-online-workers-are-sick-of-being-treated-like-bots-125127

Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2017). Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation. In Social Informatics (Lecture Notes in Computer Science, Vol. 10540). Springer. https://doi.org/10.1007/978-3-319-67256-4_32

Flynn, M. (2019). No one who watched New Zealand shooter’s video live reported it to Facebook, company says. The Washington Post. Retrieved from https://www.washingtonpost.com/nation/2019/03/19/new-zealand-mosque-shooters-facebook-live-stream-was-viewed-thousands-times-before-being-removed/

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven, CT: Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society. https://doi.org/10.1177/2053951720943234

Klonick, K. (2018). Why the History of Content Moderation Matters. Techdirt. Retrieved from https://www.techdirt.com/articles/20180129/21074939116/why-history-content-moderation-matters.shtml

Leetaru, K. (2019). The Problem With AI-Powered Content Moderation Is Incentives, Not Technology. Forbes. Retrieved from https://www.forbes.com/sites/kalevleetaru/2019/03/19/the-problem-with-ai-powered-content-moderation-is-incentives-not-technology/#53f72b0d55b7

Ofcom. (2019). Use of AI in Online Content Moderation. Report prepared by Cambridge Consultants. Retrieved from https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 1-69). New Haven, CT: Yale University Press.

Ruckenstein, M., & Turunen, L. (2019). Rehumanizing the platform: Content moderators and the logic of care. New Media & Society, 22(6), 1026-1042. https://doi.org/10.1177/1461444819875990

Lucy Cunich
Second year university student studying internet transformations and digital influence through social media.