Can platforms fully rely on automated content moderation?

"Human and Droid Businessman vs Robot" stock vector, illustrated by Sorbetto, Getty Images. All rights reserved.

Introduction

In this digital age, social media pervades everyday life: almost everyone uses it to share information or their personal lifestyle. As large volumes of user-generated content are uploaded every day, inappropriate content, including toxic speech, terrorist propaganda, and misinformation, increasingly appears online and becomes problematic (Cambridge Consultants, 2019). Automated content moderation has therefore become a necessary technology for governing the platforms. Although automated moderation provides many advantages, it makes mistakes in detecting harmful content and raises cultural issues. This web essay will first explore the genesis of the innovation and its brief historical development across the evolution of media. It will then critically analyse, from a political-economic perspective, who benefits from the technology and who is excluded. Finally, it will discuss potential issues that affect cultural groups.

The genesis of automated content moderation

  • Historical trends in communications media

In the previous generation, traditional media technologies, including television, radio, and newspapers, were the main channels of communication, but audiences were unable to share information and interact with one another. Even in Web 1.0, the first stage of the internet, people still passively read information on web pages rather than actively producing content (Peters, 2020). After the emergence of Web 2.0 in 1999, people began to generate content and interact online (Peters, 2020). The development of digital platforms gave people the opportunity to spread information easily and immediately. In the early phase of social media, content moderators were expected to build an online community and attract new users to engage in online discussion (Ruckenstein & Turunen, 2020). However, this open online environment induced the emergence of cyberbullying, toxic speech, and misinformation, which shifted the moderators’ job towards surveillance of online content (Ruckenstein & Turunen, 2020). They had to govern and control user-generated content (UGC) to protect minorities and build a positive community (Ruckenstein & Turunen, 2020). Nevertheless, as more and more people posted online, the amount of violating content intensified, and it became difficult for human moderators to govern at such a scale (Cambridge Consultants, 2019). Digital companies also considered hiring large numbers of human moderators too costly (Cambridge Consultants, 2019). Automated content moderation was therefore developed to assist human moderation.

Figure 1. “Web 1.0 vs. Web 2.0” by Dion Hinchcliffe. All rights reserved
  • Development of Artificial Intelligence

Artificial Intelligence (AI) technology plays a pivotal role in advancing the functionality of digital platforms. From the 20th century, scientists began to design machines capable of mimicking human problem-solving skills through learning algorithms (Anyoha, 2017). For example, in 2012, Google and Stanford researchers taught a neural network to identify cats in unlabelled YouTube videos using a “deep learning” algorithm, a technique in which a model is trained to learn specific characteristics of data and apply that learning to new data (Cambridge Consultants, 2019). Through the growth of AI in recent years, machines can learn more complex algorithms and help humans solve problems, which set the basis for developing automated content moderation. AI-based moderation has successfully reduced the workload of human moderators and increased productivity (Cambridge Consultants, 2019), and most digital companies have begun to use automated content moderation widely in place of human moderators.
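To make the idea of training on labelled data and applying the learning to new data more concrete, the following sketch shows, in very broad strokes, how a simple text classifier could flag toxic comments. It is a minimal illustration under stated assumptions, not any platform's real system: the example comments, labels, and flagging threshold are hypothetical, and production moderation models are trained on vast labelled datasets with far more sophisticated architectures.

```python
# Minimal, illustrative sketch only: the comments, labels, and threshold below
# are hypothetical, and real moderation classifiers are trained on millions of
# labelled examples with far larger models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = violates policy, 0 = acceptable.
comments = [
    "I hope you have a great day",
    "Thanks for sharing this, really helpful",
    "You are worthless and everyone hates you",
    "Get out of this country or else",
]
labels = [0, 0, 1, 1]

# The model learns word patterns that characterise the training data
# and applies that learning to comments it has never seen.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(comments, labels)

new_comment = "Nobody wants you here, just leave"
probability_toxic = model.predict_proba([new_comment])[0][1]

# Content scoring above a chosen threshold is removed or queued for human review.
if probability_toxic > 0.8:
    print(f"Flagged for review ({probability_toxic:.2f}):", new_comment)
else:
    print(f"Allowed ({probability_toxic:.2f}):", new_comment)
```

In practice, platforms pair such classifiers with human review, precisely because a model only knows the patterns present in its training data.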

What is automated content moderation?

Grimmelmann (2015, p.47) defines content moderation as

“the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse”.

Accordingly, automated content moderation is the AI-based technology that promotes and censors user-generated content to improve platform performance and eliminate inappropriate content, in order to build a healthy cyber community (Cambridge Consultants, 2019; Gillespie, 2018; Grimmelmann, 2015).

Who benefits?

  • Human moderators are the major group benefiting from this technology, in terms of both workload and psychological health. Before automated systems were widely used in the process, the job of content moderators was gruelling: they had to watch thousands of videos and photos every day (BBC News, 2018). Moreover, reviewing large amounts of abusive content, including pornography, child abuse, and terrorism, causes psychological damage (Cambridge Consultants, 2019). Many moderators report experiencing post-traumatic stress disorder and other mental health issues (Cambridge Consultants, 2019). Hence, the implementation of automated content moderation not only reduces the workload of human moderators but also decreases the likelihood of their developing mental disorders (Cambridge Consultants, 2019).

Check the short video below, created by BBC News, to get a sense of how the content moderation job negatively impacts human moderators’ mental health.

 

  • Law enforcement agencies take advantage of the technology as it automatically removes terrorist content on social media (Gorwa et al., 2020). Automated moderation is essential for shaping a safe community and preventing terrorist propaganda from going viral. For example, in the Christchurch incident, a terrorist live-streamed his attack on Facebook, allowing thousands of people to view, download, and repost the footage (Gorwa et al., 2020). According to Facebook representatives, users attempted to post the shooter’s video 1.5 million times in the first 24 hours, and 80% of those attempts were automatically blocked before the content could be uploaded (as cited in Gorwa et al., 2020). This data illustrates that automated moderation is effective in removing terrorist content and helps police govern the community. Terrorist information is also important to government departments such as intelligence agencies: the Global Internet Forum to Counter Terrorism (GIFCT) quietly created a shared database of “digital fingerprints of illicit content”, which provides a resource for government agencies investigating terrorist incidents (Gorwa et al., 2020, p.2). A simplified sketch of this fingerprint-matching idea appears after this list.
  • Digital platforms also benefit from the technology because they hold the strongest power to control it. As digital platforms are, in effect, advertising companies, they use moderation as a tool to make profits (Gillespie, 2018). Platforms can design algorithms that optimise influential content to increase user engagement and attract new users, which reinforces advertisers’ view of the platforms as a valuable marketplace (Gillespie, 2018; Cambridge Consultants, 2019). To gain views, likes, comments, and reposts, platforms encourage appropriate content to go viral, because reaching a vast audience generates high revenue (Cambridge Consultants, 2019).
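Returning to the “digital fingerprints” mentioned in the law-enforcement point above, the sketch below illustrates the simplest form of fingerprint matching: an upload is hashed and compared against a database of hashes of previously removed content. This is an assumption-laden simplification; GIFCT’s actual database relies on perceptual hashes that tolerate re-encoding and minor edits, and the byte strings here are placeholders.

```python
# Simplified sketch of fingerprint matching. Real "hash-sharing" databases
# (such as GIFCT's) use perceptual hashes that survive re-encoding and small
# edits; the exact SHA-256 match and placeholder byte strings here are
# illustrative assumptions only.
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a fingerprint (hash) identifying a piece of content."""
    return hashlib.sha256(data).hexdigest()


# Hypothetical database of fingerprints of content already confirmed as
# terrorist propaganda and removed by a partner platform.
known_illicit = {fingerprint(b"<bytes of a previously removed video>")}


def should_block(upload: bytes) -> bool:
    """Block the upload if its fingerprint matches the shared database."""
    return fingerprint(upload) in known_illicit


print(should_block(b"<bytes of a previously removed video>"))   # True: re-upload is blocked
print(should_block(b"<bytes of an unrelated holiday video>"))   # False: upload is allowed
```

This is broadly how most of the Christchurch re-uploads could be blocked automatically: once one copy of the video had been fingerprinted, matching uploads could be stopped before they went live.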

Who does not benefit?

In contrast, ordinary users are the group without power, because they have to passively follow the rules and have only a minimal understanding of the automated algorithms (Gorwa et al., 2020). The platforms cannot accurately explain the decision-making of automated moderation, and particular functionalities remain ambiguous (Gorwa et al., 2020). This lack of transparency means users cannot completely understand the content policies (“CDT”, 2018). Users are therefore excluded from the benefits of automated moderation.

Issues:

Although AI-based moderation has many advantages, it still presents some limitations. Firstly, eliminating harmful content is not always accurate: the technology needs a broader understanding of sociocultural context to moderate content correctly (Cambridge Consultants, 2019). In the iconic example of the “Napalm Girl”, the image was judged to breach Facebook’s standards because it depicts a naked Vietnamese girl (Gillespie, 2018). The post was removed by a Facebook moderator; however, the public criticised the removal as wrong because the image captures an important historical event (Gillespie, 2018). Contextual and cultural understanding is therefore essential when moderating complex content (Cambridge Consultants, 2019).

“Kim Phuc – The Napalm Girl In Vietnam” by e-strategyblog.com is licensed under CC BY 2.0
Figure 2. “Vitruvian” by Mr.Enjoy is licensed under CC BY 2.0

(The image above satirises censorship that does not consider cultural context: “Censored” labels are placed on the naked body even though it is an artwork.)

Moreover, cultural difference is a challenging barrier for automated moderation (Cambridge Consultants, 2019). The major social media platforms are mostly managed by White men, so Western cultural norms tend to be applied when moderating. Yet people from many different countries and backgrounds use these platforms, and it is difficult for automated moderation to make correct decisions on cross-cultural content (Cambridge Consultants, 2019). For example, a YouTube user posted a clip depicting the King of Thailand with feet above his head, which is illegal under Thai law and deeply offensive in Thai culture, so the Thai government asked for the content to be deleted (Cambridge Consultants, 2019). In a Western context, the moderation system does not recognise this content as illegal. In such cases, users from other cultures can be vulnerable to, and hurt by, unintentionally disrespectful posts. Cultural difference is a persistent issue on multinational platforms, and automated moderation should be aware of the different “cultural beliefs, political views, historical events, and law of each country” (Cambridge Consultants, 2019, p.42).

Conclusion

Overall, this web essay first examined the genesis of automated content moderation, from traditional media to Web 2.0, and how the accomplishments of AI innovation set the basis for automated moderation. It then discussed how content moderators, law enforcement agencies, and digital companies benefit from the technology, whereas ordinary users lack control over it and are excluded from its benefits. Additionally, inaccurate removal and a lack of cultural awareness are issues that need to be solved in the future. Although automated moderation has many useful aspects, human moderators are still required to review complex and nuanced content. Digital companies cannot fully rely on automated moderation yet.


References:

Anyoha, R. (2017, August 28). The History of Artificial Intelligence [Blog post]. Retrieved from http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

BBC News. (2018, April 26). ‘It’s the worst job and no-one cares’ – BBC Stories [Video file]. Retrieved from https://www.youtube.com/watch?v=yK86Rzo_iHA

Cambridge Consultants. (2019). Use of AI in Online Content Moderation. Retrieved from https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf

CDT, coalition urge internet platforms to provide more transparency and accountability in content moderation. (2018, May 07). Targeted News Service. Retrieved from http://ezproxy.library.usyd.edu.au/login?url=https://www-proquest-com.ezproxy2.library.usyd.edu.au/docview/2035626410?accountid=14757

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). New Haven: Yale University Press. ISBN: 030023502X,9780300235029

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society. 7(1), 1-15. https://doi.org/10.1177/2053951719897945

Grimmelmann, J. (2015). The virtues of moderation. Yale Journal of Law & Technology, 17(1), 42+. https://link.gale.com/apps/doc/A420050553/AONE?u=usyd&sid=AONE&xid=6135f6e7

Peters, K. (2020). Web 2.0. Retrieved October 29, 2020, from https://www.investopedia.com/terms/w/web-20.asp

Ruckenstein, M., & Turunen, L. L. M. (2020). Re-humanizing the platform: Content moderators and the logic of care. New Media & Society, 22(6), 1026–1042. https://doi.org/10.1177/1461444819875990
