To Whom Do We Pass The Baton?

Sep 28th, 2023

An Analysis of Ethical Dilemmas of Human Employment, Process Outsourcing and AI Dependence in Digital Content Moderation

By: Kate Edmonds

Of the 5.47 billion people who access the internet each day (Minaev, 2023), most peruse their social platforms secure in the knowledge that content depicting heinous crimes, live streams of terrorism or graphic self-harm footage will not feature in their feed. Lulled into this sense of protection, little thought is given to the measures that enable it, and even less to the ethical issues surrounding them (Stackpole, 2022). In the past, public debate has largely centred on concerns regarding content removal – the deletion of media deemed inappropriate and the suspension of users who refuse to comply with platform guidelines (Gillespie, 2022) – and the consequences of this for freedom of speech in contemporary society. Meanwhile, the ethical issues pertaining to the employment of human content moderators, the outsourcing of this work to developing nations and the use of artificial intelligence are seemingly lost amongst the noise. Drawing on the work of Gillespie (2018; 2022) and Roberts (2019), these ethical dilemmas are critically evaluated to reveal the need for the development and implementation of novel content moderation strategies. 

Exposure and Non-Disclosure: Ethical Dilemmas in the Employment of Human Content Moderators

“You are seeing dead bodies, murders or people celebrating having killed somebody, dogs being barbecued alive”

(Vice Informer, 2021)

While these words have the potential to conjure tormenting mental imagery in themselves, they attest to a traumatic reality experienced by many content moderators on a daily basis. 

Mask over his face, a former employee of Facebook’s content moderation department sits in front of a camera in the studio of The Vice Network and details coming to terms with PTSD induced by his time in the department, his inability to share his struggle with those close to him due to non-disclosure agreements, and the inadequacy of the mental health services Facebook provided him. 

(Vice Informer, 2021)

In an era of instant digital publication, the role of content moderators has evolved dramatically. Once handled merely by Harvard University students volunteering in their free time, content moderation came to rely on mass-scale human labour as the expectation of instant publication characteristic of the contemporary digital age grew (Gillespie, 2018).

Despite being employed to serve the broader internet community, the guidelines content moderators follow are crafted without public deliberation or consideration (Gillespie, 2018) and are largely withheld from public knowledge by confidentiality agreements, as is the content to which moderators are exposed. Content moderators are required to actively engage with content flagged by artificial intelligence systems or platform users to assess whether it meets these platforms’ community guideline standards. These standards are often very explicit yet of obscure origin, and evaluating content against them comes at the detriment of moderators’ mental health. Yet, given the scale of the content, and the consequent scale of the moderation departments it requires, social media platforms invest sparingly in their employees, particularly in the provision of mental health support services (Gillespie, 2018). Licensed health professionals such as psychologists and psychotherapists rarely feature among the services platforms provide, and fear of breaching non-disclosure agreements leaves moderators with few outlets to discuss the impact of their work, and of the traumatic imagery they consume daily, on their mental health (Vice Informer, 2021). 

Image credit: J4p4n (2018), Openclipart, CC0 1.0

In the absence of the infrastructure necessary to prepare and support content moderators, such work is simply unsustainable. Not only are moderators exposed to extremely distressing content on a regular basis, they are required to actively engage with it to discern whether it meets community guidelines, without adequate support to process the inevitable mental toll in a healthy way. 

‘Not Our Problem’: Ethical Dilemmas in the Outsourcing of Content Moderation to the Philippines

In recent years, the responsibility of content moderation has become increasingly globalised. 

While outsourcing content moderation to developing nations is economically beneficial to digital platform companies, it presents a series of ethical concerns around inconsistent moderation standards, censorship and again, worker welfare.

Due to governmental support, the reduced cost of labour and fewer regulatory systems, the Philippines has gained immense popularity as a location for major digital platforms to outsource content moderation work (Carlos, 2023). However, in the name of economic gain and efficiency, the influence of religious and cultural differences is often overlooked. 

The pressure to process content in up to 10 languages foreign to Filipino workers has negative consequences for both the quality of moderation and the mental state of the workers themselves, as they are required to engage with the content for longer periods than moderators in the Western world, who better understand it and are able to move through it relatively quickly (Dwoskin, Whalen & Cabato, 2019). 

Image credit: Óscar Javier García Baudet (2005), Wikimedia Commons, CC BY 3.0

In an article for The Washington Post (Dwoskin, Whalen & Cabato, 2019), a Filipino worker was reported to have attempted suicide under such pressure, while others were said to have suffered serious mental breakdowns as a result of prolonged exposure to such distressing content.

Moreover, bestowing this level of power upon individuals whose cultural and religious beliefs differ from those of the users submitting or flagging content for evaluation can undermine the established ‘democracy’ that large platforms attempt to maintain and that users expect.

Gillespie warns of this, writing:

“social media platforms may present themselves as universal services suited to everyone but when rules of propriety are crafted by small teams of people that share a particular worldview, they aren’t always well suited to those with different experiences, cultures, or value systems”

(Gillespie, 2018, p. 8)

Though moderators are instructed to comply with a specific set of guidelines, differences between Filipino concepts of morality, compromise and benevolence and Western ideals in particular still have the potential to manifest in the selection of content (Mulder, 1994; Roberts, 2019). 

So Why Not AI?

Given the ethical dilemmas of human content moderation – the mental health implications and the cultural discrepancies – the employment of artificial intelligence may appear to be the most viable solution.

However, artificial intelligence systems are not exempt from cultural and social biases either.

AI content moderation systems are predominantly programmed by white men of similar economic and socio-political backgrounds, and these backgrounds have been found to influence the systems – specifically their concepts of ‘appropriateness’, gender and sexuality. An investigation conducted by journalists Mauro and Schellmann at The Guardian found that photos of women in underwear were more likely to be flagged as ‘racy’ and ‘sexually suggestive’ than photos of men in the same attire and pose. The pair further revealed that imagery published by the US National Cancer Institute was classified by AI as ‘sexually explicit’, despite depicting a clinical demonstration of how to conduct a breast examination. Alarmingly, AI detection systems such as these already serve as the front line of content moderation processes, flagging potential breaches of community guidelines for human moderators to subsequently investigate (Gillespie, 2018; Roberts, 2019; Mauro & Schellmann, 2023).

The problematic nature of these AI systems demonstrates the possible harms of complete reliance on AI, and reinforces the need to re-evaluate content moderation methods. 

Conclusion

This examination of the flawed ethics of current content moderation systems poses the question: to whom do we pass the metaphorical baton – that is, the task of content moderation?

In more recent work, Gillespie outlines alternatives to a removalist approach to content moderation and instead emphasises the benefits of content ‘reduction’ (Gillespie, 2022). Gillespie argues that age restrictions, geo-blocking and temporary holds on potentially problematic content are just some of the many less intensive methods digital platforms should seek to employ. Demonetisation and user-strike systems are also positioned as successful disincentives that raise the quality of digital discourse (Gillespie, 2022). This analysis has considered the ethical dilemmas surrounding human content moderation, the outsourcing of moderation work and the pitfalls of the AI systems that supplement it. Drawing on Gillespie’s recent work, a series of solutions is highlighted to prompt reconsideration of current content moderation systems. 

Reference list

Carlos, F. (2023, March 22). Top outsourced content moderation companies in the Philippines. Sourcefit BPO Philippines: Custom Offshore Staffing Solutions. https://sourcefit.com/outsourcing-blog/top-outsourced-content-moderation-companies-in-the-philippines/ 

García Baudet, Ó. (2005). File:Linares Party 2005.jpg [Image]. Wikimedia Commons. CC BY 3.0. Retrieved September 26, 2023, from https://www.google.com/search?q=computer%20lab%20philippines&tbm=isch&hl=en&tbs=il:cl&rlz=1C5CHFA_enNZ885NZ886&sa=X&ved=0CAAQ1vwEahcKEwjQ04aZ2MSBAxUAAAAAHQAAAAAQAw&biw=1296&bih=682#imgrc=73ZqUjmjw5Z-EM

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029 

Gillespie, T. (2022). Do not recommend? Reduction as a form of content moderation. Social Media + Society, 8(3). https://doi.org/10.1177/20563051221117552 

J4p4n. (2018). PTSD [Image]. Openclipart. CC0 1.0. Retrieved September 26, 2023, from https://www.google.com/search?q=trauma%20and%20PTSD&tbm=isch&tbs=il:cl&rlz=1C5CHFA_enNZ885NZ886&hl=en&sa=X&ved=0CAAQ1vwEahcKEwiYpprD4MeBAxUAAAAAHQAAAAAQAw&biw=1296&bih=682#imgrc=uD7eLYd2jhZucM

Mauro, G., & Schellmann, H. (2023, February 8). ‘There is no standard’: Investigation finds AI algorithms objectify women’s bodies. The Guardian. https://www.theguardian.com/technology/2023/feb/08/biased-ai-algorithms-racy-women-bodies 

Minaev, A. (2023, March 22). 108 internet statistics & facts (2023). FirstSiteGuide. https://firstsiteguide.com/internet-stats/#:~:text=5.47%20billion%20people%20use%20the%20internet%20daily.&text=The%20global%20population%20in%202022,66%25%20of%20the%20entire%20population

Mulder, N. (1994). Filipino culture and social analysis. JSTOR. https://www.jstor.org/stable/42633420 

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press. https://www.jstor.org/stable/j.ctvhrcz0v 

Stackpole, T. (2022, November 9). Content moderation is terrible by design. Harvard Business Review. https://hbr.org/2022/11/content-moderation-is-terrible-by-design 

Vice Informer. (2021). The horrors of being a Facebook moderator [Video]. YouTube. Retrieved September 26, 2023, from https://www.youtube.com/watch?v=cHGbWn6iwHw&t=370s

Dwoskin, E., Whalen, J., & Cabato, R. (2019, July 25). Content moderators at YouTube, Facebook and Twitter see the worst of the web – and suffer silently. The Washington Post. https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/