Automated Content Moderation: A Blessing or a Curse for Facebook’s Community?

"Computer AI - Why I Keep Losing". By: Si-MOCs, CC BY-NC-SA 2.0

Automated content moderation has emerged in recent years as a response to the sheer scale of user-generated content, and is part of a longstanding trend towards the use of AI in the digital media industry (Cambridge Consultants, 2019, p. 27). Because moderation is owned and managed by the private social media platforms that host user interaction, it gives those platforms immense power over the nature of public discourse. Those who benefit from automated moderation are human content moderators, everyday social media users and the corporate platforms themselves; those who do not are minority groups in society, who are subject to the biases and cultural insensitivity embedded in algorithms. In examining the case of Facebook’s banning of Modibodi’s period-positive ad, this essay explores how automated content moderation techniques can reinforce social inequities and discourage conversations around important societal issues.

Why Moderate?

From its earliest intersections with society, the internet was largely perceived as a platform that fosters free speech, with one of its central claims being its resistance to censorship (Roberts, 2017, p. 1). However, the ever-increasing participatory culture of Web 2.0 has made the capacity of social media platforms to circulate problematic content “painfully apparent” (Gillespie, 2018, p. 5). Social platforms inherit ‘toxic technocultures’ (Massanari, 2017, p. 333), in which trolls, spammers and malicious hackers can deter or frustrate speech (Langvardt, 2018, p. 1358). Such platforms therefore have a responsibility to moderate content in order to protect users from “the obscene, violent and the illegal” (Gillespie, 2018, p. 5).

Roberts (2017) defines content moderation as “the organised practice of screening user generated content… to determine its appropriateness for a given site, locality or jurisdiction” (p. 1). If deemed inappropriate, the content is removed from the platform entirely.

Various forms of content moderation occurred voluntarily in early online communities (Roberts, 2017, p. 1). However, the large-scale adoption of social media means that content moderation today is undertaken by major private companies such as Google, Facebook and Twitter, which “operate authority outside the purview of public control” (Land, 2019, p. 285).

These companies employ human moderators who rely on their own linguistic and cultural competencies (Roberts, 2017, p. 3) to carry out the moderation process. A significant portion of a content moderator’s role involves viewing the most violent, disturbing and exploitative content on the internet (Chotiner, 2019), exposing these workers to serious mental health issues.

“It’s the first job where I interviewed people where several people told me they would be happy if A.I. took over their job” (Dwoskin, Whalen & Cabato, 2019).

The Emergence of Automated Content Moderation

As social media platforms have grown, the quantity, velocity and variety of content has become stratospheric (Gillespie, 2020, p. 1). The immense scale of user-generated content has made it impossible to identify and remove harmful content through traditional human moderation alone (Cambridge Consultants, 2019, p. 4). Companies have consequently been forced to develop more sophisticated forms of moderation (Gerrard & Thornham, 2020, p. 1269).

Facebook, for example, has 2.7 billion monthly active users and is required to review three million posts a day (Barrett, 2020, p. 10). In his 2017 ‘Building Global Community’ report, Mark Zuckerberg stated that:

“There are billions of posts across our services each day, and since it’s impossible to review all of them… There have been tragic events… These stories show we must find a way to do more.”

Figure 1: Facebook’s most common moderation issues. Source: Barrett (2020)

Significant advances in machine learning over the past decade have provided a technological solution, and companies now utilise automated tools to fulfil important moderation functions (Gorwa, Binns & Katzenbach, 2020, p. 2). Automated content moderation works through systems that map incoming content onto existing data (Gerrard & Thornham, 2020, p. 1269), classifying it by matching or prediction and ending in a decision and outcome (see the illustrative sketch below). The implementation of AI in content moderation accelerated in 2020 due to COVID-19, with many social media platforms forced to send their human moderators home (Gillespie, 2020). In May 2020, Facebook released a statement (Jin, 2020):

“We have temporarily sent our content reviewers home… We have increased our reliance on these automated systems… There may be some limitations to this approach, and we may see some longer response times and make more mistakes”.
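To make the matching-and-prediction process described above more concrete, the following is a minimal, purely illustrative Python sketch. The hash list, keyword weights, thresholds and the `moderate` function are hypothetical stand-ins for demonstration, not any platform’s actual system.

```python
# Illustrative sketch only: a toy pipeline combining "matching" (a lookup
# against fingerprints of known banned content) with "prediction" (a scored
# classifier). BANNED_HASHES, TOXIC_KEYWORDS and the thresholds are hypothetical.
import hashlib

BANNED_HASHES = {"5d41402abc4b2a76b9719d911017c592"}        # MD5 of the string "hello"
TOXIC_KEYWORDS = {"spamlink": 0.6, "graphicviolence": 0.9}  # toy classifier "weights"


def moderate(post_text: str) -> str:
    """Return a moderation outcome for a post: 'remove', 'review' or 'allow'."""
    # Step 1 - matching: fingerprint the post and compare it to known violations.
    digest = hashlib.md5(post_text.encode("utf-8")).hexdigest()
    if digest in BANNED_HASHES:
        return "remove"

    # Step 2 - prediction: score the post and apply decision thresholds.
    text = post_text.lower()
    score = sum(weight for word, weight in TOXIC_KEYWORDS.items() if word in text)
    if score >= 0.8:
        return "remove"   # high-confidence violation: automated removal
    if score >= 0.5:
        return "review"   # uncertain: escalate to a human moderator
    return "allow"


print(moderate("hello"))                        # remove (hash match)
print(moderate("check out this spamlink"))      # review (classifier score 0.6)
print(moderate("photos from my weekend hike"))  # allow
```

In practice, platforms use perceptual hashing for images and video and trained machine-learning classifiers rather than keyword lists, and uncertain cases are escalated to human reviewers, but the overall logic of match, score, threshold and outcome follows the same pattern.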

Who benefits and who does not?

Those who benefit most from the transformative effects of AI in the moderation of user-generated content on social media are:

  1. Human moderators, who are no longer required to endure the emotional toll of viewing violent and graphic content. AI has the potential to take over their jobs entirely, or to make their role less harrowing by detecting the most disturbing content.
  2. General social media users, who gain extra protection from exposure to harmful content through AI’s ability to detect it at greater speed and scale. According to Langvardt (2018), “without it, social media users would drown in spam and disturbing imagery” (p. 1353).
  3. Major social media platforms, which have potentially found a solution to the overwhelming amount of content to be moderated. Content moderation is a crucial practice in protecting a corporate platform’s brand, ensuring compliance with laws, and maintaining a community of users willing to view and upload content on their sites (Roberts, 2019, p. 23).

However, it has become evident that through such moderating techniques, major social media platforms acquire unprecedented power over the global flow of information and thus the nature of public discourse. This power lies in the decisions that platforms make about what gets moderated and why (Gerrard & Thornham, 2020, p. 1276).

The rules, or community guidelines, that shape content moderation are often crafted by small teams of people who share a particular worldview. Roberts (2019) states that social media community guidelines are developed “in the specific sociocultural context of educated, economically elite, politically libertarian and racially monochromatic Silicon Valley” (p. 93). This can result in biases in moderation decisions, with the values of these people embedded in the companies’ algorithms.

Hence, minority groups in society, those with different experiences, cultures or value systems, are often those flagged by automated content moderation. With content moderation also being a lived experience (Gillespie, 2018, p. 143), this has the ability to silence a community or marginalise an activity. Gerrard and Thornham (2020) make evident the ‘sexist assemblages’ prevalent in content moderation, arguing that gender norms factor into what is reproduced in moderation outcomes, which can silence content related to women and their bodies.

Facebook’s banning of Modibodi’s period-positive ad

A recent example of the issues associated with ‘sexist assemblages’ (Gerrard & Thornham, 2020) in automated content moderation is Facebook’s banning of Modibodi’s ‘The New Way to Period’ campaign. Modibodi is an Australian underwear brand best known for its leakproof underwear. In September 2020, the brand released an advertisement aimed at breaking down the taboos associated with menstruation (Farmakis, 2020).

Figure 2: Modibodi’s ‘The New Way to Period’ ad. Source: Modibodi (standard YouTube license)

The video was flagged by Facebook’s automated moderation system for violating its community guidelines (Farmakis, 2020). Facebook’s guidelines state that “ads must not contain shocking, sensational, inflammatory or excessively violent content”, and list advertisements depicting mutilation, medical procedures or suffering as examples (Facebook, 2020). Facebook claimed that the ad’s inclusion of menstrual blood violated these guidelines.

By banning an ad that discusses how women are made to feel “gross”, “uncomfortable” and “unnatural”, Facebook has reinforced these very stereotypes and worked against normalising menstruation in society. Modibodi’s CEO, Kristy Chong, stated:

“It’s the 21st century and it’s disappointing that Facebook doesn’t want to normalise the conversation around menstruation” (Farmakis, 2020).

Figure 3: Backlash by feminists on social media (Cindy Gallop). Source: Twitter

Due to the outrage the ban caused, Facebook has since reconsidered the decision and is now running the ad. Nevertheless, the initial decision reflects the biases in Facebook’s automated moderation and its ability to marginalise the expression of female bodily functions.

Conclusion

In answering the question of whether automated content moderation is a blessing or a curse for social media users, it is important to consider that whilst it is effective in curbing trolls and banning hate speech, it can also work to silence and marginalise communities (Gillespie, 2018, p. 143). Facebook’s banning of Modibodi’s ad highlights the biases and cultural insensitivity common in machine moderators. Hence, content moderation technology should not be a wholesale substitute for human content moderators who are contextually and culturally aware, but should work in conjunction with them (Cambridge Consultants, 2019, p. 4).

 

References:

Barrett, P. (2020). Who Moderates the Social Media Giants? NYU Stern. Retrieved from https://issuu.com/nyusterncenterforbusinessandhumanri/docs/nyu_content_moderation_report_final_version?fr=sZWZmZjI1NjI1Ng

Cambridge Consultants. (2019). Use of AI in Online Content Moderation. Retrieved from https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf

Chotiner, I. (2019). The Underworld of Online Content Moderation. The New Yorker. Retrieved from https://www.newyorker.com/news/q-and-a/the-underworld-of-online-content-moderation

Dwoskin, E., Whalen, J., & Cabato, R. (2019). Content Moderators at YouTube, Facebook and Twitter see the worst of the web – and suffer silently. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/

Facebook. (2020). Community Guidelines. Retrieved from https://www.facebook.com/policies/ads

Farmakis, B. (2020). Facebook bans period-positive ad for ‘shocking’ content: ‘It’s a normal, natural process’. 9Honey. Retrieved from https://honey.nine.com.au/latest/facebook-modi-bodi-ad-banned-sensational-content-period-positivity/b47e5707-7f90-42f9-9b70-850e0e620416

Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media’s sexist assemblages. In New Media & Society, 22 (7), 1267-1281. DOI: https://doi.org/10.1177/1461444820912540

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven: Yale University Press. ISBN: 030023503X,9780300235029

Gillespie, T. (2020). Content moderation, AI, and the question of scale. In Big Data and Society, 7 (2), 1-4. DOI: https://doi.org/10.1177/2053951720943234

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. In Big Data and Society, 7 (1), 2-12. DOI: https://doi.org/10.1177/2053951719897945

Jin, K. (2020). Keeping People Safe and Informed About the Coronavirus. Facebook. Retrieved from https://about.fb.com/news/2020/10/coronavirus/#keeping-our-teams-safe

Land, M. (2019). Regulating Private Harms Online: Content Regulation under Human Rights Law. In Human Rights in the Age of Platforms (pp 285-310). MIT Press. ISBN: 9780262039055, 0262039052

Langvardt, K. (2018). Regulating online content moderation. In The Georgetown Law Journal, 106 (5), 1354-1388. Retrieved from https://www.law.georgetown.edu/georgetown-law-journal/wp-content/uploads/sites/26/2018/07/Regulating-Online-Content-Moderation.pdf

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. In New Media & Society, 19 (3), 329-346. DOI: https://doi.org/10.1177/1461444815608807

Modibodi. (2020).  The New Way to Period. YouTube. Retrieved from https://www.youtube.com/watch?v=qSnZSaWhtJs

Roberts, S. (2017). Content Moderation. In Encyclopedia of Big Data. UCLA. Retrieved from https://escholarship.org/uc/item/7371c1hf

Roberts, S. (2019). Behind the internet. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 1-19). New Haven: Yale University Press. DOI: 10.2307/j.ctvhrcz0v

Zuckerberg, M. (2017). Building Global Community. Facebook. Retrieved from https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634/

 
