YouTube’s Relationship with Content Moderation

"YouTube video Brandweer Nederweert" by mauritsonline is licensed under CC BY 2.0.
“Youtube” by clasesdeperiodismo is licensed under CC BY-SA 2.0.

Despite being the internet’s largest online video platform (Ceci, 2023), YouTube continually enacts harm through its ineffective restriction of offensive and inappropriate content. YouTube employs both human and machine reviewers to moderate the 500 hours of content uploaded each minute (Ceci, 2023). Videos, thumbnails, comments, and links are regulated against YouTube’s Community Guidelines, which prohibit spam, sensitive content, violence, and misinformation to varying degrees (YouTube, n.d.). The platform states that these policies “aim to make YouTube a safer community while still giving creators the freedom to share a broad range of experiences and perspectives” (YouTube, n.d.). Despite these ideals, YouTube provides an online space where users both circulate and encounter offensive content with the potential to cause irreversible harm.

Nature of the Platform

The nature of a platform is to control the flow of information between creator and viewer, employing algorithms and restrictions to cultivate a meaningful stream of content (Green & Tho Le, 2022). This responsibility of delivering quality to the user is upheld by measuring the engagement of posts and recommending accordingly. Twitter’s source code was recently leaked on GitHub, revealing the exact weights by which a tweet is boosted according to its likes, comments, and retweets; the same algorithm also labels users based on their political views. Given that the algorithm has previously been manipulated to drive traffic towards the tweets of Twitter’s CEO, Elon Musk, similar favoritism plausibly afflicts other platforms such as YouTube (Mehta, 2023). Platform culture is interlaced with bias because curated ranking is itself part of the value platforms promise.
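To make the mechanism concrete, here is a minimal sketch of engagement-weighted ranking of the kind described above. The weights and post fields are illustrative assumptions, not the values from the leaked source code.

```python
# A toy engagement-weighted ranker. The weights below are hypothetical
# placeholders; real platforms tune (and hide) these values.
def engagement_score(likes: int, comments: int, shares: int) -> float:
    """Collapse engagement signals into one boost value; a heavier weight
    means that signal pushes a post further up the feed."""
    return 1.0 * likes + 5.0 * comments + 10.0 * shares

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order candidate posts by engagement score, highest first."""
    return sorted(
        posts,
        key=lambda p: engagement_score(p["likes"], p["comments"], p["shares"]),
        reverse=True,
    )

feed = rank_feed([
    {"id": "a", "likes": 120, "comments": 4, "shares": 2},
    {"id": "b", "likes": 30, "comments": 25, "shares": 10},
])
print([p["id"] for p in feed])  # ['b', 'a']: 'b' outranks 'a' despite fewer likes
```

Favoritism of the kind reported around Musk’s tweets amounts to adding a per-author multiplier to a score like this, which is why the leaked weights drew such scrutiny.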

Twitter and Facebook often steal the limelight in discussions of grievances with online platforms, both facing controversies over free-speech violations in the US and the exploitation of user data worldwide (Illing, 2021). YouTube’s harms, by contrast, stem from a perception that its content is under-regulated rather than over-regulated. In reality, its moderation efforts are no more adequate than those of platforms such as Facebook, with both understaffed relative to their monthly user bases (Feldman, 2019). The perception of YouTube as poorly regulated may stem from its reliance on machine learning to flag violations of its Community Guidelines: only 0.05% of content removals in 2022 were enacted by human moderators (Ceci, 2023). This leaves loopholes through which malicious creators can bypass the platform’s machine-regulated defenses in order to harm viewers.
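As a rough illustration of how a machine-first pipeline can produce so few human-enacted removals, here is a hedged sketch of confidence-threshold triage. The thresholds and routing labels are assumptions made for illustration, not YouTube’s actual system.

```python
# Hypothetical triage: a classifier assigns each upload a violation
# confidence; only a thin band of borderline cases ever reaches a human.
def triage(violation_confidence: float) -> str:
    if violation_confidence >= 0.95:
        return "auto_remove"    # removed by machine, no human involved
    if violation_confidence >= 0.60:
        return "human_review"   # the small fraction humans ever see
    return "publish"            # everything else goes live immediately

for score in (0.99, 0.72, 0.10):
    print(f"{score:.2f} -> {triage(score)}")
```

Creators who learn roughly where such thresholds sit can craft content that scores just below them, which is the loophole described above.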

Elsagate

“YouTube cracks down on disturbing kids videos” by CBC News (2018)

The term “Elsagate” was coined in 2017, becoming a popular Twitter hashtag and subreddit where users discussed the appearance of fetish content on YouTube and YouTube Kids. These videos used popular characters such as Mickey Mouse, Spider-Man, and the eponymous Elsa to lure children into watching content featuring grotesque depictions of scat, rape, pregnancy, and gore (Ishikawa et al., 2019). YouTube allowed this content to proliferate on its platform, even recommending it, as it mimicked the style of legitimate kids’ channels (Papadamou, 2019, p. 1). Only when the phenomenon caught media attention did YouTube begin to remove Elsagate videos en masse (Balanzategui, 2019).

YouTube scrambled to recover its reputation for quality after its content moderation proved no more discerning than the toddlers it had failed to protect.

“Kids on the iPad” by Thijs Knaap is licensed under CC BY 2.0.

Elsagate sparked a “moral panic” over the intersection of children’s and adult content, driven by young children’s algorithmic navigation of media platforms such as YouTube (Balanzategui, 2019). Debates arose over whether parents or the platform bore responsibility for keeping children from finding and consuming Elsagate content. As a platform, protecting users and moderating content should be an element of YouTube’s corporate social responsibility, rather than falling to parents to police their children’s behavior on YouTube and the dedicated YouTube Kids app. However, laws such as Section 230 in the US shield platforms from legal liability for harmful content and do not require them to remove offensive material (Smith, 2021). This leaves creators to police their own content, something some will not undertake because their goals are exploitative in nature.

The Nair Video

YouTube’s relaxed content moderation also allows a degree of freedom on the platform. In July 2023, YouTube content creator Kevin Leonardo posted a video titled “Removing BUTT HAIRS Using NAIR CREAM – A Visual Guide!”. The video gained 4.5 million views within a week and became a viral topic on TikTok and Twitter. Many viewers were offended by the video’s sudden graphic nudity; however, others highlighted the importance of videos like these for sexual education online (Hamilton, 2023). For many in the LGBTQIA+ community, online content is the only accessible source of relevant sexual health information, as the topic is often excluded from school physical education programs, leaving LGBTQIA+ people at disproportionate risk compared to their cisgender, heterosexual peers (Derek, 2022). YouTube’s decision not to remove Kevin Leonardo’s content aligns with cyberlibertarianism, which holds that online platforms should be spaces for free speech and enlightenment.

“Gay Rights are Human Rights” by ep_jhu is licensed under CC BY 2.0.

The Happy Medium

The middle ground of content moderation is an enigmatic space that has been sought since the emergence of platform culture online. Predatory and malicious content such as Elsagate should be removed, while vulgar yet informative content such as Kevin Leonardo’s videos should remain up under age restriction. However, given the scale of YouTube’s content traffic, each video cannot be individually assessed by a human with critical thought. The transition from human moderators to machine learning has expanded the scope of YouTube’s moderation efforts. Following the public surfacing of the Elsagate controversy, Ishikawa et al. developed a deep learning architecture to distinguish genuine children’s content from predatory content masquerading as it. Trained on both types of content, the system uses keyframes and multiple factors of analysis to determine whether material is harmful, achieving 92.6% accuracy (Ishikawa et al., 2019). There has been a recent boom in applying artificial intelligence and machine learning to visual data analysis and video indexing; content-based video retrieval systems have historically struggled for efficiency because they must combine audio, visual, and text-based information (Bhaumik, 2019). With the growing emphasis on artificial intelligence on the global stage, however, a larger-scale application to moderation may be on the horizon.
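To make the approach concrete, here is a minimal sketch of keyframe-based video classification under stated assumptions: it uses an off-the-shelf PyTorch image backbone with mean-pooled frame scores, and is not Ishikawa et al.’s actual architecture.

```python
# A sketch of keyframe-based video classification: score sampled keyframes
# with an image classifier, then pool into one video-level decision.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class KeyframeVideoClassifier(nn.Module):
    """Labels a video (e.g. 'suitable' vs 'disturbing') from its keyframes."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = resnet18(weights=None)  # per-frame feature extractor
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_keyframes, 3, H, W) for a single video
        frame_logits = self.backbone(frames)    # score each keyframe
        return frame_logits.mean(dim=0)         # pool into a video-level score

model = KeyframeVideoClassifier()
keyframes = torch.randn(8, 3, 224, 224)         # 8 keyframes, stand-in pixels
print(model(keyframes).softmax(dim=-1))         # P(suitable), P(disturbing)
```

Aggregating over many keyframes lets a classifier look past the child-friendly thumbnail and title that Elsagate videos used as a disguise; swapping the mean for a max-pool would make a single disturbing frame even harder to dilute.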

Racism in AI

“Is AI Racist?” by Channel 4 News (2023)
Futurist artist Karen Palmer summarises the racial bias of AI with a soap-dispenser metaphor.

There are moral shortcomings in applying big-data-driven artificial intelligence to the moderation of YouTube content. The datasets used within Silicon Valley’s tech industry are plagued by biases and inequalities of representation. Media representations of artificial intelligence racialise the supposedly infallible machine as white (European Union News, 2020), and this conceptual identity manifests in the outputs of artificial intelligence systems. Amazon’s facial recognition system, Rekognition, mistook darker-skinned women for men 31% of the time, yet made no such errors for white men (Lifshitz, 2021). This is attributed to the absence of diversity in training datasets: African datasets make up only 1% of the industry’s training data. Inequalities persist in the technology profession too, with only 2.5% of Google employees being Black (Crowell, 2023). The underrepresentation of minority groups in the development of artificial intelligence raises concern for how racist content would be moderated if artificial intelligence were to control content moderation: hate speech or racist imagery may remain unaddressed and even be recommended to users, furthering harm (Siapera, 2021).
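The disparity Lifshitz reports is exactly what a simple per-group error audit surfaces. The sketch below is a toy illustration using made-up records that mirror the reported figures; it is not Amazon’s data or evaluation code.

```python
# Toy per-group error audit: count how often predictions disagree with
# ground truth within each demographic group.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_label, true_label) triples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical records mirroring the reported 31% vs 0% disparity:
records = (
    [("darker-skinned women", "man", "woman")] * 31
    + [("darker-skinned women", "woman", "woman")] * 69
    + [("white men", "man", "man")] * 100
)
print(error_rate_by_group(records))
# {'darker-skinned women': 0.31, 'white men': 0.0}
```

Audits like this are cheap to run, which sharpens the critique: such disparities persist not because they are hard to measure, but because unrepresentative training data keeps producing them.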

Face recognition” by sk8geek is licensed under CC BY-SA 2.0.

To Sum It All Up…

YouTube’s current content moderation is limited and inadequate given the volume of content uploaded to the site daily. YouTube allows considerable freedom in what circulates on its site, with multifaceted effects on the digital cultures that emerge from the platform. Children are harmed by the predatory content of Elsagate, recommended by an undiscerning algorithm and enabled by YouTube’s failure to enforce its own Community Guidelines. At the same time, LGBTQIA+ sexual education content such as Kevin Leonardo’s is allowed to exist on YouTube where it would not be welcome on other platforms or in wider physical education programs. There have been advances in machine moderation, particularly for Elsagate content; however, broad application of artificial intelligence to moderation on a video-based platform such as YouTube would be subject to the industry’s biases, particularly racial bias. YouTube’s search for competency in its content moderation is an ongoing struggle of both morals and technological advancement. By leveraging artificial intelligence to uplift and protect its audience, YouTube may reinforce a reputation for quality and free speech as the most influential video-sharing platform on the internet.

References

Balanzategui, J. (2019, October 2). Examining the “Elsagate” phenomenon: Disturbing children’s YouTube content and new frontiers in children’s culture. Selected Papers of #AoIR2019: The 20th Annual Conference of the Association of Internet Researchers. https://spir.aoir.org/ojs/index.php/spir/article/download/10921/9594b

Bhaumik, H. (2019). Keyframe selection for video indexing using an approximate minimal spanning tree. Machine Learning for Big Data Analysis, 31-68. https://doi.org/10.1515/9783110551433-002

Ceci, L. (2023, April 14). Number of removed YouTube videos 2022. Statista. Retrieved October 2, 2023, from https://www.statista.com/statistics/1132890/number-removed-youtube-videos-worldwide/ 

Ceci, L. (2023, August 31). YouTube: Statistics & Data. Statista. Retrieved October 2, 2023, from https://www.statista.com/topics/2019/youtube/#topicOverview 

Ceci, L. (2023, September 5). YouTube: hours of video uploaded every minute 2022. Statista. Retrieved October 2, 2023, from https://www.statista.com/statistics/259477/hours-of-video-uploaded-to-youtube-every-minute/ 

Crowell, R. (2023, May 19). Why AI’s diversity crisis matters, and how to tackle it. Nature. https://www.nature.com/articles/d41586-023-01689-4 

Derek, C. (2022, Spring). The Lack of LGBTQIA Cultural Competency in Physical Therapy Education. Scholarly Journal, 51(1), 47-51. https://www.proquest.com/docview/2696134420?accountid=14757&parentSessionId=AK7r8rRjoFIpXkT0W0lwOLaZJBP%2BMir8K%2BmgufVDZQo%3D&pq-origsite=primo 

European Union News. (2020, August 8). Whiteness of AI erases people of colour from our ‘imagined futures’, researchers argue. Right Vision Media. https://go.gale.com/ps/i.do?p=ITOF&u=usyd&v=2.1&it=r&id=GALE%7CA632056656&retrievalId=bc5689a3-20d2-4d16-8c10-7a6a1e870154&inPS=true&linkSource=interlink&sid=bookmark-ITOF&aty=sso%3A+shibboleth 

Feldman, S. (2019, March 13). Chart: How Does Facebook Moderate Content. Statista. Retrieved October 2, 2023, from https://www.statista.com/chart/17302/facebook-content-moderator/ 

Green, L., & Tho Le, V. (2022, May 24). Holding the Line: Responsibility, Digital Citizenship and the Platforms. Digital Platform Regulation, 85-109. https://doi.org/10.1007/978-3-030-95220-4_5 

Hamilton, P. (2023, July 3). Who Is Kevin Leonardo And Why Is His ‘Nair Video’ Going Viral? The Shocking YouTube Video Explained. Know Your Meme. Retrieved October 2, 2023, from https://knowyourmeme.com/editorials/guides/who-is-kevin-leonardo-and-why-is-his-nair-video-going-viral-the-shocking-youtube-video-explained 

Illing, S. (2021, May 5). Democracy and free speech: The First Amendment has a Facebook problem. Vox. Retrieved October 2, 2023, from https://www.vox.com/policy-and-politics/22356339/free-speech-facebook-twitter-big-tech-first-amendment 

Ishikawa, A., Bollis, E., & Avila, S. (2019). Combating the Elsagate Phenomenon: Deep Learning Architectures for Disturbing Cartoons. 2019 International Workshop on Biometrics and Forensics (IWBF), 1-6. https://ieeexplore.ieee.org/document/8739202/authors

Kimball, D. (2021, September 14). Innovation, regulation, and the Techlash: SPN Tech States. Wire Feed. https://www.proquest.com/docview/2572883312/citation/7574E493A83B4F7CPQ/1?accountid=14757 

Lifshitz, B. (2021, May 6). Racism is Systemic in Artificial Intelligence Systems, Too. Georgetown Security Studies Review. Retrieved October 2, 2023, from https://georgetownsecuritystudiesreview.org/2021/05/06/racism-is-systemic-in-artificial-intelligence-systems-too/ 

Matsakis, L. (2018, March 2). YouTube’s Content Moderation Has Become an Inconsistent Mess. WIRED. Retrieved October 2, 2023, from https://www.wired.com/story/youtube-content-moderation-inconsistent/ 

Mehta, I. (2023, March 27). GitHub takes down repository containing Twitter’s source code. TechCrunch. Retrieved October 2, 2023, from https://techcrunch.com/2023/03/27/github-takes-down-repository-containing-twitters-source-code/ 

Papadamou, K. (2019, January 21). Disturbed YouTube for Kids: Characterizing and Detecting Disturbing Content on YouTube. https://encase.socialcomputing.eu/wp-content/uploads/2019/01/DisturbedYouTubeforKids.pdf

Siapera, E. (2021, September 6). AI Content Moderation, Racism and (de)Coloniality. International Journal of Bullying Prevention, 4, 55-65. https://doi.org/10.1007/s42380-021-00105-7

Smith, M. (2021, August 12). It’s Time to Update Section 230. Harvard Business Review. Retrieved October 2, 2023, from https://hbr.org/2021/08/its-time-to-update-section-230

YouTube. (n.d.). YouTube Community Guidelines & Policies – How YouTube Works. YouTube. Retrieved October 2, 2023, from https://www.youtube.com/howyoutubeworks/policies/community-guidelines/
