Online Hate Speech Under the Sea: A Dive into Platform Governance and Content Moderation

Introduction 

The world of entertainment has always served as a mirror to our societal values, shaping both platform regulation and self-regulation. In 2023, Disney unveiled a live-action adaptation of The Little Mermaid that departs significantly from the animated classic: Halle Bailey, a Black actress, was cast as the beloved princess Ariel. The iconic white, red-haired mermaid who has held a place in the hearts of millions since 1989 has undergone a transformation, and the casting decision ignited a storm of controversy that reverberated across numerous digital platforms.

Representation within the entertainment industry holds a unique power to fuel discussion and debate in the online sphere. The fervent online debates revolve primarily around issues of racism and the passionate expressions of both support and criticism directed at Bailey and Disney. This phenomenon presents a formidable challenge to platform governance, highlighting the need for effective regulation and content moderation in the digital era.

“The Little Mermaid” by Auntie Rain is licensed under CC BY-NC-SA 2.0

Challenges from Online Reaction and Discussions

In the age of the Internet, discussions and debates unfold on a global scale. Conversations about the race-swapping of Ariel took place across social media platforms, fan forums, and news articles, and the wide spectrum of reactions demonstrates the power of the digital age to amplify voices. Supporters of the casting celebrate Disney for taking a step towards diversity and representation. They believe that a powerful mainstream media entity like Disney can have a positive influence on all generations, especially young Black girls who can now see themselves in a beloved character.

(Varner, 2022)

In the live-action version of The Little Mermaid, Disney adapted the classic to be more inclusive and appealing to broader demographics. The race-swapping of the main character, Ariel, embodies the idea of embracing diversity. Bailey herself expressed encouragement upon seeing comments like, “You don’t understand what this is doing for us, for our community, for all the little Black and brown girls who are going to see themselves in you.”

However, a widespread outcry regarding the casting has also taken over the Internet. According to Entertainment Tonight Canada, the trailer received 1.5 million dislikes in just two days. After the movie’s release, review websites such as Douban and IMDb were inundated with negative, unverified reviews expressing disappointment. On Twitter, hashtags like #NotMyAriel and #KeepArielWhite have been widely used to convey dissatisfaction.

(Aparicio, 2023)
(Hansen, 2023)

A shift in tone from constructive opinions to hate speech and personal attacks is increasingly visible on social media platforms. Supporters of the Black mermaid accuse detractors of being racist and, in some cases, even create fake accounts to harass and bully those who have expressed their dislike of the race-swapped mermaid. These toxic interactions flourishing on the Internet raise critical issues surrounding platform governance and content moderation.

A Comparative Analysis of Platform Governance on Hate Speech Policies

Online hate speech travels instantaneously and on a global scale. This makes it challenging for governments to regulate effectively; as a result, Internet companies have assumed a greater role in online governance (Brown, 2018, p. 321). Platform governance encompasses a broad range of policies and practices designed to influence user behaviour and foster a positive online environment, including the establishment of clear community guidelines and reporting mechanisms to combat hate speech, online bullying, and harmful comments.

Different platforms have adopted varying approaches to governance. Platforms that promptly remove hate speech and ban repeat offenders contribute to a more respectful environment, while those slow to respond allow toxicity to persist. For example, Twitter, known for its commitment to free speech, has faced criticism for its handling of hate speech. Under its April 2023 policy update, tweets that violate the policy, such as attacks based on race, may be excluded from recommendations, have their engagement restricted, or be removed. This suggests that while Twitter asserts a stance against online abuse, it lacks strict punitive measures to directly combat hate speech.

On the other hand, YouTube prioritises user safety and responds quickly to combat cyberbullying. YouTube’s automated systems flag potentially problematic content upon upload and then subject it to human review to determine whether it violates policy. To ensure consistent policy enforcement, YouTube employs linguistic and subject-matter experts to address hate speech. Between April and June 2023, YouTube removed 191,080 videos that violated its hate speech policy.
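YouTube’s actual systems are proprietary, but the general flag-then-review pattern described above can be sketched in a few lines. The following is a minimal, hypothetical Python sketch: `score_toxicity`, `FLAG_TERMS`, and the threshold are illustrative stand-ins, not YouTube’s real models or API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical toxicity scorer: a stand-in for a trained classifier.
# Real platforms use machine-learned models; this keyword heuristic
# exists only to make the pipeline runnable.
FLAG_TERMS = {"hate", "attack", "slur"}

def score_toxicity(text: str) -> float:
    """Return the fraction of words that match the flag list."""
    words = text.lower().split()
    hits = sum(1 for w in words if w in FLAG_TERMS)
    return hits / max(len(words), 1)

@dataclass
class Upload:
    video_id: str
    description: str

@dataclass
class ReviewQueue:
    """Uploads flagged by the automated pass, awaiting human review."""
    pending: List[Upload] = field(default_factory=list)

def ingest(upload: Upload, queue: ReviewQueue, threshold: float = 0.2) -> str:
    """Automated first pass: route potentially violating uploads to humans."""
    if score_toxicity(upload.description) >= threshold:
        queue.pending.append(upload)
        return "flagged_for_human_review"
    return "published"

if __name__ == "__main__":
    queue = ReviewQueue()
    print(ingest(Upload("a1", "A fun mermaid cover song"), queue))   # published
    print(ingest(Upload("a2", "hate attack on the new cast"), queue))  # flagged
    print([u.video_id for u in queue.pending])
```

The key design point is that automation only triages: nothing is judged a violation until a human reviewer inspects the flagged item, which is how consistent enforcement across languages and subject areas is maintained.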

“Mastering the YouTube Algorithm: Decoding YouTube’s Recommendations and Best Practices” by Hongxin Long is licensed under CC BY-NC 4.0

As a result of these differing platform policies, the atmosphere of discussion surrounding Bailey’s casting as Ariel varies from platform to platform. YouTube provides a more comfortable and safe space for users than Twitter: individuals can express their viewpoints rationally without facing personal attacks.

Content Moderation in the Battle Against Online Harm

Gillespie (2018, p. 5) demonstrates that content moderation is influenced by a range of factors, including legal requirements, platform policies, and cultural norms. Content moderation plays a vital role in combating online harm and the proliferation of hate speech. Automated algorithms are widely used to detect content that violates community guidelines, but they often struggle to discern context and nuance.

“Il 25% delle donne viene molestato online: le parole degli ‘odiatori’” [25% of women are harassed online: the words of the ‘haters’] by Francesco Mercadante is licensed under CC BY 3.0

The case of Halle Bailey’s casting exemplifies the difficulty of content moderation. While some comments directly perpetuated racism and launched personal attacks, others were disguised as legitimate criticisms of the casting choice. Guiora and Park (2017, p. 966) assert that platforms should be more vigilant regarding individuals who repeatedly post hateful content, as hate on the Internet can escalate and cause actual harm. Hate speech and harmful comments directed at individuals can have real-world consequences, perpetuating discrimination and inflicting emotional distress. This highlights platforms’ responsibility to protect individuals and provide an inclusive and respectful online environment through content moderation.
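To see why context defeats automated detection, consider a naive keyword filter. The sketch below is purely illustrative: the term list and comments are hypothetical, and real moderation models are far more sophisticated, yet they face the same underlying ambiguity between an attack and speech about an attack.

```python
# A minimal sketch of why context defeats naive keyword filtering.
# The term list and example comments are hypothetical illustrations,
# not any platform's actual moderation rules.
SLUR_LIST = {"ugly", "disgusting"}  # placeholder terms for the demo

def naive_flag(comment: str) -> bool:
    """Flag any comment containing a listed term, ignoring context."""
    return any(term in comment.lower() for term in SLUR_LIST)

comments = [
    "She is ugly and disgusting.",                       # personal attack
    "Calling the actress 'ugly' is disgusting racism.",  # condemns the attack
    "The CGI looked flat, and the pacing dragged.",      # legitimate critique
]

for c in comments:
    print(naive_flag(c), "->", c)

# Output flags the first two comments identically: the filter cannot
# tell an attack from a comment condemning that attack, and it misses
# hostility phrased without any listed term entirely.
```

This is the gap that disguised comments exploit: criticism worded politely slips past term-matching, while commentary quoting abusive language gets swept up with it.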

Navigating Online Echo Chambers

Discussions and reactions on the Internet are not monolithic; they are shaped by individual users, online communities, and the platforms within the online ecosystem. Each of these elements plays a key role in framing and influencing perspectives.

Individual users bring their unique perspectives, values, and even biases to online discussions. While some users engage in constructive dialogue, fostering a sense of empathy and understanding, others contribute to the toxicity by disseminating racist or sexist remarks. Anonymity on platforms can amplify the spread of hate speech and harmful comments.

Online communities provide a space for like-minded individuals to congregate, bolstering their viewpoints and intensifying the impact of their discourse. Some online communities form echo chambers that exacerbate divisive sentiments. An echo chamber refers to the phenomenon where users are surrounded by peers with similar beliefs and opinions, resulting in the reinforcement of existing views and a lack of exposure to diverse perspectives (Cinelli et al., 2021, p. 1). These echo chambers can limit users’ ability to form well-rounded opinions and make informed decisions. The moderators of these communities can either encourage constructive dialogue or stifle dissenting voices, further shaping the narrative. 

“Echo chamber pop” by Kevin Hodgson is licensed under CC BY-SA 2.0

Conclusion and Reflection 

In summary, content moderation and platform governance are critical in mitigating online harm and regulating hate speech. However, it is equally important to recognise that these efforts rely on individual users adhering to community guidelines and respecting the diversity of voices in the digital realm. The transformative culture of the Internet demands that we reflect on our online behaviour and its impact on others. We can exercise our right to free speech and express our viewpoints without resorting to personal attacks.

The live-action casting of The Little Mermaid demonstrates how important it is for Internet platforms to maintain and update their community guidelines in response to emerging trends. This necessitates a nuanced approach that distinguishes between valid critique and hate speech.

References

Aparicio, P. [@Aparicio14Paul]. (2023, September 14). If anyone dares speak against the black washing they did to Ariel in the little mermaid 2023 [Tweet]. Twitter. https://x.com/Aparicio14Paul/status/1702150619778445459?s=20

Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297–326. https://doi.org/10.1177/1468796817709846

Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9). https://doi.org/10.1073/pnas.2023301118

ET Canada. (2022). Trevor Noah SLAMS Racist Criticism Of Halle Bailey & “The Little Mermaid” [Video]. YouTube. https://youtu.be/SQbK-6Zh5-M?si=A70wazxJ-6xowpem

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://www.degruyter.com/document/doi/10.12987/9780300235029-001/html

Google. (2023). Featured Policies: Hate Speech. Google Transparency Report. https://transparencyreport.google.com/youtube-policy/featured-policies/hate-speech?hl=en

Guiora, A., & Park, E. A. (2017). Hate Speech on Social Media. Philosophia, 45(3), 957–971. https://doi.org/10.1007/s11406-017-9858-4

Hansen, J. [@KIKIALINN]. (2023, March 28). The ugliest Ariel I’ve ever seen in my life! I was really disgusted by Disney’s version of Black Ariel! [Tweet]. Twitter. https://x.com/KIKIALINN/status/1662796148556566529?s=20

Toh, M., Zhu, C., & Bae, G. (2023, June 6). “The Little Mermaid” tanks in China and South Korea amid racist backlash from some viewers | CNN Business. CNN. https://edition.cnn.com/2023/06/06/media/little-mermaid-box-office-china-korea-intl-hnk/index.html?utm_source=twCNN&utm_content=2023-06-07T15%3A50%3A51&utm_medium=social&utm_term=link

Twitter Help Center. (2019, March 5). Hateful conduct policy. Twitter. https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

Varner, A. (2022). Representation Matters, Beautiful Black Girls Reaction to The Little Mermaid Trailer [Video]. YouTube. https://youtu.be/Qp4yfmOOv6Q?si=3EGc6-fyGiMBURzW

Zuvela, T. (2023, September 7). This Is What Disney’s Live Action “The Little Mermaid” Changes From The Original Film. ELLE. https://www.elle.com.au/culture/the-little-mermaid-release-date-28619