Protection or prejudice? Why content moderation of racism matters to platforms

“Tiktok” by TheBetterDay is licensed under CC BY-ND 2.0.

Abstract: This paper analyzes content moderation on Facebook and on the Chinese platform TikTok to show why content moderation matters. It asks: how do content moderators on these platforms respond to racist and hateful comments and speech? What explains those responses? What criteria do the platforms apply when moderating? And what are the consequences of poorly moderated content, especially for marginalized people such as people of color?

What is Content Moderation?

Content moderation refers to the process of reviewing and analyzing media content sent by users on a platform, such as videos, comments, and reviews, against the criteria that the platform provides, and ultimately deciding whether to approve the user’s content. Review can be performed manually, automatically, or both, within a fixed framework (Gillespie, 2018).
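
As a rough illustration of this hybrid process, here is a minimal sketch of a moderation pipeline in which automated review decides the clear-cut cases and escalates ambiguous ones to human moderators. Everything in it (the blocklist, the toxicity thresholds, the function names) is a hypothetical stand-in, not any platform’s actual criteria.

    # Hypothetical sketch of a hybrid (automated + human) moderation pipeline.
    # Terms, thresholds, and names are illustrative, not a real platform's rules.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        action: str   # "approve", "remove", or "escalate"
        reason: str

    # Stand-in criteria: a small blocklist plus a toxicity score that a
    # classifier would supply. Real systems use far more complex models.
    BLOCKLIST = {"example-slur-1", "example-slur-2"}

    def automated_review(text: str, toxicity: float) -> Decision:
        """Decide clear-cut cases automatically; queue the rest for humans."""
        if any(term in text.lower() for term in BLOCKLIST):
            return Decision("remove", "matched blocklist")
        if toxicity >= 0.9:
            return Decision("remove", "high toxicity score")
        if toxicity <= 0.2:
            return Decision("approve", "low toxicity score")
        # Ambiguous content is escalated rather than auto-decided.
        return Decision("escalate", "queued for human review")

    # Example: a borderline comment is escalated, not removed or approved.
    print(automated_review("this video is questionable", toxicity=0.5))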

[Imagga]. What is content moderation? Types of content moderation, tools and more. (2021, December 2). YouTube. https://youtu.be/SblTjhosI5o?si=JAi9Tb5Gub9IhFum

Videos featuring Black people on China’s TikTok have drawn racist remarks

The recent introduction of foreign talent in Guangdong province has brought people from different cultures and countries onto China’s social media platforms: some users on TikTok share content about marrying a Black man and raising children, and a Black student documents his life at school. Some Chinese viewers have responded to these videos with racist language, and because TikTok did not review the content in a timely manner, toxic comments were allowed to fill the comment sections, along with abusive video content. This drew the attention of the government, which warned platforms to pay attention to content moderation and to resolutely combat anti-Black racism; the platforms then released content moderation standards, cleaned up the toxic comments, and began moderating content more strictly. As early as January of this year, the China Human Rights Network published a report on regulating racially discriminatory remarks on the Internet.

The report argues that free speech should not harm the mental health of others or incite racial discrimination and hate speech. As the carriers of that speech, platforms have an obligation to review and regulate content, and toxic, destructive speech should be banned and disciplined. On TikTok we can accordingly see both rules against racist speech and penalties for it: users who post toxic remarks have their accounts muted at different levels of severity, and offending videos are taken down or rejected at review.

This is why content moderation on a platform is crucial. Tarleton Gillespie (2018) likewise argues that moderation is an essential part of what platforms do: they must review not only user content but also themselves. Moderation not only shapes a platform’s influence and accessibility but also gives users a healthier online space. Especially in the case of toxic racist content, platforms must make the right judgment and correct it without compromising others’ freedom of expression, while respecting the human rights and cultures of marginalized people and protecting them.

“TikTok on iPhone” by Nordskov Media is marked with CC0 1.0.

Facebook’s Content Moderation for Racist Comments and Users from Marginalized Populations

Facebook’s content moderation has long been criticized by the public. First, the platform’s moderation algorithms treat marginalized people and white people differently, so different users receive different treatment when posting about racism. Posts about race by Black activists and other marginalized users often fail content moderation and draw harsh sanctions, such as deletion of posts and banning of accounts. This unequal treatment on digital platforms, and the serious bias built into moderation algorithms, can intensify racial discrimination, which gradually spills over from online spaces into real society; what marginalized people then receive is even more severe unequal treatment (Siapera, 2021). Haimson et al. (2021) likewise conclude from experimental data that on Facebook, content moderation disproportionately removes content from Black users. Ethnic and racial inequality in content moderation perpetuates injustice and keeps some voices from being heard.

Content about racial justice is often particularly important to marginalized individuals because it relates directly to their identity as Black people and tries to convey important points about systemic racism to their social media audiences. Yet the content they post is often restricted or deleted, and through moderation the platforms do what they can to keep racial discourse out of public view (Haimson et al., 2021). Facebook’s moderation is also selective: different, targeted content is surfaced or curbed in different regions. When social media platforms remove content related to racism, users come to perceive those platforms as biased, selective, and unwelcoming to people of color, and the platforms in turn lose their users’ trust and engagement.

Jawando on Facebook content moderation practices: ‘Hate speech is not protected speech’. (2020, August 27). YouTube. https://youtu.be/jxaDc97sYio?si=JMgEH2JnPSgn1aoi

[WION]. Facebook selective in curbing hate speech, mis-information and inflammatory posts | Social media. (2021, October 24). YouTube. https://youtu.be/GIcxH2_GJe8?si=VAMDMqYjv-3MSIuL

The first reason for the disparity in Facebook’s content moderation is the background of the platform’s workers, most of whom are white men, and the education and ideas they have been exposed to. Gillespie (2018) notes how the makeup of these teams affects moderation work: “This can lead these teams to overlook minority perspectives, and only worsens as a user base grows and diversifies.” The second reason is that the moderation algorithms themselves are inadequate: companies must moderate content at scale while still ensuring that profits are earned. Platforms such as Facebook therefore need to optimize their automated analysis and coordinate with human content reviewers to reduce disparities in the treatment of racist speech.
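
One way a platform could begin to check for such disparities is to audit removal rates across user groups on otherwise comparable content. The sketch below is a hypothetical audit calculation with invented sample data; it is not Facebook’s actual methodology, and a real audit would have to control for content type, report volume, and other confounders.

    # Hypothetical audit: compare content-removal rates across user groups.
    # The sample data is invented purely for illustration.
    from collections import defaultdict

    # (user_group, was_removed) pairs for a sample of comparable posts
    moderation_log = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", False), ("group_b", False), ("group_b", True),
    ]

    totals = defaultdict(int)
    removed = defaultdict(int)
    for group, was_removed in moderation_log:
        totals[group] += 1
        removed[group] += was_removed  # bool counts as 0 or 1

    for group in sorted(totals):
        print(f"{group}: removal rate {removed[group] / totals[group]:.0%}")

A persistent gap between groups on comparable content would be a signal of the kind of disproportionate removal that Haimson et al. (2021) report.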

“Facebook at Mozcon – Alex” by Thos003 is licensed under CC BY 2.0.

The Influence of Content Moderation on Marginalized Populations

For Black users of TikTok in China, content moderation has helped to reduce the reach of cyberbullying, allowing them to affirm their marginalized identities more openly in public online space.

But for Black users of Facebook, biased and unequal moderation and content removal limit their ability to post content relevant to their marginalized identities, adding restrictions and inequality to their participation in the public sphere and to their online participation generally (Haimson et al., 2021). They therefore tend to face unequal treatment whenever their content is reviewed. As the connecting link between users and the platform, content moderation done badly can breed more hateful speech, producing worse discrimination and conflict between races.

Conclusion: Content Moderation Is Important to a Platform

Moderating the content on a platform is therefore very important. First, moderation needs to remove and resist hateful speech, discriminatory speech, fake news, and other harmful, toxic content, to reduce users’ psychological distress, depression, and violent behavior. Gongane’s team likewise shows that leaving online speech and comments unmoderated can lead to psychological breakdowns among users, and argues that platforms must vet and censor such content: “This catastrophic negative impact of social media on the society necessitates the dire need of detrimental content detection and moderation” (Gongane et al., 2022).

Second, content moderation is a service the platform provides to its users, so the platform’s attitude determines the space users have to live in. On Facebook, as shown above, racial speech in particular is treated unevenly, and this restricts marginalized users: an unfair outcome. Moderation of racial speech therefore needs to pay more attention to fairness and respect, and to give users constructive feedback, so that marginalized users, such as Black women, have opportunities and space to participate, and so that the platform’s influence and reputation grow.

Discussion

This article has discussed the importance of moderating racial speech on platforms and how moderation affects marginalized populations, but it does not address the ethics of manual content moderation or the limitations of AI content moderation. Content moderation is a long-term endeavor that must be sustained across a platform in order to provide a peaceful online environment and respect the human rights of users.


Bibliography

Bottaro, A. (2022, April 22). Cyberbullying: Negative effects and how you can stop it. Verywell Health. https://www.verywellhealth.com/cyberbullying-effects-and-what-to-do-5220584

China: Combat Anti-Black racism on social media. (2023, August 16). Human Rights Watch. https://www.hrw.org/news/2023/08/16/china-combat-anti-black-racism-social-media

[WION]. Facebook selective in curbing hate speech, mis-information and inflammatory posts | Social media. (2021, October 24). YouTube. https://youtu.be/GIcxH2_GJe8?si=VAMDMqYjv-3MSIuL

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press. https://doi.org/10.12987/9780300235029

Gongane, V. U., Munot, M. V., & Anuse, A. D. (2022). Detection and moderation of detrimental content on social media platforms: Current status and future directions. Social Network Analysis and Mining, 12(1). https://doi.org/10.1007/s13278-022-00951-3

Grassegger, H., & Angwin, J. (2017, June 28). Facebook’s secret censorship rules protect white men from hate speech but not Black children. ProPublica. https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms

Haimson, O. L., Delmonaco, D., Nie, P., & Wegner, A. (2021). Disproportionate removals and differing content moderation experiences for conservative, transgender, and Black social media users: Marginalization and moderation gray areas. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-35. https://doi.org/10.1145/3479610

Jawando on Facebook content moderation practices: ‘Hate speech is not protected speech’. (2020, August 27). YouTube. https://youtu.be/jxaDc97sYio?si=JMgEH2JnPSgn1aoi

Laub, Z. (2019, April 11). Hate speech on social media: Global comparisons. Council on Foreign Relations. https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons#chapter-title-0-2

Siapera, E. (2021). AI content moderation, racism and (de)Coloniality. International Journal of Bullying Prevention, 4(1), 55-65. https://doi.org/10.1007/s42380-021-00105-7

[Imagga]. What is content moderation? Types of content moderation, tools and more. (2021, December 2). YouTube. https://youtu.be/SblTjhosI5o?si=JAi9Tb5Gub9IhFum

Yoshimoto, E. (2020, May 13). Supervision or suppression? How content moderation can uphold racism. The Governance Post. https://www.thegovernancepost.org/2020/05/supervision-or-suppression-how-content-moderation-can-uphold-racism/

抖音用户服务协议 [Douyin User Service Agreement]. (n.d.). https://www.douyin.com/draft/douyin_agreement/douyin_agreement_user.html?id=6773906068725565448

论对种族歧视言论的规制 [On the regulation of racially discriminatory speech]. (n.d.). 中国人权网 [China Human Rights Network]. https://www.humanrights.cn/html/2023/zxyq_0110/69033.html
