Does the content moderation of digital platforms such as Douyin (the Chinese version of TikTok) and YouTube maintain community safety, or does it block users' freedom and rights? Should content moderation systems be more transparent?

The amount of user-generated content on social media platforms has increased dramatically, and managing that content has become strict and complex. Content that seems to be uploaded with a single click must first pass through a moderation process governed by various regulations before it appears on the platform. Whether moderation is performed manually or by increasingly dominant artificial intelligence, it has provoked debate about a fundamental human right: freedom of speech. Taking Douyin (the Chinese version of TikTok) and YouTube as examples, this essay explores whether platform review of published content is an essential step in protecting the safety of online communities and users, or a negative measure that violates users' rights, and discusses the crucial role that transparency of the content moderation system plays between these two positions.

The purpose of content moderation

Content moderation on social media platforms is a policing mechanism that builds community engagement to promote collaboration and prevent misuse (Grimmelmann, 2015, p. 6, as cited in Gorwa et al., 2020). Online social media platforms are often imagined as utopian spaces (Gillespie, 2018). The speed with which content can be published makes these platforms appear free and unconstrained, yet that same openness allows perilous, pornographic and harmful speech to pose a hidden danger to platform security (Gillespie, 2018). Content moderation has therefore become an important means of rectifying content: it identifies and filters what is suitable according to legal requirements, cultural traditions and government regulations, shaping how messages are exchanged and how users act (Zeng & Kaye, 2022).

How Douyin and YouTube keep online communities safe through content moderation

"TikTok" by Solen Feyissa is licensed under CC BY-SA 2.0 DEED

Douyin is a short-video social media platform that can only be used in China, and its content moderation must strictly follow rules formulated by the government of the People's Republic of China. During the 2022 Winter Olympics in China, Douyin removed or blocked 6,780 videos and comments that insulted athletes or spread false information (The Standard, 2022). Douyin's content moderation not only ensured that no negative information violating the regulations of the country and the platform appeared in the online environment, but also safeguarded the country's image at an important national moment, maintaining a healthy ideology and ethos on the platform.

"YouTube Logo" by Rego Korosi is licensed under CC BY-SA 2.0 DEED

YouTube is a video social media platform available in more than 100 countries, and its content moderation is based mainly on its own community standards. According to the community guidelines provided by YouTube (n.d.), its policies vary slightly with the requirements of different countries but tolerate differing user opinions to some degree, a governance approach that differs from Douyin's. In 2022, YouTube updated its moderation measures and barred users who violate the rules from using the comment function for the following 24 hours. Meanwhile, with the help of artificial intelligence, YouTube removed 1.1 billion spam comments in half a year (Sushon, 2022). Clearly, without strict content moderation on YouTube, hundreds of millions of pieces of rule-violating content would continue to be disseminated, plunging the platform into chaos.

Content moderation on these platforms is constantly being strengthened and their regulations continually improved, which indicates that the platforms attach great importance to the security of their online communities and affirms the role of content moderation in keeping online platforms safe.

Content moderation sparks controversy over human rights of users

Social media platforms are often criticized over issues related to users' rights because content moderation is seen as limiting freedom of speech. Some users believe they have the right to publish any content, whether positive, negative or unrecognized, in an online world with an audience (Gillespie, 2018).


China rigorously examines online content that goes against "core socialist values" (The Standard, 2022). Douyin's company takes the values of the Chinese government and the Communist Party as its guidance and proactively reviews user content (IPVM Team, 2022). As a result, thoughts and speech that deviate from these regulations are not allowed to appear on Douyin, causing public anxiety about freedom of speech (IPVM Team, 2022).

(Norris, 2022)

Douyin's content moderation has been questioned as an excessive review mechanism that exercises strict control over online topics and political content. As a result, dissenting views cannot be seen on Douyin.


YouTube also faces constant criticism from users about freedom of speech. In 2023, YouTube determined that the channel String violated its content guidelines and removed it, bringing the topic of freedom of speech to a head. According to Paul (2023), many users supported the String bloggers on Twitter and questioned YouTube's moderation system. This approach of deleting content and disabling accounts at scale can limit basic human rights (Kozyreva et al., 2023). People take different stances on the same online content, and this is the most difficult point to govern: speech the platform considers harmful will be viewed differently by its creator and by those who share the creator's stance.

(Pandey, 2023)

AI content moderation

AI content moderation has become the most important review method on today's platforms (Kozyreva et al., 2023). According to Gorwa et al. (2020), civil society groups often worry that AI moderation systems cannot make correct judgments effectively and overly restrict users' online expression. In 2020, YouTube was hit with a class-action lawsuit alleging racial and gender discrimination in its moderation algorithm. In August 2023, the court dismissed the lawsuit for insufficient evidence, but acknowledged that YouTube's automatic moderation system may be biased (Bonilla, 2023). This incident reflects the uncertainty of algorithms and the fact that people do not trust AI moderation systems.

Moderation is of great importance, but it does not hinder freedom of speech

Although the governance measures of platforms are controversial, it is undeniable that negative comments, violence and harmful behavior will exist in any online space that allows users to interact and transmit information to others (Gorwa et al., 2020). Freedom of speech does not mean unrestricted expression on the Internet; restrictive factors remain, depending on each country's regulations and policies. Content moderation is not an encroachment on users' rights. Instead, it protects users and provides them with a more secure online platform.

Although Douyin has implemented a tough moderation system, for an online user base drawn from all over China this system effectively ensures the health of the community's content and environment. In the context of China's political and institutional environment, this healthy online environment further maintains the unity of the country and its people, preventing the spread of regional bias or harmful content that violates national values on the Internet.

Similarly, the necessity of content moderation is also reflected in the security governance of YouTube's online community. AI content moderation systems identified and flagged 98% of the violent extremist videos that YouTube deleted (Gorwa et al., 2020). The aim of content moderation is only to identify harmful content; it does not interfere with normal speech. It effectively cleans up undesirable content before it can cause negative impacts in an online world where information flows quickly.

The online world is not an uncontrolled space, as the content regulations and requirements of YouTube and Douyin make clear. Although freedom of speech is a right of every person, as stipulated in the International Covenant on Civil and Political Rights, people must still take responsibility for their speech, and restrictions exist (United Nations, 1966). According to the Covenant (United Nations, 1966), freedom of expression may be restricted by law in the following circumstances:

“For respect of the rights or reputations of others” (United Nations, 1966, art. 19).

“For the protection of national security or of public order (ordre public), or of public health or morals” (United Nations, 1966, art. 19).

Users' rights to free speech are not being deliberately infringed, nor is speech arbitrarily deleted or banned. Rather, it is necessary to maintain a safe and comfortable environment for users on platforms in accordance with laws and community norms. It is therefore inevitable that platforms protect their users from attacks by others on the Internet and prevent criminal activity through content review (Gillespie, 2018).

Content moderation systems should be more transparent

Content moderation has always been a notoriously obscure and confidential procedure (Gorwa et al., 2020). It is hard for users not to question a moderation system that is incompletely explained and incomprehensible, and it is hardly possible for users to investigate platform liability because of algorithmic opacity and bugs (Zeng & Kaye, 2022). The transparency of content moderation systems is therefore a key point in mitigating the crisis of user trust in social media platforms and the debate over freedom of speech.

In the video, Kalev Leetaru gives examples of the important role of transparency and the reasons why algorithms, including content moderation systems, need to become more transparent in several respects.

(American Thought Leaders-The Epoch Times, 2022)
  • Algorithmic systems for content recommendation and prioritization need to become transparent, so that users can judge whether they are being passively manipulated by algorithms and whether the algorithms are biased in hiding content (American Thought Leaders-The Epoch Times, 2022).
  • A public database of content deleted by the platform should be established and published, to help users better understand the specific details of moderation requirements and so determine whether moderation is fair or whether the algorithm needs improvement (American Thought Leaders-The Epoch Times, 2022).

A more transparent content moderation system allows users to judge independently, based on more specific reasons for deletion and a transparent review process, whether their rights and freedom of speech have been improperly violated, effectively preventing the negative reviews and free-speech debates that arise from users' doubt and speculation. Meanwhile, users also gain an opportunity to give clear and direct feedback to social media platforms. Understanding and appropriately adopting user preferences is an important way for platforms to gain users' trust in their content moderation measures and rules (Kozyreva et al., 2023).


It is a difficult task for social media platforms to build a mechanism that balances content moderation and freedom of speech (Gillespie, 2018). Douyin and YouTube, operating in different countries, apply different standards of content moderation, and in both cases moderation has played its rightful role. However, the debate over freedom of speech cannot be avoided, because moderation and unrestricted speech cannot both be respected and satisfied at the same time (Kozyreva et al., 2023). Making the content moderation system more transparent is a key measure to help social media platforms resolve the conflict between content moderation and user rights, providing users with a more trustworthy, fair and institutionally transparent online community.

Jiawen Jia

7 October 2023


American Thought Leaders-The Epoch Times. (2022). Big Tech Companies Should Release Algorithms for Content Moderation | CLIP [Video]. YouTube. https://www.youtube.com/watch?app=desktop&v=u55Ny7UipRM

Bonilla, A. (2023, August 25). Federal court tosses lawsuit claiming YouTube algorithms discriminate. HYPEBOT. https://www.hypebot.com/hypebot/2023/08/federal-court-tosses-lawsuit-claiming-youtube-algorithms-discriminate.html

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Grimmelmann, J. (2015). The virtues of moderation. Yale Journal of Law & Technology, 17, 42.

IPVM Team. (2022, December 28). How Douyin Censors Anti-PRC, Communist Party Content. https://ipvm.com/reports/douyin-prc-censorship

Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (2023). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences, 120(7), e2210666120. https://doi.org/10.1073/pnas.2210666120

Norris, M. [@briefnorris]. (2022, April 9). In fairness, the excesses of Douyin’s content moderation policy [Tweet]. Twitter. https://twitter.com/briefnorris/status/1512466640608251909

Pandey, K. [@Kartikey_ind]. (2023, October 3). There is no free speech on YouTube [Tweet]. Twitter. https://twitter.com/Kartikey_ind/status/1709083467265536026

Paul, A. (2023, September 22). YouTube takes down Vinodh Kumar’s controversial channel String for violating content policy. Dailyo. https://www.dailyo.in/news/youtube-takes-down-vinodh-kumar-controversial-channel-string-for-violating-content-policy-41627

Sushon, I. (2022, December 14). YouTube will send messages to users who post offensive comments. If that doesn’t help, they get a 24-hour timeout. Mezha. https://mezha.media/en/2022/12/14/youtube-will-send-messages-to-users-who-post-offensive-comments-if-that-doesn-t-help-they-get-a-24-hour-timeout/

The Standard. (2022, February 10). China’s Weibo, Douyin delete thousands of posts over abuse of athletes. https://www.thestandard.com.hk/breaking-news/section/3/186978/China’s-Weibo,-Douyin-delete-thousands-of-posts-over-abuse-of-athletes

United Nations (General Assembly). (1966). International Covenant on Civil and Political Rights. The Office of the High Commissioner for Human Rights. https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights

YouTube. (n.d.). Legal removals. https://www.youtube.com/howyoutubeworks/policies/legal-removals/

Zeng, J., & Kaye, D. B. V. (2022). From content moderation to visibility moderation: A case study of platform governance on TikTok. Policy and Internet, 14(1), 79–95. https://doi.org/10.1002/poi3.287

Whether the content moderation of digital platforms such as the Chinese version of TikTok and YouTube maintains the community safety of the platform or blocks the freedom and rights of users. Should the content moderation system be more transparent? © 2023 by Jiawen Jia is licensed under CC BY-NC-SA 4.0 
