The Internet plays an increasingly important role in China’s public life, and China’s Internet regulation policies differ markedly from those of Western countries; like its political and economic system, China’s Internet policy has its own distinctive features. Unlike technology companies protected by “safe harbor” regulations in many Western countries (Flew et al., 2019), the state plays an intrusive role in the process of social platformization in China (de Kloet et al., 2019). This means that China’s network regulation is inseparable from state control, and platforms are subject to direct supervision and interference by the state. This paper takes Sina Weibo, the Chinese social platform with the largest number of daily active users, as an example to discuss whether China’s Internet regulation policies are more restrictive or protective of people.
The history of China’s Internet regulatory development:
The historical development of network regulation in China can be traced back to the early stages of the Internet’s entry into the Chinese market. China’s first connection to the Internet took place in 1994, and since the 1990s a model of centralized leadership and professional governance has emerged in China’s communications management (Miao et al., 2018). Barme and Sang first mentioned the concept of China’s firewall in a 1997 issue of Wired magazine, and in the early 2000s China began to implement the Golden Shield (Jindun) Project to filter and censor undesirable information on the Internet. By 2021, the Chinese government had further strengthened its regulation of the Internet and technology industries, including antitrust investigations and audits of large domestic technology companies.
Flew et al. (2019) describe the concept and purpose of moderation: “Moderation is the screening, evaluation, categorization, approval or removal/hiding of online content according to relevant communications and publishing policies. It seeks to support and enforce positive communications behaviour online, and to minimize aggression and anti-social behaviour.” Moderation allows people to see content online that is healthy, positive and in line with social values, and removes non-compliant content, but its criteria have not been standardized, which has had negative effects to some extent. This is particularly true of online regulation in China, where Internet policy is an expression of the government’s desires and position (Miao et al., 2021): platforms cooperate closely with government agencies, online regulation is heavily shaped by policy, and the intensity of network supervision has always been at the center of controversy.
Sina Weibo’s real-name system and auditing mechanism as examples:
Sina Weibo is a Chinese social media platform launched in 2009. As a form of digital new media built on networked communication, Weibo’s openness, convenience and interactivity make it a communication platform well suited to the fast pace of modern life. Under the influence of national policies, Sina mandates real-name registration to ensure that the account holder’s registered identity matches their real identity as a citizen, and in the past two years Sina Weibo has also developed a teenage version based on the age of registered users. The intent of requiring real-name registration was to prevent false identities and abuse, protect citizens’ private property, facilitate liability and legal tracing, improve information credibility, reduce cyberbullying and malicious behavior, protect teenagers, and maintain online information security and social stability.
However, since the release of the policy in 2012, attitudes have been skeptical: according to a survey at the time, 64% of users did not support the real-name policy for Weibo, and 78% were concerned about their personal data being leaked (Fong et al., 2012). As a matter of fact, the real-name system does have the potential to infringe on personal privacy, because it requires users to provide real identity information. And people’s fear of personal data leakage is not unwarranted, as leakage of private information is a common problem for social media platforms: Facebook’s Cambridge Analytica scandal, for example, leaked millions of users’ personal data without their permission. This can cause considerable concern among users about risks to the security of their data and about relevant organizations abusing their power to use the collected personal information for improper purposes. It can be seen that if the real-name policy and the regulation of the relevant organizations are not thorough, there is a high risk that users’ data will be leaked and used illegally, even though the real-name system was intended as a protective measure when it was first introduced.
Protection and restriction are also at stake in Sina Weibo’s auditing system. After a user posts content, it is first automatically screened and filtered; any sensitive content that is detected is sent for manual audit, and the user is notified. If the content does not pass the audit and the user files a complaint, the content is audited again.
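The workflow described above can be sketched in pseudocode-like Python. This is purely an illustration of the described steps (automatic filter, manual audit with notification, appeal triggering re-audit); the rule set, function names, and decisions are hypothetical and do not reflect Sina Weibo’s actual implementation.

```python
# Hypothetical sketch of the described moderation workflow.
# SENSITIVE_TERMS is a placeholder rule set, not a real filter list.
SENSITIVE_TERMS = {"banned_word"}

def automatic_filter(text: str) -> bool:
    """Return True if the post passes the automatic screen."""
    return not any(term in text for term in SENSITIVE_TERMS)

def manual_review(text: str) -> bool:
    """Stand-in for a human moderator's decision (always rejects here)."""
    return False

def moderate(text: str, appealed: bool = False) -> str:
    if automatic_filter(text):
        return "published"
    # Flagged content goes to manual audit; the user is notified.
    if manual_review(text):
        return "published"
    if appealed:
        # A user complaint triggers a second audit of the same post.
        return "published" if manual_review(text) else "removed"
    return "rejected (user notified, may appeal)"
```

In this sketch the appeal path simply re-runs the same review step; in practice the second audit would presumably involve different reviewers or stricter criteria, details the essay does not specify.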
The audit mechanism filters harmful information to a certain extent, prevents the spread of false information, and gives users a better browsing experience. However, the auditing mechanism itself acts as a filter bubble: it easily produces a one-size-fits-all outcome, filtering out a great deal of good content and forming a kind of information cocoon that deprives the platform of information diversity. Especially given the historical and specific nature of the Chinese experience (de Kloet et al., 2019), politically relevant content is difficult to publish successfully on the Weibo platform, so the auditing mechanism sacrifices people’s freedom of expression and freedom of creation to a certain extent. The resulting lack of information diversity makes it harder for social inequalities to be seen and for minority voices to be heard, which can deepen the marginalization of minorities, foster political polarization, and undermine informed public discourse. While censorship can optimize the online environment and reduce the amount of harmful information, it also tends to create information cocoons in which, over the long run, people are more likely to lose their ability to think independently.
As discussed above, in the case of the social platform Sina Weibo, online regulation combines state intervention and institutional self-censorship: real-name policies and content audits. For Chinese netizens, China’s online regulation has gone a long way toward protecting the security of citizens’ identities and ensuring a healthy online environment, but under the current imperfect policies and mechanisms it still sacrifices citizens’ freedom of expression and leaves personal privacy incompletely protected. Overall, the government still needs to improve its policies to encourage diversity and multiculturalism in regulation, reduce digital exclusion and, at the societal level, encourage inclusiveness. On the other hand, institutions and platforms need algorithmic and technological innovation to break open the information cocoon and reduce algorithmic bias, so that people are exposed to more diverse information.
Barme, G. R., & Ye, S. (1997, June 1). The Great Firewall of China. WIRED. https://www.wired.com/1997/06/china-3/
de Kloet, J., Poell, T., Guohua, Z., & Yiu Fai, C. (2019). The platformization of Chinese Society: infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. https://doi.org/10.1080/17544750.2019.1644008
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Fong, S., Zhuang, Y., Lu, L., & Tang, R. (2012). Analysis of general opinions about Sina Weibo micro-blog real-name verification system. 1st International Conference on Future Generation Communication Technologies, FGCT 2012, 83–88. https://doi.org/10.1109/FGCT.2012.6476588
GCFLearnFree. (2018, November 29). How Filter Bubbles Isolate You. YouTube. https://www.youtube.com/watch?v=pT-k1kDIRnw
Miao, W., Zhu, H., & Chen, Z. (2018). Who’s in charge of regulating the Internet in China: The history and evolution of China’s Internet regulatory agencies. China Media Research, 14(3), 1+. https://link.gale.com/apps/doc/A549658139/AONE?u=usyd&sid=bookmark-AONE&xid=1946e707
Miao, W., Jiang, M., & Pang, Y. (2021). Historicizing Internet regulation in China: A meta-analysis of Chinese Internet policies (1994–2017). International Journal of Communication, 2003+. https://link.gale.com/apps/doc/A665415430/AONE?u=usyd&sid=bookmark-AONE&xid=4408805f
The Guardian. (2018, March 20). What is the Cambridge Analytica scandal? YouTube. https://www.youtube.com/watch?v=Q91nvbJSmS4
Does Chinese online regulatory policy restrict or protect people? © 2023 by Siyang Xiong is licensed under CC BY-NC-ND 4.0