Bullying, harassment, violent content, hate speech, pornography and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

As a carrier of technological development, the digital platform has evolved from a standalone technology into one that is symbiotic with today's political economy and culture. Large numbers of Internet users rely on digital platforms to learn about their countries' economics and politics. Moreover, the dawn of the “Web 2.0” era shifted digital platforms from the sole control of tech companies and governments to a model in which every web user can participate. As a result, the freedom and creativity of digital platforms have increased significantly in the “Web 2.0” era. However, this increased freedom reduces their controllability and security to a certain extent, most visibly in the spread of bullying and violent content. How to control the spread of such harmful information, and who is responsible for it when it is posted on digital platforms, is the focus of this article.

 

First, what counts as violent, pornographic or harassing content needs to be defined before we can examine who is responsible for such content posted on digital platforms. According to some parents in China, “pornographic content” may include scenes as mild as kissing in American TV dramas. In addition, the criteria for defining violent content vary from country to country. For example, some NPC characters in the MOBA mobile game King of Glory rode motorcycles without helmets, and those characters were reported and had to be revised before launch. Therefore, before addressing inappropriate content on digital platforms, we should first determine how to develop sound rules for evaluating the content published on them.

In the “Web 2.0” era, three groups govern and use digital platforms:

  • Government
  • Digital platform R&D company
  • Internet users

The government dominates the control and regulation of harmful information on digital platforms (Gillespie, 2017), and pornography and violence on digital platforms are held to broadly similar standards in most countries. Gillespie (2017) pointed out that the main victims of inappropriate content on these platforms are adolescents and children. Adolescents are the hope and future of a nation, so every country needs to address the cybersecurity problems that harm the physical and mental health of young people. China's campaign to clean up the Internet is one example. We cannot ignore that the Chinese government's rectification measures also serve Chinese politics: cannabis, for instance, is banned in China but may be legally sold in some parts of the United States, so an Internet user may sell marijuana in parts of the United States yet break the law by advertising it online in China. Even so, China's net-cleaning campaign has played a major role in regulating online content. Drawing on China's net-cleaning policy, a government can manage the content of digital platforms in the following ways:

  • Publish laws governing digital platform content. For example, the government should formulate corresponding cyber human rights laws targeting the online violence that often accompanies harmful information spread on digital platforms.
  • Take tougher measures, such as forcibly shutting down illegal websites and requiring real-name identity authentication. Accounts can be blocked by tracing the IP addresses from which harmful information was posted; on Weibo, for example, a netizen's IP location can be queried, so when someone who posts harmful information is traced, the government has the power to permanently ban the accounts they own.
  • Use the media to guide public opinion. As a leader of social thought, the government should work with the media to purify the content of digital platforms.

However, the first consideration in implementing government policy is control of the overall social order, and each country has its own social character and its own order. Addressing and reducing harmful information on digital platforms is therefore not just a matter for government. For some authoritarian governments, the main purpose of controlling information on digital platforms is not to protect the mental health of young people by removing harmful content; these regimes use their mastery of information to rule citizens' minds. Relying solely on government regulation thus cannot reasonably define bullying, violence and pornography on digital platforms.

 

Digital platform R&D companies occupy the middle ground in managing digital platforms: Internet users publish information on the platforms these companies create, and the government formulates cyber laws for the information published on them. These companies should therefore focus on search engines and program algorithms that detect sensitive words and harmful content in the information base. On Twitter, for example, inappropriate content triggers sensitive-word reminders; web users are subject to Twitter's algorithm when posting such content and may be unable to publish it to the platform. However, digital platform R&D companies are private firms, and private firms exist to maximise profit. Violent and pornographic content attracts more views because of Internet users' curiosity, and large numbers of page views make a platform company more profitable. If a digital platform R&D company were solely responsible for managing the harmful information published on its platform, the evaluation criteria for such information might therefore be lowered, and Internet users could exploit algorithmic loopholes deliberately left by the company to spread harmful information. Digital platform R&D companies should instead sign network supervision cooperation agreements with the government. Such agreements exist to balance the evaluation standards of R&D companies and the government for content such as violence, pornography and harassment. Through this cooperation, the standards defining inappropriate content on digital platforms can be kept within a reasonable range.
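The sensitive-word screening described above can be illustrated with a minimal sketch. This is not Twitter's or any platform's real algorithm; the blocked-term list, the `moderate` function and its return format are illustrative assumptions only, and real systems combine far more sophisticated signals.

```python
import re

# Placeholder list of blocked terms — an assumption for illustration,
# not any platform's actual policy vocabulary.
BLOCKED_TERMS = {"badword1", "badword2"}

def moderate(post: str) -> tuple:
    """Return (allowed, matched_terms) for a candidate post.

    Tokenises the post into lowercase words and checks each against
    the blocked-term set; a post is allowed only if no term matches.
    """
    words = re.findall(r"[a-z0-9']+", post.lower())
    matched = sorted(BLOCKED_TERMS.intersection(words))
    return (len(matched) == 0, matched)

allowed, hits = moderate("This post contains badword1.")
# allowed is False and hits lists the matching term, so the platform
# could refuse publication or show a sensitive-word reminder.
```

In practice a company tuning such a filter alone could quietly shrink `BLOCKED_TERMS` to boost page views, which is exactly why the essay argues the evaluation standards should be agreed with the government rather than set unilaterally.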

As the largest group in the development of digital platforms, Internet users are important disseminators and creators of platform content. At the same time, the vast majority of inappropriate content on digital platforms comes from individuals. Kenton (2022) pointed out that information on digital platforms with high personal participation spreads rapidly and uncontrollably. Examples include Korean netizens' online violence against Korean artists and the initiators of the Korean Nth Room incident, both cases of pornographic information and cyber violence created and disseminated through individual behaviour. Internet users should therefore improve their own codes of online conduct when browsing and creating content on digital platforms. However, content on digital platforms cannot be managed solely by the self-control of individual web users: Internet users have a high degree of freedom, and individual goodwill alone cannot produce a large-scale reduction of the harmful information published on digital platforms.

 

Hence the question: who is responsible for harmful content that appears on digital platforms? This paper argues that the three parties in the digital platform ecosystem should be responsible for its content simultaneously. It starts with tripartite collaboration to define what constitutes violence, pornography and harassment in each country's specific context. Internet users should regulate their own online behaviour and raise their awareness of cyber ethics. Digital platform R&D companies should use algorithms to block harmful content on their platforms. The government should issue network security laws and policies that prohibit the dissemination of harmful information at the legal level. Most importantly, the government, digital platform R&D companies and Internet users should share responsibility for the information published on digital platforms.

 

Reference List

Kenton, W. (2022, October 11). What is web 2.0? Definition, impact, and examples. Investopedia. https://www.investopedia.com/terms/w/web-20.asp

Gillespie, T. (2017). Regulation of and by platforms. In The SAGE Handbook of Social Media (pp. 254–278). SAGE.

Vox. (2014). Why the government should provide internet access [Video]. YouTube. https://www.youtube.com/watch?v=QOHCJtrQWTU

Xinhuanet. (n.d.). Focus on “Clean Network 2021” and focus on action. Retrieved October 15, 2022, from http://www.xinhuanet.com/politics/ldzt/jw2021/index.htm