

With the advent of the Web 2.0 era, the Internet, as a pioneering force for innovation and development, has become an important engine of economic growth and a significant part of citizens’ cultural life. However, as major platforms and websites have grown, harmful content such as violence, pornography and hate speech has spread across them, and how to regulate this undesirable content has become a subject of wide public debate. The Internet is far more challenging to regulate than traditional media: in the Web 2.0 era, the mode of information production and dissemination has changed, and user-generated content, which spreads quickly and widely, makes up most of what appears on social media and the wider Internet (Kenton, 2022). In this situation, it is not enough to rely on the government alone. This essay discusses how violent content on the Internet can be curbed from three angles: government regulation, self-regulation by the Internet industry, and user participation and supervision.
·Government regulation
In the broader Internet environment, the spread of vulgar, violent, pornographic and other undesirable content can cause varying degrees of harm and even endanger social security. This is especially true for children, whose values are still forming: such content may lead them to withdraw from reality, develop mental health problems, and form extreme tendencies that hinder their healthy physical and mental development. Therefore, as the central defender of the public interest, the government should take primary responsibility for governance and supervision.
On the one hand, the government should set standards for undesirable content. As Linz (1986) notes, existing studies offer no data on whether particular types of violence desensitize viewers, so there are no clear rules about which kinds of images need to be regulated, even though objectionable content should be restricted. The government should therefore take the lead in defining standards, carefully balancing the competing interests of public rights, freedom of expression, and youth protection to produce a set of regulatory standards suited to different situations.
On the other hand, restricting particular groups’ access to violent content is also an effective way to curb its spread. Governments and citizens may be more willing to restrict how certain groups are exposed to violent content if they believe that media violence does not directly cause crime (Linz, 1986). For example, the Federal Communications Commission (FCC) has issued guidelines on advertising and video aimed at minors, and the head of the FCC adjusted television scheduling so that programs unsuitable for family viewing could not be broadcast in prime time (cited in Linz, 1986). Moreover, a rating system or a dedicated real-name system could be adopted for offensive and violent content to distinguish between minors and adults. It is worth noting that safety experts in the UK believe that gory, violent and pornographic content can cause severe physical and psychological damage to adolescents’ development, and that keeping children away from such material is vital because exposure to it may lead adolescents to form extreme or inappropriate sexual fantasies (Grant, 2022).
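To make the idea of an age-rating gate concrete, the following is a minimal sketch, not any platform’s actual implementation; the class names, the age threshold and the rule that unverified accounts are treated as minors are all assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Content:
    title: str
    min_age: int  # e.g. 18 for graphic violence, 0 for general audiences

@dataclass
class User:
    name: str
    verified_age: Optional[int]  # None if the account has not verified an age

def can_view(user: User, content: Content) -> bool:
    """Allow access only when the account's verified age meets the content's rating."""
    if content.min_age == 0:
        return True
    if user.verified_age is None:
        return False  # assumption: unverified accounts are treated as minors
    return user.verified_age >= content.min_age

# Example: a verified adult passes, while a minor and an unverified account are blocked.
clip = Content("graphic news footage", min_age=18)
print(can_view(User("adult", 34), clip))         # True
print(can_view(User("teen", 15), clip))          # False
print(can_view(User("unverified", None), clip))  # False
```

In this sketch, the policy question the essay raises (who may see what, and how age is established) reduces to a single check at the point of access; the difficult parts in practice are defining the ratings and verifying age reliably.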
Thus, even though government resources may be limited, regulating online violence remains the government’s responsibility. Establishing uniform standards for violent content and determining the appropriate scope of regulation are the primary ways for the government to curb undesirable content.
·Self-regulation of the Internet industry
The Internet’s vast volume, complexity and technical characteristics make direct government regulation very difficult (Trudel, 2009), so industry self-regulation and autonomy become essential to stopping undesirable content. Social media platforms have a close relationship with users and serve as a bridge between the public, law enforcement, and regulators. Because platforms aggregate the information citizens view, they allow regulators to govern information through the platform’s intermediary role (Gillespie, 2017). Many Internet service providers have their own user agreements, community codes, and other rules, which they enforce through user complaints and keyword searches, including rules against violent content. For example, Trump’s Twitter posts were widely perceived as inciting radical militants, and Twitter and Facebook subsequently banned him from their platforms because promoting violent and criminal behaviour violated both platforms’ terms of service (Cusumano et al., 2021). This shows that industry self-regulation can be more immediate than direct government intervention.
Research by Gillespie (2017) suggests that platforms use their own rules to govern users’ illegal content and behaviour, relying more on algorithm-based intervention than on laws and regulations. However, platforms can only continually moderate harmful content; they cannot eliminate it completely. Many countries also control violent content on the Internet mainly through rating systems, intelligent identification, and filtering. Besides the well-known rating systems of Western countries, Asian countries are developing technical systems to control the spread of undesirable content; for example, Singapore’s “Home Internet System” and Japan’s official filtering software “FILTERing” are designed to filter pornography, violence and other objectionable information (Kaspersky, n.d.; OpenNet Initiative, n.d.).
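The keyword searching and filtering mentioned above can be illustrated with a minimal sketch; the word list, the review threshold and the function name are invented for illustration and are far simpler than the large, continually updated term lists and machine-learning classifiers real platforms use.

```python
import re

# Illustrative list of flagged terms (assumption, not drawn from any platform's rules).
FLAGGED_TERMS = {"kill", "shoot", "stab"}
REVIEW_THRESHOLD = 1  # number of matches before a post is held for human review

def flag_for_review(post: str) -> bool:
    """Return True if the post contains enough flagged terms to warrant review."""
    words = re.findall(r"[a-z']+", post.lower())
    matches = sum(1 for word in words if word in FLAGGED_TERMS)
    return matches >= REVIEW_THRESHOLD

posts = [
    "Great match last night!",
    "I will shoot anyone who disagrees",
]
for post in posts:
    status = "held for review" if flag_for_review(post) else "published"
    print(f"{status}: {post}")
```

Even this toy example shows why the essay stresses that platforms can only continually moderate rather than eliminate harmful content: keyword matching misses paraphrase and context, so flagged posts still need human or algorithmic review downstream.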
Thus, platforms have the advantage of being able to act or react faster and more flexibly than government regulation. They must therefore continually improve their rules and systems, balancing the strength of moderation against the need to keep innovating technologically. This requires platforms to stay keenly and quickly attuned to their data and to shifts in online trends.
·User participation and monitoring
Harmful content on the Internet harms mentally immature minors, and adults are sometimes affected by violent content as well. Users therefore also need to strengthen their sense of responsibility, improve their self-discipline, and take part in governing violent content. On the one hand, users should discipline themselves strictly and not take part in producing or disseminating undesirable content; on the other, they can monitor violent information by reporting it and filing complaints. Users can also spontaneously form social monitoring organizations to collectively promote the governance of violent content online. For example, the Internet Watch Foundation is a nonprofit organization of more than seventy people working across various fields to stop violent and abusive content, aiming to help sexually abused children by tracking down and removing abusive videos circulating on the Internet; it operates on funding from community donations (Internet Watch Foundation, n.d.). Thus, even individual users can help stop harmful information. Moreover, users with different cultural backgrounds and education levels bring different perspectives to data feedback and monitoring, which strengthens the role of user supervision.
To sum up, in the face of the rapid development of Internet technology, the spread of violent content on the Internet must be stopped. Given the characteristics of Web 2.0, regulating undesirable content requires the cooperation of many parties: government, industry, society and users all need to participate. The government refines the standards of judgment and related regulations; the industry and platforms rely on their intermediary role to quickly intercept or delete undesirable information; and supervision by users and society narrows the range of dissemination. Although governance along these three dimensions cannot wholly eliminate undesirable violent content, it can effectively curb its spread and keep it within a limited range.
·References
Cusumano, M. A., Gawer, A., & Yoffie, D. B. (2021, January 15). Social media companies should self-regulate. Now. Harvard Business Review. https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
Gillespie, T. (2017). Regulation of and by platforms. In The SAGE handbook of social media (pp. 254–278). SAGE Publications.
Grant, H. (2022, April 1). Age checks needed urgently to protect children from online porn, say charities. The Guardian. https://www.theguardian.com/global-development/2022/mar/31/age-checks-needed-urgently-to-protect-children-from-online-porn-say-charities
Internet Watch Foundation. (n.d.). Why we exist. Retrieved October 13, 2022, from https://www.iwf.org.uk/about-us/why-we-exist/
Kaspersky. (n.d.). Content filtering in Japan. Retrieved October 13, 2022, from https://support.kaspersky.com/KISA/Android_11.47/en-US/194555.htm
Kenton, W. (2022, October 11). What is Web 2.0? Definition, impact, and examples. Investopedia. https://www.investopedia.com/terms/w/web-20.asp
Linz, D. (1986). Issues bearing on the legal regulation of violent and sexually violent media. Journal of Social Issues, 42(3), 171–193. https://doi.org/10.1111/j.1540-4560.1986.tb00249.x
OpenNet Initiative. (n.d.). Internet filtering in Singapore in 2004-2005: A country study. Retrieved October 13, 2022, from https://opennet.net/studies/singapore
Trudel, P. (2009). Web 2.0 regulation: A risk management process. SSRN Electronic Journal, 7(243). https://doi.org/10.2139/ssrn.1573980