
In today's fast-paced social life, the Internet and social media platforms give users a virtual space in which to socialize and communicate, and they have become one of the main channels through which people send and receive information and follow real-time news from countries and regions around the world. However, some extreme users and criminals, exploiting the transmission speed of digital platforms under the banner of freedom of expression, publish bullying, harassment, violence, hate, pornography, and other problematic content without scruple. Such content not only threatens users' physical and mental health, even causing post-traumatic stress disorder (PTSD) (British Psychological Society, 2015), but also provokes social panic. The livestreamed rape of a teenager in Ohio is a typical example: the spread of the footage traumatized viewers and raised public concern about social security (CBS News, 2016).
Consequently, the malicious spread of problematic content on digital platforms has drawn the attention of government agencies, major media platform companies, and users. As participants in building a healthy Internet environment, all three bear an important responsibility for preventing problematic content from spreading on digital platforms.

State and government agencies: playing the leading role in Internet regulation
As the governing authorities of their countries, government agencies have a responsibility to prevent the spread of problematic content: they act as Internet inspectors and preside over the ethics of content moderation. To maintain a healthy Internet environment and its development, the State Council Information Office of the People's Republic of China (2011) stated that the government directly intervenes in the supervision of Internet content, and that citizens who spread violence, harassment, bullying, and other problematic content will be punished by law. The government's attention to problematic content, and its release of targeted regulations, make Internet platforms and users aware that acts which damage the Internet environment are politically and legally impermissible, and are serious violations that must be condemned and compulsorily punished.
In addition, government agencies lead the establishment of laws and regulations for the content industry, including regulations on Internet news and information services. All social media platforms and citizens must develop and use the Internet lawfully, in accordance with the laws and regulations that government agencies establish against the dissemination of violence, pornography, and other harmful information. Clearly, the development direction and operating policies of media platforms are closely tied to national laws and regulations.
Internet and media platform companies: the enforcers who prevent the spread of problematic content
With the development of the Internet and the expansion of platforms' scope and user bases, media platforms have gradually assumed more responsibility for Internet management and supervision. In the digital age, as the most important channels of information dissemination, they control the information and content that users can browse and receive. In the mid-1990s, lawmakers in various places began to worry about the proliferation of illegal content such as pornography and piracy on the Internet, because it was difficult to hold online "publishers" responsible for illegal acts, so blame fell on Internet service providers instead (Gillespie, 2017). Anonymity and virtuality are inherent in platforms and networks, and some creators who publish problematic content exploit these features to commit illegal acts. When the perpetrators cannot be found, more people focus on the platform's failures in content moderation. As a result, the rules set by platforms are often deeper and more specific than those written into law (Gillespie, 2018). By monitoring and moderating the content users publish, platforms can ensure that high-quality content is presented, increasing both their user numbers and user satisfaction. During review, they can stop the spread of problematic content at its source, thereby enhancing the value of the platform.
As ways of spreading information on the Internet multiply, the body of content to be supervised and moderated has become enormous and complex (O'Hara & Hall, 2018). Therefore, to prevent the spread of such content effectively, major companies and platforms outsource moderation projects to experienced moderation firms. Identifying problematic content through accurate, professional algorithms and through human judgment not only meets legal requirements and avoids the extra policies that would drive customers away, but also reassures advertisers and investors, protects the company's image, respects individual and institutional ethics, and promotes the business value and economic development of the company and the platform. At the same time, the expanding demand for moderation services has fueled the rise of the moderation industry.
Furthermore, the moderation and management policies that media platforms formulate are adjusted to the political and cultural backgrounds of different countries. Although intermediary liability regimes impose national and regional constraints, most platforms remain largely unconstrained (Gillespie, 2017). In many cases, there is no clear boundary between images of nudity or violence that carry global and historical significance and those that do not (Gillespie, 2018). Because countries differ in their political, cultural, and historical backgrounds, their boundaries and definitions of violence, pornography, and similar content differ as well. Formulating policies in this localized way secures the platform's development and market position internationally, and reduces the political and cultural conflicts that the spread of such problematic content can cause across countries and societies.
However, ever more restrictions and regulations cannot avert the loss of users. Algorithms can quickly and sensitively locate images or videos matching keywords related to violence, pornography, and similar categories, but identification errors are unavoidable in automatic review, and some innocent creators' works are wrongly taken down. TikTok's moderation errors led creators to complain about the platform, gradually lose patience and confidence, and switch to other services (Platz, 2021). Hence, the major Internet and media platforms, as the enforcers who prevent the dissemination of problematic content on digital platforms, formulate their own supervision and management policies in accordance with the laws and regulations of governments and relevant Internet governance institutions, and effectively prevent the spread of harmful information such as violence and pornography. Yet excessive intervention in and regulation of user-published content also risks customer churn, so moderation systems need technical adjustment to meet users' needs.
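To make that trade-off concrete, the sketch below illustrates, under stated assumptions, how threshold-based routing can combine automatic review with human moderation. It is a minimal illustration rather than any platform's actual system; the keyword scorer, the thresholds, and all names are hypothetical.

```python
# Minimal sketch of threshold-based moderation routing (hypothetical names and
# values). A classifier scores each post; only near-certain violations are
# removed automatically, while uncertain cases go to human reviewers so that
# fewer innocent creators' works are wrongly taken down.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical: remove without human input
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical: uncertain, queue for a person

@dataclass
class Post:
    post_id: str
    text: str

def classifier_score(post: Post) -> float:
    """Stand-in for a trained model estimating the probability that a post
    contains violent, pornographic, or abusive content."""
    risky_words = {"violence", "porn", "bully"}
    hits = sum(word in post.text.lower() for word in risky_words)
    return min(1.0, hits / len(risky_words))

def route(post: Post) -> str:
    score = classifier_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # automatic takedown
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # a moderator decides, limiting false positives
    return "publish"

if __name__ == "__main__":
    print(route(Post("1", "a cute cat video")))               # publish
    print(route(Post("2", "violence and porn compilation")))  # human_review
```

Raising the automatic-removal threshold shifts uncertain cases to human reviewers, reducing wrongful takedowns at the cost of more review workload; this is the kind of technical adjustment described above.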
Users: interactive participants in Internet construction
Users are the interactive participants of the Internet, and they are indispensable to building a healthy Internet society and environment. As a result, they have both the right and the responsibility to supervise whether platforms and other users abide by the laws and rules of Internet use. Nowadays, the boundary between receivers and senders of information is increasingly blurred: users not only receive all kinds of information while browsing, but also create content and share it on platforms. When users encounter illegal information that the system has failed to identify in time, they can promptly complain and report it, both to protect their own physical and mental health and to maintain a harmonious Internet environment, thereby preventing the problematic content from spreading further. For example, most TikTok users promptly report videos containing vulgar, violent, or otherwise harmful material that appear on their home pages; the videos are then flagged as problematic and given priority for manual review, after which they are quickly removed, and the publishers may even be blocked outright. By participating in the moderation system, users stop the spread of such harmful information, which not only saves review resources but also provides feedback on the platform's content quality and draws the platform's attention.
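A report-escalation flow of the kind just described can be sketched as follows. This is a simplified illustration under assumed names and thresholds, not TikTok's actual pipeline.

```python
# Minimal sketch of a user-report escalation queue (hypothetical names and
# threshold). Videos that accumulate enough user reports are flagged and
# queued for priority manual review, ahead of routine moderation work.

from collections import defaultdict, deque

REPORT_THRESHOLD = 5  # hypothetical number of reports that triggers escalation

report_counts = defaultdict(int)  # video_id -> number of user reports
priority_review = deque()         # escalated videos awaiting a human moderator
escalated = set()                 # avoid queueing the same video twice

def report_video(video_id: str) -> None:
    """Record one user report and escalate once the threshold is crossed."""
    report_counts[video_id] += 1
    if report_counts[video_id] >= REPORT_THRESHOLD and video_id not in escalated:
        escalated.add(video_id)
        priority_review.append(video_id)

def review_next() -> None:
    """A human moderator handles the oldest escalated video: confirmed
    violations are removed, and repeat offenders can be blocked."""
    if priority_review:
        video_id = priority_review.popleft()
        print(f"manual review: {video_id}")

if __name__ == "__main__":
    for _ in range(REPORT_THRESHOLD):
        report_video("video_42")
    review_next()  # -> manual review: video_42
```

Report-driven escalation can itself be gamed through coordinated mass reporting (Contreras, 2021), which is one reason escalation in this sketch triggers manual review rather than automatic removal.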

Conclusion
It is necessary to prevent the spread of violence, bullying, harassment, and other problematic content that threatens Internet society and user security on digital platforms. The government, as the authoritative official institution, establishes laws that regulate and restrict the use of the Internet. Social media platforms, as the main channels of information dissemination, formulate deeper and more specific policies that work in coordination with the law, and use professional technology to moderate problematic content. In addition, users supervise one another and report problematic content in a timely manner to prevent its continued spread. Although problematic content continues to appear on digital platforms, the cooperation and mutual supervision of governments, digital platforms, and users effectively curb its spread.
Reference List
British Psychological Society. (2015, May 6). Viewing violent news on social media can cause trauma. ScienceDaily.
CBS News. (2016, April 19). Ohio teen claims she livestreamed 10-minute rape for "evidence". CBS News.
Contreras, B. (2021, December 3). 'I need my girlfriend off TikTok': How hackers game abuse-reporting systems. Los Angeles Times.
Gillespie, T. (2017). Governance by and through platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254-278). SAGE.
Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press.
O'Hara, K., & Hall, W. (2018). Four internets: The geopolitics of digital governance (Paper No. 206). Centre for International Governance Innovation. https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance
Platz, C. (2021, August 7). The ticking of the Tok: When TikTok moderation backfires. Ideaplatz, Medium.
The State Council Information Office of the People's Republic of China. (2011, May 11). Internet information dissemination and standardized governance. http://www.scio.gov.cn/zhzc/10/Document/1014610/1014610_2.htm