
Introduction
With the popularisation and rapid development of the Internet, it has become a platform where many people express their individual thoughts and a birthplace of public opinion. As an open, interactive and expansive platform for information exchange, the Internet gives users a high degree of freedom to express their opinions and disclose information. As Cassim (2015) notes, the Internet and social media are characterised by safety, security and anonymity, and online users rely on these properties to express their opinions freely.

However, some online users take advantage of this anonymity to spread undesirable content and make such behaviour more common. While the law and social media platforms promote free speech, this does not mean that words can be used to harm others, create disinformation or spread illegal content (DeCook et al., 2022). Offensive content affects not only the physical and mental health of individuals but also the entire online environment and social atmosphere. Therefore, individuals, platforms and governments should all take action to build a friendly online environment and prevent the spread of harmful content. This article analyses and discusses the responsibility to prevent the spread of harmful online content from the perspectives of individual Internet users, social media platforms and the government, emphasising the indispensable regulatory role of each.
Why Is It Necessary to Regulate Online Content?
Spreading bullying, harassment, violent content, hate, pornography and other questionable content damages the online environment. Representative of these behaviours is cyberbullying, in which users are harmed, demeaned and insulted online through text or pictures (Dillon & Rhodes, 2019). Cyberbullying has appeared more frequently on social platforms in recent years, and both celebrities and ordinary people may experience it. Moreover, its consequences extend beyond psychological trauma and may even lead to the victim's suicide. In one real case, a 22-year-old Japanese actress took her own life after being cyberbullied for her participation in a variety show. Such cases have led netizens, social media platforms and governments to pay increasing attention to regulating how every citizen behaves on the Internet. Beyond cyberbullying, the spread of pornographic, violent and hateful content can also cause social disorder. Therefore, to maintain social order and protect the rights of Internet users, Internet users, social media platforms and the government should each do their part to stop online malpractice.
Who Should Stop the Spread of Harmful Content?
The Platforms
Digital platforms are becoming increasingly influential in the media space, and they have a responsibility and obligation to moderate and regulate the content posted on them. Platforms take on the responsibility of managing content and regulating user activity not only to meet legal requirements but also to avoid losing offended or harassed users and to protect their corporate image (Flew et al., 2019). Because platforms are the main channel for information dissemination and exchange, strengthening their review and control of content helps curb the spread of problematic content and behaviour online. For example, Twitter, one of the major global social media platforms for news, entertainment and commentary, has established rules that prohibit discrimination and bullying on the basis of race, nationality, sexual orientation or disability in order to regulate user behaviour (Klonick, 2018). Although social media platforms censor and regulate content, much online violence, hate speech, harassment and sexual violence still exists. For instance, three English footballers were subjected to a barrage of racist abuse on their personal Instagram accounts following the men's Euro 2020 final, which Instagram later admitted resulted from a lapse in vetting racist comments. Instagram's missteps in moderating content show that digital platforms still need to step up their efforts, with strict platform rules and order governing content production and dissemination.

Moreover, platforms now face the difficulty of resolving communication problems across different countries. Because countries enforce different standards for regulating platform content, platforms are caught between conflicting legal systems and cannot reach agreement on cross-border dissemination issues (Flew et al., 2019). Media globalisation thus makes policy formulation challenging for platforms. DeCook et al. (2022) also note that content regulation and the management of user behaviour are global in scope, and unbalanced moderation practices may cause dissatisfaction among people from different countries. Global digital platforms still need to strive for institutional structures that transcend national borders to improve the global management of content (Flew et al., 2019). Therefore, today's global digital platforms should act in the public interest and regulate content transparently, subject to the scrutiny of people around the world.
The Government
The government is a state institution with the authority to create strict legal regulations that restrain unscrupulous individuals from spreading harmful content on the Internet. When digital platforms are used to publish fake news, hate speech and violent content, the government should intervene to enforce regulation of the illegal content and avoid inflaming public sentiment or causing social panic (Xie et al., 2021). When platforms cannot govern themselves effectively, national governments should develop regulatory frameworks and laws to regulate them. Governments differ in the rules and laws they apply to social media platforms. In Australia, for example, the cybersecurity authority can give social media companies a 48-hour deadline to remove illegal content; in China, the Internet regulatory authorities have established a cyber police force to shut down and remove illegal websites and software. Hartley and Vu (2020) point out that intuitive government policy interventions during a pandemic can be a powerful means of verifying social media content and dispelling public misunderstandings. This illustrates that the intervention of government regulators can effectively curb the persistent dissemination of harmful information and actively create a friendly Internet environment for the public. However, government intervention needs to be kept in check: excessive control, or politically motivated use of the media to guide public thought and behaviour, is undesirable. The form of the state and the economic relations prevalent in media regulation are closely linked, and regulation inevitably attracts a range of stakeholders seeking to shape practice (Schlesinger, 2020). This reflects the potential for governments to use social media as a political tool to shape public opinion and control popular thought.

For example, Twitter hid former President Donald Trump's tweets during the election because they contained content that misled citizens about the voting process. Government intervention in social media is necessary but should remain objective and fair. The government needs to establish and improve monitoring systems for information released on media platforms in order to reduce the generation and dissemination of harmful information, and to permanently retain records of each platform's information releases (Xie et al., 2021). Therefore, the government's enactment of relevant laws and its appropriate intervention in managing platforms are conducive to building a civilised online cultural environment and preventing the spread of harmful content.
The Internet User
As recipients of information, Internet users should fully play their supervisory role. Jones and Savage (2019) argue that perpetrators are not the only participants in cyberbullying: bystanders who remain indifferent when others are cyberbullied also become indirect abusers. This shows that citizens should bravely stand up for victims and resist when they encounter online bullying. Therefore, the public should improve their moral education, refrain from cyber violence against others, consciously resist online pornography and violent content, and promptly report harmful content to the platform and the government (Xie et al., 2021).

In addition, when netizens encounter harassment, violence, hatred, pornography and other problematic content online, they should organise and resist together. For example, a campaign by Internet users to boycott a sexist dairy advertisement attracted the attention of, and an investigation by, the Advertising Standards Authority. This reflects netizens' improving moral awareness, and the power of the masses has drawn the attention of the relevant authorities. In conclusion, Internet citizens should enhance their moral and legal awareness and actively help curb the spread of undesirable content online.
Conclusion
Overall, in an era of rapid Internet development in which bad actors seek to disrupt the online environment and social order by spreading undesirable content, netizens, digital platforms and the government should all actively fulfil their responsibilities to protect online spaces from harmful information.
References
BBC. (2014). Resist Porn Culture brands Peta dairy advert 'misogynistic'. At https://www.bbc.com/news/uk-england-nottinghamshire-30351833, accessed 6 December 2014.
Cassim, F. (2015). Regulating hate speech and freedom of expression on the Internet: Promoting tolerance and diversity. South African Journal of Criminal Justice, 28(3), 303–336.
Criddle, C. (2021). Instagram admits moderation mistake over racist comments. At https://www.bbc.com/news/technology-57848106, accessed 15 July 2021.
DeCook, J. R., Cotter, K., Kanthawala, S., & Foyle, K. (2022). Safe from “harm”: The governance of violence by platforms. Policy and Internet, 14(1), 63–78. https://doi.org/10.1002/poi3.290
Dillon, K. P., & Rhodes, N. (2019). Defining Cyberbullying: Analyzing Audience Reaction to Anti-Bullying Public Service Announcements. In The Routledge Handbook of Communication and Bullying (1st ed., pp. 221–229). Routledge. https://doi.org/10.4324/9781315148113-24
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Hartley, K., & Vu, M. K. (2020). Fighting fake news in the COVID-19 era: policy insights from an equilibrium model. Policy Sciences, 53(4), 735–758. https://doi.org/10.1007/s11077-020-09405-z
Jones, S. E., & Savage, M. W. (2019). Examining Cyberbullying Bystander Behavior. In The Routledge Handbook of Communication and Bullying (1st ed., pp. 230–240). Routledge. https://doi.org/10.4324/9781315148113-25
Klonick, K. (2018). The new governors: The people, rules, and processes governing online speech. Harvard Law Review, 131(6), 1598–1670.
Reality Check Team. (2020). Social media: How do other governments regulate it? At https://www.bbc.com/news/technology-47135058, accessed 12 February 2020.
Schlesinger, P. (2020). After the post-public sphere. Media, Culture & Society, 42(7–8), 1545–1563.
Spring, M. (2020). Trump's postal-vote tweet misleading, says Twitter. At https://www.bbc.com/news/technology-53888376, accessed 24 August 2020.
Xie, R., Liu, D., Jia, Y., & Zhang, P. (2021). Influence of We Media Information Dissemination on Public Emotional Cognition and Behavior under Government Responsibility Constraint. Discrete Dynamics in Nature and Society, 2021, 1–14. https://doi.org/10.1155/2021/2711602