As we enter the Web 2.0 era, the rapid development of digital platforms has been accompanied by serious problems: hate speech, bullying, pornography, and other harmful content circulate online and cause real damage. To balance public and individual interests and to maintain a healthy online environment, users, platforms, and governments all share responsibility for stopping the spread of this content. Users need to curb the spread of harmful information through self-regulation. Platforms need to apply stricter content moderation mechanisms and increase the punishment for users who violate the rules. Governments need to improve relevant laws while strengthening people's awareness of the need to resist undesirable content. In short, stopping the spread of problematic content on digital platforms and cleaning up the online environment is a goal that users, platforms, and governments must achieve together.

For users, the best way to stop the spread of negative content is to raise their moral awareness and recognize the harm such content causes, so that they regulate themselves and refrain from posting and distributing it. Users bear this responsibility because they are members of the online world and cannot compromise the public interest by posting malicious content for personal gain. With the development of media, the model of information communication has shifted from the earlier one-way, linear transmission model to a multi-directional ritual model (Carey, 2009). This coincides with Karpf's (2018) observation that information communication has moved from a dogmatic, unidirectional model to a rational, multidirectional one. This historical shift cannot be ignored: senders and receivers of information now interact with and influence each other, people form more of their own ideas, and people have more opportunities to speak on digital platforms. However, this situation also carries hidden dangers, increasing the likelihood that negative content such as bullying, hate, and pornography will be posted and distributed on digital platforms. The spread of such content causes significant harm: participants who have experienced cyberbullying, for example, are more than four times as likely to report suicidal thoughts and attempts as those who have not (U.S. Department of Health and Human Services, 2022). One widely reported teenage suicide in the United States was caused by cyberbullying: Rebecca Sedwick, a 12-year-old girl, endured a year of cyberbullying and ultimately took her own life when she could no longer bear it (ABC Action News, 2013). Therefore, to stop the circulation of problematic content on the Internet, users need to act more ethically and be aware of the harm such content inflicts on others and on the social order. In addition, users need to exercise self-regulation and treat the posting and distribution of such content as a shameful act.
It is the responsibility of platforms to strengthen content moderation and increase punishment for users who violate the rules, so that problematic content is stopped from spreading in the digital world. This is how public and personal interests can be balanced as far as possible and a civilized, positive online environment maintained. Before people became aware of the harm caused by such content, platform mechanisms and rules were underdeveloped, which gave this negative and toxic culture an environment in which to thrive. Only when the content caused serious trouble did platforms recognize their responsibility to stop the growth of this culture. In one well-known case involving Reddit, Zoe Quinn was subjected to cyberbullying after her ex-boyfriend wrote and posted emotionally abusive content about her (Massanari, 2016). Reddit's moderators, wanting to remain neutral, declined to intervene in the dispute. This shows that part of the reason for the disaster was the lack of strict content moderation on the platform, which allowed problematic content to be posted and distributed. Therefore, it is the platform's responsibility to strengthen content review so that bad content is prevented from being posted in the first place; this addresses the problem at its root, balances public and personal interests, and protects each user's interests while preventing problematic content from disturbing the public order of the network. Platforms can not only hire human content moderators but also use artificial intelligence to detect harmful language and block problematic content before it is published; for instance, Parent Security (2022) posted on Twitter about Irish researchers using machine translation to help AI detect cyberbullying. A moderation mechanism that combines human reviewers and technology can best prevent problematic content from being posted and circulated, as sketched below. In addition, platforms can increase the punishment for users who post offending content. Once a user is found to have posted problematic content, the platform can, depending on severity, ban the account from commenting, ban it from posting new work, or terminate it. Especially where real-name account registration is enforced, such punishments serve as a powerful warning, and users will reduce or avoid posting problematic content to protect their reputation. In short, platforms have a responsibility to stop the spread of problematic content in the digital world, to balance public and personal interests, and to maintain a civilized online environment; to do so, they should strengthen content moderation and increase punishment for users who violate the rules.
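To make the two-part mechanism described above concrete, the following is a minimal sketch in Python of automated screening combined with tiered penalties. Everything in it is illustrative and hypothetical: the BLOCKLIST placeholder terms, the PENALTIES tiers, and the moderate_post function are not any real platform's API, and a production system would rely on trained classifiers (such as the AI models mentioned above) rather than a simple word list.

    # Illustrative sketch of the mechanism described above: (1) automated
    # screening of a post before it goes live, and (2) penalties that escalate
    # with a user's violation history. All names here are hypothetical.

    BLOCKLIST = {"slur_example", "threat_example"}  # placeholder terms, not a real lexicon

    # Escalating penalties, indexed by the number of prior violations.
    PENALTIES = ["warning", "comment_ban", "posting_ban", "account_termination"]

    def screen_text(text: str) -> bool:
        """Return True if the post contains a flagged term."""
        words = text.lower().split()
        return any(word in BLOCKLIST for word in words)

    def moderate_post(text: str, prior_violations: int) -> str:
        """Decide what happens to a post and, if flagged, to its author."""
        if not screen_text(text):
            return "published"
        # Automated detection only queues the post; a human moderator confirms,
        # reflecting the human-plus-AI review pipeline discussed above.
        tier = min(prior_violations, len(PENALTIES) - 1)
        return f"held_for_review; penalty_if_confirmed={PENALTIES[tier]}"

    if __name__ == "__main__":
        print(moderate_post("hello world", prior_violations=0))            # published
        print(moderate_post("a threat_example here", prior_violations=2))  # posting_ban

In this sketch, automated detection only holds a post for human review rather than removing it outright, reflecting the combination of human reviewers and technology argued for above, and the penalty escalates with the user's violation history, mirroring the tiered punishments from comment bans through account termination.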
The government has the responsibility to improve the relevant laws and, at the same time, strengthen people's awareness of the need to resist undesirable content, in order to stop its circulation in the online world. As the documentary The Social Dilemma (Orlowski, 2020) shows, the Pizzagate conspiracy theory was circulated as problematic content on digital platforms, and many people were manipulated by it and had their emotions channelled. People blinded by this content started conflicts that caused financial losses as well as physical harm, becoming victims of political struggles. This shows that problematic content damages the economy and poses an even greater threat to personal and public safety, an issue that deserves serious government attention. According to Delcker (2020), the German government recognized the problem in 2017 and enacted the Network Enforcement Act (NetzDG), which did go some way toward curbing the spread of problematic content, for example by requiring social media platforms such as Facebook to remove inflammatory material swiftly or face large fines. Some believe the Act may infringe on individuals' right to freedom of speech, and it still does not address this aspect of the problem well. So a new rulebook, the Digital Services Act, was published to clarify how online platforms should review illegal content. This shows that the law is improving and that the government is balancing the public interest with individual interests while trying to curb the spread of bad content. In addition, the government should call on people to take the harm caused by problematic content seriously and to realize that they should resist it, for example by releasing public-awareness films and requiring schools to hold relevant classes. In conclusion, it is the government's responsibility to stop problematic content from circulating in the online world, and to do so it needs to improve the relevant laws while strengthening people's awareness of the need to resist bad content.
Bullying, violent content, pornography, and other problematic content circulating on digital platforms has become a major problem in the online world, and users, platforms, and governments all have a responsibility to solve it. To stop the spread of this content, maintain a good online environment, and balance public and individual interests, users should curb the spread of problematic information through self-regulation; platforms need to strengthen content moderation and the punishment of offending users; and governments need to improve relevant laws while strengthening people's awareness of the need to resist problematic content. In conclusion, stopping the spread of problematic content is a goal that users, platforms, and governments must achieve together. With the joint efforts of these three parties, the future of the Internet will be better.
References
ABC Action News. (2013, September 17). Funeral for Rebecca Sedwick, victim of cyber bullying, held in Bartow [Video]. YouTube. https://www.youtube.com/watch?v=vTGp-N9uZ3I
Carey, J. W. (2009). A cultural approach to communication. In Communication as culture: Essays on media and society (2nd ed., pp. 11-28). Routledge.
Delcker, J. (2020, October 6). Germany's balancing act: Fighting online hate while protecting free speech. POLITICO. Retrieved October 3, 2022, from https://www.politico.eu/article/germany-hate-speech-internet-netzdg-controversial-legislation/
Karpf, D. (2018, September 18). 25 years of wired predictions: Why the future never arrives. Wired. Retrieved August 22, 2022, from https://www.wired.com/story/wired25-david-karpf-issues-tech-predictions/
Massanari, A. (2016). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. https://doi.org/10.1177/1461444815608807
Parent Security [@ParentSecurity]. (2022, September 27). Irish researchers use machine translation to help AI detect cyberbullying [Tweet]. Twitter. https://twitter.com/ParentSecurity/status/1574783650767446016
Orlowski, J. (Director). (2020). The Social Dilemma [Film]. Netflix. Retrieved October 3, 2022, from https://www.netflix.com/jp-en/title/81254224
U.S. Department of Health and Human Services. (2022, July 21). Cyberbullying linked with suicidal thoughts and attempts in young adolescents. National Institutes of Health. Retrieved October 2, 2022, from https://www.nih.gov/news-events/nih-research-matters/cyberbullying-linked-suicidal-thoughts-attempts-young-adolescents