
Abstract:
The diverse information disseminated on digital platforms has brought convenience to people's lives, but it has also caused considerable distress and had a real impact on society, especially on young people, whose development is deeply affected by it. Does freedom of expression mean that there are no restrictions? This article discusses who should be responsible for stopping the spread of undesirable content on digital platforms, and examines the ways in which such content can be stopped.
The impact of uncontrolled dissemination of undesirable content on digital platforms
Millennials are the first generation to be introduced to social media and to use it as a primary form of communication (Durfy, 2018). From this generation onwards, children have grown up with the Internet, and Internet addiction has become one of the most important challenges facing parents. Internet addiction infiltrates the lives of adolescents on both a psychological and a physical level, bringing about morbid obesity, depression, anxiety, and other attention problems (Clark, 2014). One of the major contributing factors is the ease with which adolescents can access things on the Internet that they should not be exposed to at their age, such as pornographic websites and violent games. Premature exposure can be overstimulating for teens: such content can lift them up, but it can also drag them down. When their values are not fully formed and they cannot yet manage their emotions well, premature exposure to undesirable content on the Internet can quickly break them down. For example, Dolly, a 14-year-old girl in Australia, committed suicide because she was unable to cope with online abuse (ABC News, 2019). Research has shown that teenage users who spent the most time on social platforms reported depression at far higher rates than those who spent the least time (66% versus 13%) (Miller & Bubrick, 2018). Such outcomes are unacceptable and serve as a wake-up call for Internet governance. Preventive measures should be strengthened before harm occurs, and intervention measures applied promptly afterwards, so that the uncontrolled spread of pornography, violence, terror, and other undesirable content on the Internet can be avoided or stopped before it leads to terrible consequences.

Who should be responsible for stopping the distribution of such content?
The people who should be responsible for the dissemination of undesirable content can be divided into three categories: governments, companies, and individuals. The government must take the main responsibility for Internet regulation. Because it holds the greatest power of the three, it is best placed to restrict the behavior of platforms and individuals, and it should therefore take the lead in cleaning up the Internet environment. Doing so will also benefit the government's governance of the real world. But the special nature of social governance confronts the government with a choice about the degree of restriction: to balance the development of the Internet industry against Internet governance, the government cannot completely suppress such information.

Companies can be further divided into Internet companies and the brokerage companies to which influencers belong. It is undeniable that the opportunities offered by the Internet are unprecedented and promise enormous profits. Sensational content is indeed more likely to generate debate and attention, but its impact is also incalculable. Brokerage companies have the ability to screen creative content and stop the publication of undesirable material, suppressing it effectively before it spreads. Internet companies, with their strong technical capabilities and control over platform data, can remove undesirable content directly. Internet giants such as Facebook and Google dominate most of the Internet market and routinely reap the benefits of the various types of information available on their platforms (Musiani, 2013). Since these private organizations are able to profit from such information, they should also be held accountable.
Individuals are both the creators and the recipients of information on the Internet. They are therefore responsible for the content they post online, which means they are responsible for themselves. From a legal perspective, citizens have an obligation to answer for their own words and actions, and the Internet is not a place outside the law. People have a responsibility not to be publishers or disseminators of undesirable content, and they should not be forced to be its recipients.
How can we stop the spread of such content?
The first method: establish a complete regulatory system.
The government and other regulatory authorities must exercise their legislative function and promulgate regulations before undesirable content is released, making it clear to organizations and individuals that the Internet is regulated and that distributing content such as violence and pornography is illegal. This is the primary and most direct way for the government to intervene and to restrict the behavior of companies and individuals in disseminating information. The Online Safety Act 2021, for example, requires the industry to regulate illegal and restricted content, including mechanisms to remove abusive and harmful content from online platforms and to prohibit the sharing of private images without consent (WESNET, 2019). This effectively raises the cost of crime and makes people more cautious about what they post online.
Platforms also need to strengthen the review of content before it is released. In the Chinese version of TikTok, videos are reviewed before they are published: videos that are too sexually suggestive are flagged as potentially containing pornography or other undesirable material and are not released. By using artificial intelligence wisely, platforms can avoid ethical problems and improve the efficiency of content review. AI suffers no psychological burden from being exposed to disturbing material and can improve the accuracy of review (New America, n.d.). Given the countless amount of data generated globally every second, a fully manual review would involve an incredible amount of work, and using AI can reduce the cost of review (Wang & Zhu, 2022). Platforms should also establish a comprehensive reporting mechanism, so that users who find undesirable content can report it easily and the platform can be notified to remove it (HREOC, 2002). This helps stop the wider spread of undesirable content in time.
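The review-before-release and user-reporting mechanisms described above can be sketched as a simple pipeline. This is a minimal illustrative sketch, not any real platform's system: the thresholds, the `Platform` class, and the keyword-based `risk_score` function (a toy stand-in for an actual machine-learning classifier) are all assumptions made for illustration.

```python
# Hypothetical moderation pipeline: an automated score decides whether a
# post is published, blocked, or escalated to a human moderator, and a
# user-reporting mechanism takes published posts down for re-review.
from dataclasses import dataclass, field

BLOCK_THRESHOLD = 0.9   # scores at or above this are removed automatically
REVIEW_THRESHOLD = 0.5  # scores at or above this go to human review

# Toy stand-in for an ML classifier: a banned-term list.
BANNED_TERMS = {"violence", "pornography"}

def risk_score(text: str) -> float:
    """Fraction of banned terms that appear in the text."""
    words = set(text.lower().split())
    return len(words & BANNED_TERMS) / len(BANNED_TERMS)

@dataclass
class Platform:
    review_queue: list = field(default_factory=list)
    published: list = field(default_factory=list)

    def submit(self, post: str) -> str:
        """Pre-release review: block, escalate, or publish."""
        score = risk_score(post)
        if score >= BLOCK_THRESHOLD:
            return "blocked"            # removed before release
        if score >= REVIEW_THRESHOLD:
            self.review_queue.append(post)
            return "pending review"     # escalated to a human moderator
        self.published.append(post)
        return "published"

    def report(self, post: str) -> str:
        """User reporting mechanism: flagged posts are taken down."""
        if post in self.published:
            self.published.remove(post)
            self.review_queue.append(post)
            return "taken down for review"
        return "not found"
```

The middle band between the two thresholds reflects the division of labor the paragraph describes: AI filters the clear cases cheaply, while ambiguous content still reaches a human reviewer.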

It is true that the Internet is inclusive of individuals, but this does not mean there are no restrictions. It is important to make the public aware that posting and spreading undesirable content such as violence and pornography on the Internet is illegal and causes negative social impacts. A real-name system can effectively prevent people from using false personal information to spread undesirable content indiscriminately (Park & Moon, 2019). Meanwhile, the public can become one of the regulators, helping the government and companies to police the online environment together. The public, as users, are closer to this undesirable content, which puts them in a better position to spot it. They can then report it promptly through the platform's or a government department's reporting system, effectively stopping its further spread.
The second method: restricting access rights.
Government departments set regulations on the scope of access to various websites, and Internet companies carry out the further implementation. Through real-name registration, the personal information collected at sign-up can help an Internet company classify its users and then restrict the access rights of different users according to that classification (Singer et al., 2012). The teenage mode in the Chinese version of TikTok is a good example: parents can enable this mode for their children with one click, and the videos TikTok subsequently pushes will all be content suitable for teenagers to watch. Stopping the spread of undesirable content in these various ways can maintain the stability of the Internet environment, but it is also vital for individuals to learn how to use the Internet safely to find information (Clark, 2014). That will help them at least avoid actively seeking out undesirable information.
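The classify-then-restrict flow above can be illustrated with a short sketch. It is a hypothetical example only: the age cut-off, the `"general"`/`"mature"` ratings, and the function names are assumptions for illustration, not how TikTok's teenage mode actually works.

```python
# Hypothetical age-based access restriction: real-name registration
# supplies an age, the platform classifies the user, and the content
# feed is filtered by rating accordingly.
ADULT_AGE = 18  # assumed cut-off for illustration

def classify_user(age: int) -> str:
    """Classify a registered user from the age given at sign-up."""
    return "adult" if age >= ADULT_AGE else "teen"

def allowed_feed(age: int, catalogue: list) -> list:
    """Return only the titles this user may see.

    catalogue items are (title, rating) pairs, with rating in
    {"general", "mature"}.
    """
    if classify_user(age) == "adult":
        return [title for title, _ in catalogue]
    # "teenage mode": only general-audience content is pushed
    return [title for title, rating in catalogue if rating == "general"]
```

The key design point is that the restriction happens on the platform side, at feed time: the teenage user never receives the restricted items, rather than receiving and then hiding them.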

Conclusion:
Regulating from all angles and areas of society at the same time will effectively stop the spread of undesirable content on the Internet: the government restricts companies, and companies restrict individuals. Human beings are the creators of the Internet; they should not let the next generation become its victims.
Reference list:
ABC News. (2019, September 18). Dolly Everett’s suicide leads teen to create “powerful and relevant” cyberbullying ad. ABC News. https://www.abc.net.au/news/2019-09-19/teen-suicide-of-dolly-everett-sparks-new-ad-on-cyberbullying/11523028
Clark, T. (2014, April 25). Internet Addiction: Too Much Time on the Internet for Kids. The Center for Parenting Education. https://centerforparentingeducation.org/library-of-articles/kids-and-technology/how-much-time-internet-kids/
Durfy, L. (2018, December 20). Social Media for Every Generation. PostBeyond. https://www.postbeyond.com/social-media-generations-2/#:~:text=careful%20online%20research.-,Millennials,media%20while%20still%20in%20school.
HREOC. (2002, October). Internet regulation in Australia. Australian Human Rights Commission. https://humanrights.gov.au/our-work/publications/internet-regulation-australia
Miller, C., & Bubrick, J. (2018, April 30). Does Social Media Use Cause Depression? Child Mind Institute. https://childmind.org/article/is-social-media-use-causing-depression/
Musiani, F. (2013). Dangerous Liaisons? Governments, companies and Internet governance. Internet Policy Review, 2(1).
New America. (n.d.). Everything in Moderation. New America. Retrieved October 14, 2022, from https://www.newamerica.org/oti/reports/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/the-limitations-of-automated-tools-in-content-moderation/
Park, Y., & Moon, H. (2019, December 4). Internet Real-name System. Gazette Gokmu. http://gazette.gokmu.com/mobile/article.html?no=3158
Wang, L., & Zhu, H. (2022, June). How are ML-Based Online Content Moderation Systems Actually Used? Studying Community Size, Local Activity, and Disparate Treatment. ACM Digital Library. https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533147
WESNET. (2019, September 9). Legal Guide to Image-Based Abuse Legislation in NSW. TechSafety. https://techsafety.org.au/blog/legal_articles/legal-guide-to-image-based-abuse-legislation-in-nsw/