
The growing popularity and development of the Internet has not only created new public spaces, spawned an online economy, and given rise to new forms of culture; it has also changed how people live, think, and form values. With the advent of the Web 2.0 era, people increasingly access Internet content through proprietary social media platforms and applications, and content circulates freely and abundantly online: every minute, an estimated 500 hours of video are posted to YouTube and 243,000 photos are uploaded to Facebook (Lalani et al., 2020). As Microsoft’s Chief Digital Safety Officer put it, “Technology provides the tools to learn, entertain, connect and help solve some of the world’s biggest challenges, but digital security remains a threat to these possibilities” (World Economic Forum Annual Meeting, 2022). Many observers now worry that search algorithms and social media are undermining the quality of information people see online, and that the spread of misinformation could erode democracy in the digital age. How to regulate Internet content has therefore become a topic of extensive global debate.
The necessity of managing harmful content on the Internet

A platform is a business model: social media platforms connect consumers with digital content creators and make their interaction profitable through advertising revenue. Because the platforms do not produce content but merely host expression, they argue that they should not be responsible for what their users publish (Esade Business & Law School, 2020). This argument is plainly too narrow. As the commodity character of media information becomes more pronounced, some platforms are willing to sacrifice social and public interests for commercial gain, attracting and guiding audiences toward vulgar or false information in order to increase traffic. In Christchurch, a white-supremacist gunman killed 50 people at two mosques and live-streamed the shooting on Facebook (Ingber, 2019). Minors make up one-third of Internet users; global estimates indicate that one-third of all children have been exposed to online pornography, and many more are groomed over the Internet and trafficked for sex by offenders. These incidents highlight the grave risk posed by the rise of hatred and extremist ideologies, and the danger that the confused transmission of social values will cause physical and psychological harm to young people, making Internet content regulation an indisputable necessity.

Government Management
The pressure to remove illegal and harmful content online effectively has now been written into law in several national jurisdictions. Take Germany’s relatively new legal framework, the Netzwerkdurchsetzungsgesetz (NetzDG), as an example: under NetzDG, social networks with more than 2 million users in Germany are held legally responsible if they fail to quickly delete content that is illegal under German law (Gorwa, 2019). Similarly, the Digital Services Act, a new rulebook proposed by the EU on how Europe should regulate the large tech and digital sector, specifies that Very Large Online Platforms (VLOPs) must be strictly regulated: they must conduct a risk assessment at least once a year covering any negative impact on privacy, freedom of expression and information, the prohibition of discrimination, and children’s rights, and they face rules such as a ban on targeted advertising aimed at children and restrictions on collecting data for profiling. Violators can be fined up to 6% of their global turnover (Vosloo, 2022). This makes VLOPs’ algorithms more transparent to the EU and its member states: regulators reviewing them gain access to key platform data, can take prompt measures against erroneous information, and can raise awareness of technology’s impact on children. To some extent, these regulations have created a strict legal environment for networks and increased the responsibility of companies operating in the digital field. A direct result is that companies’ operating costs have risen, because they must devote resources and technology to complying with legal requirements in order to avoid liability.
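To make the scale of these obligations concrete, the sketch below encodes the figures cited above (the NetzDG threshold of 2 million German users, the DSA’s at-least-annual risk assessment, and the fine cap of 6% of global turnover) as a toy compliance check in Python. This is a minimal illustration only: the Platform class, its fields, and the example numbers are hypothetical, and the real statutes contain far more conditions than these three.

```python
from dataclasses import dataclass

# Illustrative thresholds taken from the figures cited above; the actual
# statutes impose many more conditions than this sketch captures.
NETZDG_USER_THRESHOLD = 2_000_000      # German users before NetzDG duties apply
DSA_MAX_FINE_RATE = 0.06               # DSA fines: up to 6% of global turnover
DSA_RISK_ASSESSMENTS_PER_YEAR = 1      # VLOPs must assess risk at least yearly

@dataclass
class Platform:
    name: str
    german_users: int
    global_turnover_eur: float
    risk_assessments_last_year: int

def netzdg_applies(p: Platform) -> bool:
    """NetzDG removal duties bind social networks above the user threshold."""
    return p.german_users > NETZDG_USER_THRESHOLD

def max_dsa_fine(p: Platform) -> float:
    """Upper bound of a DSA fine for a non-compliant platform."""
    return DSA_MAX_FINE_RATE * p.global_turnover_eur

def risk_assessment_overdue(p: Platform) -> bool:
    """True if the platform missed the DSA's minimum annual assessment."""
    return p.risk_assessments_last_year < DSA_RISK_ASSESSMENTS_PER_YEAR

# Hypothetical example: 3M German users, EUR 10B turnover, no assessment done.
platform = Platform("ExampleNet", 3_000_000, 10_000_000_000.0, 0)
print(netzdg_applies(platform))        # True  -> must remove illegal content quickly
print(max_dsa_fine(platform))          # 600000000.0 -> fine of up to EUR 600M
print(risk_assessment_overdue(platform))  # True -> annual assessment missing
```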

Platform Internal Management
Social media platforms are increasingly taking responsibility for managing content and regulating user activity, not only to comply with legal requirements but also to avoid losing users who leave because of offence and harassment. Most advertisers do not want their products associated with unethical information and videos, and platforms have an obligation to protect their customers’ corporate image (Flew et al., 2019). Fake news is the publication of false or misleading articles that deceive the public, often through fake media accounts masquerading as real news sites. A classic case occurred during the 2016 United States presidential election, when the most widely circulated false reports claimed that Pope Francis had endorsed Donald Trump, that Hillary Clinton had sold weapons to ISIS, that Clinton had been disqualified from holding federal office, and that the FBI director had received millions of dollars from the Clinton Foundation. News consumers find it difficult to distinguish true news from false. A survey by Ipsos Public Affairs found that fake news headlines fool American adults about 75% of the time and are remembered by a large percentage of voters (West, 2017). Such information can materially change the outcome of election campaigns and shape public perceptions, so Internet companies should invest effort in censoring technology.
Most social media platforms currently review and filter user postings. Moderation is the process of screening, evaluating, categorizing, and approving or removing/hiding online content according to posting policies (Gillespie, 2018). For certain sensitive words or offensive language, a platform will issue warnings or risk notices at posting time; yet even though platforms have invested heavily in algorithmic techniques for tracing information, many users still exploit loopholes to evade tracing. The solution YouTube arrived at is to rely on deep-learning algorithms for a first round of review of risky content, combined with a mechanism for user complaints, and then to recruit users to find videos that evaded the AI’s review (West, 2017); a sketch of such a pipeline appears below. Platforms can also strengthen online accountability by instituting stricter real-name policies and enforcement against fake accounts, requiring users to submit proof of identity so that people cannot use false names to post objectionable content or engage in prohibited activities on the Internet.
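The YouTube-style pipeline described above (an automated deep-learning first pass, warnings on sensitive terms, and escalation of user reports to human reviewers) can be sketched as follows. This is a minimal Python illustration, not any platform’s actual system: the classify() stub, the SENSITIVE_TERMS set, and both thresholds are assumptions introduced here for clarity.

```python
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    WARN = "warn"            # publish, but show a risk warning to the poster
    REMOVE = "remove"
    HUMAN_REVIEW = "review"  # queue for a human moderator

# Hypothetical placeholders: real platforms use large curated lists and
# trained deep-learning classifiers, not a toy keyword set.
SENSITIVE_TERMS = {"some_sensitive_word", "another_flagged_term"}
REMOVE_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.5

def classify(text: str) -> float:
    """Stub for a deep-learning risk score in [0, 1]; assumed, not real."""
    return 0.0

def moderate(text: str, user_reports: int) -> Decision:
    """First pass by the classifier, then user reports, then keyword warnings."""
    score = classify(text)
    if score >= REMOVE_THRESHOLD:
        return Decision.REMOVE            # automated first round of review
    if user_reports > 0 or score >= REVIEW_THRESHOLD:
        return Decision.HUMAN_REVIEW      # complaints escalate to human review
    if any(term in text.lower() for term in SENSITIVE_TERMS):
        return Decision.WARN              # sensitive-word warning at posting time
    return Decision.APPROVE

print(moderate("an ordinary post", user_reports=0))  # Decision.APPROVE
print(moderate("a reported post", user_reports=3))   # Decision.HUMAN_REVIEW
```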
Conclusion
The Internet’s technological DNA creates an inherent tension between empowerment and restriction. As an open, free, equal, and shared media platform, the Internet has become one of the most powerful tools in today’s society; but as a communication medium and public space, it must be subject to the restraint of state power and the regulation of law and morality. All in all, many problems in the management of Internet content remain to be explored and solved, but this is not the task of any single actor: many actors in society must contribute. Beyond the compulsory instruments of government regulation and the algorithmic technology of tech companies, the public’s own efforts are needed. Online information spreads in many ways, and its speed and reach increase the difficulty of governing harmful content. This is a complex and long-term process, and only through the cooperation of multiple parties can a civilized and healthy network environment be built.
References:
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50.
Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Ingber, S. (2019, April 24). Global effort begins to stop social media from spreading terrorism. NPR. https://www.npr.org/2019/04/24/716712161/global-effort-begins-to-stop-social-media-from-spreading-terrorism
Lalani, F., & Li, C. (2020, January 13). How to help slow the spread of harmful content online. World Economic Forum. https://www.weforum.org/agenda/2020/01/harmful-content-proliferated-online/
Esade Business & Law School. (2020, February 10). Should social media platforms be regulated? Forbes. https://www.forbes.com/sites/esade/2020/02/10/should-social-media-platforms-be-regulated/?sh=5467f4ec3370
Vosloo, S. (2022, June 20). How the Digital Services Act will keep children safe online. World Economic Forum. https://www.weforum.org/agenda/2022/06/eu-digital-service-act-how-it-will-safeguard-children-online/
West, D. M. (2017, December 18). How to combat fake news and disinformation. Brookings. https://www.brookings.edu/research/how-to-combat-fake-news-and-disinformation/
World Economic Forum Annual Meeting. (2022, May 23). Online safety: Making the internet safer by tackling harmful content. World Economic Forum. https://www.weforum.org/impact/online-safety/
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.