Background and problems
With the development and popularization of the Internet and mobile communication technologies, people have been brought into a new networked society. New activities such as online news, shopping, voting, and elections have enriched people’s online lives. The Internet has become an essential part of the national information infrastructure, contributing greatly to social informatization. But as Gillespie observes, “Social media platforms arose out of the exquisite chaos of the web” (Gillespie, 2018). On the one hand, the Web has enabled the efficient, international dissemination of political culture and the growth of participatory political culture, expanding online interaction, participation, expression, and social connection. On the other hand, the Web has disrupted the dissemination of mainstream political culture, making bullying, harassment, violent content, hate speech, pornography, and other problematic content increasingly visible. Determining who is responsible for harmful content on digital platforms, and how that responsibility should be exercised, has therefore become a significant challenge.
Reasons for and effects of destructive content
There are two main reasons for the prevalence of online incivility. First, there are too few constraints on Internet users. The Internet, a virtual and open platform, gives users greater freedom of speech and greater tolerance of content, while users remain behind the screen, showing only nicknames and fabricated details. Freed from the constraints and rules of real life, users grow lax in their awareness of laws and regulations as well as of ethics and morality. Second, rules, regulations, and institutional norms have not kept pace. While network technology has developed continuously, management methods and concepts have not progressed, creating a gap that users with bad intentions exploit. The destructive effects of such undesirable content are magnified by the Internet’s wide reach: it can distort users’ outlook on life, values, and worldview while endangering physical and mental health. In 2013, a 12-year-old girl from Polk County, Florida, jumped to her death after experiencing cyberbullying; the two girls who taunted and bullied her were charged with felonies. Rosalind Wiseman argues that cyberbullying is so public that it makes everyone feel like it is happening all the time. Victims can feel extreme pain because they believe the entire community is banding together against them and that no one has their back. Governments, digital platforms, and users therefore all share responsibility for stopping the creation and distribution of harmful content.
“Young Girls Arrested for Bullying Classmate in Rebecca Sedwick Suicide Case” by ABC News. All rights reserved. Retrieved from https://www.youtube.com/watch?v=4a3H63w5W7U
Who has the responsibility and how to implement the responsibility
The state and government must take responsibility for the soundness of information on the Internet. On the one hand, government oversight is needed to prevent rumors from causing destruction where individuals and platforms lack discernment, and to prevent foreign powers from disturbing domestic stability. On the other hand, the government has the influence and ability to formulate policies that address the underlying issues. Because the sharing nature of digital media makes it easy for undesirable content to spread across multiple platforms, user sharing largely amplifies the negative impact, and content review by individual media outlets is not enough to eliminate the spread of undesirable information completely. Facebook CEO Mark Zuckerberg claimed that “he had increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on [its] own” (Gorwa, 2019), and that creating an “Oversight Body” for content moderation would let users appeal takedown decisions to an independent body (Zuckerberg, 2018, n.p.). In addition, the government is, to some extent, more focused on the public interest and social impact, in contrast to platform companies, whose profit motives can lead to biased management and user mistrust. Germany’s NetzDG law came into effect at the beginning of 2018; it requires platforms to establish procedures for reviewing content and complaints, and both individuals and media face fines if objectionable content is not removed promptly. In 2015, Australia’s Enhancing Online Safety Act created an eSafety Commissioner with the power to demand that social media companies take down harassing or abusive posts. These examples show that governments can actively regulate online content, using the law as the framework for objective and effective regulation.
For digital platforms:
Digital platforms are the most direct vehicles for reviewing and managing information, and social media platforms have evolved into content curators. Platforms are responsible for regulating user behavior and alerting online users to adjust or remove inappropriate content to prevent its spread (Gillespie, 2018).
Globally, YouTube employs 10,000 people in monitoring and removing content, as well as in policy development. Google said 8.8 million videos were taken down between July and September 2019, 93% of them flagged automatically by machines; over the same period, 3.3 million channels and 517 million comments were also removed. These figures show both the considerable role digital platforms play in curating what is distributed and the sheer volume of content involved. A platform’s filtering system can automatically restrict access to problematic material based on general notifications, end-user preferences, or keywords. Platforms can also follow the example of some large websites and implement real-name authentication: users must complete accurate and effective identity verification before posting on microblogs, Instagram, and other self-media platforms. This can, to a certain extent, curb users’ freedom to post irresponsibly and effectively reduce the spread of false and inaccurate information.
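To illustrate the simplest form of the keyword-based filtering mentioned above, the sketch below flags a post when it contains a blocked term. The blocklist and function names are invented for illustration; real platform filters combine machine-learning classifiers, user reports, and human review rather than a bare word list.

```python
import re

# Hypothetical blocklist; real systems use far larger, curated lists
# alongside statistical classifiers.
BLOCKED_KEYWORDS = {"spamword", "slurword"}

def should_flag(post: str) -> bool:
    """Return True if the post contains any blocked keyword."""
    # Lowercase and split on non-letter characters so punctuation
    # does not hide a blocked term.
    words = re.findall(r"[a-z']+", post.lower())
    return any(word in BLOCKED_KEYWORDS for word in words)

print(should_flag("Buy now! SPAMWORD inside"))   # True
print(should_flag("A perfectly civil comment"))  # False
```

Even this toy version shows why keyword filtering alone is coarse: it cannot see context, so platforms layer it with the notification- and preference-based mechanisms described above.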
For media platform users:
Social media gives users great space for participation and freedom of expression, which not only meets people’s need for information but also, through its social attributes, helps them interact, communicate, and share with others, gaining a sense of identity and belonging. But because Internet users often cannot accurately judge the boundaries of their speech, they self-righteously exercise so-called “freedom of speech” in the name of justice and social morality, resulting in frequent online verbal violence. According to one set of statistics, the share of people who have experienced cyberbullying rose from 18% to 37% between 2007 and 2019, and 59% of American teens have been harassed online. Cyberbullying has led to countless cases of suicide and mental illness.
Therefore, users’ freedom of expression on the Internet must be subject to certain limits. Users also need to improve their civic awareness and moral literacy, strengthen self-management and self-restraint, establish a correct outlook on life, values, and the world, adhere to socialist culture and spirit, and hold fast to spiritual beliefs and moral pursuits.
Maintaining the healthy and orderly development of the online cultural market is a protracted battle that cannot be won without the cooperation of individuals, platforms, and the government. The government should legislate reasonably, avoiding over-regulation, giving users the right to express their opinions and enterprises the space to review their platforms and make reasonable profits. Media platforms should promptly discover and deal with online malpractice and illegal behavior to prevent its deterioration and proliferation, provide an exemplary network environment, and pursue profit without endangering the interests of society. Users, while enjoying the right to freedom of expression, should strengthen their self-management, reason carefully, speak prudently, and respect others.
BBC Reality Check, 2022. Social media: How do other governments regulate it? [online] BBC News. Available at: https://www.bbc.com/news/technology-47135058
Gorwa, R., 2019. The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, [online] 8(2), pp.1-2. Available at: https://www.econstor.eu/bitstream/10419/214074/1/IntPolRev-2019-2-1407.pdf
Gillespie, T. (2018). All platforms moderate, Custodians of the Internet: platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029
Humanrights.gov.au. 2022. Internet Regulation in Australia | Australian Human Rights Commission. [online] Available at:
Stanglin, D. and Welch, W., 2022. Two girls arrested on bullying charges after suicide. [online] Usatoday.com. Available at: https://www.usatoday.com/story/news/nation/2013/10/15/florida-bullying-arrest-lakeland-suicide/2986079/
Yaraghi, N., 2022. How should social media platforms combat misinformation and hate speech?. [online] Brookings. Available at: https://www.brookings.edu/blog/techtank/2019/04/09/how-should-social-media-platforms-combat-misinformation-and-hate-speech/
Zuckerberg, M. (2018). A Blueprint for Content Governance and Enforcement. Retrieved from