Who should be responsible for the ‘toxic messages’ in the network?

"Social Media Logos" by BrickinNick is licensed under CC BY-NC 2.0.


In the era of information explosion, content circulates digitally at an unprecedented scale, and individuals can access almost any information they want through the Internet. The rise of social media platforms, however, has also turned parts of the Internet into a ‘lawless place’ where politicians and insurgents broadcast their views, and where violence, discrimination, pornography, bullying, hate speech and increasingly contentious political conflicts are disseminated (Zeitzoff, 2017). This paper therefore examines three actors — platforms, governments and individual users — to ask who is responsible for these toxic messages and how their spread can be stopped.

“facebook website screenshot” by Spencer E Holtaway is licensed under CC BY-ND 2.0.

Who should stop the spread of these toxic messages?


The companies that run social platforms should stop the spread of ‘toxic messages’ and regulate themselves. Three main factors explain how such content arises on platforms. First, the anonymity that comes with protecting personal privacy online makes speech on the web more extreme. Second, as a form of negative freedom, online expression is not subject to stringent censorship mechanisms. Third, the technical affordances of operational and permissionless innovation (DeNardis & Hackl, 2015) allow the public to present themselves without restraint on social platforms, and this so-called freedom of expression is a key reason for the production of inappropriate speech and vulgar images. For large platforms such as Facebook and Twitter, the reasonable and careful removal of certain sensitive information is an important expression of self-regulation within the industry.

“LIFE Magazine June 23, 1972 – The Beat of Life” by manhhai is licensed under CC BY-NC 2.0.


Social platforms are two-sided. On the one hand, they give you great freedom to express yourself; on the other, anonymous users can attack you for that very expression. Fake news, hate speech and pornographic images are easy to find with a simple Google search. The ‘Napalm Girl’ photograph, showing a naked girl fleeing a napalm attack, was judged obscene by some readers and reportedly distressing to the girl herself; Facebook tried to make the right decision by removing the photo several times but eventually yielded to community pressure, and today the image can still be found across the Internet. Media platforms offer broad opportunities for communication, and what appears to be an online utopia increasingly turns out to be a showcase for ‘poisonous information’, reflecting platforms’ unclear and wavering regulatory bottom line (Gillespie, 2018). Hate speech is often accompanied by online violence, name-calling, threats and ‘human flesh searches’ (crowdsourced doxxing). Indian journalist Sagarika Ghose was threatened with gang rape for speaking out against violence against women. The fact is that platforms often lack clear rules about such hate speech, and they rarely spare the resources to take responsibility for each user’s content. The policies of the past decade have produced a lack of accountability to users and arbitrary enforcement of platform rules (Gillespie, 2016). Good governance helps platforms maintain social goodwill and build brand value, so platforms, which hold the levers of control, should take the initiative and the responsibility to stop the spread of ‘poisonous messages’.

“Is it time for social media to be regulated?” by CBC News. ALL RIGHTS RESERVED. Retrieved from: https://www.youtube.com/watch?v=dL0pckEO4kE

Can the government regulate social media platforms?

In the absence of platform self-regulation, some want to hand the task of regulating social platforms to the government. In fact, government power has long permeated social media: Trump governed partly through Twitter while in office, and during that period a plethora of fake news created a virtual reality for his supporters. China, however, is the prime example of a country that has truly placed platform regulation in the hands of the government. Unlike the Western approach, China has adopted a top-down model in which the government exerts extensive and varied control over social platforms, further securing the stability of the regime and of public opinion (King et al., 2017).

China’s Internet regulator has proposed rules that would impose tough quality standards on all operators and compel social media platforms to censor comments posted by users. The rules provide for a series of strong measures, such as real-name registration, and platforms such as TikTok’s Chinese version are now required to display each commenter’s IP location (Feng, 2022). The effect was obvious: the Internet became a much ‘cleaner’ place, with far less pornography, violent content and fraud. But the price was freedom of expression. The Biden administration, by contrast, has tried to push for transparency and data protection on social media platforms so that media companies can be held accountable for their users’ hateful content or false information (Ghosh, 2021). In practice, government regulation means the state introducing or promoting laws that bind media platforms to change. Whether in China or the United States, government regulation amounts to surrendering more of one’s privacy in exchange for rules that constrain ‘toxic information’ on platforms.

“Tiktok” by TheBetterDay is licensed under CC BY-ND 2.0.


Are individual users responsible for the actions of social platforms?

With COVID-19 spreading around the world in recent years, users are spending more time on social platforms. Public communication on these platforms has almost no threshold, and participation costs nothing. For an individual user with ordinary social needs, it is important to exercise freedom of expression while abiding by each platform’s user code. Maintaining media awareness and a self-imposed ethical code is what unlocks genuine media literacy (Campbell, n.d.). Therefore, if individual users want to reduce negative information and speech on the Internet, the first thing they must do is strengthen their own sense of ethics and reduce the harm caused by their own ill-judged speech. Given the strong interactivity and mediated nature of the digital environment, users show less moral sensitivity online than in face-to-face interaction, so building a more interactive communication environment can help improve users’ moral sense and reduce the generation and spread of ‘poisonous information’.


The development of the Internet has been accompanied by controversy from the very beginning. Information online is a mix of truth and falsehood, and it is difficult to determine the real identity of the person you are chatting with. Countless strangers may praise your comments and pictures today, yet turn abusive or mocking tomorrow and direct online violence at you over a difference of opinion. Platforms need to improve the current network culture and strengthen the auditing of harmful information and its channels of dissemination. Government agencies need to establish sound laws that restrain the behaviour of both platforms and users and achieve a corrective effect. Individual users should strengthen their own sense of morality and their capacity for empathy. All three parties have the responsibility and the obligation to stop the spread of poisonous information; how to reduce such incidents truly and effectively remains a subject worthy of continued attention.


Feng, C. (2022, June 18). China to require sites to hire adequate censors to review online comments. South China Morning Post. https://www.scmp.com/tech/policy/article/3182204/china-tighten-grip-social-media-comments-requiring-sites-employ

DeNardis, L., & Hackl, A. M. (2015). Internet governance by social media platforms. Telecommunications Policy, 39(9), 761–770. https://doi.org/10.1016/j.telpol.2015.04.003

Ghosh, D. (2021, January 14). Are We Entering a New Era of Social Media Regulation? Harvard Business Review. https://hbr.org/2021/01/are-we-entering-a-new-era-of-social-media-regulation

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. In Custodians of the Internet. Yale University Press. https://doi.org/10.12987/9780300235029

King, G., Pan, J., & Roberts, M. E. (2017). How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument. American Political Science Review, 111(3), 484–501. https://doi.org/10.1017/S0003055417000144

Campbell, S. (n.d.). The responsibilities of social media platforms and users. Retrieved 13 October 2022, from https://publicengagement.umich.edu/the-responsibilities-of-social-media-platforms-and-users/

Gillespie, T. (2016). Governance of and by platforms. Culture Digitally. https://www.semanticscholar.org/paper/Governance-of-and-by-platforms-Gillespie/e7c3645db1305757902679d8902c9ae290cb1b7c#paper-header

Zeitzoff, T. (2017). How Social Media Is Changing Conflict. Journal of Conflict Resolution, 61(9), 1970–1991. https://doi.org/10.1177/0022002717721392