Whose responsibility is it to stop bullying, harassment, violent content, hatred, pornography, and other problematic content from spreading on digital platforms, and how can it be stopped?

“Social Media Keyboard” by Shahid Abdullah is marked with CC0 1.0. Retrieved from: https://creativecommons.org/publicdomain/zero/1.0/?ref=openverse.

What Does Violent Content Mean?

While social media platforms give users new opportunities to interact and engage with a wider spectrum of individuals (Boyd, 2011; Varnelis, 2008, cited in Gillespie, 2018), it is also clear that these platforms pose risks (Gillespie, 2018). The Internet is flooded with destructive content such as violence and pornography because information can be transmitted rapidly, easily, and cheaply online (Litan, 2001). Violent content on the Internet mainly refers to content, presented in words, pictures, or audio-visual form, that seriously infringes on or damages life and health (eSafety, 2022). Such content includes not only direct depictions of real-world violent acts and phenomena but also portrayals of fictional violent scenes. In addition, some online expression is closely bound up with violence, such as the glorification of, propaganda for, or incitement to violence, hatred, and terrorism; all of this can be called cyber violence in the broad sense.

The Dark Sides of Digital Platforms

1. TikTok

TikTok’s secret algorithm is meticulously designed to keep users scrolling for as long as possible.


On July 3, 2018, TikTok was banned in Indonesia after the Indonesian government accused it of publishing “pornography, inappropriate content, and blasphemy” (Cox, 2018). After concerns over pornographic content surfaced in February 2019, several Indian lawmakers advocated banning or better regulating TikTok over problematic content, cyberbullying, and deepfakes, which replace real people in existing photographs or films with the likenesses of others (Cox, 2018). Extremist material has also appeared on TikTok: ISIS recruitment material was reportedly discovered on the platform in October 2019, in videos portraying ISIS militants and showing weaponry, dead bodies, and beheadings.

ISIS turns to TikTok for recruitment: ISIS is reportedly using one of the world’s most popular video-sharing apps as its latest recruiting tool. According to The Wall Street Journal, TikTok has removed nearly two dozen accounts that posted propaganda. Georgia Wells joins CBSN’s Nikki Battiste for a closer look.


2. Suicide of Amanda Todd

Amanda Todd was a young girl from British Columbia who suffered from cyberbullying. In her video “My Story: Struggle, Bullying, Suicide, Self-harm,” she describes how she was blackmailed and abused. By the end of October 2012, more than 17 million people had seen the video on YouTube. On October 10, 2012, more than a month after the video’s September 7 premiere, Amanda hanged herself in her home.

“Stop” by kevin dooley is licensed under CC BY 2.0.

Who Should Stop It?

The network is much more difficult to regulate than any traditional medium. The Internet has changed how information is produced and transmitted, and user-generated content has become the main force in cyberspace, especially on social media. Faced with this situation, it is far from enough for the government alone to control harmful information in cyberspace. Regulating problematic Internet content requires the government, companies, industries, and users each to take responsibility and work together to build a comprehensive governance system. Today there are broadly three modes of Internet governance around the world: a government-led mode, an industry self-regulation mode, and a government-industry co-governance mode. The government-led mode is most prominent in South Korea; the United States and Britain are the most typical examples of self-regulation; and Germany, France, and Singapore represent government-industry co-governance.


1. Regulation by Code and Design

“Government” by Nick Youngson is licensed under CC BY-SA 3.0. Retrieved from: https://www.picpedia.org/highway-signs/g/government.html

In many countries, grading systems, intelligent identification, and other technical means are the main tools for controlling violent content online (UNODC, 2012). For example, the United States, applying the same concepts and methods it uses to control pornographic content, mainly regulates violent content through rating and technical controls that treat adult and adolescent users differently. In terms of rating, the United States began using the Platform for Internet Content Selection (PICS) in 1996. Modelled on American film rating standards, it divides network information into five levels according to sex, violence, nudity, profanity, and so on; the higher the level, the more restricted minors’ access (Kuchta, 2017). Moreover, the Entertainment Software Rating Board (ESRB), established in the United States in 1994, developed a game rating system. This system divides games into seven levels, with the amount and degree of violence and pornography as the key basis of the grading (Kohler, 2009). Of course, the debate over violent video game ratings continues today: although the ESRB is one of the best and most informative rating systems, some advocates have called for government regulation of the sector (Kohler, 2009).
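The core logic of such a rating system is simple: each label maps to a minimum age, and content is blocked for users below that age. The sketch below illustrates the idea; the labels and age thresholds are hypothetical, not the actual ESRB or PICS definitions.

```python
# Illustrative sketch of rating-based age gating.
# The rating labels and minimum ages are assumptions for demonstration,
# not real ESRB categories.

RATING_MIN_AGE = {
    "EVERYONE": 0,
    "TEEN": 13,
    "MATURE": 17,
    "ADULTS_ONLY": 18,
}

def is_allowed(rating: str, user_age: int) -> bool:
    """Return True if a user of the given age may view content with this rating.

    Unknown rating labels are treated as restricted, the safer default
    for a filtering system.
    """
    min_age = RATING_MIN_AGE.get(rating)
    if min_age is None:
        return False
    return user_age >= min_age

print(is_allowed("TEEN", 15))     # True
print(is_allowed("MATURE", 15))   # False
print(is_allowed("UNRATED", 30))  # False: unknown labels are blocked
```

Treating unknown labels as restricted mirrors the precautionary stance of the systems described above: content is only shown once it has been positively classified as suitable.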


“Social Media Mix 3D Icons” by Blogtrepreneur is licensed under CC BY 2.0.

The huge volume, complexity, and technical character of the Internet make direct government regulation very difficult, so self-regulation by Internet companies and the industry has become an important part of network governance. From the beginning, network governance has been informal and decentralized. Although, with the wide spread of Internet technology and applications, all countries have brought the Internet within the scope of legal regulation, it is undeniable that private actors have always occupied a unique, important, and even dominant position in network governance. The same is true for the governance of cyber violence.

Facebook, Google, Microsoft, and Twitter jointly announced that they would work together to curb overtly terrorist images on the Internet, after the European Union criticized several major American social media companies for not doing enough to curb hate speech (Fioretti, 2018). These companies said in a statement that they were building new technology to identify extremist content, including terrorist recruitment videos and execution images, through a digital fingerprint called a “hash,” to be collected in a shared global database; they also insisted that the technology would not be used as a general censorship tool. Once a hash is created, it is attached to the content like a watermark, making the content easy to identify and delete. Hashing is likewise a standard step in profiling suspicious files, although many malicious-code analysts perform these steps in a different sequence or alter some of them based on prior knowledge or the context of the code (Aquilina, 2008).
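The hash-matching approach the companies describe can be sketched in a few lines: compute a fingerprint of each uploaded file and check it against a shared database of fingerprints from previously removed content. This is a minimal illustration using a cryptographic hash (SHA-256); the variable names and sample data are invented for the example.

```python
import hashlib

# Shared database of fingerprints of previously removed content
# (in practice this would be a database shared across platforms).
known_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a SHA-256 digest that uniquely identifies this exact file."""
    return hashlib.sha256(content).hexdigest()

def is_known_banned(content: bytes) -> bool:
    """Check an upload against the shared database of banned fingerprints."""
    return fingerprint(content) in known_hashes

# A moderator removes a piece of content and records its fingerprint.
banned = b"bytes of a previously removed propaganda video"
known_hashes.add(fingerprint(banned))

# Later uploads of the identical file are caught automatically.
print(is_known_banned(banned))               # True
print(is_known_banned(b"unrelated content")) # False
```

Note that a cryptographic hash only catches byte-for-byte identical copies; industry systems for images and video typically use perceptual hashes, which remain stable under re-encoding and minor edits, but the database-lookup workflow is the same.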

2. User

“Internet Users in the State Library of Victoria” by ricklibrarian is licensed under CC BY-NC 2.0.

The Internet has become an indispensable part of ordinary people’s work and life. Harmful online content infringes on the interests of almost everyone and every family, so users themselves need to strengthen their sense of responsibility, improve their self-regulation, and actively participate in the governance of violent content. On the one hand, users should be strictly self-disciplined and not take part in producing or transmitting violent information; on the other hand, they should actively monitor violent information by reporting it and filing complaints.


Looking Forward to the Future

There are many potential challenges in supervising platform companies, including the relative novelty of their business models, the serious threats some government interventions pose to freedom of speech, the lack of meaningful policy experiments and identifiable precedents, and the fear of stifling future innovation (Gorwa, 2019). In short, network information technology is the biggest technological revolution of the 20th century, and in the face of its rapid development the situation for governing violent online content is severe. However, if regulatory standards become more refined, scientific, and reasonable; if governments, companies, industries, society, and users co-govern through shared mechanisms; and if advanced technologies are used comprehensively, then harmful violent content online can be kept within a certain range, even if it cannot be completely eradicated.


“michael-nuccitelli-stop-online-child-pornography” by iPredator is marked with CC0 1.0.






Aquilina, J. M. (2008). Chapter 8 – File Identification and Profiling: Initial Analysis of a Suspect File On a Linux System. In J. M. Aquilina (Ed.), Malware Forensics (pp. 379–488). Syngress. https://doi.org/10.1016/B978-1-59749-268-3.00008-6

Cox, J. (2018, December 18). TikTok Has a Nazi Problem. Vice. https://www.vice.com/en/article/yw74gy/tiktok-neo-nazis-white-supremacy

eSafety. (2022). What is illegal and restricted online content? ESafety Commissioner. https://www.esafety.gov.au/report/what-is-illegal-restricted-content

Fioretti, J. (2018, January 19). Social media companies accelerate removals of online hate speech -EU. Reuters. https://www.reuters.com/article/eu-hatespeech-idUSL8N1PC5QK

Gillespie, T. (2018). Regulation of and by Platforms. In J. Burgess, A. Marwick, & T. Poell, The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications Ltd. https://doi.org/10.4135/9781473984066.n15

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://policyreview.info/articles/analysis/platform-governance-triangle-conceptualising-informal-regulation-online-content

Kohler, C. (2009). July 29, 1994: Videogame Makers Propose Ratings Board to Congress. Wired. https://www.wired.com/2009/07/dayintech-0729/

Kuchta, R. (2017, October 9). The hash—A computer file’s digital fingerprint. Newtech.Law. https://newtech.law/en/the-hash-a-computer-files-digital-fingerprint/

Litan, R. E., & Rivlin, A. M. (2001). The Economy and the Internet: What Lies Ahead? Brookings. https://www.brookings.edu/research/the-economy-and-the-internet-what-lies-ahead/

UNODC. (2012). The use of the Internet for terrorist purposes. United Nations Office on Drugs and Crime.