Who should be responsible for stopping “hate content” on digital platforms and how?

“Social Media Icons Color Splash Montage” by Blogtrepreneur is licensed under CC BY 2.0

“Social Media Butterfly – Instagram” by Blogtrepreneur is licensed under CC BY 2.0

In the Web 2.0 era, the two most important developments can be described as “collective Web content generation” and “social media platforms” (Han, 2011, p. 25). As Potts argues, the Internet age has allowed social media platforms to flourish through “the process of co-evolution of the market and non-market” (Quiggin & Potts, 2008, p. 147). However, against this backdrop of rapid development, the Internet spreads positive content to society and individuals on the one hand, while on the other hand negative content such as bullying, harassment, violent content, hate speech, and pornography also spreads through social media platforms. This article therefore examines who should be responsible for preventing the spread of such content and how it can be stopped: the platforms (the companies that own and manage them), individuals as users of the Internet, and national government agencies should all intervene to prevent hate content on digital platforms.

“Web 2.0 Expo” by Julie Pimentel is licensed under CC BY-NC 2.0

Who should be responsible for stopping “hate content”?

  • Platforms (the companies who own and manage these platforms)
  • Individuals (as users of digital platforms)
  • Government

The companies that own and manage these platforms should bear the greatest responsibility for stopping the spread of hate content. A platform can be defined as a programmable architecture that organizes interactions among users and, in particular, shapes the way we live and organize our society (Gehl, as cited in Van, 2018, p. 10). As Goffman (1959, as cited in Marwick, 2013, p. 10) argues, people present themselves in different ways depending on their context. This indicates that digital technology facilitates the proliferation of identity creation. In other words, the online identities a platform supports create a bridge for building relationships and connections with others (Waskul et al., 2016, p. 91). However, when issues such as bullying, harassment, pornography, violent content, and hatred spread on digital platforms, they damage not only users’ perception of the platform but also the operation and development of the platform itself. Therefore, the companies that own and manage these platforms must take primary responsibility for preventing such harms.

“Non-Legal Responses to Hate Speech on Digital Platforms” by University of Delaware. ALL RIGHTS RESERVED. Retrieved from: https://www.youtube.com/watch?v=IsvheIHDbK4

In addition, social media platforms such as Twitter, Facebook, and YouTube are challenging the old order and disrupting various sectors, such as journalism (De Kloet et al., 2019). According to a 2019 report from CBS Mornings, the global increase in teen suicide is linked to social media platforms’ indifference to hate content (CBS Mornings, 2019). Ian Russell, for example, argues that social media platforms are partly to blame for teen suicide because the platform “invited people who might have self-harmed into the club and normalised those behaviors…” (CBS Mornings, 2019). It is the lack of regulation by social media companies on their own platforms that has led to many such misfortunes. Yet digital media companies often describe themselves as “communication intermediaries” rather than media companies (Gillespie, 2018). It is important to note that as intervention in the management and curation of digital platforms continues to grow, social media companies (the owners of the platforms) must increasingly take on the responsibility of managing content and monitoring user activity, not only to maintain their corporate image but also to protect users and reduce the online fermentation of harmful content (Flew, Martin & Suzor, 2019, p. 38).
“USER” by Alexander Svensson is licensed under CC BY 2.0

In the Web 2.0 era, then, platform owners are primarily responsible for hate content on social media platforms. However, the users of those platforms, that is, individuals, must also assume a secondary responsibility: refraining from verbal attacks. In 2020, the murder of George Floyd sent shock ripples across the globe. The case exposed a central tension: “technology,” meaning social media and other information platforms, helps people become more open while at the same time laying bare the roots of hatred (Shivangi, 2020). In many cases, users do not merely receive and disseminate information; especially in an era of rapid Internet development, users drive the development of the platform itself. Therefore, every individual on the Internet should use their own strength to avoid and prevent the fermentation of harmful speech and hate content.


"government" by Mike Lawrence is licensed under CC BY 2.0

Moreover, the information technology revolution has not only changed people’s daily lives but also the interaction between governments and citizens. In other words, the government has also managed citizens in a new way (Chun et al., 2010, p. 1).

Furthermore, government agencies should also do their utmost to stop hate speech and other harmful speech on the Internet. Taking the Russo-Ukrainian war as an example, social media platforms such as Facebook, Twitter, and Instagram have been flooded with “fake news” and misinformation designed to attract attention, which has also contributed to the proliferation of hate speech and violent content (Barbara, 2022). Although social media can make Ukrainians feel less alone, the key problem is that its impact cuts both ways: on the one hand, it allows people all over the world to witness the violence and the resistance; on the other hand, it can prolong this harmful phenomenon through exaggeration or fake news. All of this shows that hate speech and violent content undoubtedly threaten the security of the country and society. Maintaining social and political stability therefore requires government involvement, not just the platforms (the companies that own and manage them) or the users of social media platforms. It takes the joint efforts of all three parties to create a safe and stable social media environment.

“How To Stop Hate Online” by Samtime. ALL RIGHTS RESERVED. Retrieved from: https://www.youtube.com/watch?v=_JVJNdPiRf4

How to stop hate content on digital platforms?

"Twitter" by Fotero is licensed under CC BY-NC 2.0

For the platforms, which bear the most responsibility (the companies that own and manage them), the first step in stopping hateful content from growing is to strengthen content moderation. Specifically, when two or more users generate hate speech about a particular matter on the platform, content moderators should immediately delete or block the entry. Other objectionable content, such as the distribution of pornographic images, bullying, or harassment, should be dealt with in the same way. In particular, companies that manage platforms can choose to hire in-house teams instead of outsourcing content moderation, which would raise the quality and standard of moderation (Barrett, 2020).


"Tik-tok HEAD" by GwynethJones is licensed under CC BY-NC-SA 2.0

Since social media platforms give users a great deal of control over content creation, the comments users post reflect different backgrounds, cultures, and life experiences (Hill, 2017). Users are deeply involved in media production, which gives them strong control and lets them adjust content at any time according to their needs. As it stands, it is not clear what concrete steps users can take to stop harmful phenomena from developing; in the end, only self-restraint can be exercised. As the core values promoted by TikTok put it, users of the platform are expected to ensure that “anything they create and share should not hurt the local community” while making people feel happy and enjoy their creativity (TikTok, 2016). Therefore, everyone on the Internet should remember that the Internet is meant to bring people closer, not to amplify harmful content.

At the same time, government departments can regulate harmful speech on Internet platforms by establishing and strictly enforcing laws and regulations. Specifically, a real-name identity system would put appropriate pressure on users, further reducing and avoiding the occurrence of hate content online. In addition, within that legal framework, measures proportionate to the seriousness of the offence can be taken, rather than mere verbal warnings or bans. However, the government often cannot respond in a timely manner and tends to notice harmful speech only after it has developed seriously. The speed at which content is uploaded and updated on platforms is one of the reasons government departments will need to think further about specific policies in the future.


“Future Perfect” by Brett Burton is licensed under CC BY-NC 2.0

In summary, it is impossible to say that any single party must be responsible for stopping the spread of hate content. In the context of the rapid development of digital media platforms, the companies that manage and own the platforms, the users of the platforms (individuals), and government departments all share the responsibility to jointly prevent harmful content. How to effectively stop the spread of harmful phenomena on social platforms will remain a subject for exploration in the future.



Barrett, P. M. (2020). Who moderates the social media giants? NYU Stern Center for Business and Human Rights.

Barbara, G. (2022, September 3). Invasion of Ukraine highlights social media’s role in war. NEWS @ TheU. https://news.miami.edu/stories/2022/03/invasion-of-ukraine-highlights-social-medias-role-in-war.html

Chun, S., Shulman, S., Sandoval, R., & Hovy, E. (2010). Government 2.0: Making connections between citizens, data and government. Information Polity, 15(1-2), 1-9. https://doi.org/10.3233/IP-2010-0205

CBS Mornings. (2019). Parents of teens who died by suicide hope speaking up will prevent more deaths. https://www.cbsnews.com/news/teen-suicide-social-media-bullying-mental-health-contributing-to-rise-in-deaths/

De Kloet, J., Poell, T., Guohua, Z., & Yiu Fai, C. (2019). The platformization of Chinese Society: infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. https://doi.org/10.1080/17544750.2019.1644008

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50. https://doi.org/10.1386/jdmp.10.1.33_1

Hill, A. (2017). Reality TV Engagement: Producer and Audience Relations for Reality Talent Shows. Media Industries (Austin, Tex.), 4(1), 1–. https://doi.org/10.3998/mij.15031809.0004.106

Han, S. (2011). Web 2.0. Routledge. https://doi.org/10.4324/9780203855225

Marwick, A. (2013). Memes. Contexts, 12(4), 12-13. https://doi.org/10.1177/1536504213511210

Outsource Accelerator. (2019). What is content moderation. https://www.outsourceaccelerator.com/articles/content-moderation/

Quiggin, J., & Potts, J. (2008). Economics of non-market innovation and digital literacy. Media International Australia, 128, 144-150. https://doi.org/10.3316/ielapa.200810221

SAMTIME Explained. (2018, September 26). How to stop hate online. [Video]. YouTube. https://www.youtube.com/watch?v=_JVJNdPiRf4

Shivangi, C. (2020, July 11). How social media is helping combat racism. Voice Of Youth. https://www.voicesofyouth.org/blog/how-social-media-helping-combat-racism

Tik Tok. (2016). Tik Tok Mission Statement Interpretation. https://mission-statement.com/tik-tok/

University of Delaware. (2019, April 18). Non-legal responses to hate speech on digital platforms [Video]. YouTube. https://www.youtube.com/watch?v=IsvheIHDbK4

Van, J. (2018). The Platform Society as a Contested Concept. In The Platform Society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.003.0002

Waskul, D., Vannini, P., & Klein, U. (2016). Sharing selfies. In Popular culture as Everyday Life (pp. 85–95). Routledge, Taylor & Francis Group.