Who Should Be Responsible for Stopping the Spread of Hateful, Inappropriate, and Illegal Content on Digital Platforms?

Federal and international policies can play a key role in preventing the spread of hateful and problematic content online

"Social Media Logos" by BrickinNick is licensed under CC BY-NC 2.0.


Local, federal, and international policies governing the use of digital platforms can play a key role in preventing the spread of hateful and problematic content online through compliance requirements and potential legal actions against offenders. Formal and informal awareness campaigns and initiatives on social media can complement this prevention framework. Internet service providers (ISPs) and social media networks should also take responsibility for monitoring activity spread through their services and work directly with governments and law enforcement to ensure the safety of their users. However, proponents of free speech argue that regulating speech online will erode ideological diversity and restrict other forms of speech. Within this context, this essay critically examines the roles of the community, service providers, government, and law enforcement in preventing the spread of problematic content online.

“Instagram and other Social Media Apps” by Jason A. Howie is licensed under CC BY 2.0.

How Does Hateful and Problematic Content Spread Online?

Despite violating platform rules and, in many cases, the law, bullying, hate, and other problematic content are commonly spread across digital spaces. Being online gives some users a feeling of control and invisibility that may inspire hateful and problematic content, and free speech gives users the right to voice their views. However, free speech does not give everyone a free pass to incite hate and violence. Sexual harassment, violent threats, and the spreading of problematic misinformation have emerged across nearly all types of digital platforms (Lumsden & Morgan, 2017). Harmful content continues to degrade social health and destroy the trust that people have in one another. Further, those who create and spread hateful and problematic content very rarely see any consequences for their actions, even if the content is reported and illegal. Research from Mathew et al. (2019) shows that users on social media become easily desensitized to such content and, in turn, are indifferent to the harm that it causes. This indifferent and passive behavior from other users exacerbates the spread of problematic content because offenders know that there are little to no consequences for their actions.

Social Media Mix 3D Icons – Mix #2 by Visual Content is licensed under CC BY 2.0.

How to Stop the Spread of Problematic, Hateful, and Illegal Content Online

Online hate speech and the spread of problematic content are pervasive, and in recent years, there has been an increase in harassment and threats under the banner of free speech. The rise in online harassment and threats has resulted in real-life violence, and ISPs, social media platforms, advocacy groups, governments, and law enforcement all play key roles in stopping the spread of problematic content online and related violence offline. Society can begin to navigate this issue by taking collective action to condemn hate online and putting social pressure on those who violate norms and laws, such as preventative measures and awareness campaigns, laws and policies to prosecute offenders, and protection and support services for victims. Currently, there are no sweeping international guidelines or policies regarding cyberbullying and other problematic behaviors, but as of 2016, the European Union has implemented the Code of Conduct on Countering Illegal Hate Speech Online, and several states in the United States have successfully prosecuted hate speech and cyberbullying cases. Laws and codes targeting online harassment are working to curb the spread of hateful and problematic content online, and individuals and groups who promote online harassment are finally being held accountable.

One of the biggest problems with online harassment and hate is that many victims do not have the knowledge or resources to pursue legal action, and victims are often unable to remove problematic content, such as identifying information and harassing messages, images, and videos, from some digital spaces. Victims are also concerned that the more they pursue and try to resolve the issue, the more they will be exposed to further cyberbullying and harassment (Carter, 2021). Therefore, policies, laws, and campaigns need to focus on the rights of victims and on restoring justice for them. Unfortunately, the legal landscape for cyberbullying and other types of harassment, like revenge porn, is only recently catching up to help victims and prosecute perpetrators. For example, revenge porn has been successfully prosecuted in Australia, the EU, the US, and other countries (Magaldi et al., 2020). Under laws in New South Wales, for example, someone who distributes an intimate image without consent may be prosecuted under the Crimes Act 1900 (NSW). In South Australia, merely threatening to distribute an intimate image is a criminal offense. Even so, victim reports show that offending content often remains available online despite breaking harassment, stalking, and privacy laws (Magaldi et al., 2020).

Looking at the issue from a political perspective, each country has its own way of governing problematic content. Section 230 of the Communications Decency Act (1996) in the US provides immunity to ISPs and online platforms for hosting any type of third-party content. A Gallup poll shows that nearly 80 percent of Americans think ISPs and online platforms should be more closely regulated and held accountable for curbing the spread of hateful and illegal content online. As such, ISPs and online platforms must be held responsible for hosting hate and harassment and for letting offenders continue to harass and incite violence. Since most of the major digital platforms, like Facebook and YouTube, are hosted in the US, policymakers in the US need to address Section 230 and make ISPs and online platforms financially and legally responsible for allowing offenders to spread such content.

Additionally, law enforcement agencies, ISPs, and online platforms can work together to address issues of harassment, hate, and other problematic content online more efficiently by integrating technological solutions to trace offenders and ensure that they face legal consequences. Platforms like YouTube, Facebook, TikTok, and Instagram have administrative representatives who oversee overall operations and protect users from harm (Arifin et al., 2022). These representatives can intervene by closing accounts and imposing restrictive measures against users to ensure that they do not erode acceptable practices. Every user should be expected to adhere to the platform's provisions and meet its requirements for their account to remain operational and active. These sites also have mechanisms for preventing the spread of such information by using identifying details to ban users who propagate offensive and inappropriate content that violates the platform's policies and rules of user conduct.

Lastly, online platforms must also take responsibility for helping the victims of harassment and hate speech and for researching and funding awareness campaigns. They can work directly with international groups, like the United Nations, that advocate for global peace and social cohesion. These campaigns have proven helpful in preventing the spread of problematic content and giving victims a voice. The assumption behind this approach is that knowledge and awareness fuel positive behavioral change (Weinstein, 2018). For instance, social media users should be made aware early on of the adverse effects of exposure to pornographic content and the potential harm to behavior from effects like addiction and irresponsible sexual conduct (Weinstein, 2018). Public awareness campaigns can significantly reduce the spread of such content because people become conscious of the outcomes and the consequences for innocent victims.

“Internet Law” by Blogtrepreneur is licensed under CC BY 2.0.



ISPs and online platforms have claimed that they are not responsible for the content that users share, but policymakers across the globe are placing increased scrutiny on these sites for allowing hateful, inappropriate, and illegal content to spread and inspire real-life violence. This issue has been at the forefront of policymaking, education, and the justice system as more and more communities are affected by problematic content. Policies and laws governing the spread of hateful, illegal, and other problematic content play a key role in protecting users, increasing compliance, and preventing future offenses. Awareness campaigns initiated by public and private institutions complement the legal framework and increase societal pressure to address and call out offenders. Collectively, these measures also hold platforms accountable for hate speech, harassment, and the incitement of violence. Online platforms have mechanisms to report videos, posts, and other images that spread hate, misinformation, and violence, and users who call out problematic content can boost positive and supportive messages on the internet.

References

Arifin, R., Wedhatami, B., Alkadri, R., & Nte, N. D. (2022, September). The internet gang of violence: Trend of cyberbullying on the internet. In AIP Conference Proceedings (Vol. 2573, No. 1, p. 040012). AIP Publishing LLC. https://doi.org/10.1063/5.0109805

Carter, C. (2021). An Update on the Legal Landscape of Revenge Porn. National Association of Attorneys General. https://www.naag.org/attorney-general-journal/an-update-on-the-legal-landscape-of-revenge-porn/

Hatzipanagos, R. (2018). How online hate turns into real-life violence. The Washington Post. https://www.washingtonpost.com/nation/2018/11/30/how-online-hate-speech-is-fueling-real-life-violence/

Lumsden, K., & Morgan, H. (2017). Media framing of trolling and online abuse: silencing strategies, symbolic violence, and victim blaming. Feminist Media Studies, 17(6), 926-940. https://doi.org/10.1080/14680777.2017.1316755

Magaldi, J. A., Sales, J. S., & Paul, J. (2020). Revenge porn: The name doesn't do nonconsensual pornography justice and the remedies don't offer the victims enough justice. Oregon Law Review, 98, 197. https://scholarsbank.uoregon.edu/xmlui/handle/1794/25136

Mathew, B., Dutt, R., Goyal, P., & Mukherjee, A. (2019, June). Spread of hate speech in online social media. In Proceedings of the 10th ACM Conference on Web Science (pp. 173-182). https://doi.org/10.1145/3292522.3326034

RMIT University. (2019). One in five Australians is a victim of ‘revenge porn’, despite new laws to prevent it. https://www.rmit.edu.au/news/all-news/2019/jul/revenge-porn-laws

Weinstein, E. (2018). The social media see-saw: Positive and negative influences on adolescents’ affective well-being. New Media & Society, 20(10), 3597-3623. https://doi.org/10.1177/1461444818755634

Yaraghi, N. (2019). How should social media platforms combat misinformation and hate speech? The Brookings Institute. https://www.brookings.edu/blog/techtank/2019/04/09/how-should-social-media-platforms-combat-misinformation-and-hate-speech/