Who should be responsible for stopping the spread of hate speech on digital platforms?

Assignment 2-TUT03-Yancen Liu

“No violence no hate speech” by faul is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.

Because social media platforms are built on the Internet, ideas that were once confined to a particular geographic area are no longer constrained by physical distance. The rapid growth of social media has expanded people’s options and opportunities for direct interaction and communication, but this convenience has also fueled a rise in hate speech, which is now harder to contain (Gillespie, 2018). So while the benefits outweigh the drawbacks, the negative effects cannot be ignored.

Hate speech can be a forerunner to atrocity crimes, and it can have terrible impacts on society and on individuals (United Nations, 2019). In recent years, cases of cyberbullying driven by hate speech have become increasingly common, and the victims include a large number of teenagers: a staggering one-third of teens have experienced some form of cyberbullying that has taken a psychological toll on them (O’Dea & Campbell, 2012).

Jamey Rodemeyer was a 14-year-old gay teenager from Buffalo, New York. After speaking openly about his struggles with his sexual orientation, Rodemeyer received hate speech on social media, including messages such as “you shouldn’t have been born.” After suffering bullying both at school and online, he died by suicide in September 2011 (ABC News, 2011).

There is therefore no question that, in order to promote social harmony and public safety, hate speech on digital platforms needs to be limited and prohibited to some extent. Internet media companies and government institutions must share responsibility for this.


Why are the companies responsible for this?

“Web companies from the original web 2.0 logo collage which are still going” by Meg Pickard is licensed under CC BY-NC-SA 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/2.0/?ref=openverse.

With the advent of the Web 2.0 era, in which all types of Internet content are increasingly accessed through private social media platforms and applications, more and more media content has migrated to these services in search of new audiences, and digital media and communications platform companies (Facebook, Apple, Amazon, etc.) have become increasingly influential on a global scale (Flew et al., 2019). As public forums where people exercise their right to freedom of expression, social media platforms help ensure that this freedom is better guaranteed (Gillespie, 2018). At the same time, the rapid development and growing influence of digital platforms has given a huge boost to cultural exchange and economic development in modern society. However, because these platforms carry vast amounts of information and are difficult to regulate, they are also exploited by hateful actors to harm others and destabilize society. The dangers of mass media as a vehicle for hate are not new: during the 1994 Rwandan genocide against the Tutsi, hate propaganda broadcast by the radio station RTLM incited the Hutu majority to kill Tutsis, exacerbating ethnic tensions (United Nations, 2014).

Digital media platforms can also unwittingly contribute to the spread of hate speech. Recommendation algorithms help users find content they are interested in, and hate groups can exploit these same algorithms to improve the ranking of their websites in search results and boost the visibility of their content on social media (Fernandez, 2021). Although Internet platform companies do not actively shape public discourse, these platforms (Facebook, YouTube, Twitter) are important vehicles for the dissemination of speech, which means they should be held accountable for what is disseminated (Gillespie, 2018).


Digital platform companies curb their negative impact by continually updating their policies and reviewing user-posted content to remove material containing hate speech (Flew et al., 2019). However, platforms’ ability to censor is limited: given the speed at which users post and the enormous flow of information, it is impossible for platforms to pre-screen all new content (Flew et al., 2019). Many companies have taken significant steps to stop the spread of hate speech on their platforms, but more work remains to be done to reduce hate speech while preserving freedom of expression, a difficult balance to achieve.

Government agencies’ measures to stop hate speech on the Internet

“The Law” by smlp.co.uk is licensed under CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/?ref=openverse.

The proliferation of online hate speech and the harm it causes is first and foremost a social phenomenon, not just an online one. Therefore, in addition to measures taken by digital platform companies themselves, it is necessary, and more effective, for government agencies to enact relevant laws and impose penalties for online hate speech. The proliferation of disinformation and “fake news” on social media during the Russia-Ukraine conflict has played an extremely pernicious role in fostering online hate speech and sectarian sentiment, reinforcing the radical views of certain individuals and potentially leading to extreme verbal and physical violence (Brown, 2022). This shows that hate speech on social networks can threaten national security and fuel international conflict, and that it is difficult to control online hate speech and its consequences through company self-regulation alone.

Policymakers and international organizations are aware of the potentially devastating consequences of hate speech, and governments have established their own content standards and enforcement methods to regulate online hate speech. A 2018 law in Germany, for example, requires large social media platforms to remove posts that are “manifestly unlawful” under German law within 24 hours (Laub, 2019). Government oversight allows for a more precise standard of what constitutes illegal hate speech online, and beyond the legal constraints and penalties themselves, fear of the consequences of breaking the law can effectively deter the spread of hate speech.

However, global Internet governance is difficult to reach consensus on because of differences in culture, economy, legal policy, and political systems across countries; the European Union, for example, regulates the interventions of digital platform companies more strictly than the United States does (Flew et al., 2019). There can therefore be no single global standard for regulating the Internet: standards for hate speech vary from country to country, and where standards differ, the regulation of Internet platforms is often complicated by legal conflicts (Flew et al., 2019). For example, the U.S. and some EU countries apply different criteria for identifying hate speech. Social media companies such as Facebook are U.S. companies, but their social influence extends to other countries and regions, and reconciling different countries’ governance requirements for hate speech remains a challenge.

While there are conflicts and differences between governments over the control of hate speech, all have taken steps, to varying degrees, to curb the growth of hate speech online, and Internet companies are better able to control their content when guided by legal codes of conduct. The current approach to controlling online hate speech is mainly government-led and platform-implemented: governments propose regulatory requirements, standards, and punishment regimes based on the impact of hate speech on collective morality and values, and platforms implement them according to their own community guidelines and the judgment of individual moderators (Gillespie, 2018). Finally, the Internet has made hate speech and hate crimes transnational and globalized, which raises the problem of joint enforcement: in practice, punishment requires coordinating different judicial and law-enforcement systems and entails high enforcement costs. The governance of hate speech worldwide therefore remains a major challenge. In any case, stopping online hate speech is a shared responsibility and obligation of governments and companies.



References

ABC News. (2011, September 22). Lady Gaga calls for anti-bullying law after teenager’s suicide following cyber-bullying [Video]. YouTube. Retrieved October 5, 2022, from https://www.youtube.com/watch?v=Nif28JkDifg

Brown, S. (2022, April 6). In Russia-Ukraine war, social media stokes ingenuity, disinformation. MIT Sloan. Retrieved October 5, 2022, from https://mitsloan.mit.edu/ideas-made-to-matter/russia-ukraine-war-social-media-stokes-ingenuity-disinformation

Fernandez, H. (2021). Curbing hate online: What companies should do now. Center for American Progress. Retrieved October 5, 2022, from https://www.americanprogress.org/article/curbing-hate-online-companies-now/

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press.

Laub, Z. (2019). Hate speech on social media: Global comparisons. Council on Foreign Relations. Retrieved October 5, 2022, from https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons

O’Dea, B., & Campbell, A. (2012). Online social networking and the experience of cyber-bullying. Annual Review of Cybertherapy and Telemedicine, 181, 212–217. https://doi.org/10.3233/978-1-61499-121-2-212

United Nations. (2014). Never again: An interview with Adama Dieng, Special Adviser on the Prevention of Genocide [Video]. YouTube. Retrieved October 5, 2022, from https://www.youtube.com/watch?v=4dlBjGXGHa8

United Nations. (2019). Stopping hate speech [Video]. YouTube. Retrieved October 5, 2022, from https://www.youtube.com/watch?v=rnbcQT-b8ak&t=9s