The management of hate speech

Post no hate poster. United Nations Interregional Crime and Justice Research Institute. Some rights reserved.

In the contemporary context, with the advanced development of social media, individuals can share their voices on multiple platforms, initially without restriction. Social media platforms have become spaces that allow and support diversity of opinion, as communities are free to form their own arguments and judgments. Society has arguably become more cohesive, with social media platforms serving as tools to connect people and create strong ties between them. Yet while acting as a place where freedom of speech is encouraged, social media has also allowed some entities to spread toxic content, namely illegal or hate speech. This demonstrates that social media, while enabling a diverse range of voices, must also be managed cautiously in order to govern such toxic user behavior. In this context, illegal and hate speech is regulated by German law and by Europe's 2016 Code of Conduct on Countering Illegal Hate Speech Online. This essay analyzes the historical background of these two online regulatory instruments, the arguments surrounding them, and the topic of illegal and hate speech in general.

Freedom of speech in any context, online or offline, should be exercised in line with social norms. In other words, it should be embedded in our moral values, and it is therefore essential to detect and govern toxic behavior that violates society's ethical standards. Individuals often share their opinions on social media because it is a space where they can be detached from their real identity, which motivates them to act very differently. On Reddit, for instance, individuals can create multiple user accounts, meaning users can commit hate speech repeatedly because their identity can simply be renewed (Massanari, 2017). According to Gagliardone et al. (2015), online hate speech can be read through a range of tensions: as a representation of conflicts between social groups that lack mutual understanding, or as one of the opportunities and challenges the internet presents to our community. It also touches on our fundamental rights, namely freedom of speech. As Gillespie (2018) notes, detecting illegal and hate speech can be very challenging because there are no clear boundaries between what is acceptable and what is not. This is because illegal and hate speech are approached very differently across cultures: some forms of expression are tolerated in certain cultures but not in others. A number of European countries, notably Germany and France, have therefore approached the topic by banning specific forms of speech, not only because of their high potential to lead to harmful situations but also because of their inherently toxic nature (Gagliardone et al., 2015).
According to Wahl (2018), the Code of Conduct on Countering Illegal Hate Speech has shown positive results: an assessment found that 70% of notified illegal hate speech was removed by social media companies after notification by NGOs. According to the European Commission (2016), social media platforms, also referred to as IT companies, including Facebook, Microsoft, Twitter and YouTube, have joined the EU Internet Forum alongside other social media enterprises in order to actively take responsibility and promote positive freedom of expression.

Illegal hate speech has been defined as the public expression of negative social phenomena such as racism and xenophobia: speech inciting hatred and violence toward a specific social group based on characteristics such as race, color, or religion (European Commission, 2016). Social media platforms and the European Commission have recognized the harmful effects that hate speech can impose on society and have therefore committed to responding to hateful online content and countering illegal online hate speech. According to the European Commission (2016), online hate speech not only harms the targeted social groups but also violates the freedom of speech of those who use social media in a non-discriminatory way, since it can deter their online participation. Under the terms agreed between the platforms and the European Commission, the platforms undertake certain public commitments intended to ensure the Code's effectiveness in preventing and eliminating toxic or discriminatory behavior online. One of the most significant commitments is that IT companies will review the majority of valid notifications within 24 hours and remove or disable access to such content (European Commission, 2016). Another significant commitment is that platforms take active responsibility for education and awareness-raising by establishing clear terms in their community guidelines. This enables social media users to take a proactive role in detecting violating content and, in some cases, to accept that they have themselves violated the terms. According to the Council of the European Union (2019), the Code has actively required IT enterprises to adopt community standards banning illegal hate speech, along with effective systems and teams to review reported content.
This has produced results: the percentage of flagged content reviewed within 24 hours rose from 81% in 2018 to 89% in 2019. Importantly, it has been reported that IT companies are actively providing regular training to support their content reviewers (Jourová, 2019).

Similarly, in another discourse, Germany's Network Enforcement Act, commonly referred to as NetzDG, came into force in 2017 and likewise commits social media platforms to removing or regulating hate speech within 24 hours, with fines of up to 50 million euros for companies that fail to comply (Lomas, 2020). However, it has been observed that NetzDG could be used by large social media platforms to aid the state in collecting citizens' data without legitimate validation.

NetzDG claimed to be forwarding personal data. Twitter. All rights reserved.


The newly arising concern is that the personal data of the authors of reported posts would be sent directly to the police, risking entangling individuals who posted no violating content in serious police matters (Lomas, 2020). It has also been noted that users whose posts are flagged receive only a minor notification indicating that their personal data will be delivered to the police. Nevertheless, NetzDG is a decent attempt to prevent individuals from spreading hate speech freely. Anas Modamani, who was falsely accused in social media posts of involvement in terror attacks (Evans, 2017), suggested that mass deletion under the law would be acceptable as long as fake news and hate speech were expelled from social media platforms.
Anas Modamani being falsely linked to terror attacks in posts on social media. BBC. Some rights reserved.

According to Lomas (2020), the chilling effect of such regulation, whereby users refrain from expressing themselves for fear of becoming entangled in troublesome situations, should not be ignored. This has led social media platforms to make significant changes to their rules and governance with respect to the EU Code of Conduct on hate speech. Compared with the EU Code of Conduct, NetzDG is considered more prone to leading platforms to remove content excessively in order to avoid being fined (Heldt, 2020). Platforms regulated under NetzDG strictly follow the government's view of what counts as illegal content, abandoning their own judgment in order to remove unlawful content in the shortest possible time (Heldt, 2020). The EU Code of Conduct, by contrast, is explicitly a voluntary commitment between the IT companies and the European Commission, and is therefore not a legal instrument under which a government could order content to be taken down (European Commission, 2020). While NetzDG rests solely on the government's view, the EU Code of Conduct gives platforms the responsibility for handling the illegal, hateful content that exists in their own spaces. NetzDG can thus be seen as lacking adequate obligations to notify users about decisions regarding unlawful content. Nonetheless, according to Heldt (2020), there is no empirical proof of over-removal of content or of negative effects on online speech under NetzDG. It is more likely that NetzDG fails to specify how platforms should operate their complaint tools, and that it lacks transparent reporting of the reasons for removing content (Heldt, 2020). A suitable proposal for improving NetzDG would be to give users a right, though not an obligation in any form, to appeal.
However, the EU Code also has limitations: because it allows social media platforms to decide entirely what content should be deleted, content removal relies wholly on these platforms' judgments, with little to no participation from public authorities (McNamee & Pérez, 2016). This echoes the principles behind the German law: the EU Code creates a disparity between what companies should be doing and what they are actually doing, with the government lacking information on how many deleted messages actually violated criminal law and on whether all of these potential criminal cases were properly investigated or prosecuted (McNamee & Pérez, 2016). A similar picture emerges in the Australian context, where hate speech occurs largely on platforms such as Facebook and Instagram; a majority of individuals support action against hate speech being enshrined in legislation, and, most importantly, 78% want IT companies to take a more active role (eSafety Commissioner, 2020). Given that 64% of individuals who experience hate speech take no action, and 58% report suffering negative impacts such as mental anxiety and relationship tribulations, a combination of Europe's 2016 Code of Conduct and the NetzDG approach could be perceived as sufficient for the Australian context (eSafety Commissioner, 2020).

To recapitulate, Europe's 2016 Code of Conduct and Germany's NetzDG, while they operate and function differently, have both shown adequate results in their specific contexts. Specifically, while the EU Code highlights the responsibility of IT companies in taking an active role in removing hate speech, NetzDG enshrines the central role of the government's views and decisions. Neither instrument has generally shown signs of over-removal of content, and the actions both have taken to promote positive sharing behavior have produced positive results. Nonetheless, both the EU Code of Conduct and NetzDG should make greater efforts to provide transparency to social media users.

Council of the European Union. (2019). Progress on combating hate speech online through the EU Code of Conduct 2016-2019. Retrieved from

eSafety Commissioner. (2020). Online hate speech: Findings from Australia, New Zealand and Europe. Retrieved from

European Commission. (2016). Code of Conduct on countering illegal hate speech online. Retrieved from

European Commission. (2020). Q&A: The Code of Conduct on countering illegal hate speech online. Retrieved from

Evans, P. (2017). Will Germany's new law kill free speech online? Retrieved from

Gagliardone, I., Gal, D., Alves, T., & Martinez, G. (2015). Countering online hate speech. UNESCO series on internet freedom. Retrieved from

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven: Yale University Press. ISBN: 030023502X, 9780300235029

Gooth, J. [@janagooth]. (2020). Tweets [Twitter profile]. Retrieved from

Heldt, A. (2020). Germany is amending its online speech act NetzDG… but not only that. Retrieved from

Lomas, N. (2020). Germany tightens online hate speech rules to make platforms send reports straight to the feds. Retrieved from

Massanari, A. (2017). Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. ISSN: 1461-4448

McNamee, J., & Pérez, F. (2016). FAQ: EU Code of Conduct on illegal hate speech. Retrieved from

United Nations Interregional Crime and Justice Research Institute. (2020). Post no hate [image]. Retrieved from

Wahl, T. (2018). Application of Code of Conduct Countering Illegal Hate Speech Online Positive. Retrieved from