Illegal Hate Speech Online – Should Australia Outsource Regulation to Social Platforms?

Governed by Google and Regulated by Reddit!

The Dangerous Power Play of Social Platforms as Moderators of Free Expression. Image: The Economist, © Selman Hosgör, The Economist Newspaper Ltd (Sept 6th 2018), All Rights Reserved.


The sheer size of the internet, with its anonymity and connectivity, makes it a hotbed for extremist views, harassment and hate speech. The complexity of regulating content extends beyond the capacity of traditional law enforcement. Whilst an imperative exists to monitor online content, who exactly should be responsible for governing illegal hate speech online? This essay provides a brief overview of the debate over whether Australia should follow in the footsteps of the EU Code of Conduct on Countering Illegal Hate Speech Online and the German Network Enforcement Law (NetzDG). It concludes that outsourcing regulation to social platforms is fraught with danger: technological limitations, potential bias, unfettered power, and the political, economic and social agendas of the platforms themselves compromise transparency and freedom of expression.

A Brief History of Online Hate Speech

No universal definition of hate speech exists. Many diverse definitions prevail, yet a common recommendation by the Council of Europe in 1997, embraced by many nations, suggests hate speech includes “all forms of expression which spread, incite, promote or justify racial … or religious hatred or intolerance”. In the digital age, the term has been extended to include fake news, where false information is spread with intent to harm. The Wharton School of Business’ “How Can Social Media Firms Tackle Hate Speech?” podcast provides an insight into the role of social platforms as moderators.

Social Media Demon. Image: Vox Political, © Mike Sivier, Vox Political (April 5th 2019), All Rights Reserved.

What is the EU Code of Conduct?

Following the 2015 terrorist attacks in Paris, including the siege at the Bataclan concert hall, the EU Code of Conduct was agreed in 2016 with Facebook, Microsoft, Twitter and YouTube to counter the spread of illegal hate speech online; Instagram, Google+, Snapchat and Dailymotion joined in 2018. Monitoring under the Code has seen 72% of notified hate speech removed within 24 hours.

Germany as a Regulatory Model

Germany’s NetzDG law came into force on 1 January 2018 as an extension of the Volksverhetzung (“incitement to hatred”) provisions of its criminal code, in response to increased far-right propaganda following Merkel’s decision to open German borders to immigration. The law requires social platforms to remove obviously illegal hate speech and abusive content within 24 hours, or face fines of up to €50 million.

Can Germany Fix Facebook? Image: © Andrey Popov, Shutterstock, Facebook, Zak Bickel, The Atlantic, Some Rights Reserved.

What is Australia currently doing about online hate speech?

In 2019, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019 was legislated in response to the Christchurch mass shooting in New Zealand by an Australian white nationalist, who live-streamed the massacre and posted a hate-filled manifesto to 8chan. The legislation holds social media platforms accountable for abhorrent violent posts, such as videos depicting murder, rape, kidnapping or terrorist acts, to ensure, in the words of Australia’s Prime Minister Scott Morrison, that “these platforms should not be weaponised”. Penalties for failing to remove such material expeditiously include fines of up to 10% of a company’s annual profits and up to three years’ imprisonment for employees. A limitation of Australia’s broader framework is that the Racial Discrimination Act 1975 covers only race-based hate speech, excluding hate speech on religious grounds.

The YES argument:

  • Online hate speech is too dangerous to ignore.

Online hate speech is more detrimental than offline hate speech, as the internet’s global reach exposes it to a far larger audience. Social platforms, due to their extensive popularity, pose a particular risk (Oboler 2014). Incitement to terrorism, harassment from online trolls and propaganda from political trolls mean an imperative exists to protect Australian social media users from the direct harm of speech that incites violence.

  • Social Platforms are responsible for monitoring illegal content

Social platforms control the bulk of the world’s information flows and have the power to shape opinions. They therefore have a corporate responsibility and obligation to monitor and restrict dangerous, racist and illegal hate speech. An evaluation of the EU Code of Conduct initiative revealed that platforms have doubled their notifications, increased bot identification and improved their algorithms; the initiative could have similar results if implemented in Australia.

Regulatory Approaches. Image: © Dahrendorf Forum Working Paper No. 6 (Dec 28th 2018), All Rights Reserved.

The NO argument:

  • Social Media Platform Bias & Agendas

Outsourcing regulation and moderation to social media platforms risks allowing the platforms themselves to decide which online speech to control, based upon their own political, economic or social agendas. As Tusikov and Haggart (2019b) articulate, platforms pressured into a rapid response, without regard or accountability for social problems, may interpret the rules themselves, handing unprecedented control to for-profit companies with limited transparency (Cobbe 2019). This opens the door to exploitation, self-interest and societal biases, exposing the subjective nature of censorship, particularly by profit-making organisations that may seek to sway public opinion for their own political or economic gain.

  • Censorship gives undue Power to Social Platforms

Social platforms already wield a great deal of power in society. Placing the right to censor in the hands of these corporations gives them even wider-reaching control over our lives. As Cowan (2019) acknowledges, by “expecting Facebook to stop the spread of fake news by fact checking a user’s news feed, we give Facebook the power to subjectively determine truth. By asking Youtube to pre-vet content, we give them the power to determine what thoughts are and aren’t acceptable”. Censorship should not be controlled by social platforms that already hold immense economic and political power and can profit by deciding what content is made available to the public.

If Facebook became a digital censor. Image: © Niko Efstathiou, Pro Journo Davos 2017, Medium, Some Rights Reserved.

  • Freedom of Expression

Suppression of online content restricts the individual right to freedom of expression. Whilst the Australian Constitution does not take the same strong stance on free speech as the USA’s First Amendment, Australia is a democratic nation with an implied freedom of political communication, and is a signatory to the International Covenant on Civil and Political Rights 1966. The recent Abhorrent Violent Material legislation has angered proponents of free speech and Australian media companies. Advocates for free speech suggest that counter-speech, rather than censorship, is the most effective means of tackling racist and radicalised rhetoric.

  • Limitations of Technology

Social platforms are largely ineffective in regulating hate speech, as they rely upon algorithms to detect and filter hate-speech terms such as racist slurs or incitements to terrorism. Automation and AI tools have a high error rate, and increasingly, racist trolls bypass these systems by using encryption and invented code words that avoid detection by the platforms’ automatic filters, rendering them ineffective.
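To illustrate why invented code words defeat keyword-based detection, here is a minimal sketch of a naive blocklist filter, using harmless placeholder terms (real platform classifiers are far more sophisticated, but the evasion principle is the same):

```python
import re

# Hypothetical blocklist of banned terms (placeholder word, purely illustrative).
BLOCKLIST = {"slurword"}

def is_flagged(post: str) -> bool:
    """Flag a post if any token exactly matches a blocklisted term."""
    tokens = re.findall(r"[a-z0-9]+", post.lower())
    return any(token in BLOCKLIST for token in tokens)

print(is_flagged("a post containing slurword"))   # True: exact match is caught
print(is_flagged("a post containing slurw0rd"))   # False: simple misspelling evades the filter
print(is_flagged("a post containing skittles"))   # False: an innocuous code word evades entirely
```

The third case is the hardest: when trolls agree on an ordinary word as a substitute, no string-matching rule can distinguish the coded usage from innocent speech without understanding context.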

Facebook Algorithms to Detect Hate Speech. Image: © Michael Kan, PCMag, All Rights Reserved.


Outsourcing regulation of hate speech to social platforms in Australia may seem a credible option; however, the EU Code of Conduct and Germany’s NetzDG laws appear to be backfiring, with the outsourcing of “free speech to commercial enterprises resulting in many platforms hitting delete by default to avoid fines” (Roxborough 2018). Australia’s new Abhorrent Violent Material legislation has been criticised as a “knee jerk” response, sentencing offenders is proving difficult, and enforcing action against companies such as Facebook, which are not based in Australia, may prove harder still. Additionally, allowing social platforms to determine what speech constitutes a violation lacks transparency and offers no judicial scrutiny. With no “nuanced understanding of context, culture and law” (Human Rights Watch 2019), social platforms may place self-interest ahead of social good, and, faced with short periods in which to review content, may sacrifice free expression rather than risk hefty fines.

“Many platforms hit delete by default to avoid fines” – Roxborough 2018


Whilst it is acknowledged that the EU and German models have reduced online hate speech, and that reducing illegal hate speech, propaganda and violence online is desirable, I oppose the introduction of similar laws in Australia due to the significant risks posed by placing regulation in the hands of social platforms. Such laws would hand an enormous degree of power over the censorship of free speech to for-profit companies with their own biases and agendas, compounded by limited transparency and no judicial oversight. The suppression of speech is open to abuse of power as social platforms decide what constitutes hate speech; together with the potential for inconsistent intervention and the limitations of algorithms, this poses significant risks to an open internet. Australia should instead seek a balance between Germany’s strict laws and the USA’s freedom of speech, one that does not consolidate or compound the immense power already held by these media companies.

To Break a Hate-Speech Detection Algorithm, Try ‘Love’. Image: Wired, © Casey Chin, Some Rights Reserved.

Hyper-textual Article Reference List:

ABC News. (2015). “Paris attacks: More than 120 killed in concert hall siege, bombings and shootings; suspected terrorists dead”. Retrieved from <>.

Australian Human Rights Commission. (2013). “Freedom of information, opinion and expression”, Rights and Freedoms. Retrieved from <>.

Bradshaw, S. and P.N. Howard. (2017). “Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation”, Computational Propaganda Research Project. Samuel Woolley and Philip N. Howard, (Eds). Working Paper 2017.12. Oxford, UK. 37 pp. Retrieved from <>.

Cheik-Hussein, M. (2019). “The ‘chilling’ unintended consequences of Australia’s new social media laws”, AdNews. Retrieved from <>.

Cobbe, J. (2019). “Algorithmic Censorship on Social Platforms: Power, Legitimacy, and Resistance”, SSRN. Retrieved from <>. DOI: 10.2139/ssrn.3437304.

Cowan, S. (2019). “It is easier to control guns than thoughts”, The Centre for Independent Studies. Retrieved from <>.

De La Baume, M. (2017). “Angela Merkel defends open border migration policy”, Politico. Retrieved from <>.

Echikson, W. and Knodt, O. (2018). “Germany’s NetzDG: A key test for combatting online hate”, CEPS Research Report. No. 2018/09. Retrieved from <’s%20NetzDG.pdf>.

European Commission. (2017). “Countering online hate speech – Commission initiative with social media platforms and civil society shows progress”, European Commission – Press Release. Retrieved from <>.

Farrar, T. (2019). “Fake News and Social Censorship: An Overview”, Government Europa. Retrieved from <>.

Grattan, M. (2019). “Morrison flags new laws to stop social media platforms being ‘weaponised’”, Computer World. Retrieved from <>.

Heffernan, V. (2018). “Ich Bin Ein Tweeter”, Wired. Retrieved from <>.

Human Rights Committee. (1966). “International Covenant on Civil and Political Rights”, United Nations Human Rights Office of the High Commissioner. Retrieved from <>.

Human Rights Watch. (2018). “Germany: Flawed Social Media Law”. Retrieved from <>.

Jones, D. and S. Benesch. (2019). “Combating Hate Speech Through Counterspeech”, Berkman Klein Centre for Internet and Society at Harvard University. Harvard University Press. Retrieved from <>.

Jourová, V. (2016). “Code of Conduct on countering illegal hate speech online: First Results on Implementation”, European Commission. Retrieved from <>.

Jourová, V. (2019). “Code of Conduct on countering illegal hate speech online: Fourth Evaluation Confirms Self-Regulation Works”, European Commission. Retrieved from <>.

Kan, M. (2019). “Facebook Taps Next-Gen AI To Help It Detect Hate Speech”, PCMag. Retrieved from <>.

Karakeva, S. (2018). “Monitoring and Tagging Hate Speech in Social Media”, Connecting the Dots: The Future of Collective Management. Retrieved from <>.

Law Council of Australia. (2019). “Livestream laws could have serious unintended consequences, chilling effect on business”, Law Council Media Releases. Retrieved from <>.

Matsakis, L. (2018). “To Break a Hate-Speech Detection Algorithm, Try ‘Love’”, Wired. Retrieved from <>.

Muno, D. (2014). “Racial Discrimination Act: The Two Minute Version”, Amnesty International. Retrieved from <>.

Network Enforcement Act. (2017) “Act to Improve Enforcement of the Law in Social Networks”. Article 1. (July 12th 2017). [Act]. Retrieved from <>. 

Oboler, A. (2014). “Legal Doctrines Applied to Online Hate Speech”, Computers and Law Journal. Retrieved from <>.

Parliament of Australia. (2019). “Violent Material Bill 2019”, Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019. [Act]. Retrieved from <>.

Rodriguez, J. (2009). “Hate Speech”, Council of Europe. Retrieved from <>.

Roxborough, S. (2018). “Why an Ambitious New Online Anti-Hate Speech Law Is Backfiring in Germany”, Hollywood Reporter. Retrieved from <>.

Stein, J. (2016). “How Trolls Are Ruining the Internet”, Time Magazine. Retrieved from <>.

Suzor, N. (2019). “What do we mean when we talk about transparency in content moderation?”, Digital Social Contract. Retrieved from <>.

Tingle, R. (2016). “Googles, Skypes and Yahoos: Racist trolls have made up their own slang so they can use vile slurs online without being caught by automatic filters”, Daily Mail UK. Retrieved from <>.

Tusikov, N. and B. Haggart. (2019a). “Stop outsourcing the regulation of hate speech to social media”, The Conversation. Retrieved from <>.

Tusikov, N. and B. Haggart. (2019b). “It’s time for a new way to regulate social media platform”, The Conversation. Retrieved from <>.

Vice News. (2019, April 5th). “In response to the Christchurch terror attack, Australia’s parliament fast-tracked new laws seeking to punish social media platforms and their executives for failing to remove violent videos, “expeditiously.” #VICENewsTonight“. [Twitter Post]. Retrieved from <>.

Yaraghi, N. (2018). “Regulating free speech on social media is dangerous and futile”, Brookings. Retrieved from <>.

Multimedia Reference List:

Chin, C. (2018). “To Break a Hate-Speech Detection Algorithm, Try ‘Love’”, Wired. [Image]. Retrieved from <>.

The Economist. (2018). “How Social-Media Platforms Dispense Justice”. [Image]. Retrieved from <>.

Efstathiou, N. (2017). “If Facebook became a Digital Censor”, Pro Journo Davos, Medium. [Image]. Retrieved from <>.

Carroll, J. and D. Karpf. (2018). “How Can Social Media Firms Tackle Hate Speech?”, Knowledge@Wharton. [Podcast]. Retrieved from <>.

Goldzweig, R., M. Wachinger, D. Stockmann and A. Römmele. (2018). “Dahrendorf Forum IV”, Working Paper No. 6. London School of Economics and Political Ideas. [Table]. Retrieved from <>.

Kan, M. (2019). “Facebook Algorithms to Detect Hate Speech”, PCMag. [Image]. Retrieved from <>.

The New York Times. (2019). “How the New Zealand Gunman Used Social Media | NYT News”, Youtube. [Video]. Retrieved from <>.

Popov, A. and Z. Bickel. (2017). “Can Germany Fix Facebook?”, The Atlantic. [Image]. Retrieved from <>.

Sivier, M. (2019). “Social Media Demon”, Vox Political. [Image]. Retrieved from <>.
