Hate speech is defined as public speech that expresses animosity against an individual or a group on the basis of characteristics such as religion, ethnicity, gender, color, or sexual orientation (Nockleby, 2000).
In the Web 2.0 era, social media platforms have been used as tools to spread hate speech and terrorism, and public discourse has become a political weapon targeting minority groups such as immigrants, refugees, women and children (Roberts, 2016). Online hate speech raises difficult legal issues because of the lack of international consensus on hate speech law. On 31 May 2016, the European Commission signed the Code of Conduct on countering illegal hate speech online with Google, YouTube, Facebook, Twitter and Microsoft-hosted consumer services such as LinkedIn. The German NetzDG law, passed in June 2017, imposes heavy fines of up to 50 million euros on social media platforms that fail to remove “illegal” posts within 24 hours (Douek, 2020).
This essay will argue that the obligation of social media platforms to remove hate speech should be applied in Australia. The first reason is that hate speech is a widespread issue in Australia, yet current Australian law fails to address the problem. The second reason is that the Code of Conduct has achieved positive results in effectively removing hate speech content, so it may improve the network environment in Australia as well. On the other hand, this essay will also discuss the arguments against introducing such a law in Australia, considering the risk of restricting freedom of speech and the complex global nature of the internet.
Video: A Look at EU Guidelines to Address Illegal Content Online
Historical Background on Hate Speech
In 1859, the British philosopher John Stuart Mill argued in On Liberty that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others”.
This Harm Principle shapes the legal definition of hate speech and supports the view that freedom of expression can be legally restricted when speech incites discrimination and violence against others (Matamoros-Fernández, 2017).
In 2015, the flow of refugees and migrants to the European Union, mostly from countries under totalitarian regimes, shook EU politics and society (Baider, Assimakopoulos & Millar, 2017). Hate speech against asylum-seekers triggered tension and prejudice. Media discourse repeated alarmist expressions like “huge migrant crisis” and “wave after wave of migrants entering the EU”, emphasising violence and threats as the supposed consequences of migration. Comments about refugees such as “refugees should drown” or “more asylum-seekers’ homes would be burned” flooded the internet (UNHCR, 2016).
In 2015, German police reported 906 attacks on asylum seekers’ homes, ranging from arson to physical assaults (UNHCR, 2016). The ambiguous positions of some mainstream parties and politicians, such as imposing sanctions on those providing assistance to refugees, further exacerbated moral and political confusion.
The 2016 annual report of the European Commission against Racism and Intolerance noted that “racial insults are becoming more prevalent, xenophobic and hate speech has reached unprecedented levels, and trust in national and European institutions is eroding” (ECRI, 2017).
In response, the European Union introduced measures requiring social media platforms to remove hate speech.
The Debate: Should the obligation of social media platforms be applied in Australia?
The ‘Yes’ Camp
A research report by Australia’s eSafety Commissioner (eSafety) offers the following findings, demonstrating how widespread hate speech is in Australia:
1 in 7 Australian adults aged 18–65 (14%) experienced online hate speech in the 12 months to August 2019.
LGBTQI and Indigenous Australians experience online hate speech at twice the national average.
In 2019, the Australian Parliament passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 in response to the terrorist attack in Christchurch, New Zealand. The Act creates new criminal offences for failing to “ensure expeditious removal” of “abhorrent violent material” (D’Souza, Griffin, Shackleton & Walt, 2018).
However, the law has been widely condemned for its ambiguity and uncertain effectiveness by Internet rights groups, the technology industry, and academics who study freedom of speech. It was passed in a very short period of time, with no consultation with experts or civil society, and it did not define “expeditious”, so it is unclear within what time period a platform must act to comply with the law’s requirements (Douek, 2019).
By contrast, the fifth evaluation of the EU Code of Conduct shows successful and effective results in countering hate speech:
90% of flagged content was assessed by social media platforms within 24 hours.
72% of content deemed to be illegal hate speech was removed.
These figures were only 40% and 28% respectively when the Code was first launched in 2016 (European Commission, 2020).
I am proud to announce that Code of Conduct remains a success story; 90% of flagged content was assessed by platforms in 24 hours.
In countering illegal #HateSpeechOnline, Code of Conduct offers urgent improvements while respecting freedom of expression. https://t.co/fbUuwnR1Fi
— Věra Jourová (@VeraJourova) June 22, 2020
I welcome results of 5th evaluation of EU Code of Conduct on countering illegal #onlinehatespeech.
I encourage IT companies to keep up the good work.
But further improvements on transparency and feedback to users are needed. https://t.co/t97KahLSHS #NoPlace4Hate @EU_Commission
— Didier Reynders (@dreynders) June 22, 2020
In 90.4% of cases, IT companies achieved the target of reviewing notifications within 24 hours, continuing a positive trend (89% in 2019).
In the first quarter of 2019, Facebook deleted 4 million pieces of content that violated its hate speech policy. In 2018, more than 6.2 million Twitter accounts were reported for containing hate speech, and Twitter took action against about 536,000 accounts. A 2018 study measuring the effectiveness of takedowns, tracking the activities of about 175 “haters” in several states, reports a drop in the number of hate tweets from 60,000 (in 2016) to 7,400 (in 2018) (Conway et al., 2018).
The ‘No’ Camp
Firstly, government regulation of social media raises concerns about the restriction of freedom of speech; it may limit the progress of our society and even further entrench bias and discrimination.
In April 2015, Facebook banned a trailer for a new ABC comedy show that sought to counter white framings of Aboriginal people, because it used an image of two topless women participating in a traditional Aboriginal ritual (Matamoros-Fernández, 2017). The platform flagged the video as “offensive” and a violation of its nudity policy (Aubusson, 2015).
In 2016, Indigenous activist Celeste Liddle gave a speech discussing colonialism and Indigenous feminism, accompanied by a similar image, and was temporarily banned by Facebook for posting a photo of a “sexually explicit nature” in violation of Facebook’s rules (Liddle, 2016).
Therefore, it can be argued that government regulation of social media risks a setback in our fight for democracy and the expression of individuality.
Secondly, social media platforms are multinational companies, and users in different regions consume content within their own cultural and political environments, leading to differing definitions of ‘hate speech’. The laws governing social media need to be sensitive to the socially acceptable traditions of each country (Flew, Martin & Suzor, 2019). A rule applied in European countries may not necessarily achieve the same positive results in Australia.
Activists and journalists in disputed territories such as the Palestinian Territories and Kashmir have found themselves censored because Facebook wants to avoid legal liability (Laub, 2019). In the Aboriginal example mentioned above, Facebook justified its ban on the nude image by citing its “community standards”.
However, Facebook refused to ban racist pages and hate speech targeted at Aboriginal people, initially ruling that they did not violate its terms of service, and merely forced their creators to relabel them as “controversial content” (Oboler, 2013).
This shows the platforms’ biased understanding of unique Indigenous cultures and a bias towards Western, white-centred ideas of free speech (Gillespie, 2010).
Implications of the Law
Firstly, the introduction of this law gives ordinary users a double identity: they are not only the ones being ‘regulated’ but also ‘supervisors of the network environment’. Users can report or flag hate speech that they find offensive or distressing. Cooperation between social media moderation mechanisms and ordinary users can create a healthier network environment, although there is a risk of malicious reports against individuals or cultural groups. The law will also serve an educational function, increasing netizens’ awareness of and sensitivity to online hate speech and terrorism. In the long run, netizens will be able to deal with complex information on the Internet more rationally and will not be easily incited by terrorist speech, reducing the hate crime rate and contributing to social stability.
Conclusion
Hate speech is a common and growing problem in Australia that may trigger further social problems such as hate crime. In general, the law should be applied in Australia because it has already achieved effective results in Europe. However, the regulation also brings problems, such as restrictions on freedom of speech: there is a risk that platforms delete posts containing sensitive material that is actually spreading culture. Also, the complexity of the network environment means that this policy may not achieve the same positive results in the Australian context. In fact, dealing with hate speech is a long and complicated process. I think the fundamental solution must be ‘co-regulation’, a combination of self-regulation by social media platforms and government intervention. Governments should work with social media companies to find solutions appropriate for both parties, striking a balance between promoting freedom of expression and eliminating hate speech. The ultimate goal is to improve the digital literacy of ordinary Internet users so as to create a better network environment.
References
Aubusson, K. (2015, April 13). Facebook pulls clip for ABC show ‘8MMM’, claiming images of Aboriginal women breached nudity policy. Sydney Morning Herald. Retrieved from http://www.smh.com.au/digital-life/digital-life-news/facebook-pulls-clip-for-abc-show-8mmm-claiming-images-of-aboriginal-women-breached-nudity-policy-20150413-1mk2ws.html
Baider, F. H., Assimakopoulos, S., & Millar, S. L. (2017). Hate speech in the EU and the C.O.N.T.A.C.T project. In S. Assimakopoulos, F. H. Baider, & S. Millar (Eds.), Online hate speech in the European Union: A discourse-analytic perspective (pp. 1–6). Springer. https://doi.org/10.1007/978-3-319-72604-5_1
Conway, M., Khawaja, M., Lakhani, S., Reffin, J., Robertson, A., & Weir, D. (2018). Disrupting Daesh: Measuring takedown of online terrorist material and its impacts. Studies in Conflict & Terrorism, 42(1–2), 141–160. doi: 10.1080/1057610x.2018.1513984
Douek, E. (2020). Germany’s Bold Gambit to Prevent Online Hate Crimes and Fake News Takes Effect. Retrieved from https://www.lawfareblog.com/germanys-bold-gambit-prevent-online-hate-crimes-and-fake-news-takes-effect
Douek, E. (2019). Australia’s New Social Media Law Is a Mess. Retrieved from https://www.lawfareblog.com/australias-new-social-media-law-mess
D’Souza, T., Griffin, L., Shackleton, N., & Walt, D. (2018). Harming women with words: The failure of Australian law to prohibit gendered hate speech. University of New South Wales Law Journal, 41(3), 939–976.
ECRI. (2017). Annual report on ECRI’s activities covering the period from 1 January to 31 December 2016. ECRI report.
eSafety Commissioner. (2019). Hate speech online: Findings from Australia, New Zealand and Europe (pp. 6–15). Retrieved from https://www.esafety.gov.au/sites/default/files/2020-01/Hate%20speech-Report.pdf
European Commission. (2020). EU Code of Conduct on countering illegal hate speech online continues to deliver results.
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal Of Digital Media & Policy, 10(1), 33-50. doi: 10.1386/jdmp.10.1.33_1
Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347–364. doi:10.1177/146144480934273
Laub, Z. (2019). Hate Speech on Social Media: Global Comparisons. Retrieved from https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons
Liddle, C. (2016, March 14). Rantings of an Aboriginal feminist: Statement regarding the Facebook banning. Blackfeministranter. Retrieved from http://blackfeministranter.blogspot.com/2016/03/statement-regarding-facebook-banning.html
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. doi: 10.1080/1369118x.2017.129313
Nockleby, J. T. (2000). Hate speech. In L. W. Levy & K. L. Karst (Eds.), Encyclopedia of the American Constitution (2nd ed., Vol. 3, pp. 1277–1279). Detroit: Macmillan Reference US.
Oboler, A. (2013). Aboriginal memes and online hate (pp. 1–87). Melbourne: Online Hate Prevention Institute.
Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. In S. U. Noble & B. Tynes (Eds.), The intersectional Internet: Race, sex, class and culture online (pp. 147–160). New York, NY: Peter Lang.
UNHCR. (2016). Global trends: Forced displacement in 2015. UN Refugee Agency report. Retrieved from http://www.unhcr.org/statistics/unhcrstats/576408cd7/unhcr-global-trends-2015.html