Online Hate-Speech Laws in Europe: Would they work in Australia? And do Australians really want them?

Hate Speech Online, Its Enforcement, and Racism in Australia

Image: Parade

Europe’s Code of Conduct on Countering Illegal Hate Speech Online (2016) defines hate speech as:

“Conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”

Hate speech harms not only those it targets but also those who advocate freedom, tolerance, and acceptance in society; it has a crippling effect on online discourse and thus should not be tolerated. The document clarifies that this definition can apply to any content, regardless of the user’s intention.

The 2016 Code of Conduct on Countering Illegal Hate Speech Online contains a list of commitments made by tech companies such as Facebook, Twitter and Google, which can be summarised as follows:

  • Tech companies’ products must have a clear and effective process for handling reports of hate speech, so that offending content can be removed promptly.
  • These products must have clear rules and/or guidelines prohibiting hateful conduct, and companies must attempt to educate their users on what content is prohibited, promote counter-narratives that undermine racist propaganda, and encourage critical thinking.
  • Tech companies must have dedicated teams reviewing content flagged for removal against the aforementioned rules, and notifications of hate speech must be reviewed by these teams within 24 hours of being flagged.
  • Tech companies must maintain extensive contact with state authorities to prevent the spread of hate speech, and must partner with civil society organisations to increase the effectiveness and coverage of hate-speech reporting. (In this arrangement, an official trusted reporter, external to the tech company, is to be selected and put in contact with government officials.)
  • Regular staff training on current societal developments is required, and cooperation between tech companies is compulsory so that hateful content does not slip under the radar.
  • Governments, tech companies, and pro-acceptance organisations must cooperate to deliver training on countering hateful rhetoric online, and government officials may intervene to promote the above.
  • These agreed terms are to be regularly assessed and updated, based on the impact observed.

Image: promesaartstudio/Adobe Stock

How effective are these terms?

Germany has a grave history with hate speech and, as a consequence, has become a country highly conscious of racist propaganda online. One example is that Holocaust denial is punishable with prison time, with websites promoting such denial being among the targets of the 2016 measures (Oltermann, 2016).

The Guardian reports that by September 2016, after the introduction of German rules against online hate speech, the deletion rates for hate speech reported by ordinary users were: Facebook 46%, YouTube 10% and Twitter 1% (Oltermann, 2016). Under half of reported content was removed, and on top of this there was a severe lack of communication, showing reluctance on the corporate side despite the threat of fines.

Upsetting an audience drives users away from a social media platform, so most tech companies attempt to walk an apolitical line until forced to act otherwise. In interviews, however, German politicians express the sentiment that moderating a community is the responsibility of those who run it: social media companies have “social obligations”, as it is in the companies’ interest that users abide by the law and engage constructively (Oltermann, 2016). There is clearly a clash of opinions.


Would terms like these work with social media companies in Australia? And what are Australian Laws on hate speech?

For an agreement like Europe’s 2016 Code of Conduct to be reached, a law must first be in place that justifies agreeing to such terms.

Such a law would be Section 18C of Australia’s Racial Discrimination Act 1975, which reads as follows:

“It is unlawful for a person to do an act, otherwise than in private, if the act is reasonably likely, in all the circumstances, to offend, insult, humiliate or intimidate another person or a group of people; and the act is done because of the race, colour or national or ethnic origin of the other person, or of some or all of the people in the group.”

To summarise: if an individual commits an offensive act in public that clearly targets another person on the basis of their ethnicity, that act is unlawful. This Australian law prohibits the same behaviours banned by the European agreement, so it would be practical to come to a similar arrangement with tech companies under Australian law.

Image: @BroHilderChump

However, Section 18C is regularly contested by conservative political figures, which makes it seem unlikely that any law targeting hate speech online will be passed. Constant debate over whether Section 18C should exist at all threatens the permanence of any law built upon it.

Free speech is often cited as the reason for debate over the legality of hate speech, but, as reported by SBS’ Sunil Badami, such politicians are often quick to silence others who offend them through lawsuits for ‘defamation’ and ‘hurt and anger’ (Badami, 2016).

The Liberal government, according to MP John Alexander, does not see the reform of Section 18C as a pressing issue and so is unlikely to repeal it, but by extension it may not see its enforcement online as a priority either (Badami, 2016).

The recent lack of action over Indigenous deaths in custody, which number more than 432 since the last Royal Commission 29 years ago, is evidence of this low priority, as is the destruction of Indigenous heritage while colonial heritage is simultaneously protected (Creamer, 2020).


The atrocities of the Nazi regime are what motivate German and European governments to enforce tough hate-speech laws. Does Australia have a similar motivation for such laws?

In short, yes.

Australia has a history spotted with violence and hate, yet Indigenous representation in the master narrative of our country did not begin until the 1980s (Carlson, 2019). We must ask ourselves what is different here, since, as in Germany, a minority culture was erased en masse.

The difference lies in the education around the topic of Indigenous rights, or rather the lack thereof, owing to the colonial roots of our civilisation and a nationalistic desire to preserve our country’s image, combined with underlying racism (Carlson, 2019). Nazi Germany lost WW2, and so its actions are universally condemned; conveniently, they were also deplorable fascists. In a blunt sense, Indigenous peoples unjustly lost in the invasion of Australia, and such is the nature of history that the losing side is erased from the master narrative. This erasure was accepted at the foundation of our stolen nation, laying the groundwork for white supremacy in Australia and allowing for legislation such as the White Australia Policy (Carlson, 2019).


Section 18C raises the question: does social media count as a public space?

Arguably yes: an inflammatory post to a hashtag or profile can reach a far greater audience. Studies show that 62% of American 18-24 year olds encounter racism online (Jakubowicz, 2017). A higher figure of 87% was found in a sample of 3000 interviewed participants (Jakubowicz, 2017). That is 2610 people, a far greater audience than most physical-world scenarios, such as a racist tirade on a bus.
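As a rough check of that arithmetic, assuming the 87% figure applies across the whole sample of 3000 interviewees:

0.87 × 3000 = 2610 people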

Jakubowicz (2017) gives extensive detail on a 2013 Australian survey with 2000 participants. 40.1% of the racism witnessed online was seen on Facebook, and on top of this 33.7% of participants admitted to agreeing with the following statement:

“The Australian way of life is weakened by people from minority racial, ethnic or religious backgrounds maintaining their cultural beliefs and values.”

These findings support the position that, while some may agree with anti-hate laws for social media, many modern Australians, and their politicians of choice, do not. It should be kept in mind that this survey is from 7 years ago, and attitudes may have changed.


While that may be a cynical take, online activism, protests and petitions are becoming more and more prevalent, with a recent Kevin Rudd-led petition for an inquiry into Rupert Murdoch’s media empire and activist petitions to ban neo-Nazi content online, both of which you, the reader, can sign below.

Petition to investigate Murdoch Media

Petition to Ban Neo-Nazi Content here

The majority of Australians seem too preoccupied with matters other than online hate speech, particularly given the current pandemic, and it is unlikely that tech companies will conform to such terms quietly. In reality, the Racial Discrimination Act has been in place for under 50 years, and more groundwork in attitudes towards diversity must be done with younger generations before hate speech can be restricted online.


References:

  1. Code of Conduct on Countering Illegal Hate Speech Online 2016 (EU)
  2. Oltermann, P. (2016). Germany to force Facebook, Google and Twitter to act on hate speech. Retrieved 28 October 2020, from https://www.theguardian.com/technology/2016/dec/17/german-officials-say-facebook-is-doing-too-little-to-stop-hate-speech
  3. Racial Discrimination Act 1975 (Cth) s 18C (Austl.)
  4. Badami, S. (2016). Comment: Why do we really need to repeal 18C?. Retrieved 28 October 2020, from https://www.sbs.com.au/topics/voices/culture/article/2016/11/16/comment-why-do-we-really-need-repeal-18c
  5. Creamer, J. (2020). This routine lack of justice should shame Australia. Retrieved 28 October 2020, from https://www.theaustralian.com.au/commentary/this-routine-lack-of-justice-should-shame-australia/news-story/633f87a6d241b03cacc9ab7fe77a7f9f
  6. Carlson, B. (2019). Disrupting the master narrative – Griffith Review. Retrieved 28 October 2020, from https://www.griffithreview.com/articles/disrupting-master-narrative-indigenous-tweeting-colonial-history/
  7. Jakubowicz, A. et al. (2017). Promoting Resilience Through Regulation. In: Cyber Racism and Community Resilience. Palgrave Hate Studies. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-64388-5_8