If a newspaper journalist were to publish an article featuring messages of hate, they would face instant reprimand, and social media platforms should be treated the same. Hate speech must be removed from Australian social media and it is the responsibility of the host platform to ensure that cases of illegal hate speech are acted upon.
Hate Speech on Social Media
Online hate speech stems from biases that existed long before the online era, but the internet has allowed those biases to spread faster and more widely (Gagliardone et al., 2015). Online hate speech is now extremely prevalent, as platforms such as Twitter allow anyone and everyone to ‘tweet’ whatever they want. In the past, not everyone had a way to be heard; even those who went out and shouted their beliefs to the world could not force anyone to listen. The introduction of online spaces has given these people a voice. The online sharing of information has moved the microphone away from those with a reputation for truth and expertise, and handed it to anyone with a smartphone or a laptop.
In this blog I will first discuss the obligation of social media platforms to remove hate speech. I will examine how this concept works in practice in German law, as well as in Europe’s 2016 Code of Conduct on Countering Illegal Hate Speech Online (EU Commission, 2016). I will then analyze how effectively this concept is being enforced around the world, and discuss how it would fit into the Australian context.
Hate Speech vs Free Speech
There is no internationally agreed definition of hate speech, because what counts as hateful varies around the world. For this text, I will use the definition from the UN’s Strategy and Plan of Action on Hate Speech (UN, 2019), which defines hate speech as “any kind of communication in speech, writing or behavior that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, color, descent, gender or other identity factor”.
The key difference between this and ‘free’ speech is that free speech does not attack or discriminate against a particular person or group. Free speech can be shocking and disturbing, but it is not hate speech unless it targets a group on the basis of who they are.
European Code of Conduct
When social media first arrived, platforms like Facebook and YouTube allowed complete freedom of publishing. At first, this was the beauty of social media: a free space in which people of all sorts could share their opinions and interests. But it also created an opportunity for people to express hateful and targeted messages with seemingly no reprimand (Banks, 2010).
In 2016, the European Commission created a code of conduct setting out guidelines and rules for how governments and social media platforms can each contribute to the removal of illegal hate speech online (EU Commission, 2016). In this code of conduct the companies involved (Facebook, Microsoft, Twitter and YouTube) are referred to as the ‘IT Companies’, and are named as sharing the responsibility of promoting freedom of expression online. The code sets out what specifically it aims to combat, the practices that will combat hate speech, and finally a list of twelve commitments the ‘IT Companies’ made in order to combat illegal hate speech.
Monitoring Social Media
A code of conduct like this was essential for maintaining a safe and monitored online environment. Sites such as Twitter have since stepped up their monitoring of user content through initiatives such as the recent fact-checking mechanism, which publicly labels tweets that make false or unproven claims. This was perhaps most memorably seen on a number of the American President’s tweets, such as the one below. Through steps like these, online spaces are becoming more sophisticated in how they respond to posted content. A large part of the code of conduct concerns having a system in place for reports of hate speech or other illegal material to be reviewed and, if necessary, removed.
There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone…..
— Donald J. Trump (@realDonaldTrump) May 26, 2020
Online Hate speech in Germany
Now I want to discuss how Germany in particular is working to eradicate illegal hate speech. Germany’s Network Enforcement Act (NetzDG) was put in place as an attempt to tackle a rise in right-wing extremism (Claussen, 2018). The Act requires platforms to report suspected criminal activity (including hate speech) directly to the Federal Police, and to remove hate speech within set deadlines or face fines of up to 50 million euros (Lomas, 2020). While the benefits of the Act seem obvious, in that it discourages and strongly criminalizes online hate speech, it has attracted serious backlash from those who question the privacy issues it could create.
Australian Code of Conduct
So what are we doing in Australia? What is our code of conduct when it comes to combating hate speech online? Do we hold social media platforms accountable?
It appears that Australia originally adopted a rather ‘Americanized’ version of free speech, under which some cases of hate speech could fall under the protection of free speech (Timofeeva, 2002). Australia has had laws against hate speech since 1977 (Anti-Discrimination Act 1977 (NSW)), and to this day introduces laws which are not specific to, but certainly applicable to, online spaces (Crimes Amendment Bill 2018). We began to apply a more European style of monitoring because our system was not working perfectly (Mason, 2009). On April 4, 2019, in the wake of the Christchurch mosque shootings, Australia passed “legislation to punish individuals, websites and social media platforms that publish and host abhorrent material” (Mitch Fifield on the Sharing of Abhorrent Material Bill, 2019).
Today we passed world-first legislation to punish individuals, websites and social media platforms that publish and host abhorrent material. In the aftermath of the Christchurch shootings, we are taking a zero tolerance approach to sharing such material. pic.twitter.com/CWCxc8JYM5
— Mitch Fifield (@FifieldMitch) April 4, 2019
What could Australia be doing?
I don’t see a way we could successfully apply the same laws as Germany in Australia, but I think it is very important that we recognize the seriousness with which other countries are treating the removal of online hate speech. We need to hold social platforms accountable for what is broadcast, but we also need to maintain an acceptable level of privacy online (if there is such a thing) and of individual responsibility. Policing such issues is important, but it would be far more effective in Australia to find proactive ways of preventing online hate speech, rather than turning social media into a policed and overly lawful environment.
In conclusion, we can see why Europe’s 2016 Code of Conduct on Countering Illegal Hate Speech Online was needed, and how it is enforced. We can see how Germany has taken a tough stance on illegal hate speech, cracking down on those who publish it. We can also see where Australia stands at the moment, and how it responds when events expose the weaknesses of internet safety. Taking notes from other countries’ endeavors to remove online hate speech will benefit Australia. We need to reprimand the individuals who engage in hate speech, we need to hold social media platforms accountable for the content they allow to be posted, and above all we need to be proactive in finding ways to prevent hate speech.
- Banks, J. (2010). Regulating hate speech online. International Review of Law, Computers & Technology, 24(3), 233-239.
- Claussen, V. (2018). Fighting hate speech and fake news. The Network Enforcement Act (NetzDG) in Germany in the context of European legislation. Rivista di Diritto dei Media, 3, 1-27.
- EU Commission. (2016). Code of Conduct on Countering Illegal Hate Speech Online.
- Gagliardone, I., Gal, D., Alves, T., & Martinez, G. (2015). Countering online hate speech. UNESCO Publishing. Accessed from: https://books.google.com.au/books?hl=en&lr=&id=WAVgCgAAQBAJ&oi=fnd&pg=PA3&dq=countering+online+hate+speech&ots=Tc2n4jKNUv&sig=msM7q_1LzU5eAftZYSVlTJRDTW4&redir_esc=y#v=onepage&q&f=false
- Lomas, N. (2020). Germany tightens online hate speech rules to make platforms send reports straight to the feds. TechCrunch.
- Mason, G. (2009). Hate crime laws in Australia: Are they achieving their goals? Criminal Law Journal, 33(6), 326-340.
- New South Wales Law Reform Commission. (1999). Review of the Anti-Discrimination Act 1977 (NSW) (Vol. 92). The Commission.
- Timofeeva, Y. A. (2002). Hate speech online: Restricted or protected? Comparison of regulations in the United States and Germany. Journal of Transnational Law & Policy, 12, 253.
- United Nations. (2019). United Nations Strategy and Plan of Action on Hate Speech.