Under Pressure – Should Australia follow in the footsteps of others regarding hate speech moderation?

Image: European Commission, © European Union, 1995-2020, Some Rights Reserved

Introduction

The removal and suppression of hate speech and illegal speech from online platforms is a remarkably difficult, but necessary endeavour. Where the ambiguity of offline global hate speech regulation meets the affordances and moderation complications of social media, governments, platforms, and users alike face a unique challenge – how exactly do you successfully manage the internet? And how should this be done in Australia? Examining global approaches, such as Germany's Network Enforcement Act and the European Code of Conduct, against Australia's own management of hate speech, and assessing the moderation tactics currently employed by social media platforms, makes clear that the regulation and removal of damaging content online is imperfect at best. Whilst the crisis of online hate and illegal speech must be addressed, within an Australian context this responsibility should not be pushed further onto the shoulders of social media platforms.

It is a truth universally acknowledged…?

Within the global landscape, both online and offline, there is no single definition of hate speech, and the term has been defined in many different ways across different cases. Within this essay, hate speech follows the definition outlined by Gelber and Stone (2007):

Speech or expression which is capable of instilling or inciting hatred of, or prejudice towards, a person or group of people on a specified ground including race, nationality, ethnicity, country of origin, ethno-religious identity, religion, sexuality, gender identity, or gender (p. xiii).

With no single definition, there are similarly no universal legal regulations surrounding hate speech; wherever regulation is attempted, competing concepts of free speech and censorship come into play.

In the United States, laws prohibiting hate speech are effectively impossible, as protections of speech are written staunchly into the Constitution through the First Amendment (Gelber & McNamara, 2015). Australia, comparatively, has no such constitutional right, yet the only federal legislation relevant to addressing hate speech is section 18C of the Racial Discrimination Act 1975 (Cth), formed on the basis that racism causes substantial harm to an individual's dignity, well-being, and safety (Mason & Czapski, 2017).

Infographic of the Racial Discrimination Act 1975, created by the author on Canva.

Affordances of the internet and hate speech – A perfect storm

Hate speech online has an equal, if not greater, impact than hate speech offline. Online racism acts as an extension of offline racism in the form of cyber-racism (Mason & Czapski, 2017). The same can be said of all forms of hate speech, which manifest online as extensions of their offline counterparts.

It has been further suggested that online hate speech takes the form of "hate 2.0" (Oboler, 2014, p. 11), creating public normalisation of hate and the rapid establishment of hate networks. Where social media platforms act as imagined communities (Anderson, 1991) whose foundations are built from user content, individuals are given the power to inscribe hate into the very roots of the platform, and the essence of hate speech becomes much more direct (Oboler, 2014). Features of online communication, including a lack of accountability and perceived anonymity, allow hate to thrive and evolve communities into "toxic technocultures" (Massanari, 2017). Left unchecked, hate speech on social media – where it is becoming increasingly mainstream – arguably presents a larger problem to society than its offline counterpart, as the absence of comprehensive offline laws combines with the difficulty of managing the unique environment of the internet.

Video: How the Christchurch terrorist used 8chan to connect with neo-Nazis (ABC News, 2019).

Current global responses

The Code of Conduct

The European Commission's Code of Conduct was put into place in 2016 with the agreement of Twitter, YouTube, Facebook, and Microsoft, in order to help limit the spread of hate speech online. The fifth round of monitoring revealed that companies now review 90% of flagged content within 24 hours, and remove 71% of content labelled illegal hate speech. Since its introduction, Snapchat, Instagram, Dailymotion, and most recently TikTok have agreed to participate.

Importantly, the Code is not legally binding. It is designed to create an environment of voluntary compliance rather than direct enforcement, with wording that deliberately stipulates "commitments" from companies rather than "obligations" (Irving, 2019, p. 258).

The Network Enforcement Act

In 2017, the Network Enforcement Act (NetzDG) laid out requirements that social media companies operating in Germany must meet. Germany has had strict laws prohibiting hate speech and Volksverhetzung ("incitement to hatred") since 1960; NetzDG extends this regime online.

Unlike the Code of Conduct, NetzDG is legally binding, requiring social media platforms to assess and remove illegal content (Irving, 2019). If companies fail to remove manifestly unlawful content within 24 hours of a complaint, they can face fines of up to €50 million (roughly $60 million).
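As a rough illustration of how a deadline rule like this translates into platform logic, consider the short sketch below. It is purely illustrative: the 24-hour window for manifestly unlawful content and the seven-day window for content requiring closer legal assessment reflect NetzDG's published deadlines, but every function and variable name here is invented for this example.

```python
from datetime import datetime, timedelta

def removal_deadline(flagged_at: datetime, manifestly_unlawful: bool) -> datetime:
    """Compute a removal deadline under NetzDG-style rules.

    Manifestly unlawful content: 24 hours from the complaint; content
    needing closer assessment: 7 days. (Hypothetical helper, for
    illustration only.)
    """
    window = timedelta(hours=24) if manifestly_unlawful else timedelta(days=7)
    return flagged_at + window

def is_compliant(flagged_at: datetime, removed_at: datetime,
                 manifestly_unlawful: bool) -> bool:
    """True if the platform removed the content within the legal window."""
    return removed_at <= removal_deadline(flagged_at, manifestly_unlawful)

# A removal 25 hours after flagging misses the 24-hour window for
# manifestly unlawful content, but meets the 7-day window otherwise.
flagged = datetime(2020, 10, 1, 9, 0)
removed = flagged + timedelta(hours=25)
print(is_compliant(flagged, removed, manifestly_unlawful=True))   # False
print(is_compliant(flagged, removed, manifestly_unlawful=False))  # True
```

The design point is that the statute turns a legal judgement – is this content manifestly unlawful? – into a timer, and that judgement is precisely what critics argue platforms are ill-placed to make.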

 

A case for placing greater obligations or commitments on social media platforms to remove hate speech

The internet is forever – Australia’s current regulations are ineffectual

In response to the failure of social media platforms to stop the large-scale spread of the livestream of the 2019 Christchurch massacre, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 was passed.

Abhorrent Violent Material Act Flowchart, © Commonwealth of Australia 2019.

Whilst this act was formed as a prompt response to the incident, there are questions as to whether it was a meaningful one. The evidence of its success is mixed, and racial defamation continues to be the only form of hate speech with federal coverage (Oboler, 2014).

Infographic of the eSafety Commissioner's Online Hate Speech report, created by the author on Canva.

The most recent inquiry into online hate speech in Australia demonstrates that we still have a long way to go. In an online environment where content is relatively permanent (Oboler, 2014), incidents of "general circulation" (Gelber & McNamara, 2016, p. 325) can continue to hurt communities.

Social media platforms’ continued self-interest

Without clear guidelines set by governments, social media platforms often update their codes of conduct arbitrarily, and repeatedly flout them in favour of potential profit.

Thread by Jason Koebler demonstrating Twitter's cherry-picking in the enforcement of its violence policy.

Official response from Twitter on the matter, with user replies highlighting the hypocrisy.

It is necessary for regulation to be enforced in order to make online communities safer. But is this best achieved through legislation that pushes further power onto companies?

Regulations pushing obligations onto social media platforms are not the solution.

The inadequacies of current moderation strategies

Currently, social media platforms' in-house endeavours to remove harmful and illegal content function neither accurately nor efficiently. Platforms use a mix of human moderators and AI systems to flag and review content, and both come with major issues. Human moderators are routinely made to work under stressful (and sometimes traumatic) conditions without effective training, in order to help keep social platforms safe.

In terms of AI, it is not the solution for the foreseeable future. Whilst AI systems can learn incredibly quickly, human culture is too complex and ever-changing for algorithms to easily pick up, and machines have difficulty understanding context-dependent content. Between April and June 2020, YouTube's expanded AI moderation faltered after it replaced many human moderators, with around 160,000 videos reinstated after being incorrectly taken down. Ultimately, where it is unclear how far AI technologies can succeed and current human moderation is troubling, changes must be made.
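To see why removing the human half of this pipeline causes over-removal, consider a minimal sketch of a confidence-threshold router of the kind platforms are commonly described as using. Everything here is hypothetical – the thresholds, scores, and names are invented, and no platform's actual system is being reproduced.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    hate_score: float  # hypothetical classifier confidence, 0.0-1.0

AUTO_REMOVE = 0.95   # invented threshold: the model is near-certain
HUMAN_REVIEW = 0.60  # invented threshold: ambiguous, context-dependent

def route(post: Post, humans_available: bool = True) -> str:
    """Send a post to removal, human review, or publication."""
    if post.hate_score >= AUTO_REMOVE:
        return "remove"
    if post.hate_score >= HUMAN_REVIEW:
        # Without reviewers, the ambiguous middle band must be decided
        # by the machine alone; erring on the side of removal is what
        # produces mass wrongful takedowns like YouTube's.
        return "human review" if humans_available else "remove"
    return "publish"

borderline = Post("news report quoting a slur to condemn it", 0.72)
print(route(borderline))                          # human review
print(route(borderline, humans_available=False))  # remove
```

The borderline example captures the failure mode: context a human reviewer would catch – quotation, condemnation, reporting – is lost when the middle band defaults to removal.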

The double-edged sword – Ensuring democracy by threatening democracy

Stopping online hate speech should not be controversial. When considered outside of a US constitutional lens, the regulation of harmful content online is not inherently problematic. The major issue, rather, lies in the outsourcing of content moderation to social media platforms. Germany has been harshly criticised over NetzDG for having "privatised one of [the state's] key duties: enforcing the law" (Rohleder, 2018), effectively allowing self-interested companies to decide what constitutes unlawful speech. Hate speech is indeed a menace to democratic values, yet the current management of the threat conversely threatens democracy once again by handing those same companies considerable power over controlling the internet.

What is the solution?

There is a necessity in Australia to control and diminish hate speech; leaving social media platforms unchecked is far too dangerous to ignore. However, replicating global regulations such as the EU Code of Conduct and NetzDG, which bestow further responsibility – and therefore power – on major social media companies, could have even greater consequences for ordinary users than the current scenario. Until social media platforms can provide greater transparency, openness, and accountability (Wenguang, 2018), and until AI software can safely replace human moderators, involving courts and public prosecutors to enforce obligations (Rohleder, 2018), alongside encouraging individual and peer-to-peer monitoring through public campaigns (Ross, 2018), could be the best way forward in regulating hate speech online.

Citations

ABC News. (2019). How the Christchurch terrorist used 8chan to connect and joke with neo-Nazis [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=44KEmbJelT8&t=218s

Ali, O. (2018). Why online hate speech is more problematic than offline hate speech. The Candor. Retrieved from https://thecandor.wordpress.com/2018/09/12/why-online-hate-speech-is-more-problematic-than-offline-hate-speech/

Anderson, B. (1991). Imagined communities: Reflections on the origin and spread of nationalism (Rev. ed.). London: Verso.

Chee, F. Y. (2020). TikTok to join EU code of conduct against hate speech. Reuters. Retrieved from https://www.reuters.com/article/us-eu-tiktok-hatespeech-idUSKBN25Z17K

Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Austl.)

eSafety Commissioner. (2019). Online hate speech. Retrieved from https://www.esafety.gov.au/about-us/research/online-hate-speech

European Commission. (2020). The EU Code of Conduct on countering illegal hate speech online. Retrieved from https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en

European Commission. (2020). 5th evaluation of the Code of Conduct. Retrieved from https://ec.europa.eu/info/sites/info/files/codeofconduct_2020_factsheet_12.pdf

Gelber, K., & McNamara, L. (2015). The effects of civil hate speech laws: Lessons from Australia. Law & Society Review, 49(3), 631-664. doi: 10.1111/lasr.12152

Gelber, K., & McNamara, L. (2016). Evidencing the harms of hate speech. Social Identities, 22(3), 324-341. doi: 10.1080/13504630.2015.1128810

Gelber, K., & Stone, A. (2007). Introduction. In K. Gelber & A. Stone (Eds.), Hate speech and freedom of speech in Australia (pp. xiii-xviii). Sydney: The Federation Press.

German Law Archive. (2017). Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG). Retrieved from https://germanlawarchive.iuscomp.org/?p=1245

Heffernan, V. (2018). Ich Bin Ein Tweeter. Wired. Retrieved from https://www.wired.com/story/germany-twitter-social-media-trolling/

Irving, E. (2019). Suppressing atrocity speech on social media. AJIL Unbound, 113, 256-261. doi: 10.1017/aju.2019.46

Koslowski, M., & Lewis, F. (2020). What is hate speech? Sydney Morning Herald. Retrieved from https://www.smh.com.au/national/what-is-hate-speech-20200202-p53wzy.html

Lapowsky, I. (2019). Why tech didn't stop the New Zealand attack from going viral. Wired. Retrieved from https://www.wired.com/story/new-zealand-shooting-video-social-media/

Mason, G., & Czapski, N. (2017). Regulating cyber-racism. Melbourne University Law Review, 41(1), 1-53.

Massanari, A. (2017). #Gamergate and the fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346.

Merkel, R. (2019). Livestreaming terror is abhorrent – but is more rushed legislation the answer? The Conversation. Retrieved from https://theconversation.com/livestreaming-terror-is-abhorrent-but-is-more-rushed-legislation-the-answer-114620

Oboler, A. (2014). Legal doctrines applied to online hate speech. Computers & Law, 87, 9-15.

Punsmann, B. G. (2018). Three months in hell. Süddeutsche Zeitung Magazin. Retrieved from https://sz-magazin.sueddeutsche.de/internet/three-months-in-hell-84381

Racial Discrimination Act 1975 s. 18C (Cth). Retrieved from https://www.legislation.gov.au/Details/C2016C00089

Rohleder, B. (2018). Germany set out to delete hate speech online. Instead, it made things worse. New Perspectives Quarterly, 35(2), 34-36.

Romano, A. (2018). Richard Spencer is an infamous white nationalist. Twitter says he's not part of a hate group. Vox. Retrieved from https://www.vox.com/2018/9/4/17816936/why-wont-twitter-ban-richard-spencer-hate-groups

Ross, K. (2018). Hate speech, free speech: The challenges of the online world. Journal of Applied Youth Studies, 2(3), 76-81.

Shihipar, A. (2017). Twitter doesn't need more policies, it needs diverse moderators. QZ. Retrieved from https://qz.com/1106845/twitter-doesnt-need-more-policies-it-needs-diverse-moderators/

The Verge. (2019). Inside the traumatic life of a Facebook moderator [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=bDnjiNCtFk4

Tusikov, N., & Haggart, B. (2019). Stop outsourcing the regulation of hate speech to social media. The Conversation. Retrieved from https://theconversation.com/stop-outsourcing-the-regulation-of-hate-speech-to-social-media-114276

United Nations. (2019). Strategy and plan of action on hate speech. Retrieved from https://www.un.org/en/genocideprevention/documents/UN%20Strategy%20and%20Plan%20of%20Action%20on%20Hate%20Speech%2018%20June%20SYNOPSIS.pdf

U.S. Const. amend. I.

Vincent, J. (2016). Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day. The Verge. Retrieved from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Vincent, J. (2019). AI won't relieve the misery of Facebook's human moderators. The Verge. Retrieved from https://www.theverge.com/2019/2/27/18242724/facebook-moderation-ai-artificial-intelligence-platforms

Vincent, J. (2020). YouTube brings back more human moderators after AI systems over-censor. The Verge. Retrieved from https://www.theverge.com/2020/9/21/21448916/youtube-automated-moderation-ai-machine-learning-increased-errors-takedowns

Wenguang, Y. (2018). Internet intermediaries' liability for online illegal hate speech. Frontiers of Law in China, 13(3), 342-356.

Laura Ganley
About Laura Ganley
Second-Year USYD student majoring in Digital Cultures and Marketing. Avid TikTok scroller. Interested in understanding the cultural impacts of social media on our daily lives. Find me making conversation @laurascybers on Twitter!