Should Australia Enforce Stricter Regulation On Hate Speech Online?

The Heated Debate Over Hate Speech, Free Speech and Social Media Regulation

News Feed of the Facebook App

Online platforms have had an overwhelming impact on communication and social interaction. However, the sheer volume of information circulating on social platforms has weakened the mechanisms that govern the healthy exchange of ideas, leading to a rise in hate speech (Chetty and Alathur, 2018, p. 113). Hate speech is defined by O’Regan (2018) as speech that “incites hatred… of individuals or groups…based on their race, ethnic origin, sexual orientation, gender identity or other similar attributes” (p. 404).

The increase in online hate has resulted in Germany’s Network Enforcement Act (NetzDG) and the European Union’s (EU) Code of Conduct forcing platforms to remove “evidently unlawful material in less than 24 hours or face fines” (Gorwa, 2019, p. 683). NetzDG also requires platforms to publish moderation transparency reports (Gorwa, 2019, p. 683). Australia should implement European-style regulation to hold platforms accountable for the spread of hate speech. Critics have argued that laws regulating social platforms undermine the democratic value of freedom of expression.

However, embracing “digital constitutionalism” (Suzor, 2018, p. 4) through government regulation would check social platforms’ cultural and economic power while helping minority groups feel safe online. This essay will explore these arguments while providing a historical understanding of the tensions underlying online speech regulation.

Why is Online Speech Regulation an Issue?

Liberal ideals have been at the foundation of the Internet and Web 2.0. This is no different for social platforms such as Facebook and YouTube, which have maintained their image as “open, impartial and non-interventionist” (Gillespie, 2018b, pp. 254-255). The popularity of social platforms has allowed all speech to spread rapidly and globally. This is reinforced by the nature of social platforms as “networked publics”, in which mechanisms such as liking, commenting and sharing increase the shareability and scalability of content (boyd, 2010, p. 43).

Dice of Social Platforms, Image: Loraine Harrison, All Rights Reserved

The rise of nationalism, xenophobia and anti-immigration rhetoric within Europe has fuelled a rise in online hate speech. However, platforms’ stance of self-regulation has resulted in a lack of accountability for taking down illegal content. According to an EU report (2020), sexual orientation and anti-immigration sentiment were the most common grounds for hate speech (Figure 1). Suzor (2018) explains that platforms’ self-regulation through their Terms of Service does not help these vulnerable communities, as it “safeguards the commercial interest of platform”, with users having “little legal redress for complaints” (p. 3).

Figure 1: Groups That Are Targeted By Hate Speech, Image: European Union, The EU 2020 Factsheet – 5th monitoring round of the Code of Conduct, All Rights Reserved.

Amid the “techlash” (Douek, 2019, p. 1) over hate speech circulating online after events such as the European migration crisis, Germany and the EU have gradually embraced digital constitutionalism and greater government regulation to hold platforms accountable. Suzor (2018) states that the push towards regulation allows nations to “rethink how the exercise of power ought to be limited in the digital age” (p. 4). Hence, it is time for Australia to ask whether platforms should have a responsibility to regulate illegal and hateful speech.

Holding Platforms Accountable?

European-style regulation would rein in the cultural, social and political power that social platforms wield in spreading hate online. Platforms have always maintained the rhetoric of mediation. However, as Gillespie (2018b) states, platforms do not mediate discourse, “they constitute it” (p. 257). Their power to control algorithms without any accountability poses a risk to individuals’ lives and to government (Gillespie, 2018b, p. 257). For example, the Instagram post below highlights the inability of Australia’s eSafety Commissioner to limit online hate speech under policies that favour platform self-regulation.

A post shared by ABC News (@abcnews_au) on Instagram

However, implementing regulation increases platform accountability. The EU’s Code of Conduct has already shown positive results: Figure 2 demonstrates that 90% of reports were reviewed within 24 hours, with 71% of the reported content removed. This highlights the law’s effectiveness in stopping the spread of hate speech by forcing platforms to weigh their economic interests against their social responsibilities. Thus, platforms must put aside their own interests and take greater accountability in regulating speech.

Figure 2: Removal of Hate Speech per IT Company, Image: European Union, The EU 2020 Factsheet – 5th monitoring round of the Code of Conduct, All Rights Reserved.

Limiting platforms’ economic and social control would also provide minority groups with a safe space to express their ideas and uphold fundamental human rights online. Platforms’ stance as mere mediators stems from economic interest, as it is the shareability of content that increases revenue (Matamoros-Fernández, 2017, p. 933). However, minority groups within Australia do not benefit from this liberal model: the Australian eSafety Commissioner’s report (2019) found that 14% of respondents had experienced online hate speech, with 64% taking no action.

Sydney Swans Adam Goodes Press Tour, Image: Hpeterswald, Some Rights Reserved

This echoes Matamoros-Fernández’s (2017) idea of “platformed racism”, where platform architectures not only amplify hate speech but also provide “arbitrary enforcement of rules” (p. 930), leaving minorities unprotected. This was evident in the hate speech on Facebook targeting Indigenous AFL player Adam Goodes, with liking and sharing mechanisms legitimising “racist humour and abuse” (Matamoros-Fernández, 2017, p. 938). This resulted in Goodes deleting Twitter because of the platform’s inability to control the spread of hateful content. It illustrates the “cultural privilege” (Gillespie, 2018a, p. 8) that non-minority groups enjoy on liberal platforms. Regulation could have stopped the circulation of such content, providing minority groups with a safer space to express their ideas and identity.

Impinging on Liberal Values?

The implementation of regulation could not only prove ineffective but also impinge on the liberal foundations of social platforms. The idea of digital constitutionalism sits in deep tension with the democratic value of freedom of expression. Under such laws, there is a concern that overblocking will occur, as it is in platforms’ economic interest to remove too much content rather than risk fines (Theil, 2019, p. 48). Such intervention risks becoming an “unwelcome intrusion of Western culture and values” (Gillespie, 2018b, p. 260) and has been likened to the social and political censorship of the “Great Firewall of China”. In addition, regulation can cut deeper into liberal values, as it has the potential to enforce arbitrary policies against certain groups (O’Regan, 2018, p. 427). Gillespie (2018a) notes that because “platforms are a product of the company that runs it” (p. 11), economic interest will determine what speech is removed so that profits are maintained. This may already be happening to American Republicans, with Ronna McDaniel tweeting her concern that conservative views are being hidden on a left-leaning platform. The same could transpire in Australia, where the discriminatory takedown of particular groups’ content would impinge on free speech.

The “anonymity, immediacy and global nature” (Banks, 2010, p. 233) of social platforms make the removal of all hateful speech highly uncertain, with or without regulation. If regulation is not effective, the impingement on democratic rights does not justify it. The anonymity of the Internet poses a major problem for the effectiveness of regulation: because banned users can simply create new profiles, platforms are left playing an “endless game of whack-a-mole” (Gillespie, 2018b, p. 270). In addition, government regulation may not be flexible enough for a changing Internet culture and environment. Douek (2019) states that government regulation can “only go so far”, as “online norms of discourse… and coded language” are constantly changing (p. 7). As a result, government regulation may not be the most effective response to the agile nature of the Internet.

Impact of Online Hate Speech Regulation?

Twitter Logo Birds “Tweeting” At Each Other, Image: opensource.com, Some Rights Reserved

European-style regulation within Australia would have an overall positive impact on ordinary internet users, who would still be able to express their opinions, just not illegal hate speech. It is important to note that the success of speech regulation depends entirely on the values of a nation (Banks, 2010, p. 233). The eSafety Commissioner’s report (2019) found that a majority of the citizens surveyed wanted regulation of online hate speech.

This regulation would help minorities and ordinary citizens not only feel safe but also feel acknowledged, as platforms would have to adhere to more rigorous moderation and transparency standards. For users sceptical about “overblocking”, NetzDG transparency reports have so far shown no evidence of it (Theil, 2019, p. 48). Platform moderation will never be perfect, but government regulation will deter individuals from posting hateful messages and help create a better online culture.

The Need for Hate Speech Regulation

The lack of regulation of hate speech has had negative impacts on users facing abuse. As the EU’s Code of Conduct and NetzDG were among the first such regulations, Australia has the advantage of learning from them to establish a more effective law. Even though “cyberlibertarians” (Banks, 2010, p. 233) will advocate for platform self-regulation, regulation is necessary to prevent the spread of hate speech and maintain a civil society and functioning democracy.

BIBLIOGRAPHY:

Banks, J. (2010). Regulating hate speech online. International Review of Law, Computers & Technology, 24(3), 233–239. https://doi.org/10.1080/13600869.2010.522323

boyd, d. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self: Identity, community, and culture on social network sites (pp. 39–58). Routledge.

Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40, 108–118. https://doi.org/10.1016/j.avb.2018.05.003

Douek, E. (2019). Verified accountability: Self-regulation of content moderation as an answer to the special problems of speech regulation (pp. 1–28). Lawfare Institute: Brookings. https://www.lawfareblog.com/verified-accountability-self-regulation-content-moderation-answer-special-problems-speech-0

E-Safety Commissioner (2019). Online hate speech: Findings from Australia, New Zealand and Europe. https://www.esafety.gov.au/sites/default/files/2020-01/Hate%20speech-Report.pdf

European Commission (2020). Countering illegal hate speech online: 5th evaluation of the Code of Conduct. https://ec.europa.eu/info/sites/info/files/codeofconduct_2020_factsheet_12.pdf

Gillespie, T. (2018a). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press.

Gillespie, T. (2018b). Regulation of and by platforms. In J. Burgess, A. Marwick & T. Poell (Eds.), The SAGE handbook of social media (pp. 254–278). London: SAGE Publications. https://doi.org/10.4135/9781473984066.n15

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 22(6), 854–871. https://doi.org/10.1080/1369118X.2019.1573914

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

O’Regan, C. (2018). Hate speech online: An (intractable) contemporary challenge? Current Legal Problems, 71(1), 403–429. https://doi.org/10.1093/clp/cuy012

Suzor, N. (2018). Digital constitutionalism: Using the rule of law to evaluate the legitimacy of governance by platforms. Social Media + Society, 4(3), 1–11. https://doi.org/10.1177/2056305118787812

Theil, S. (2019). The Online Harms White Paper: Comparing the UK and German approaches to regulation. The Journal of Media Law, 11(1), 41–51. https://doi.org/10.1080/17577632.2019.1666476
