Fighting hate speech on Australian social media: who’s responsible?


Introduction

Australia increasingly needs legislation requiring social media platforms to remove hate and illegal speech from the feeds of Australian social media users, in order to prevent the irreparable physical and psychological harm such content causes.

Amid concerns that such legislation would enable repressive government overreach and impinge on citizens’ right to free speech – and that effective moderation of content by platforms is impossible in practice – Australia has yet to take any legal action obliging platforms to remove hate or illegal speech from their feeds. As the following evidence illustrates, however, such legislation must be introduced immediately to prevent further devastating physical and psychological damage being inflicted upon Australians.

 

Hate and illegal speech: a brief explanation and history

Hate speech includes any form of communication which attacks or discriminates against an individual or group on the basis of their ‘race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, ability or disease’ (Matamoros-Fernandez, 2017, p. 936).

Hate speech is endemic online; the Secretary-General of the United Nations has proclaimed that the use of the Internet to spread hate and illegal speech presents ‘one of the most significant human rights challenges’ to emerge from technology (Alkiviadou, 2019, p. 20).

Increasingly, however, following the creation of the EU’s Code of Conduct on Countering Illegal Hate Speech Online in 2016, states have responded to calls to make social media platforms legally responsible for removing hate and illegal speech. Germany, for instance, has introduced pioneering legislation requiring platforms to remove hate speech from their feeds within a 24-hour window or face multimillion-dollar fines (Lomas, 2020).

 

Reducing psychological harm

Hate and illegal speech posted online can have grave psychological effects on the individuals targeted (Tusikov, 2019); legally enforcing social platforms’ responsibility to moderate such content is therefore essential to protect the mental wellbeing of Australians.

According to Tusikov (2019), hate speech can lower self-esteem, cause severe mental distress, induce social isolation and increase feelings of vilification and fear in those targeted by such content online.

In Australia, Matamoros-Fernandez (2017) notes, the widespread and horrific impact of online hate speech on the mental wellbeing of Indigenous Australians is poignant evidence of this.

Social media is used to ‘amplify racism’ (Matamoros-Fernandez, 2017, p. 933) towards First Nations peoples in Australia through the creation and circulation of hateful content; over 88% of Australian social media users have witnessed hate speech directed at Indigenous Australians online.

Such content has significantly affected countless targets, most notably Indigenous AFL star Adam Goodes who, after enduring unrelenting streams of hate speech from Facebook and Twitter accounts created solely to racially vilify him, suffered such harm to his mental health that he was eventually forced to retire from the AFL (Matamoros-Fernandez, 2017).

 

Above: An example of online hate speech directed at Adam Goodes. Image Credit: Bruce-Smith (2015). All rights reserved.

 

The potential for online hate speech to cause catastrophic psychological harm to Australian social media users is thus clear; the nation’s legal system must adapt, placing legal responsibility on social media platform providers to moderate hate and illegal speech on Australian feeds and so prevent such harm.

 

Preventing physical violence

The wholesale removal of hate and illegal speech from Australian social media feeds by platform providers would significantly reduce the incidence and risk of hate-motivated violence towards First Nations peoples in Australia (Guiora and Park, 2017; Russell and Cunneen, 2018).

Guiora and Park (2017) observe that the relative anonymity afforded to social media users enables the saturation of platforms with hate speech and calls for violence towards certain groups – speech that ‘may…in some extreme cases, cause violence’ (Tusikov, 2019, n.p.).

In Australia, hate speech inciting violence against First Nations peoples is ubiquitous on social media (Russell and Cunneen, 2018). Increasingly, Russell and Cunneen (2018) relate, neighbourhood crime-watch-style Facebook groups in various regional Australian towns ‘produce…a racialized narrative of crime’ (Russell and Cunneen, 2018, p. 1): members post the images and locations of Indigenous Australians alongside derogatory captions accusing them of crime and encouraging vigilantes to ‘track them down’ (Russell and Cunneen, 2018, p. 8) and physically attack those pictured.

 

Above: An example of racist hate speech posted on the Kalgoorlie Crimes Facebook page.
Image Credit: Russell and Cunneen (2018).
All rights reserved.

 

Such ‘overt racism’ (Russell and Cunneen, 2018, p. 8) carries a very real potential to inflict physical harm on its targets (Alkiviadou, 2019) – a risk that would be greatly reduced if platforms themselves were obliged to take such hateful content offline.

 

Fighting terrorism and extremist activity

Legislation enforcing social media platform providers’ ethical obligation to remove hate and illegal speech from the feeds of Australian users is also imperative to minimize the risk of Australians contacting – and consequently perpetrating violence on behalf of – extremist groups and terrorist organizations active on social media, ISIS in particular.

Brooking and Singer (2016, p. 72) argue that ISIS ‘owes its existence’ to social media. The group has used multiple channels to disseminate a mass of war propaganda that depends ‘almost entirely on evocative and shareable images’ (Brooking and Singer, 2016, p. 76) of prisoners, murders and martyrs, enabling it to recruit in excess of 30,000 foreign fighters to its cause and to capture a swathe of key cities across Syria and Iraq (Kranz, 2017) with a combined civilian population of over 12 million (Schmitt et al., 2019).

Above: A typical pro-ISIS tweet.
Image Credit: Al Mahmoud (2017).
All rights reserved.

 

Existing social media moderation, Jakubowicz (2017) confirms, is failing spectacularly to remove the hate and illegal speech spread by ISIS. With estimates of more than 70,000 active ISIS Twitter accounts (Lieber and Reiley, 2016) and over 200,000 pro-ISIS tweets posted each day (Blaker, 2015), social networking platforms have been ‘easily weaponized’ (Brooking and Singer, 2016, p. 72) by ISIS and other extremists to spread hate and illegal speech and attract followers to their cause, with no effective system of moderation in place to counteract such activity.

To date, over 100 Australian nationals have been radicalized online and left the country to join ISIS militias (Blaker, 2015). To curb the spread of, and damage inflicted by, terrorist and extremist organizations, it is therefore imperative that social media platform providers be made liable under Australian law to remove illegal and hate speech content from the nation’s feeds.

 

Challenges in the fight for an online hate speech law

While the case for Australian legislation requiring social media platforms to remove hate and illegal speech from Australian feeds is convincing, implementing such a law is, in practice, problematic.

Such legislation, by ensuring that all hate speech is removed from social networks, arguably denies citizens the right to freedom of expression enshrined in Article 19 of the Universal Declaration of Human Rights (1948) (Mchangama, 2011), a right vital to ‘facilitating differences of opinion’ and thus to fostering ‘a healthy society’ (Guiora and Park, 2017). Democracies do accept that such freedoms have limits (Guiora and Park, 2017), imposed to protect citizens’ concurrent right to be safe from discrimination (Mchangama, 2011). Even so, an Australian law charging social media platforms with countering online hate speech would most likely meet strong public resistance, owing to the Western sentiment that freedom of speech should be privileged above freedom from hate (Jakubowicz, 2017) and the fear that such censorship ‘invites government excess’ (Guiora and Park, 2017, p. 966) in restricting citizens’ speech.

Further problematizing such legislation is the human cost of moderation itself. Because algorithmic and other automated moderation techniques remain ‘notoriously unreliable’ (Gerrard, 2018, p. 4495), the moderation of social media platforms typically relies on ‘virtual sweatshop labour’ (Martin, 2019, p. 199), in which underpaid and untrained ‘moderators’ are repeatedly exposed to distressing content (Martin, 2019).

 

Conclusion

Ultimately, despite the complex issues surrounding its implementation, the fact remains: Australian legislation is urgently needed to enforce social media platforms’ responsibility to remove hate and illegal speech from the feeds of Australian users. Such legislation would minimize the physical and psychological harm that befalls many Australians while countering rising terrorist and extremist activity online. It is an action that would foster a safe, positive online space for all Australians.

  

 

References

Al Mahmoud, F. (2017, September 14). Was Twitter right to remove the beheading video of James Foley?. Medium. Retrieved from https://medium.com/

 

Alkiviadou, N. (2019). Hate speech on social media networks: towards a regulatory framework? Information & Communications Technology Law, 28(1), 19-35. doi: 10.1080/13600834.2018.1494417

 

Blaker, L. (2015). The Islamic State’s use of online social media. Military Cyber Affairs, 1(1), 1-9. doi: 10.5038/2378-0789.1.1.1004

 

Brooking, E.T., & Singer, P.W. (2016). War goes viral. The Atlantic Monthly, 318(4), 70.

 

Bruce-Smith, S. (2015, July 30). Aussie FC player calls for Adam Goodes to be “deported”, immediately blasted. Pedestrian TV. Retrieved from https://www.pedestrian.tv/

 

Carlson, B., & Frazer, R. (2018, April 5). Indigenous voices are speaking loudly on social media but racism endures. The Conversation. Retrieved from https://theconversation.com/au

 

Gerrard, Y. (2018). Beyond the hashtag: circumventing content moderation on social media. New Media and Society, 20(12), 4492-4511. doi: 10.1177/1461444818776611

 

Guiora, A., & Park, E. (2017). Hate speech on social media. Philosophia, 45(3), 957-971. doi: 10.1007/s11406-017-9858-4

 

Herborn, D. (2013). Racial vilification and social media. Indigenous Law Bulletin, 8(4), 16-19.

 

Jakubowicz, A. (2017). Alt Right White Lite: trolling, hate speech and cyber racism on social media. Cosmopolitan Civil Societies, 9(3), 41-60. doi: 10.5130/ccs.v9i3.5655

 

Kranz, M. (2017, October 25). These maps show how drastically ISIS territory has shrunk since its peak. Business Insider Australia. Retrieved from https://www.businessinsider.com.au/

 

Lieber, P., & Reiley, P. (2016). Countering ISIS’s social media influence. Special Operations Journal, 2(1), 47-57. doi: 10.1080/23296151.2016.1165580

 

Lomas, N. (2020, June 20). Germany tightens online hate speech rules to make platforms send reports straight to the feds. TechCrunch. Retrieved from https://techcrunch.com/

 

Martin, F. (2019). The business of news sharing. In F. Martin and T. Dwyer (Eds.), Sharing news online: commendary cultures and social media news ecologies (pp. 91-127). Cham, Switzerland: Palgrave Macmillan.

 

Matamoros-Fernandez, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. doi: 10.1080/1369118X.2017.1293130

 

Mchangama, J. (2011). The sordid origin of hate-speech laws. Policy Review, 170(1), 45.

Russell, S., & Cunneen, C. (2018). Social media, vigilantism and Indigenous people in Australia. In K. Biber and M. Brown (Eds.), The Oxford encyclopedia of crime, media and popular culture (pp. 1-31). New York, USA: Oxford University Press.

 

Schmitt, E., Rubin, A.J., & Gibbons-Neff, T. (2019, August 19). ISIS is regaining strength in Iraq and Syria. The New York Times. Retrieved from https://www.nytimes.com/

 

Tusikov, N. (2019, April 11). UK and Australia move to regulate online hate speech, but Canada lags behind. The Conversation. Retrieved from https://theconversation.com/au

 
