Social platforms have become breeding grounds for vile prejudice and hate speech. While it is clear that toxic radicalisation is a growing problem online, the best way to combat it is far less so (Massanari, 2017). If the goal is to protect the Australians most at risk of abuse over inherent characteristics beyond their control, then any attempt to tackle online vitriol must be judged by whether it actually achieves that goal.
Facing its greatest refugee crisis since WWII, Europe was compelled to impose strict laws on social platforms to ensure that content flagged as hate speech would be dealt with swiftly. For a continent suddenly living alongside people from vastly different ethnic and religious backgrounds, this was critical to ensure that new xenophobic attitudes did not take hold.
Unfortunately, Europe’s particular approach would not work in an Australian context because our xenophobia has been entrenched for decades. Removing hate speech from social platforms will stop its spread on those platforms, but it will not unsettle the residual hatred embedded in our society (Bilodeau & Fadol, 2011, p. 1089).
Clearly, without regulation, social platforms cannot be trusted to act in anything other than their own financial interests. However, given the way minorities are characterised by mainstream Australian media, it is equally unclear whether forcing platforms to abide by accepted Australian social norms would serve those minorities any better.
As long as Australia refuses to address its latent xenophobia, it appears social platforms have no obligation to police hate speech any differently to the way they currently do.
Defining the issue
The issue at hand is whether social platforms like Facebook and YouTube are obligated to remove hate speech and illegal content, and whether such an approach should be implemented in Australia.
While laws differ between countries, it is clear that content presented to Australians should not include speech deemed illegal in Australia. Hate speech raises more complex problems, both because the term itself is vague and because the obligations placed on platforms vary.
In the eSafety Commissioner’s Hate Speech report, Australian adults were found to consider hate speech to mean “anything negative directed at another person”. This definition is closer to bullying than to hate speech. A better definition appears in Facebook’s community guidelines, which define hate speech as a “direct attack on people based on…protected characteristics”, such as ethnicity or sexual orientation. Below is a recent post by the Commissioner on the issues faced by minorities during COVID-19.
#COVID-19 has applied a magnifying glass to society, bringing into sharp focus the best parts of humanity, whilst highlighting the disadvantage & inequality faced by many. This is particularly on SM-surfacing the forces of misogyny & racism fuelling this targeted online invective https://t.co/KzmMNPiaha
— Julie Inman Grant (@tweetinjules) August 11, 2020
Australia’s hate speech legislation is broader still, capturing “any act reasonably likely to offend or intimidate another” on the basis of attributes including race and gender identity. Prosecutions for hate speech in Australia are incredibly rare, yet this has not stopped critics of Section 18C of the Racial Discrimination Act from arguing that it is too broad.
As a platform, Facebook would seem to have a moral obligation to ensure its users feel safe, yet the Cambridge Analytica scandal showed its willingness to betray user privacy in pursuit of commercial interests. If we cannot trust social platforms to follow their own moral compass, then pulling them into line with Australian values requires clear legal obligations, and it is here that the problem lies.
A bit of context
Social platforms have shirked responsibility for regulating their content by hiding behind a law from the 1990s, drafted for online message boards, whose authors could not possibly have imagined what these platforms would become. Section 230 of the 1996 Communications Decency Act states that the provider of a web service is not to be treated as the publisher of content uploaded by its users (Moon, 2019).
Progress has been made, and platforms do now attempt to remove manifestly illegal content from their sites, but they have been dragged there kicking and screaming. Facebook still maintains that its primary role is to allow users to share content. On its own account, it provides a bookshelf from which users may read whatever they wish; regulators, by contrast, are concerned with which books the platform chooses to display, and in what order.
Germany’s modern approach to regulating hate speech is inseparable from its horrific past. Since the end of WWII, Germany has readily accepted responsibility for the Holocaust and has officially criminalised its denial. With some of the strictest hate speech laws in the world, Germany remains committed to ensuring it does not repeat the mistakes of its past.
Despite governments’ legislative attempts to force social platforms to cooperate, nothing has had a greater impact on their behaviour than advertiser boycotts. When ads were found running on extremist videos, YouTube’s swift response was to demonetise all “sensitive” content, burying these instances of hate speech rather than removing them.
Accountability: Why state regulation can work
Allowing social platforms to police their own content has often been unsuccessful in removing hate speech. It was suggested earlier that Facebook presents a useful definition of hate speech in its community guidelines. Unfortunately, their enforcement of this guideline leaves much to be desired.
In 2017 it was revealed that Facebook’s secret internal rules for censoring hate speech left up a US congressman’s call for the murder of all “radicalised” Muslims while removing a Black Lives Matter protester’s post calling “all white people racist” (Angwin & Grassegger, 2017). To Facebook, the second post was worse because it made an unfair generalisation. Others might contend that it is less reprehensible than a politician calling for mass murder.
In contrast, after Germany established the Network Enforcement Act in 2017, Facebook was found to remove 100% of content flagged by users as hate speech. All it took was the threat of a 50 million euro fine for the company to get its act together.
Codified Racism: Why Australian regulation won’t work
While Facebook is compelled to remove a video explicitly inciting violence against Muslims, The Australian has faced no penalty for publishing an article on the need for the West to “strike back” against Muslims (Ergas, 2020).
Likewise, social media might have made it easier for racists to exercise their bigotry, but it did not create them. When Cody Walker chose not to sing the national anthem before State of Origin because he felt it did not represent him, he was relentlessly abused for disrespecting his country. It was the users of social platforms, not the sites themselves, that attempted to curtail his right to free speech.
Adam Goodes was labelled a “flog” by AFL fans because he dared to call out a young girl who called him an ape. Her mother told him to suck it up and apologise because her daughter did not understand the word’s connotations. But that is precisely the issue: her daughter merely parroted the beliefs instilled by a society that refuses to take seriously the trauma it continues to inflict upon Aboriginal people. This undercurrent of racism extends to all non-white Australians, as Andrew Bolt’s most recent tirade against “African gangs” shows.
Germany knows what hate speech left unchecked can lead to and has regulated accordingly. If we can’t even pin down what qualifies as hate speech in public discourse, there is no way that a piece of legislation could be robust enough to govern private online communities in a meaningful way.
Allowing corporate interests to self-regulate essential online spaces is unsustainable given their capacity to fuel divisive rhetoric, but it would be a tragedy for human rights if regulation were built on the Australian understanding of hate speech. Successful regulation of multinational social media companies will require external oversight; in the Australian context, however, hate speech will live on, whether we can see the posts or not, until we confront our ingrained xenophobia.
Angwin, J., & Grassegger, H. (2017, June 28). Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children. ProPublica. Retrieved from https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms
Bilodeau, A., & Fadol, N. (2011). The roots of contemporary attitudes toward immigration in Australia: contextual and individual-level influences. Ethnic and Racial Studies, 34(6), 1088–1109. https://doi.org/10.1080/01419870.2010.550630
Ergas, H. (2020, October 22). Islam and the West: We must strike back or soon we’ll all be Samuel Paty. The Australian. Retrieved from https://www.theaustralian.com.au/commentary/islam-and-the-west-we-must-strike-back-or-soon-well-all-be-samuel-paty/news-story/819a11ed49c1d2ccd7b19af38f3a08c5?commentId=0d83f388-5008-4891-bc9c-63a411dfc08e
eSafety Commissioner. (2019). Online Hate Speech: Findings from Australia, New Zealand and Europe. Retrieved from https://www.esafety.gov.au/sites/default/files/2020-01/Hate%20speech-Report.pdf
European Commission. (2020). Countering illegal hate speech online: 5th evaluation of the code of conduct. Retrieved from https://ec.europa.eu/info/sites/info/files/codeofconduct_2020_factsheet_12.pdf
Moon, L. (2019). A New Role for Social Network Providers: NetzDG and the Communications Decency Act. Transnational Law & Contemporary Problems, 29(1), 611–633.
UNHCR. (2014). UNHCR Global Trends: Forced displacement in 2014. Retrieved from https://www.unhcr.org/556725e69.html