Hate Speech Regulation in Australia: Should Social Platforms be Responsible?

Image: Hitesh Sonar for The Swaddle/Reddit

Hate speech, a prevalent form of online violence, can harm victims at very little cost to the perpetrator because of the nature of online speech. Its regulation has attracted debate and controversy over whether it interferes with freedom of speech. To prevent the harmful consequences of hate speech, different forms of regulation, both legal and platform-imposed, have been proposed to set out guidelines on the inappropriate use of online language. In Australia, the government has established federal protections against the harmful effects of hate speech through the Racial Discrimination Act. Worldwide, there is a growing push to make social platforms responsible for regulating online abuse, exemplified by Europe’s 2016 Code of Conduct on Countering Illegal Hate Speech Online. This trend partly responds to a growing number of cases in which individuals or groups attacked online suffered serious harm. While some argue that regulating hate speech should rest primarily with governments and with users’ self-regulation, this post argues that social platforms should take the major responsibility, both because platform-level regulation is more effective and because each platform has unique characteristics.

Current status of hate speech regulations in Australia

Figure: European Commission, 2020

Regulation of the Internet space has attracted increasing attention over the past decade as the use of social platforms has grown. With online communication now carrying as much weight as in-person communication, verbal abuse has become more prevalent online, driven by the anonymity of attackers and the difficulty of policing hateful speech. In Australia, hate speech regulation was first introduced in an offline context, to address discrimination on the grounds of race, religion, gender, and related attributes. At the federal level, in addition to the original Racial Discrimination Act 1975, the Criminal Code Act 1995 provides that non-physical harassment can be prosecuted as a federal offence. At the state level, the scope of regulation has gradually expanded to cover Internet- and email-related conduct. However, legislation that requires social platforms themselves to regulate hate speech effectively is still missing. The European Union introduced its Code of Conduct in 2016 to provide detailed guidelines, and the most recent evaluation shows that social platforms reviewed over 90% of reported content within one day and removed 71% of the content assessed as hate speech (European Commission, 2020). While these results suggest that the majority of reported hate speech is removed effectively, the following sections set out the controversies around requiring social platforms to regulate abusive comments.

Clean Internet 

Barbara Boxer, No Laughing Matter

While governments have tried to protect victims of hate speech, Banks (2010) describes government regulation as ‘unilateral efforts’ that are constrained by ‘limited jurisdictional reach and conflict that has occurred when states have sought to enforce laws extraterritorially into other jurisdictions’ (p. 234). Banks’ argument highlights the major limitation of using legislation to tackle hate speech: the phenomenon is too widespread to be handled as individual federal offence cases. Because major technology companies such as Google and Facebook are based in the U.S. while serving millions of users overseas, they have little obligation to comply with Australian law in the absence of binding bilateral arrangements. The Internet extends beyond national borders, which makes the issue of jurisdictional reach difficult to resolve. Multilateral regulation, built on collaboration between governments and businesses, is required to deliver effective results.

User report function available on YouTube videos

Internet platforms have the capacity to monitor hate speech more directly and effectively under a Code of Conduct. Most social platforms allow users to report abusive content, which sharply narrows the range of content that needs to be reviewed. Government authorities, by contrast, have limited access to the mechanisms for deleting comments or blocking users, which makes it difficult to restrict the impact of hate speech in a timely manner (Alkiviadou, 2018, p. 33). Timing is essential because content can go viral within seconds, with potentially irreversible consequences. Getting platforms to act this quickly, however, requires strong external enforcement on top of a sense of ethical responsibility. With legal requirements to improve the blocking of hateful content, the task can achieve good outcomes without the government interfering in the day-to-day operations of platform companies.
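To make the report-and-review mechanism concrete, the sketch below shows one hypothetical way a platform might triage user reports against the Code of Conduct’s one-day review benchmark. It is a minimal illustration only: the names (Report, enqueue, next_for_review) and the priority weighting are assumptions made for this post, not any platform’s actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
import heapq

REVIEW_DEADLINE = timedelta(hours=24)  # Code of Conduct benchmark: assess reports within one day


@dataclass(order=True)
class Report:
    priority: float                            # lower value = reviewed sooner
    content_id: str = field(compare=False)
    reported_at: datetime = field(compare=False)
    report_count: int = field(compare=False)


def enqueue(queue: list, content_id: str, reported_at: datetime, report_count: int) -> None:
    """Push a reported item; heavily reported and older items surface first (illustrative weighting)."""
    age_hours = (datetime.now() - reported_at).total_seconds() / 3600
    heapq.heappush(queue, Report(-(report_count + age_hours), content_id, reported_at, report_count))


def next_for_review(queue: list) -> tuple:
    """Pop the most urgent report and note whether the 24-hour window has already lapsed."""
    report = heapq.heappop(queue)
    overdue = datetime.now() - report.reported_at > REVIEW_DEADLINE
    return report.content_id, overdue


# Example: the heavily reported post is reviewed before the lightly reported one.
queue: list = []
enqueue(queue, "post_123", datetime.now() - timedelta(hours=3), report_count=40)
enqueue(queue, "post_456", datetime.now() - timedelta(hours=1), report_count=2)
print(next_for_review(queue))  # ('post_123', False)
```

The point of the sketch is that only the platform can sit at this position in the pipeline: it receives the reports, holds the content, and controls removal, which is why the time-critical work has to happen on its side.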

Free Internet  

Harry Miller, a former police officer, argued that the “non-crime hate incident” on his police record was a violation of his human rights

Hate speech regulation raised ethical debate long before the Internet era. Philosophical arguments and legal cases centred on these ethical concerns have produced distinct views. Many liberals argue that hate speech regulation, especially campus regulation, can be interpreted as ‘a form of illegitimate control by the community over individual liberty of expression’ (Altman, 1993, p. 302). The concept of neutrality has been an integral part of this principle of personal freedom. In Australia, although there is no explicit law guaranteeing freedom of expression, the courts recognise an implied freedom, especially for the expression of political views. However, Altman argues that justifying hate speech by appeal to free speech takes too narrow a view, because ‘rules against hate speech are not viewpoint-neutral’ (1993, p. 305). Declining to discourage hate speech is itself an implicitly biased stance, one that neglects the harms caused by personal attacks.

Image: Internet Freedom Foundation, self-censorship by Hotstar and the IAMAI’s self-regulation code

Another growing concern arises when technology is deployed at scale to regulate hate speech: the lack of transparency and the tendency toward self-censorship. The line between censorship and the removal of abusive content is subtle, because the removal process itself is opaque. In the extreme case where the government has the authority to regulate the Internet space, as in China, users develop the habit of ‘self-censoring’ under a strict censorship environment (Tkacheva et al., 2013, p. 97). When businesses are required to impose strict rules for removing hate speech, the same habit of self-censorship may follow, discouraging innovation and open expression. The Internet has been established as a space for people to express themselves, and its anonymity makes users more willing to share. It also remains unclear what algorithms businesses use, and whether they can offer objective criteria for what counts as hate speech, given the complexity of online language.
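To see why objective criteria are hard to pin down, consider the deliberately naive, hypothetical keyword filter sketched below. The blocklist and example posts are invented for illustration and bear no relation to any platform’s actual system; real platforms rely on proprietary machine-learning classifiers, which is precisely what makes their criteria opaque.

```python
import re

# Hypothetical, deliberately simplistic rule-based filter, used here only to illustrate
# why fixed criteria struggle with the complexity of online language.
BLOCKLIST = {"vermin", "subhuman"}  # illustrative terms only


def naive_flag(post: str) -> bool:
    """Flag a post if any blocklisted word appears, ignoring all context."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return bool(words & BLOCKLIST)


# Context-blindness in action: the first post is abusive, the second quotes the abuse in a
# news report, and the third uses the word literally -- yet all three are flagged.
examples = [
    "Those people are vermin and should leave.",
    "The councillor was condemned for calling refugees 'vermin'.",
    "The barn is overrun with vermin again.",
]
for post in examples:
    print(naive_flag(post), "->", post)
```

A transparent rule set of this kind over-blocks quotation and literal usage, while a more accurate learned model resists simple explanation; that trade-off sits at the heart of the transparency concern.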

Implications…

While this article endorses introducing legal regulation to compel social platforms to regulate hate speech, the issue extends well beyond Australia. Popular social platforms have substantial user bases across the globe, making hate speech regulation a globally relevant discussion. Extreme hateful comments appear among users everywhere, yet a key difficulty lies in the technical limits of detecting such comments across different languages. The complexity of language, especially terse Internet speech, requires more extensive research and attention worldwide. Additionally, hate speech on different platforms exhibits different characteristics, which can be studied to improve content-detection technology.

Potential solutions?

Given the unique challenges of regulating online hate speech, a collaborative framework involving both the public and private sectors should be established. Social platforms have the inherent strength of implementing policy directly and effectively and of applying sophisticated detection algorithms, so these businesses should be obligated to take part in the actual execution of hate speech regulation. Meanwhile, legal regulation has the advantage of protecting the public interest rather than business interests. Consequently, Australia should introduce similar laws and regular assessments to ensure that social platforms detect and filter illegal hate content effectively.

References

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.

Federal Register of Legislation – Australian Government. (2016). Racial Discrimination Act 1975. https://www.legislation.gov.au/Details/C2016C00089

Banks, J. (2010). Regulating hate speech online. International Review of Law, Computers & Technology, 24(3), 233–239. https://doi.org/10.1080/13600869.2010.522323

Altman, A. (1993). Liberalism and Campus Hate Speech: A Philosophical Examination. Ethics, 103(2), 302–317. https://doi.org/10.1086/293497

Reynders, D. (2020, July). Countering illegal hate speech online: 5th evaluation of the Code of Conduct. European Commission. https://ec.europa.eu/info/sites/info/files/codeofconduct_2020_factsheet_12.pdf

Tkacheva, O., Schwartz, L. H., Libicki, M. C., Taylor, J. E., Martini, J., & Baxter, C. (2013). Internet freedom and political space (RR-295-DOS). RAND Corporation. https://apps.dtic.mil/dtic/tr/fulltext/u2/a589997.pdf

European Commission. (2020, June 22). EU Code of Conduct on countering illegal hate speech online continues to deliver results [Press release]. https://ec.europa.eu/commission/presscorner/detail/en/ip_20_1134

Federal Register of Legislation – Australian Government. (2020, July 20). Criminal Code Act 1995. https://www.legislation.gov.au/Series/C2004A04868

Alkiviadou, N. (2018). Hate speech on social media networks: towards a regulatory framework? Information & Communications Technology Law, 28(1), 19–35. https://doi.org/10.1080/13600834.2018.1494417