Social media platforms required to remove hate speech: Should Australia follow suit?

Free to use photo by Christian Wiediger on Unsplash 

The pressure on social media companies to remove illegal hate speech from their platforms has escalated globally in recent years. Should Australia be following suit?

Social media platforms in Australia are primarily self-regulated, and until recently there was no specific criminal law addressing online hate speech. Although hate speech regulation is addressed by both platforms and governments, the effectiveness of current regulation has been called into question after recent events, including the live-streaming of the Christchurch massacre on social media platforms. German law and the EU’s 2016 Code of Conduct provide examples of public authorities imposing strict responsibilities on social media companies to remove hate speech.

Although this kind of enforcement has benefits, including maintaining public order, there are also arguments against it, such as the importance of free speech. I will argue that, given the scale of content online, governments, communities and platforms need to work together to develop moderation tools rather than placing all responsibility on the media companies.

 

History of Hate Speech Regulation

The responsibility of social platforms to regulate hate speech and other illegal speech has developed only recently, and mainly on a reactionary basis. O’Regan (2018) argues that hate speech regulation is vital to maintaining public order and preventing imminent violence. Since the initial rise of social media, companies have benefited from self-regulating their content.

Recent stories have exposed the inconsistencies in content regulation on social media platforms:

  • 2012: The Online Hate Prevention Institute reported several anti-Semitic messages on Facebook, including a picture of Holocaust victim Anne Frank alongside the caption “What’s that burning? Oh it’s my family”. Facebook rejected these reports and the content remained on the platform.
  • 2018: Reuters found over 1,000 examples of content on Facebook attacking the Rohingya and other Muslims.
  • 2018: Following a shooting at a Pittsburgh synagogue, an Instagram search for “Jews” returned over 11,500 posts carrying the hashtag #jewsdid911.

The background of online hate speech regulation can be examined by comparing German law, the EU’s 2016 Code of Conduct and Australia’s history of content regulation.

 

Diagram created by the author using Canva: Relevant Laws & Concepts in Germany, the EU and Australia

Why Australia should follow suit

1. Overall Benefits of Hate Speech Regulation

Anti Hate Speech sign by Ashley Marinaccio on Flickr. Some rights reserved

Given the exchanges of power that social media networks enable, attention must be paid to content moderation. Social media companies now host much of our public discourse, and governments and society at large benefit from this interface. It is therefore in the interest of all parties to regulate content on these platforms that may be harmful or threatening to public order.

There is a range of criminal prohibitions worldwide that address attacks on specific groups, especially when the speaker intends to harm the targeted person or group (O’Regan, 2018). The question, then, is not whether hate speech is regulated, but how that regulation occurs.

2. Platforms Taking Responsibility

One of the key arguments for media companies regulating their platforms is the need for these companies to acknowledge that there is a problem with hate speech online and to take action to monitor it. As Gillespie states, “Social media platforms put more people in direct contact with one another, afford them new opportunities to speak and interact with a wider range of people” (2018, p. 1). Media companies must recognise their role in public discourse and implement moderation and regulation to protect public interests.

Facebook’s Community Standards outline different tiers of hate speech attacks that are not acceptable on the platform. They define hate speech as:

“a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability.”

However, the terms of service only state that this content “may” be removed. The flexibility of the language used in these kinds of regulatory codes often means that hate speech removal can be evaded. Where the terms of service fail, government pressure helps ensure companies take a more hard-line approach towards regulation.

Why Australia should not

1. Freedom of Speech vs. Censorship

Although governments continue to place pressure on tech companies to regulate their platforms, there is a concern that this level of restriction may threaten free speech. American tech companies have a heightened interest in protecting free speech due to US constitutional values, and this interest inherently influences their terms of service and codes of regulation. There is also a broader issue surrounding the decision to regulate and what kind of content falls under the definition of “hate speech”, as regulating bodies and media platforms define hate speech differently. The decision to remove and regulate content based on these definitions is undoubtedly complex, as platforms want to avoid the appearance of censorship.

The following Tweet demonstrates the difficulties platforms face in determining what content qualifies as hate speech.

2. Private Companies Regulating a Public Forum?

Another issue with the push for social media companies to regulate their own platforms is the idea of a private corporation, rather than a public authority, regulating a public forum. Laws like those applied in Germany can be seen as a policing exercise directed at tech platforms rather than a collaborative effort to regulate a public platform. Gillespie says “content moderation receives too little public scrutiny even as it shapes social norms and creates consequences for public discourse, cultural production, and the fabric of society” (2018, p. 1). Because the discourse that takes place on social media is central to how people communicate, many believe that governments have a duty as public authorities to take part in regulation.

Mark Zuckerberg recently called for global regulations, including overarching rules for hate speech, stating:

“Internet companies should be accountable for enforcing standards on harmful content. It’s impossible to remove all harmful content from the internet, but when people use dozens of different sharing services — all with their own policies and processes — we need a more standardized approach.”

The techniques these social media platforms use to moderate also present a problem. Current moderation of hate speech combines artificial intelligence, which picks up on key terms and phrases, with human moderators who review content for illegal hate speech. Human moderators handle thousands of instances of extreme material daily for minimal pay, and many are psychologically scarred. Platforms also rely heavily on community flagging to bring hate speech to their attention in the first place, as the sketch below illustrates.
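To make the fragility of this combination concrete, here is a minimal sketch of keyword matching paired with a community-flagging threshold. It is not any platform’s actual system; the term list, threshold value and function name are assumptions made purely for illustration.

```python
# Minimal sketch of automated pre-screening plus community flagging.
# The keyword list, flag threshold and function name are illustrative
# assumptions, not any platform's real moderation pipeline.

BLOCKED_TERMS = {"exampleslur1", "exampleslur2"}  # placeholder terms only
FLAG_THRESHOLD = 3  # community reports needed before escalation (assumed)

def needs_human_review(post_text: str, user_flags: int) -> bool:
    """Escalate a post when keyword matching or community flagging trips."""
    lowered = post_text.lower()
    keyword_hit = any(term in lowered for term in BLOCKED_TERMS)
    return keyword_hit or user_flags >= FLAG_THRESHOLD

# A post containing no listed terms and only two reports is never escalated,
# showing how purely mechanical checks can miss borderline hate speech.
print(needs_human_review("an offensive but keyword-free post", user_flags=2))
```

Anything the keyword list misses only reaches a moderator if enough users report it, which is precisely the gap that low-paid human reviewers are left to fill.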

Video giving further insight into the psychological effects of moderation. Standard YouTube License

The Verdict

I argue that Australia should not focus on implementing legislation that places responsibility on social media platforms to remove hate speech. Instead, Australia needs to work with these platforms to develop better means of moderation and removal of hate speech.

Two significant issues prevent platforms alone from achieving perfect regulation of hate speech: the continually shifting concept of hate speech, which differs between cultures and changes over time, and the sheer scale of content published online.

Gillespie (2018) notes that the majority of platforms have embraced a “publish-then-filter” approach whereby user posts are immediately public, without review, and platforms can remove questionable content only after the fact. For example, on average around 6,000 tweets are posted on Twitter every second, which works out to roughly 500 million tweets per day. This scale makes the detection, moderation and removal of hate speech extremely challenging.
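To see why the ordering matters, here is a minimal sketch of the publish-then-filter workflow Gillespie describes, using hypothetical function names rather than any real platform API: content becomes visible the instant it is posted, and moderation can only react afterwards.

```python
from collections import deque

# Minimal sketch of a "publish-then-filter" workflow: posts go public
# immediately and are reviewed only after the fact. Names are illustrative.
published = []          # posts that are already publicly visible
review_queue = deque()  # flagged posts awaiting after-the-fact review

def publish(post: str) -> None:
    """Make a post public immediately, with no up-front review."""
    published.append(post)

def flag(post: str) -> None:
    """Community flagging queues an already-visible post for review."""
    if post in published:
        review_queue.append(post)

def moderate() -> None:
    """Remove flagged posts, but only after they have already been seen."""
    while review_queue:
        post = review_queue.popleft()
        if post in published:
            published.remove(post)

publish("an ordinary post")
publish("a post later judged to be hate speech")
flag("a post later judged to be hate speech")
moderate()
print(published)  # the flagged post is removed, but only after being public
```

At thousands of posts per second, any review that happens only after publication is always racing against content that has already been seen and shared.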

Instead of placing all responsibility on the media companies, governments and communities need to work with the social platforms to develop new systems for tackling targeted hate on social media. Artificial intelligence is an important technology currently used by media companies to monitor and regulate hate speech (Accenture, 2018). Although this technology is not yet fully effective, with more government-led initiatives the next generation of software may reduce the need for low-paid and psychologically damaging human moderation.

Final Thoughts

Australia has already attempted to place the responsibility for removing online hate speech on social media companies through a hastily produced criminal code amendment. This legislation is not a long-term solution to the issue of illegal hate speech. Instead, more needs to be done to develop new methods of moderation that are better suited to the scale of content on social platforms and to the human moderators who regulate social media.

 

 

Reference List

Article 19 (2018). Germany: Responding to ‘hate speech’. [online] London: Article 19. Available at: https://www.article19.org/wp-content/uploads/2018/07/Germany-Responding-to-%E2%80%98hate-speech%E2%80%99-v3-WEB.pdf.

Böttcher, L. (2017). Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG). German Law Archive. [online] Germanlawarchive.iuscomp.org. Available at: https://germanlawarchive.iuscomp.org/?p=1245.

Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019

European Commission (2016). European Commission and IT Companies announce Code of Conduct on illegal online hate speech. [online] Available at: https://europa.eu/rapid/press-release_IP-16-1937_en.htm.

Facebook.com. (2019). Community Standards | Facebook. [online] Available at: https://www.facebook.com/communitystandards/hate_speech.

Frenkel, S. (2018). On Instagram, 11,696 Examples of How Hate Thrives on Social Media. The New York Times. [online] Available at: https://www.nytimes.com/2018/10/29/technology/hate-on-social-media.html.

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press. Retrieved from http://search.ebscohost.com.ezproxy1.library.usyd.edu.au/login.aspx?direct=true&db=nlebk&AN=1834401&site=ehost-live.

Mozur, P. (2018). A Genocide Incited on Facebook, With Posts From Myanmar’s Military. The New York Times. [online] Available at: https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.

NBC (2019). New Zealand mosque shooting in Christchurch. [online] Available at: https://www.nbcnews.com/news/world/new-zealand-mosque-shootings.

 Online Hate Prevention Institute. (2012). Facebook Fails Review. [online] Available at: https://ohpi.org.au/facebook-fails-review/.

O’Regan, C. (2018). Hate Speech Online: an (Intractable) Contemporary Challenge? Current Legal Problems, 71(1), 403–429. https://doi.org/10.1093/clp/cuy012.

 Stecklow, S. (2018). Why Facebook is losing the war on hate speech in Myanmar. [online] Reuters. Available at: https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/.

Tusikov, N. (2019). U.K. and Australia move to regulate online hate speech, but Canada lags behind. The Conversation. [online] Available at: http://theconversation.com/u-k-and-australia-move-to-regulate-online-hate-speech-but-canada-lags-behind-115212.

Volksverhetzung. (n.d.). In Definitions.net. Retrieved October 2, 2019, from https://www.definitions.net/definition/volksverhetzung.

Zuckerberg, M. (2019). Four Ideas to Regulate the Internet | Facebook Newsroom. [online] Newsroom.fb.com. Available at: https://newsroom.fb.com/news/2019/03/four-ideas-regulate-internet/.

 

 
