Stopping Problematic Content Online. Who should be responsible? How should they do it?

Assignment 2 ARIN2610

Social Media Icons Color Splash Montage – Banner by Blogtrepreneur is licensed under CC BY 2.0

The internet created a space for a vast range of content to flow and opened new ways to circulate it. Benkler (2006) referred to this as ‘The Networked Information Economy’, in which users are afforded the power to generate and share information at a decentralised, individual level. The optimistic view of the internet as a fresh way to share information and culture (Benkler, 2006) has since been met with the reality of the problems that arise when individuals are given so much personal power without regulation or limits. On 15 March 2019, a terrorist attack on a mosque in Christchurch was livestreamed on Facebook. Facebook took roughly an hour to remove the video, and many blame the platform for the footage’s continued circulation after the original was taken down.

“The Internet’s role in networking individuals is a double-edged sword” (Dutton, 2009, p. 11).

In this statement, Dutton (2009) reflects on the historically optimistic view of the internet as a space for cultural expression and the sharing of knowledge and ideas, and contrasts it with the contemporary issues that accompany affording individuals so much free rein.

Kim Phuc – The Napalm Girl In Vietnam by David Erickson is licensed under CC BY 2.0

In addition to plainly harmful content such as the Christchurch massacre footage, there is content online that raises questions about what is or is not acceptable. The line between the right to freedom of expression and the right to protection from harm is blurred, which makes regulating content difficult. Facebook Vice President Justin Osofsky (as cited in Gillespie, 2018b, p. 1), commenting on the removal of The Terror of War, a.k.a. “Napalm Girl”, captured the conflict over which should be prioritised, freedom of speech or protection from harmful content: “In this case, we tried to strike a difficult balance between enabling expression and protecting our community and ended up making a mistake.” This reveals the tension that arises when moderating content online, and the difficulty moderators face in judging who will be most affected by content being taken down or kept up.

On 10 April 2019, Israel Folau made an Instagram post, stating:

[Embedded Instagram post]
These words were in accordance with his religious beliefs; however, as a high-profile player for the Australian national Rugby Union team, Folau’s post carried considerable weight. Instagram did not remove the post, nor did Folau, but he was dismissed from the team. This example shows one way cultural issues arise from loose moderation online: Folau’s post was an expression of his beliefs, but it also caused harm to the already marginalised LGBTQIA+ community, as well as the other groups it mentioned.

8 12 09 Bearman Cartoon Freedom of Speech by Bearman2007 is licensed under CC BY 2.0

Events like these contributed to the global techlash, a backlash against big tech companies for their mishandling of power in the new media landscape. Within the techlash, there has been a focus on big tech companies’ liability for the content on their sites, including the regulation and moderation of that content (Hemphill, 2019). One key reason digital platforms have come so far in the history of the internet without strong content moderation is Section 230 of the Communications Decency Act 1996, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (s. 230(c)(1)). This legislation was intended to regulate Internet Service Providers, but it has created a safe harbour allowing platforms today to avoid accountability for the behaviour of their users and for their own moderation practices (Flew, Martin, & Suzor, 2019). Digital platforms have been adamant in presenting themselves as something other than publishers, avoiding the legal status of media companies and remaining in this intermediary status. As The Economist (2018) put it,

“The laws and precedents that free you from liability for the content that you host have been a boon; but they were not set up for a world in which your platforms have become essential media properties in their own right.”

This self-presentation by platforms has not deceived many, as the techlash shows. Gillespie (2018b, p. 19) argues that “platforms don’t make the content; but they do make important choices about it”. Gillespie expresses the contemporary need to shift the outdated idea that only the creators of content should be held accountable for it; instead, the digital platforms that provide the space to publish content, and decide whether that content stays online, should be making stricter moderation decisions. Picard and Pickard (2017, p. 6) accentuate this point: “these firms are increasingly monitoring, regulating and deleting content, and restricting and blocking some users, functions that are very akin to editorial choices”.

One motivation for digital platforms to remain in this intermediary status is keeping users on their platforms. A key issue with moderating content is the economic advantages and disadvantages involved in regulating people’s behaviour: users are more likely to move to other platforms if they disapprove of regulation they find either too loose or too restrictive (Gillespie, 2018a).

One step in the right direction for content moderation has been the Facebook Oversight Board, a panel of up to 40 members from a range of backgrounds who review contested content decisions against Facebook’s policies. However, this remains a very rare case of self-governance among digital platforms.

In addition to the lack of self-governance by digital platforms, there is the added issue of which values different nations prioritise. Freedom of speech and protection from harm are central to this debate. In the US, the focus leans towards freedom of speech, as the First Amendment of the Constitution of the United States (1791) has been upheld to apply to online media. The EU has historically aimed to maintain freedom of speech while also holding individuals accountable for its abuse, as seen in Article 11 of the Declaration of the Rights of Man and the Citizen (1789). This idea has been brought to the current media environment with the EU Digital Services Act 2022, which provides:

  • the possibility for users to challenge platforms’ decisions to moderate their content
  • a ban on advertising that targets marginalised or at-risk groups
  • prevention of abuse, mitigation of disinformation, and protection against violence towards marginalised groups and harm to minors, “carefully balanced against restriction of freedom of expression”

This type of legislation shows how nations have been forced to step in and regulate the actions of digital platforms and their users because the platforms’ own governance has been insufficient.

China is going further, imposing strict rules restricting user content: all platforms must employ content moderation teams to review user posts prior to publishing to ensure they represent China’s socialist values, a step beyond its previous regulation, which covered only news-related posts (Feng, 2022). China also operates the Golden Shield Project, which regulates the content allowed into China from other nations.

International Flags at Gyeongju Hilton by InSapphoWeTrust is licensed under CC BY-SA 2.0

A major issue with these government interventions is their reliance on each nation’s own values. In a global online environment, if nations become separated by different regulatory practices, there is a serious risk of the internet fracturing into a ‘splinternet’. As Lemley (2021) warns, digital platforms could become geo-specific, and the internet would lose its fundamental use as a mass-sharing tool.

What seems more effective for moderating content is co-governance. State-firm-NGO co-governance can be seen in the Christchurch Call, established after the Christchurch terrorist attacks with the aim of eliminating extremist and terrorist content online. Similarly, the firm-NGO co-governance of the Contract for the Web, particularly Principle 8 on harmful content, aims to educate current and future generations about inclusivity and respect for all communities online. Under co-governance, nations can uphold their respective values in co-operation with NGOs, private companies, and big tech companies, making content moderation a much more tractable issue to confront.



Benkler, Y. (2006). The networked information economy. In The wealth of networks: How social production transforms markets and freedom (pp. 29-34). Yale University Press.

Dutton, W. H. (2009). The fifth estate emerging through the network of networks. Prometheus, 27(1), 1-15.

Feng, C. (2022, June 18). China to tighten grip on social media comments, requiring sites to employ sufficient content moderators. South China Morning Post.

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50.

Gillespie, T. (2018a). Regulation of and by platforms. In The SAGE handbook of social media (pp. 254–278). SAGE Publications Ltd.

Gillespie, T. (2018b). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press.

Hemphill, T. A. (2019). “Techlash”, responsible innovation, and the self-regulatory organization. Journal of Responsible Innovation, 6(2), 240–247.

Lemley, M. A. (2021). The splinternet. Duke Law Journal, 70(6), 1397-1427.

The Economist. (2018, January 20). The techlash against Amazon, Facebook and Google—and what they can do.

Picard, R. G., & Pickard, V. (2017). Essential Principles for Contemporary Media and Communications Policymaking. Reuters Institute for the Study of Journalism: University of Oxford.