Harmful content – Who should be responsible?

Bullying, harassment, violent content, hate speech, pornography and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Introduction

The rise of digital platforms has dramatically increased the amount of illicit, harmful and disturbing content online. In a report supported by the Australian government, the eSafety Commissioner (2021) found that in the six months leading up to September 2020, four in ten Australian teenagers had one or more negative online experiences (p. 5). These experiences include online bullying, abuse, unwanted exposure to pornographic or violent content and more (eSafety Commissioner, 2021). With such a large number of individuals encountering harmful content online, its circulation is a pressing issue today.

When harmful content is posted, there is no doubt that the individual content creators must be held responsible, as they produce and release the content on digital platforms for the world to see. However, with the large number of people using the internet today, it is inevitable that a portion of them will post disturbing content. This leads to the questions of who should be responsible for stopping this content from spreading further, and what the best way to do so is in order to create a safer online environment. Various actors can be involved in the governance of the internet, including digital platforms, governments and public communities such as non-government organisations. However, each of these actors has its limitations. By examining these limitations, this essay will show that multi-stakeholder governance is the best way to limit problematic content from circulating. All stakeholders have a key role in stopping harmful content; they must all take responsibility and cooperate with one another to create a safer online space.

[Image: “Social media” by Semtrio, licensed under CC BY 2.0]

Actor 1 – Digital Platforms

Digital platforms unquestionably bear responsibility. These platforms are able to connect people positively (Gillespie, 2018), but they also provide a space for individuals to spread harmful content. Although platforms do not create most of the content themselves, they do have the power to decide which content can be distributed and how their users connect with each other (Gillespie, 2018). This power creates a responsibility for platforms to eliminate harm as far as possible.

Today, digital platforms moderate their content for two main reasons. The first is to protect their commercial interests, as maintaining a pleasant online experience for their users, advertisers and partners protects brand image (Gillespie, 2018; Roberts, 2019). The second is to comply with government laws or regulations (Roberts, 2019). This means that platforms will naturally try to regulate harmful content in their own interest. An example can be seen in the incident that followed the Euro 2020 soccer final. After England lost to Italy, three Black players on the England team faced harsh racist comments on platforms such as Twitter, Instagram and Facebook (Perrigo, 2021). The platforms took down over 1,000 of these racist posts and suspended a portion of the accounts (Perrigo, 2021). This incident shows the moderation process of digital platforms in action. However, platforms are not completely successful at regulating harmful content: after the incident, users expressed their frustration at the abuse that remained online (Perrigo, 2021).


There is a limit to the extent to which platforms can moderate content. One reason is that content moderation requires immense human labour (Gillespie, 2018). To carry it out, platforms hire low-wage workers who must frequently view disturbing content such as animal or child abuse and racist photos or videos (Roberts, 2019). Furthermore, there are limitations in judging content. Deciding whether to pass or ban a piece of content can be difficult, as there is no definitive line and different countries can have different standards (Gillespie, 2018). There is constantly the question of whose standards should be applied, especially as the majority of workers at the digital giants are educated, wealthy white males with American principles such as free speech (Gillespie, 2018). These limitations show that platforms cannot moderate content alone and cannot be responsible for stopping all harmful content.

Actor 2 – Government Bodies

The second actor that must take responsibility is government bodies, as they have the legitimate power to establish official regulations concerning harmful content. In fact, there is a rising trend towards increasing the responsibility of government bodies, showing the high demand for governments to take greater action against harmful online content (Helberger, Pierson, & Poell, 2018, as cited in Gorwa, 2019). With the internet having spread as a public service, the government's role in regulating harmful content can no longer be ignored.

The power that government bodies hold is unique and strong compared to other actors, since they can legally regulate content and platforms. For example, the European Union introduced the Digital Services Act in 2020 with the aim of building a safe digital space and holding platforms as well as authorities responsible for creating a user-centred environment (Bromell, 2022). The act includes rules such as removing illegal online content, preventing misuse of digital platforms and increasing platform transparency around algorithms, data, advertising and more (Bromell, 2022).

However, content regulation by government bodies alone can harm society. Governments may use their powers not only to regulate harmful content but also to control political or public discussion (Bromell, 2022). This could lead to public mistrust and is especially problematic in democratic societies, as it undermines free speech (Bromell, 2022).


Actor 3 – Public Communities

The final actor discussed in this essay is the public community, which includes non-government organisations and online communities. Public communities have responsibility since they have the ability to either create or prevent harmful online communities. Examples can be seen in incidents facilitated by Reddit, a platform which allows users to create communities called subreddits and connect with other users who share the same interests (Massanari, 2017). The problematic nature of the platform allowed sexist communities to develop and contributed to the #Gamergate and The Fappening incidents (Massanari, 2017). Although the creation of these toxic communities was enabled by the inherent structure of Reddit (Massanari, 2017), the participants and the communities themselves must also be held responsible for the harm.


The possibility of creating a harmful online environment is a significant limitation; however, the unity of public communities is also their strength. With unity, they can create positive communities and hold each other accountable for the distribution of harmful content. Formal communities such as NGOs, in particular, have the ability and resources to research harmful online environments, inform and give voice to the public, and ultimately hold both platforms and governments accountable for their actions (Massanari, 2017). They cannot enforce specific rules or regulations, but they can supplement formal governance by increasing public pressure (Massanari, 2017).

Conclusion – Multi-stakeholder Governance

All actors have unique roles in stopping the spread of harmful content. Digital platforms have the most expertise on how platforms work and the greatest practical ability to stop the spread of harmful content (Gorwa, 2019); if they do not act, this content will continue to disturb online users. Governments have the ability to change regulations and the power to hold platforms accountable. Currently there is a “trust gap”, as the digital giants are not transparent in their platform operations and governments bear down on these tech companies (Bromell, 2022, p. 98); however, they need to overcome this and work together in order to efficiently stop the further spread of harmful content. Public communities have the role of representing public opinion and applying pressure to both governments and platforms. Holding all of these actors responsible and ensuring they cooperate is the most effective approach. Multi-stakeholder governance is the key to minimising the distribution of harmful content online.


References

Bromell, D. (2022). Deplatforming and democratic legitimacy. In Regulating free speech in a digital age (pp. 81-109). Springer, Cham. https://doi-org.ezproxy.library.sydney.edu.au/10.1007/978-3-030-95550-2_4

eSafety Commissioner. (2021). The digital lives of Aussie teens. https://www.esafety.gov.au/sites/default/files/2021-02/The%20digital%20lives%20of%20Aussie%20teens.pdf

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029-001

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2), 1-22. https://doi.org/10.14763/2019.2.1407

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. https://doi-org.ezproxy.library.sydney.edu.au/10.1177/1461444815608807

Perrigo, B. (2021, July 14). Black England Soccer Players Are Being Racially Abused on Social Media. How Can These Platforms Do Better? Time. https://time.com/6079964/england-footballers-racist-abuse-social-media/

Roberts, S. T. (2019). Understanding Commercial Content Moderation. In Behind the screen: Content moderation in the shadows of social media (pp. 33-72). Yale University Press. https://ebookcentral-proquest-com.ezproxy.library.sydney.edu.au/lib/usyd/detail.action?docID=5783696&pq-origsite=primo