
Over the past two decades, the expansion of social media platforms has introduced a range of unprecedented and complex ramifications for their online users. According to Gillespie (2017), platforms are online sites and services that facilitate the creation and circulation of users’ uncommissioned content and social interactions. Simultaneously, these platforms are also “integrated software systems … [that] provide infrastructure, business models and cultural conditions” (p. 36) for data processing, advertising, and profit within communications markets (Flew, Martin, & Suzor, 2019).
Correspondingly, the surge in social media usage has also driven a rising circulation of challenging and troubling content across these digital platforms, including pornography, hate speech, and representations of violence and obscenity. Whilst these categories of content are often prohibited by community guidelines, the self-regulatory approach that social media platforms have taken is generally inadequate for managing and regulating this content.
To moderate and manage content on social media platforms effectively, a co-regulated, multi-stakeholder regulatory approach would be the most effective method, as several stakeholders share a responsibility to protect and sustain these online communities. This essay will demonstrate the effectiveness of a multi-stakeholder regulatory approach by emphasising the necessity of content moderation and outlining the challenges of moderating explicit online content.
“Facebook Application Screengrab” by icon0.com is licensed under CC BY 4.0
WHY DO WE NEED CONTENT MODERATION?
Content moderation is an essential practice for digital platforms, which often solicit content and personal information from users as a component of their online experience. According to Roberts (2019), content moderation is “the organised practice of screening user-generated content [that has been] posted to internet sites, social media and other online outlets” (p. 33). For major digital platforms, such as the FAANG companies (Facebook, Apple, Amazon, Netflix and Google), a multitude of motivations drive their content moderation, including economic considerations.
Platforms must moderate to:
- Protect users from exposure to offensive, vile, or illegal content.
- Expand their user base rather than deter users.
- Present their best image to advertisers, business partners, new users, and the public overall.
Gillespie (2017) articulates that platforms are also concerned with content moderation because problematic content can scare off advertisers. Large-scale digital platforms profit from the content and data that their users produce and circulate; indirectly, this is evident through processes such as programmatic advertising, data mining and real-time bidding. Therefore, there is a sense of public obligation for platforms to nurture a safe and respectful community in which they can expand their user base whilst simultaneously fending off criticism from dissatisfied users, critics, and journalists (Gillespie, 2017).
For example, American rapper and fashion designer Kanye West was recently suspended from his Twitter and Instagram accounts after he published a series of antisemitic posts on these platforms. These tweets are highlighted in the YouTube video below. Across both platforms, West has accrued almost fifty million followers, many of whom are impressionable young fans.
“Associated Press slammed for saying Kanye West’s tweet about Jews ‘deemed anti-Semitic’” by the New York Post
By suspending his accounts and removing the posts as hate speech, these platforms have helped to protect their diverse communities. As stated in the Instagram Community Guidelines, the platform seeks to foster “a safe and open environment for everyone” (Meta, 2022, para. 3). To manage the platform’s reputation whilst simultaneously protecting its users and stakeholders, it is important for digital platforms to moderate their user-generated content.
THE KEY CHALLENGES IN CONTENT MODERATION:
Before demonstrating the effectiveness of a multi-stakeholder regulatory approach to content moderation, it is crucial to outline the challenges that platforms face in adopting a self-regulatory approach. Gillespie (2018) states that an open platform that completely fulfils the notions of democracy and community without content moderation is a utopian fantasy. Whilst it is crucial for platforms to moderate, content moderation is difficult due to its time-consuming and resource-intensive nature, and it takes many different forms.
“Black and Green Typewriter from White Paper” by Marcus Winkler is licensed under CC BY 4.0
To self-regulate, digital platforms have developed community guidelines and terms of service to direct online users’ behaviour and content. However, to locate explicit, vile, and guideline-breaching content, many online platforms rely on user-based systems, such as ‘flagging’ or reporting to moderators (Poplett, 2022). Because of their scale, platforms are unable to support a proactive review system whereby moderators examine each individual post, photograph, or video before it is published. Therefore, platforms shift the responsibility back to online users by enabling them to determine what is disturbing or breaches the terms of service (Poplett, 2022). By doing so, platforms can outsource their labour to online users whilst simultaneously escaping accountability for their actions. This system benefits digital platforms and their economic imperatives, as commercial content moderation requires large labour forces to engage with repetitive and confronting content (Roberts, 2019). However, this self-reliant system of content moderation has struggled to be effective.
For instance, on the 15th of March 2019, a white supremacist stormed into two mosques in Christchurch, New Zealand, and committed the worst massacre in the nation’s history (Cave & Saxton, 2020). The massacre, as reported in the video below, resulted in the murder of fifty-one victims and wounded forty others (Cave & Saxton, 2020). Brenton Tarrant, the shooter, livestreamed the entire attack on Facebook Live through his mobile phone. The video of the attack was first reported by another online user almost thirty minutes after it had been posted, and twelve minutes after the broadcast had ended (Kharpal, 2019). However, according to Kharpal (2019), Facebook did not remove the video until over an hour after it had been broadcast, by which time it had garnered four thousand views.
Both Facebook and YouTube faced intense criticism not only for failing to halt the distribution of such violent, horrific content on their platforms, but also for failing to moderate the circulation of the shooter’s 74-page manifesto condemning immigrants and Muslims (Timberg, Harwell, Shaban, Tran, & Fung, 2019). This public shock highlights how, despite their technological advancement and self-regulation, platforms played a significant role in publicising violence, extremism, and hate speech. Additionally, they faced no consequences for facilitating and disseminating the shooter’s vile brutality to other users.
The decentralised nature of large-scale digital platforms, such as TikTok and Facebook, has enabled a bottom-up structure of content creation and distribution (Flew, Martin, & Suzor, 2019). Because of their extensive user bases, these platforms have struggled to implement a proactive review system for content moderation; there is simply too much content to moderate at once.
Another key challenge of content moderation is that governments and platforms cannot apply traditional forms of broadcast and media policy to their digital, social media counterparts (Flew, Martin, & Suzor, 2019). This is because the technological and socio-economic advancements of digital platforms often render traditional legislation outdated, especially as the lines between traditional telecommunications and broadcasting continue to blur (Flew, as cited in Flew, Martin, & Suzor, 2019). To effectively address these key challenges in content moderation on digital platforms, it is crucial for users, platforms, governments, and other stakeholders to unite in a co-regulated, multi-stakeholder approach.
WHAT’S THE SOLUTION?
A series of controversies involving digital platforms, such as the Cambridge Analytica scandal and the numerous Facebook data leaks, has fuelled a growing global ‘techlash’. Flew, Martin, & Suzor (2019) argue that these public shocks have “triggered a growing public expectation that digital platforms need to be held accountable to the public interest” (p. 45).
Platforms are complex socio-technical institutions that mediate between producers and audiences, as well as other stakeholders such as policymakers, governments, citizens, and activist groups. Furthermore, the evolution of social media platform technology has facilitated the convergence of producers and audiences, leading to the notion of the ‘produser’ (Bruns, 2012). Therefore, the most appropriate and effective approach to regulating social media platforms and moderating content must reflect these elements.
“Group of Person Sitting Indoors” by Fauxels is licensed under CC BY 4.0
All stakeholders have a responsibility to uphold a safe and satisfied community of users, whether they feel ethically obliged to or are seeking to secure advertisers and business partners. However, the self-regulatory approach that social media platforms have taken and traditional communications legislation have proven to be insufficient and outdated respectively. According to the Internet Society (2022), a multi-stakeholder approach is a method of internet governance that draws on the perspectives of various stakeholders, who participate in discussion, decision-making, and the creation and implementation of policy and solutions for platform issues.
The benefits of a multi-stakeholder approach include:
- Holding platforms and other stakeholders accountable
- Enabling content moderation through various perspectives
- Upholding an open and collaborative digital space with global interoperability
- Protecting and effectively building upon complex digital systems.
A co-regulatory, multi-stakeholder solution ensures that platforms, public institutions, governments, and users acknowledge and undertake their cooperative responsibility to moderate content in line with public values (Stockmann, 2022).
CONCLUSION:
Overall, the continuing rise of public shocks and the circulation of disturbing content on digital platforms have prompted online users to ask a key question: who should be responsible for stopping the spread of this content, and how? Given the necessity of content moderation and the key challenges that platforms have faced under self-regulation, a co-regulated, multi-stakeholder regulatory approach remains the most effective method for moderating and managing content on social media platforms, as several stakeholders share a responsibility to protect and sustain these online communities.
You Can’t Post That: Who is responsible for Content Moderation? © by Lena Xia 2022 is licensed under CC BY-NC-ND 4.0
REFERENCES:
Bruns, A. (2012). Reconciling community and commerce? Collaboration between produsage communities and commercial operators. Information, Communication & Society, 815-835.
Cave, D., & Saxton, A. (2020, August 26). New Zealand Gives Christchurch Killer a Record Sentence. The New York Times, p. 10.
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50. doi:10.1386/jdmp.10.1.33_1
Gillespie, T. (2017). Governance by and for platforms. The SAGE Handbook of Social Media, 254-278. doi:10.4135/9781473984066.n15
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.
Kharpal, A. (2019, March 19). Facebook says video of New Zealand mosque shootings was viewed 4,000 times before being removed. Retrieved from CNBC: https://www.cnbc.com/2019/03/19/facebook-new-zealand-mosque-shootings-video-viewed-4000-times-before-removal.html
Meta. (2022). Community Guidelines. Retrieved from Instagram Help Centre: https://help.instagram.com/477434105621119
Poplett, R. (2022). Is content moderation a concern that is best left to platforms? Ethica, 73-88.
Roberts, S. T. (2019). Understanding commercial content moderation. In S. T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33-72). New Haven: Yale University Press.
Stockmann, D. (2022). Tech companies and the public interest: The role of the state in governing social media platforms. Information, Communication & Society, 1-16. doi:10.1080/1369118X.2022.2032796
Timberg, C., Harwell, D., Shaban, H., Tran, A. B., & Fung, B. (2019, March 15). The New Zealand shooting shows how YouTube and Facebook spread hate and violent images — yet again. The Washington Post, p. 1.