To Govern or Not to Govern?

Bullying, harassment, violent content, hate, porn, and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

"Online marketing secrets" by Internet marketing secrets is licensed under CC BY-NC 2.0.

The founders of the internet, predominantly white, cisgender men in Silicon Valley, initially hoped that the internet would serve as a respite from the mediation of the broadcasters and publishers that dictated mainstream media, by providing a space for participation in ‘free’ communication. The rhetoric currently used in promotional material by major social media platforms seems to affirm these initial ideals: Facebook claims to foster ‘Connection and Community’, Instagram aims to ‘bring the world closer together’, and Twitter assures users that ‘Here, your voice matters’. Themes of communication, connection, and community recur – but in the fine print, scribbled in invisible ink beneath such claims of unity and ‘user connection’, lies the question – at what risk?

To Govern or Not to Govern? 

Gillespie (2018) asserts that the movement of sociality into online spaces is profoundly altering the nature of our ‘connections, creations, and interactions’, and Gorwa (2019) warns against the dangers of framing platforms as mere ‘hosts’ of such communication. Indeed, platforms centre themselves as facilitators of public expression and as impartial, open, and non-interventionist entities (Gillespie, 2018), but such labels are extremely reductive of the power they hold beyond hosting: as adjudicators of dispute, as norm-setters, as arbiters of taste (Gillespie, 2018; O’Hara & Hall, 2021). Inevitably, the affordances these platforms offer en masse are often abused for purposes beyond innocent communication and sharing, such as the distribution of illegal content, violent and graphic content, abusive and hateful content, and pornographic and explicit content. Furthermore, as platforms increasingly rely upon user-generated material for profit and business revenue and actively encourage the sharing and production of material, harmful content is only becoming more prolific across all domains of the internet, further facilitated by the participatory culture of Web 2.0. Who should be held responsible for such content, and how it should then be governed, has therefore become a major international regulatory issue and the subject of widespread, contentious debate (Gorwa, 2019).

Issues of content moderation span a wide domain – from transgender creators having content taken down for the same amount of nudity that a Playboy model may exhibit with no repercussions, to debates over content depicting breastfeeding mothers, to graphic coverage of the war in Ukraine or the protests in Iran, to Instagram threatening to ban Kanye West for making antisemitic statements. Moderation is not straightforward – it must weigh legal mandates and regimes against social expectations and tolerances, as well as commercial and cultural values.

 

Is Anyone Up? A Cautionary Tale 

A prime example of the discrepancies in content moderation, and of how a lack of responsibility from platform hosts may directly harm internet users, is the ‘Is Anyone Up?’ internet scandal of the 2010s. Now depicted in a popular Netflix series, The Most Hated Man on the Internet, the site was created in 2010 by Hunter Moore and functioned as a non-consensual pornography platform, upon which sexually explicit photographs and videos were posted, often alongside personal information such as full names, contact details, and addresses. Such content is often referred to as ‘revenge porn’, or image-based sexual abuse (Bailey et al., 2021). Despite the blatantly harmful and exploitative nature of the site and Moore’s self-appointed title of ‘professional life ruiner’, it remained active for 16 months – in part due to Moore’s claims that he acted solely as the ‘host’ and therefore held no responsibility for moderating or removing any content uploaded, even while knowing that multiple victims had attempted suicide as a direct result of his actions. Moore was eventually charged over email hacking and a mass photo piracy scheme, but the punitive measures were not spearheaded by a regulatory force such as the government; they were initiated by the mother of one of the victims, Charlotte Laws, who fought for months to bring Moore to justice and get the attention of law enforcement. It was only after Laws proved that the photos had been obtained by hacking that the FBI got involved – had the content merely been uploaded by users, regardless of consent, it may have been impossible to take the site down by law.

‘Bringing Down the Revenge Porn King – Fakes, Frauds & Scammers’ by Vice. All rights reserved. Retrieved from https://www.youtube.com/watch?v=yHZTyU8bRRw&ab_channel=VICE

Research by Bailey and colleagues (2021) into the current policies and practices enforced by digital platforms reveals four main shortcomings in the regulation of image-based sexual abuse: firstly, that policies tend to use reductionist, inconsistent, and ambiguous language; secondly, that there is a stark gap between policy and the actual practice of content regulation; thirdly, that the technology for detecting abuse is imperfect; and lastly, that too much responsibility is placed on users to report, and therefore prevent, abuse. Bailey and colleagues (2021) assert that until such shortcomings are remedied, platforms are complicit in the perpetuation of image-based sexual abuse and the harm that ensues.

Thus, as readily as platforms take responsibility for positive content and interactions, they should be held equally accountable for the exact opposite. Gillespie (2018) asserts that ‘platforms no longer simply mediate public discourse – they constitute it’. The role of platforms in moderation should therefore be considered central and fundamental to their purpose, rather than an inconvenience existing on their periphery, and treated accordingly (Gillespie, 2018). Moderation, however, is no easy task, with massive volumes of material being uploaded to the internet constantly and platforms relying upon mass collaboration for profit (Roberts, 2019). Moderation may be carried out by users through tools such as reporting, flagging, or blocklists (Jhaver et al., 2018), but ultimately, two levels of governance determine the process of moderation: governance by platforms, and governance of platforms.

Governance by Platforms 

Currently, the dominant model for content moderation is the self-regulatory model (Gorwa, 2019), in which platforms act as individual private entities and enforce their own guidelines and rules upon users to ensure compliance (Roberts, 2019). A prominent strategy for such regulation is commercial content moderation, in which platforms, as companies, employ moderators across four main work situations: in-house moderators (employees who work within the company headquarters), boutique workers (firms specialising in social media brand management), call centre workers, and workers on micro-labour websites (who complete small, repetitive, clerical tasks) (Roberts, 2019).

“Instagram and other Social Media Apps” by Jason A. Howie is licensed under CC BY 2.0.

Large platforms often employ a hybrid of such methods and, by distributing workers across global time zones, can ensure 24-hour task completion (Roberts, 2019). This strategy for content moderation is problematic, however, in that the labour is often extremely psychologically taxing and exploitative in nature, with many workers reporting inadequate working conditions (Roberts, 2019). Workers in Manila, in the Philippines, report the psychological burdens of content moderation – consequences such as ‘nightmares, paranoia, obsessive ruminations, sometimes leading to suicide attempts’ – after being regularly exposed to content depicting horrors such as bestiality, child abuse, and suicide (Dwoskin et al., 2019).

Governance of Platforms

While commercial content moderation is a primary strategy for regulation within platforms, Gorwa (2019) identifies a shift away from the self-regulatory model and towards governmental intervention, as observed through an increase in transnational regulatory schemes over recent decades. As platforms now hold the combined abilities and powers of traditional ‘publishers, media companies, telecommunications providers, and other firms’, it is becoming increasingly crucial that platforms themselves are governed. Traditional methods of government enforcement, such as punitive legal and financial penalties, are blunted by industry lobbying, so the regulatory landscape is becoming increasingly polycentric, requiring collaboration among multiple actors. The main actors include firms (industry associations and individual companies); non-governmental organisations (such as activist investors and academic researchers); and the state (individual governments and organised groupings of governmental bodies). Gorwa (2019) further identifies four central factors for the successful implementation of regulations: independence, representativeness, expertise, and operational capacity. Single-actor schemes may be more or less competent in specific factors, so long-term success arguably requires collaboration across and between such actors, along with new frameworks that consider disparate forms of governance to accommodate coordination across governmental and private bodies, and across borders (O’Hara & Hall, 2021; Gorwa, 2019).

Moving Forward 

Contrary to initial ideas portraying the internet as a utopian, ‘unmediated public sphere’, the internet is defined and shaped by content moderation (Gillespie, 2018; Bellanova & Goede, 2021). The responsibility for ‘cleaning’ the internet of inevitably problematic and harmful content, however, cannot be attributed to any individual platform, government, or social group; it yields the most effective long-term benefits when approached as a united effort from multiple angles. Striving towards a safer, more inclusive internet across cultural, geographical, and socio-economic divides will require NGOs, firms, and states to recognise their individual limitations and the power of collaboration.

References

ADL. (2022, October 19). Unpacking Kanye West’s Antisemitic Remarks. Retrieved from https://www.adl.org/resources/blog/unpacking-kanye-wests-antisemitic-remarks

Bailey, J., Flynn, A., & Henry, N. (Eds.). (2021). The Emerald International Handbook of Technology-Facilitated Violence and Abuse. Emerald Publishing.

Content moderators and Post Traumatic Stress Disorder with Janine Berg. (2020, November 26). Digital Future Society. Retrieved from https://digitalfuturesociety.com/qanda/content-moderators-and-post-traumatic-stress-disorder-with-janine-berg/

Dwoskin, E., Whalen, J., & Cabato, R. (2019, July 25). Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently. Washington Post. Retrieved from https://www.washingtonpost.com/technology/2019/07/25/social-media-companies-are-outsourcing-their-dirty-work-philippines-generation-workers-is-paying-price/

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online harassment and content moderation. ACM Transactions on Computer-Human Interaction, 25(2), 1–33. https://doi.org/10.1145/3185593

LeGardye, Q. (2022, July 27). Where Is Hunter Moore From “The Most Hated Man on the Internet” Now? Marie Claire Magazine. Retrieved from https://www.marieclaire.com/culture/where-is-hunter-moore-the-most-hated-man-on-the-internet-netflix/

Leiner, B. M., Cerf, V. G., Clark, D. D., Kahn, R. E., Kleinrock, L., Lynch, D. C., Postel, J., Roberts, L. G., & Wolff, S. (2009). A brief history of the internet. ACM SIGCOMM Computer Communication Review, 39(5), 22–31. https://doi.org/10.1145/1629607.1629613

Protest, Social Media, and Censorship in Iran. (n.d.). Center for Strategic and International Studies. Retrieved from https://www.csis.org/analysis/protest-social-media-and-censorship-iran

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.

War in Ukraine is the latest platform moderation challenge. (n.d.). Columbia Journalism Review. Retrieved from https://www.cjr.org/the_media_today/war-in-ukraine-is-the-latest-platform-moderation-challenge.php