Bullying, harassment, violent content, hate speech, pornography, and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Developed over time as a means of worldwide broadcasting, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals (Leiner et al., 2009), the internet has enabled global communication on an unprecedented scale. With over 5 billion internet users worldwide (Statista, 2022), access to information and communication is merely clicks away for most of the planet's population. However, this accessibility presents a dilemma for consumers of digital media, as damaging content including bullying, harassment, violent content, hate speech, and pornography circulates more widely than ever before. The anonymity available to perpetrators, combined with the expansion of internet usage, has precipitated an acceleration of hateful content, with 34% of internet users experiencing cyberbullying at some point in their lives (Patchin & Hinduja, 2016). Further, studies indicate that 59% of the world's population use social media (Chaffey, 2022) and that almost 10% of those users are children under the age of 18 (McLachlan, 2022). As the internet becomes increasingly unavoidable and central to everyday life, focus must turn to the question: who is responsible for the moderation of damaging content online?


Moderation by Government

The constant enhancement and ever-increasing reach of the internet has been accompanied by a rise in digital violence, the illegal spread of pornographic material, and cyberbullying, causing many to question the role of government bodies in the regulation of damaging online content. There are, however, several reasons to argue against legislative digital moderation, namely the ambiguous nature of damaging content and the differing identities of platforms. Governmental censorship of content can be problematic because of the subjectivity involved in deciding what should be categorised as bullying, harassment, or violent content. Legislative action also has the potential to restrict platforms that were established to account for the limits of other platforms. For example, OnlyFans is an internet content subscription service used primarily by sex workers (OnlyFans, 2022); it allows nudity and pornographic material that other social media platforms such as Instagram and Facebook prohibit.

Whilst government is an unfeasible option as a stand-alone regulatory body, it does play an important role in promoting transparency among platforms in terms of their regulatory standards and guidelines (Lin, 2022). This would entail social media platforms being required to regularly publish their rules and regulations surrounding damaging content. Such a requirement has already been introduced by the Australian government through the Online Safety Act 2021, the first piece of legislation to clearly outline expectations for digital moderation and to hold platforms more accountable for the content they distribute. The act enforces industry standards referred to as the Basic Online Safety Expectations, essentially requiring platforms to take steps to minimise the risk of harm to content consumers (Campbell, 2022). Whilst the regulations are difficult to enforce legally, the act at least subjects digital service providers to public scrutiny in the face of damaging content.

None of this rules out government and legislation acting reactively in certain extreme situations. This is currently evident in the Alex Jones trial, where the podcaster faces a billion-dollar lawsuit over his claims that the victims of the 2012 Sandy Hook shootings were "paid actors" and that the shootings never took place (Williamson, 2022). Jones' comments caused extreme emotional damage to the families of the victims, and his supporters threatened the families and even congregated with guns outside their homes. Although Jones is likely to be ordered to pay close to a billion dollars in damages, the reactive nature of the case is hardly optimal as a means of content regulation. For governmental action to reach optimal effectiveness, the actions taken must be proactive in preventing the spread of damaging content. A government-based process for digital regulation would therefore be plausible only under the condition that it operates in the broad interests of its constituents (Lin, 2022).

(Jury awards nearly $1 billion to Sandy Hook families in Alex Jones case, October 13th, 2022, CNN)

Moderation by Platforms

Whilst government input is necessary, moderation by individual platforms is the most desirable means of regulation because it preserves website autonomy, including rules tailored to differing site identities. Social media platforms have faced scrutiny in recent years for "aggressively combatting" harmful content that would legally be classified as free speech (United Nations Human Rights, 2021). The most recent example of this censorship is the "deplatforming" of Andrew Tate, whose Facebook, Instagram, Twitter, TikTok, and YouTube accounts, among others, were deleted. The bans were a reaction to the blatant misogyny in his content, including claims that female sexual assault victims must bear some responsibility and that women have "no innate responsibility or honour" (Van Boom, 2022). Fans have leapt to Tate's defence, arguing that he had not broken any laws surrounding speech. However, whilst the regulation of digital content must adhere to laws surrounding the distribution of pornography or violence, platforms must stretch beyond speech laws in order to account for the interests of the digital audience (Fagan, 2017).

As the onus shifts towards platforms to moderate their own sites, more must be spent on defences against damaging content as its circulation increases. At the forefront of this effort are professional content moderators and artificial intelligence algorithms. Facebook, the largest social media network in the world with 2.9 billion registered accounts (Lua, 2022), is leading from the front with over 15,000 employed content moderators who make around 3 million moderation decisions every day (Koetsier, 2020), including swift responses to content reported by users of the platform. In addition, Facebook employs algorithms that can identify material that has already been banned, as well as new posts flagged through signals indicating malicious content (Lynn & Bancroft, 2021). Whilst there is currently no realistic prospect of completely eradicating damaging content, given the reactive nature of algorithms and professional moderators, social media companies have been largely effective. However, as other major digital platforms follow suit, it is important that companies individualise their content guidelines in accordance with their brand identity so as not to "over-censor" digital material.

(The secret life of Facebook moderators, February 25th, 2019, Casey Newton)
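As a purely illustrative aside, the two detection layers described above (matching re-uploads of previously banned material, and flagging new posts on suspicious signals before human review) can be sketched in a few lines of Python. This is a toy sketch under loose assumptions, not the systems any platform actually runs: real platforms use perceptual hashing, trained classifiers, and large human review pipelines, and every name below (flag_for_review, BANNED_HASHES, SUSPECT_TERMS) is hypothetical.

    # Toy sketch only: a simplified illustration of the two moderation layers
    # mentioned above. All names and values are hypothetical; production systems
    # rely on perceptual hashing, trained classifiers, and human review queues.

    import hashlib

    # Fingerprints of material a moderator has already banned (hypothetical seed data).
    BANNED_HASHES = {hashlib.sha256(b"previously banned clip").hexdigest()}

    # Crude stand-ins for the learned "signals" a production classifier would use.
    SUSPECT_TERMS = {"kill yourself", "worthless", "share her address"}


    def content_hash(media: bytes) -> str:
        """Fingerprint uploaded media so exact re-uploads can be matched."""
        return hashlib.sha256(media).hexdigest()


    def flag_for_review(media: bytes, caption: str) -> bool:
        """Return True if a post should be queued for a human moderator."""
        if content_hash(media) in BANNED_HASHES:
            return True  # exact re-upload of previously banned material
        caption_lower = caption.lower()
        return any(term in caption_lower for term in SUSPECT_TERMS)  # suspicious new post


    if __name__ == "__main__":
        # A re-upload of banned media is caught by the hash check.
        print(flag_for_review(b"previously banned clip", "look at this"))        # True
        # A new post is caught only if its text trips one of the signals.
        print(flag_for_review(b"holiday photo", "Nobody likes you, worthless"))  # True
        print(flag_for_review(b"holiday photo", "Great day at the beach"))       # False

Even in this toy form, the sketch makes the essay's point visible: hash matching can only catch what has already been banned, and signal-based flagging is reactive and crude, which is why human moderators remain central.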

There is evidently no concrete or definitive means of eradicating damaging content such as bullying, harassment, hate speech, pornography, or violent content, but it is clear that the most effective means of regulation lies in a harmonious collaboration between government legislation and individualised platform regulation. As the implications of Australia's Online Safety Act for social media platforms exemplify, a largely self-regulatory model operating under the oversight of governmental bodies should prevail as algorithmic detection and professional content moderation become increasingly capable.

References

21 Top Social Media Sites to Consider for Your Brand. (2022, March 15). Buffer Library. https://buffer.com/library/social-media-sites/#:~:text=1.

Van Boom, D. (2022, September 26). Why Andrew Tate Was Banned From Almost Every Social Media Platform. CNET. https://www.cnet.com/culture/why-andrew-tate-was-banned-from-almost-every-social-media-platform/

Campbell, M. (2022, May 21). Tightening the law around online content: Introduction of the Online Safety Act 2021 (Cth). Mondaq. https://www.mondaq.com/australia/security/1197494/tightening-the-law-around-online-content-introduction-of-the-online-safety-act-2021-cth#:~:text=to%20avoid%20harm.-

Chaffey, D. (2022, August 22). Global Social Media Research Summary 2021. Smart Insights. https://www.smartinsights.com/social-media-marketing/social-media-strategy/new-global-social-media-research/

Fagan, F. (2017). Duke Law & Technology Review, 16. HeinOnline. https://heinonline.org/HOL/Page?collection=journals&handle=hein.journals/dltr16&id=379&men_tab=srchresults

Policing the internet: Australia's developments in regulating content moderation. (2021, December 9). Herbert Smith Freehills. https://www.herbertsmithfreehills.com/latest-thinking/policing-the-internet-australia%E2%80%99s-developments-in-regulating-content-moderation

Leiner, B. M., Cerf, V. G., Clark, D. D., Kahn, R. E., Kleinrock, L., Lynch, D. C., Postel, J., Roberts, L. G., & Wolff, S. (2009). A brief history of the internet. ACM SIGCOMM Computer Communication Review, 39(5), 22. https://doi.org/10.1145/1629607.1629613

McLachlan, S. (2022, March 24). Instagram Demographics in 2021: Important User Stats for Marketers. Hootsuite Social Media Management. https://blog.hootsuite.com/instagram-demographics/

Moderating online content: Fighting harm or silencing dissent? (2021, July). OHCHR. https://www.ohchr.org/en/stories/2021/07/moderating-online-content-fighting-harm-or-silencing-dissent

Patchin, J. (2019, July 10). Summary of Our Cyberbullying Research (2004-2016). Cyberbullying Research Center. http://cyberbullying.org/summary-of-our-cyberbullying-research

Williamson, E. (2022, October 12). Alex Jones Ordered to Pay Sandy Hook Victims’ Families Nearly $1 Billion. The New York Times. https://www.nytimes.com/live/2022/10/12/us/alex-jones-verdict-sandy-hook