Who should regulate problematic content online?

“Cyberbullying, would you do it?” by kid-josh is licensed under CC BY-NC-SA 2.0.

This article is licensed under a CC BY-NC-SA 2.0 license.

This article discusses the following types of content: cyberbullying, suicide, death, and gore.

“go die, evry1 wuld be happy”

These were the words posted to the site ASKfm (Henley, 2013), an anonymous question-and-answer social media platform that rose to prominence in the 2010s. Soon after receiving many messages like the one seen above, a 14-year-old girl was found hanged in her room, having taken her own life.

Following this event, ASKfm conducted an internal investigation, which found insufficient evidence that the site had led to the girl’s death (Smith, 2014). Nevertheless, ASKfm went on to change its safety policies, introducing better blocking and reporting features, a stronger moderation team, and a requirement to create a profile before engaging with posts (Jones, 2013).

Ultimately, bullying, harassment, violent content, hate speech, pornography, and other problematic material run rampant across the online community.

While I’ll be focusing on cyberbullying and violent content, the real question is this: should online content be regulated, and if so, who should hold that authority?

Cyberbullying

Much research was conducted on the positive potential of the rise of the internet: revolutionising the economy, creating a field for the sharing of thoughts and ideas, and opening new possibilities for entertainment. However, there came equal recognition of negative consequences, including technostress, technology addiction, and, most notably, cyberbullying (Camacho et al., 2018).

Cyberbullying has been a difficult issue to measure due to the lack of agreement on a fixed definition: reported figures for college students experiencing cyberbullying vary widely, from 8% to 59% (Camacho et al., 2018). It has been a particular challenge to translate definitions of real-life bullying to the online phenomenon. Bullying is defined as intentional, repeated, and an abuse of power (Olweus, 2013). Smith et al. (2008) offer a comprehensive definition based on these principles:

“An aggressive, intentional act carried out by a group or individual using electronic forms of contact, repeatedly, and over time against a victim who cannot easily defend him or herself” (p. 376).

Experiences of cyberbullying are not uncommon and can have varied effects on victims. Patchin and Hinduja (2010) find a clear link between experiences of cyberbullying and low self-esteem. More severe outcomes have also been documented: cyberbullying has been identified as a major contributing factor in the high suicide rates among young people (Hinduja & Patchin, 2010).

Ultimately, cyberbullying is a serious and recurring issue across the internet. Without proper regulation, it can happen almost anywhere, to almost anyone. Furthermore, vile comments and messages can be posted anonymously on certain social media, which raises the question: how can responsibility be assigned? How can blame be placed solely on anonymous individuals for acts of bullying online, and what can platforms do to combat it? These questions are discussed further in the content regulation section below.

Violent content

Violent content runs equally rampant in the online space, yet the issue is more multifaceted than that of cyberbullying. While cyberbullying is clearly wrong and might be curbed through online restrictions, the consumption of violent content online is often participatory. This can be seen in the success and popularity of sites such as LiveLeak and the Reddit 50/50 subreddit, both popular in the early-to-mid 2010s, which offered extremely graphic and gory content to users.

LiveLeak grew out of a trend of ‘shock sites’ in the early days of the internet (Vincent, 2021). Such websites hosted the violent and pornographic content that mainstream media would never touch, with the aim of disgusting visitors. Over time, however, LiveLeak began to censor itself. In 2014, it decided it would host no further ISIS beheading videos, and in 2019 it chose not to host footage of the Christchurch shooting. The site shut down in 2021, though many similar sites still exist if you actively look for the content. Entering the URL now forwards you to the site ItemFix (enter at your own risk).

Reddit 50/50 hosted similar content, but with the twist of a game. Users on the subreddit were faced with a post whose title offered two possibilities: one describing an enticing image, the other describing gore (or other violent content). Clicking the post linked to an image or video matching one of the two titles, with no way of knowing which beforehand.
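To make the mechanic concrete, here is a minimal sketch of the coin-flip logic behind such a post. The post structure, titles, and URLs are my own illustrative assumptions, not Reddit’s actual data model.

```python
import random

# A 50/50-style post pairs two titles with two destinations: one benign,
# one graphic. The viewer cannot tell which they will get before clicking.
# All names and URLs here are made up for illustration.
post = {
    "title": "A basket of puppies | [NSFL] road accident footage",
    "links": ["https://example.com/puppies", "https://example.com/graphic"],
}

def click(post: dict) -> str:
    """Simulate opening the post: each destination is equally likely."""
    return random.choice(post["links"])

print(click(post))  # either link, with 50% probability each
```

The gamble itself is the appeal: the viewer opts in to a coin flip whose losing side is extreme content, which is what makes this material participatory rather than simply imposed.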


The worrying fact about the subreddit is that while it asks whether you are over 18 and warns you about the content, there are virtually no real barriers stopping anyone from viewing the extreme material. Furthermore, much of the subreddit’s virality came from popular YouTubers and other content creators making videos exploring it. While much of the content was blurred to comply with YouTube’s terms of service, the mixture of exaggerated reactions and impressionable audiences enticed children to the site. Markiplier’s 2017 video on the subreddit has since garnered over 11 million views (Markiplier, 2017).

Content Regulation

The internet was founded on an ideology of freedom, separate from the restrictions of geography or governments (Castells, 2002). As the internet grew into the contemporary form of mass media, so did the economic opportunities. Web 2.0 saw the rise of the “networked information economy” (Benkler, 2006, p. 3), describing sites that, rather than creating their own content, provided a platform for others to share their work. Some of these sites grew into the media giants we know today, like YouTube and Facebook. As these companies grew, their economic needs became paramount, deprioritising the public interest in favour of commercial interest. Furthermore, these companies have used the power of their platforms to lobby over government regulation and law-making so as to favour profit (Popiel, 2018).

While this extreme freedom of speech offered great opportunities for frequently silenced voices to be heard, not all that was said was positive. This creates a point of tension between the need for moderation or regulation and the internet’s clear anti-bureaucracy, anti-government sentiment.

Gillespie (2018) finds that all platforms must, to some degree, moderate their content: both to protect users from one another and from external antagonists, and to deal with worrying, vile, or illegal content. Platforms must also moderate in order to present their best face to new users, advertisers, partners, and the larger public. Content moderators have the difficult job of navigating the grey zones of questionable online content against both the law and platforms’ terms of service. Furthermore, many platforms must moderate to stay broadly appealing while simultaneously disavowing any restriction of free speech, so as to appeal to internet traditionalists.

Currently, regulation of the internet is shaped by three major stakeholders: states, NGOs, and firms (Gorwa, 2019). States refers to both individual governments and groupings of governments, such as the EU or UN. NGOs, or non-governmental organisations, covers a broad grouping of advocates, academics, and so on. Firms refers to individual companies and their internal moderation boards. Content governance groups exist all over this landscape, with many intersections. While I would encourage this tight connection between institutions to continue, one issue seems clear: there are few standardised rules that apply to all online organisations. This is partly what enables the relentless bullying of children online and the easy access to wildly inappropriate content.

My solution is simple: a nationally or internationally enforced online identity, with content restricted based on the age of the individual. Similar action has been seen in America with the COPPA regulation, whose rules protect children under 13 from having their personal information collected (Complying with COPPA: Frequently Asked Questions, 2020).

An online identity would require individuals to use their personal information to log into internet browsers, which could then restrict age-inappropriate content where needed. Furthermore, the identification could help increase individual responsibility for cyberbullying, as well as restrict repeat offenders. While no solution is all-encompassing, this could certainly help slow some of the major issues online. Ultimately, no one party should have overruling power over the entire internet and its content. However, states, NGOs, and firms should continue to work together to create a safer online environment, introducing strategies such as an online identifier.
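To make the proposal concrete, below is a minimal sketch of the age-gating logic such a verified identity could enable. It assumes a hypothetical identity record with a trusted birth date; the rating names, age thresholds, and field names are my own illustrative assumptions, not any real regulator’s or platform’s scheme.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical content ratings a platform might assign, loosely modelled on
# the categories discussed above. None of these names come from a real system.
RATING_MIN_AGE = {
    "general": 0,
    "teen": 13,    # a COPPA-style boundary for social features
    "mature": 18,  # graphic or violent content, e.g. 50/50-style material
}

@dataclass
class VerifiedIdentity:
    """A stand-in for a state- or firm-issued online identity record."""
    user_id: str
    birth_date: date

def age_of(identity: VerifiedIdentity, today: date) -> int:
    """Compute age in whole years from the verified birth date."""
    had_birthday = (today.month, today.day) >= (
        identity.birth_date.month, identity.birth_date.day)
    return today.year - identity.birth_date.year - (0 if had_birthday else 1)

def may_view(identity: VerifiedIdentity, rating: str,
             today: Optional[date] = None) -> bool:
    """Return True only if the verified age meets the rating's minimum age."""
    today = today or date.today()
    return age_of(identity, today) >= RATING_MIN_AGE[rating]

# Example: a 14-year-old can see "teen" content but not "mature" content.
viewer = VerifiedIdentity(user_id="u123", birth_date=date(2011, 3, 5))
print(may_view(viewer, "teen", today=date(2025, 6, 1)))    # True
print(may_view(viewer, "mature", today=date(2025, 6, 1)))  # False
```

The comparison itself is trivial; in any real deployment, the hard problems would be verifying the birth date in the first place and protecting the identity data being collected.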

References

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029

Benkler, Y. (2006). The wealth of networks: How social production transforms markets and freedom (pp. 1–29). Yale University Press.

Camacho, S., Hassanein, K., & Head, M. (2018). Cyberbullying impacts on victims’ satisfaction with information and communication technologies: The role of Perceived Cyberbullying Severity. Information & Management, 55(4), 494–507. https://doi.org/10.1016/j.im.2017.11.004

Complying with COPPA: Frequently Asked Questions. (2020). Retrieved from https://www.ftc.gov/business-guidance/resources/complying-coppa-frequently-asked-questions

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Henley, J. (2013). Ask.fm: is there a way to make it safe? The Guardian. Retrieved from https://www.theguardian.com/society/2013/aug/06/askfm-way-to-make-it-safe

Hinduja, S., & Patchin, J. W. (2010). Bullying, Cyberbullying, and Suicide. Archives of Suicide Research, 14(3), 206–221. https://doi.org/10.1080/13811118.2010.494133

Jones, S. (2013). Ask.fm pledges cyberbullying reform after Hannah Smith death. The Guardian. Retrieved from https://www.theguardian.com/society/2013/aug/19/ask-fm-cyberbully-hannah-smith-death

Markiplier. (2017). Reddit’s 50/50 CHALLENGE. YouTube. Retrieved from https://www.youtube.com/watch?v=dQV8SNJc15o&ab_channel=Markiplier

Olweus, D. (2013). School Bullying: Development and Some Important Challenges. Annual Review of Clinical Psychology, 9(1), 751–780. https://doi.org/10.1146/annurev-clinpsy-050212-185516

Patchin, J. W., & Hinduja, S. (2010). Cyberbullying and Self-Esteem. The Journal of School Health, 80(12), 614–621. https://doi.org/10.1111/j.1746-1561.2010.00548.x

Smith, D. (2014). Hannah Smith inquest: Teenager posted ‘online messages’. BBC. Retrieved from https://www.bbc.com/news/uk-england-leicestershire-27298627

Smith, P. K., Mahdavi, J., Carvalho, M., Fisher, S., Russell, S., & Tippett, N. (2008). Cyberbullying: Its nature and impact in secondary school pupils. Journal of Child Psychology and Psychiatry, 49(4), 376–385. https://doi.org/10.1111/j.1469-7610.2007.01846.x

Castells, M. (2002). The culture of the Internet. In The Internet galaxy: Reflections on the Internet, business, and society (pp. 36–63). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199255771.001.0001

Vincent, J. (2021). LiveLeak, the internet’s font of gore and violence, has shut down. The Verge. Retrieved from https://www.theverge.com/2021/5/7/22424356/liveleak-shock-site-shuts-down-itemfix