

It needs to stop! Who is responsible for regulating the harmful content circulating on digital platforms?
Introduction
Social media is a core part of most individuals’ lives, and with frequent use of social media platforms embedded in daily routines, it comes as no surprise that users are continuously subjected to online harassment, cyberbullying, fake news and a variety of other harmful content. As these digital platforms continue to grow and become ever more relevant, harmful content becomes equally prevalent on them (Shahi & Tsoplefack, 2022), which raises questions that scholars have been vigorously trying to answer: who is responsible for regulating this content, and how is it done? By discussing cyberbullying and fake news and the effects they have on individuals, it is possible to understand why regulation is needed on the Internet. Three main forms of regulation will be explored: government regulation, regulation by the social media platforms themselves, and users regulating themselves. These three forms of regulation each approach the problem in slightly different ways, opening up room for debate over which should be implemented more widely and over what each can do to stop harmful content from spreading further than it already has.

Cyber-bullying and fake news: the effects are real!
Cyberbullying and fake news are two of the most prevalent harms on the Internet, particularly on social media platforms where people of all ages can be subjected to harassment. A variety of harmful content is spread throughout social media platforms, with cyberbullying affecting so many individuals that it leads to long-term effects such as poor mental health and low self-esteem. A study by the Pew Research Center found that “59% of teenagers who use the internet have experienced some form of cyberbullying” (Anderson, 2018), and the consequences often persist, as “68% of kids who have been harassed online have mental health issues” (Social Media Victims Law Center, 2022). These numbers show how prevalent and serious the spread of cyberbullying is, and without proper regulation it will continue to occur. While fake news differs from cyberbullying, it can do equal harm depending on the information spread. Since the 2016 US election, fake news has continued its rise (Marchant, 2021), often outpacing real and accurate news. Olan et al. (2022) discuss how the spread of fake news can be harmful by affecting society’s values and beliefs, altering facts that are significant to the workings of society, and redefining critical issues. Close inspection shows how fake news erodes society, slowly leading individuals to think, act and believe in ways they would not have unless they were exposed to fake news via social media platforms (Olan et al., 2022). This exploration of cyberbullying and fake news, and the effects they have on internet users, only strengthens the argument that social media platforms need to be regulated now more than ever, whether by governments, by the platforms themselves or by users.

Government regulation: Do they?
‘Platform governance’, defined by Gorwa (2019) as the legal, economic and political relationships that structure the different entities (such as users and companies) in the platform ecosystem, is becoming increasingly sought after as regulation begins to shift away from self-regulation and into the hands of governments (Gorwa, 2019). The increase of harmful content on the Internet is treated and regulated in significantly different ways across countries and societies around the world. Brown and Peters (2018) explore the role the US government plays in regulating social media, analysing that role through a focus on fake news. They dissect how, while the US Congress has the power under the Commerce Clause to regulate and dismantle harmful content spread throughout social media, such regulation is often struck down by the Supreme Court on First Amendment grounds. This propels the question: if governments have the power and option to help regulate the Internet, will they value free speech over the regulation of harmful content? The pressure to favour freedom of speech often obscures what unrestricted speech can lead to. Online, where people can say what they please, the result is often cyberbullying, fake news and similar harms; if the government chooses to prioritise free speech while ignoring the harm that this freedom can do to the individuals who make up the very society free speech serves, then the US government may not be fit to hold the power to regulate. The United Nations (2021) explores how other governments are addressing the need for regulation, with countries around the world taking different approaches. The UN (2021) argues that governments should focus more on the processes of regulation, rather than continuing to add restrictions and laws around social media platforms.
While many countries may be open to this, China opts to do the opposite.
What moves is China making?
The Chinese government is known to keep a very tight grip on the media its citizens can access (Albert & Xu, 2017). It regulates in a variety of ways, such as strict bans on foreign social media platforms and an extensive firewall. These strict laws still permit aspects of the freedom of speech that countries like the US value so highly, but only in ways the Chinese government controls, such as fostering domestic platforms (like WeChat) to replace Western ones. Comparing how the US government and the Chinese government regulate the Internet highlights the potentially growing importance of the role governments play in regulation.
Should it be up to the social media platforms to regulate themselves?

When it comes to social media platforms, where the majority of harmful content is produced and spread, it is natural to assume that it is the platforms’ job to regulate what happens on their sites. With the growth of social media, content moderation must occur, with regulations put in place to care for and help users (Balkin, 2021). Balkin (2021) explores how, if we give the power to regulate and impose norms to private actors (the social platforms), those regulations and norms must be reasonably consistent across them. A lack of consistency makes it significantly harder to regulate users, who become accustomed to varying restrictions. This feeds into the argument that while these platforms should regulate themselves, holding as they do the majority of the responsibility for regulating their users, they must simultaneously be more transparent about how they create and enforce those restrictions (Gillespie, 2018). Gillespie (2018) argues that social media platforms are often opaque about the workings of their moderation, opting to give vague reasons for removing or blocking content. If the platforms were more transparent, as Gillespie (2018) argues they should be given how much access they have to our data, then users could potentially cause less harm because they would better understand the behind-the-scenes workings. This may be what users hope for; however, Cusumano et al. (2022) analyse how many platforms do not moderate as they should, because of the financial gain involved in the spread of harmful content.

Is it now left up to the users?
After exploring both government regulation and self-regulation by the platforms, it follows that regulation by users themselves must also be exercised. Users regulating themselves may not be enough to stop the spread of harmful content, but it can significantly reduce the kind of content that is shared. Users may set boundaries and regulate their own actions because governments enforce stricter rules or because the platforms limit what they are allowed to do; either way, user self-regulation should help slow the progression of harmful content. Schemes by different sectors, each with its own interests and desired outcomes, have increasingly pushed users to self-regulate (Communication, 2022).
Conclusion
Regulating the harmful content that circulates the Internet is an increasingly difficult feat. The analysis of the three types of regulation, government regulation, self-regulation by platforms and users regulating themselves, allows for a deeper understanding that deciding who should regulate the Internet, and how, has no simple and easy answer. All three types of regulation play a part in minimising the spread of harmful content, and the question will remain a matter of debate as the Internet progresses, unless a clear answer emerges.

References
Albert, E., & Xu, B. (2017). Media Censorship in China. Council on Foreign Relations. Retrieved 7 October 2022, from https://www.cfr.org/backgrounder/media-censorship-china.
Anderson, M. (2018). A Majority of Teens Have Experienced Some Form of Cyberbullying. Pew Research Center. Retrieved 9 October 2022, from https://www.pewresearch.org/internet/2018/09/27/a-majority-of-teens-have-experienced-some-form-of-cyberbullying/.
Balkin, J. M. (2021). How to regulate (and not regulate) social media. J. Free Speech L., 1, 71.
Brown, N. I., & Peters, J. (2018). Say this, not that: Government regulation and control of social media. Syracuse Law Review, 68(3), 521–.
Cusumano, M., Yoffie, D., Gawer, A., Ho, D., Marcus, D., & Ray, G. et al. (2022). Pushing Social Media Platforms to Self-Regulate | The Regulatory Review. The Regulatory Review. Retrieved 12 October 2022, from https://www.theregreview.org/2022/01/03/cusumano-yoffie-gawer-pushing-social-media-self-regulate/.
Social Media Victims Law Center. (2022). Effects of cyberbullying: What parents & teenagers need to know. Retrieved 11 October 2022, from https://socialmediavictims.org/cyberbullying/effects/.
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press. DOI:10.12987/9780300235029
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Marchant, R. (2021). The Serious Danger Of Fake News On Social Media [+Examples]. Media Heroes. Retrieved 8 October 2022, from https://news.mediaheroes.com.au/blog/fake-news-on-social-media-marketing.
United Nations. (2021). Moderating online content: Fighting harm or silencing dissent? Retrieved 6 October 2022, from https://www.ohchr.org/en/stories/2021/07/moderating-online-content-fighting-harm-or-silencing-dissent.
NBC News. (2021). China Moves To Control Tech’s Influence With Strict Regulations [Video]. Retrieved 14 October 2022, from https://www.youtube.com/watch?v=nRd8l2KPpw8.
Olan, F., Jayawickrama, U., Arakpogun, E. O., Suklan, J., & Liu, S. (2022). Fake news on social media: the Impact on Society. Information Systems Frontiers, 1-16. https://doi.org/10.1007/s10796-022-10242-z
Communication. (2022). Self-regulation of the media. Retrieved 11 October 2022, from http://communication.iresearchnet.com/communication-and-law/self-regulation-of-the-media/.
Shahi, G. K., & Tsoplefack, K. (2022). Mitigating Harmful Content on Social Media Using An Interactive User Interface. In International Conference on Social Informatics. Springer.