
"Social Media Logos" by BrickinNick is licensed under CC BY-NC 2.0 .
Introduction

The emergence and development of the Internet and social media platforms have undeniably brought convenience, making it easier for people from all over the world to communicate. At the same time, however, they have created problems: because people can speak so freely online, some content causes harm or discomfort to others, including bullying, harassment, violent content, hate speech and pornography, so the Internet needs to be regulated (Gillespie, 2018). This paper argues that platforms, users and governments all need to take action and are, to varying extents, responsible for stopping the spread of such problematic content.
Platform self-regulation

As a medium for information dissemination, platforms can stop the circulation of offending content through self-regulation. An early controversy surrounded the "Napalm Girl" photograph, taken during the Vietnam War in 1972 and published in The New York Times: some audiences considered it obscene or offensive and argued it should be removed, and social media now faces the same dilemma (Holland, 2022). Changes to the Internet have made information spread faster and reach larger audiences, while online anonymity has created further social and cultural problems. For example, it is estimated that hate speech against Chinese people on Twitter increased by 900% during Covid-19, as instigators exploited users' anxiety to incite discriminatory behaviour and racist abuse against Asians (L1ght, 2020). While platforms enable people's right to free speech, they should also act at such moments to stop the continued spread of this speech. Since the mid-1990s, Internet governance has followed the core construct of decentralised systems, which undoubtedly provides a foundation for platform self-regulation (Flew et al., 2019).
Unlike traditional media, where content is censored before reaching viewers and harmful or offending material is therefore filtered out, content on digital platforms would be visible to all users without moderation, which allows offending content to appear and circulate (Gillespie, 2018). Platforms can use algorithms to build automatic detection and review functions that identify information to be blocked or removed before content is posted and distributed (Gillespie, 2018). This type of regulation can block content such as hate speech and pornography while, arguably, also improving consistency, because human bias is removed from individual decisions (Gillespie, 2018). Platforms therefore have both the responsibility and the capacity to stop the spread of offending content through self-regulation.
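To illustrate the principle, the short Python sketch below shows a simplified pre-publication filter that checks a draft post against category blocklists before it becomes visible. The categories, patterns and function names are hypothetical placeholders, not any platform's actual rules; real systems rely on trained classifiers rather than fixed word lists (Gillespie, 2018).

```python
# Minimal sketch of pre-publication screening (hypothetical, for illustration).
# A draft post is checked against per-category blocklists before it is shown.
import re

# Hypothetical category -> pattern map; placeholder rules, not a real policy.
BLOCKED_PATTERNS = {
    "harassment": [r"\byou are worthless\b", r"\bnobody wants you here\b"],
    "hate": [r"\bgo back to your country\b"],
}

def screen_post(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a draft post."""
    matched = [
        category
        for category, patterns in BLOCKED_PATTERNS.items()
        if any(re.search(p, text, re.IGNORECASE) for p in patterns)
    ]
    return (not matched, matched)

allowed, reasons = screen_post("You are worthless, just leave")
print(allowed, reasons)  # False ['harassment']
```

Even this toy example hints at the weakness discussed later: simple pattern matching is easy to evade with misspellings and cannot judge context.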
User supervision

Because of the enormous number of users and the volume of content that platforms must review, users may face long waiting times, and platforms cannot currently guarantee the accuracy of review. It is therefore necessary to raise users' own ethical awareness: they need to take responsibility for helping to stop the spread of offending content by regulating their own behaviour and monitoring each other. Users need to monitor themselves first. According to a survey by the anti-bullying organisation Ditch the Label (2017), 69% of people admitted to being abusive online and 15% said they had been involved in cyberbullying. This means users can stop the spread of offending content by refusing to participate, and as integral members of the community they are responsible for doing so. Changes to the Internet and the advent of social media have also increased the potential for harassment. For example, 15-year-old Amanda Todd was blackmailed by a stranger after exposing herself in front of a camera, and she committed suicide after the stranger posted the photo online (Ingham, 2020). Users therefore need to be aware of protecting their own privacy while speaking freely, and to refrain from prying into or sharing the private information of others, so that no user is harmed.

In addition to regulating their own behaviour, users can help control the dissemination of offending content by monitoring each other. One of the social and cultural issues brought about by changes to the Internet is the prevalence of online violence and hate speech, with 36.5% of people believing they have been cyberbullied (BroadbandSearch, 2022). Therefore, in addition to platforms establishing appropriate policies, users need to be instilled with an awareness of reporting and informed about what counts as harmful or violating content. For example, in the video below, YouTube shows users how to report content that is not allowed, involving child safety, extreme violence, hate speech and similar material, and explains what actions the platform will take (YouTube Viewers, 2018). The platform encourages users to follow the rules while also reporting violations, stopping their spread and maintaining community harmony. It is therefore the responsibility of users to regulate their own behaviour and monitor others in order to reduce negative comments and harmful content on digital platforms.
"The Life of a Flag" by YouTube Viewers. Retrieved from https://www.youtube.com/watch?v=WK8qRNSmhEU&t=198s
Government intervention

Government intervention can help stop the spread of harmful information faster and more effectively in specific cases, such as when individuals or organisations exploit the anonymity of the Internet to post content that is illegal or threatens national security. For example, under Chinese regulatory documents, major online platforms must display users' locations based on their IP addresses to prevent impersonation and the spread of rumours (Yip, 2022). This move did limit the appearance and spread of pornography, violence and harassment, but it also exposed more of users' private information and limited freedom of speech to some extent. Another reason for government regulation lies in cultural and political differences: homosexuality, for instance, is legal in some countries but prohibited in others. China is sensitive to political content, so the Chinese government searches social media for political criticism and blocks some websites and keywords, whereas this is not the case in the United States (Gillespie, 2017). Such examples demonstrate the need for some level of government regulation to bring content on digital platforms in line with national policies. Governments can exercise control through algorithms, but excessive intervention is not advisable because it violates civil rights such as freedom of speech.
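As a rough illustration of how IP-based location display might work in principle, the sketch below maps an address to a region using a prefix table. The prefixes shown are reserved documentation ranges and the region names are invented; real platforms query commercial geolocation databases rather than a hand-written table.

```python
# Rough illustration of deriving a display region from an IP address.
# The prefixes are RFC 5737 documentation ranges; the regions are invented.
import ipaddress

REGION_PREFIXES = {
    "203.0.113.0/24": "Guangdong",   # placeholder mapping
    "198.51.100.0/24": "Overseas",   # placeholder mapping
}

def region_for_ip(ip: str) -> str:
    """Return the display region for an IP address, or 'Unknown'."""
    addr = ipaddress.ip_address(ip)
    for prefix, region in REGION_PREFIXES.items():
        if addr in ipaddress.ip_network(prefix):
            return region
    return "Unknown"

print(region_for_ip("203.0.113.8"))  # Guangdong
```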
Potential problems and further actions
Some may argue that platform self-regulation has potential problems and drawbacks, such as relying too heavily on algorithms when the technology is not yet mature and still needs continuous improvement to screen content effectively. Automatic detection can accurately identify offensive content under normal circumstances, but users can still adopt strategies to avoid identification, and content that does not violate the rules is sometimes deleted by mistake (Gillespie, 2018). At this stage, the technology sometimes gives users a bad experience and does not prevent the spread of harmful and offending content in a timely manner. The amount of content to be reviewed is enormous, so anything not addressed in time can allow offending material such as pornography, bullying and hate speech to spread and harm users.

However, these problems can be addressed by taking further action, such as adding human review to the process to produce faster and more accurate results. Combining and balancing automated and manual review can better control the posting and distribution of offending content. Platforms have the responsibility, and indeed the incentive, to regulate even beyond legal requirements, because hosting content such as pornography, violence and bullying may cost them advertisers and sponsorships, or even customers (Gillespie, 2017). Platforms therefore play an important and integral role in stopping the distribution of offending content.
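A minimal sketch of such a combined pipeline is shown below: the automated classifier's confidence score decides clear-cut cases, while borderline posts are queued for human moderators. The thresholds and the scoring interface are assumptions made for illustration, not any platform's actual workflow.

```python
# Sketch of a combined automated/manual review pipeline (assumed thresholds).
# High-confidence violations are removed automatically, clearly benign posts
# are published, and borderline cases go to a human moderation queue.
from dataclasses import dataclass

REMOVE_THRESHOLD = 0.9  # near-certain violation: remove automatically
ALLOW_THRESHOLD = 0.1   # near-certain benign: publish immediately

@dataclass
class Decision:
    action: str   # "remove", "allow", or "human_review"
    score: float  # the classifier's violation probability

def route(violation_score: float) -> Decision:
    """Route a post based on an (assumed) classifier confidence score."""
    if violation_score >= REMOVE_THRESHOLD:
        return Decision("remove", violation_score)
    if violation_score <= ALLOW_THRESHOLD:
        return Decision("allow", violation_score)
    return Decision("human_review", violation_score)

for score in (0.95, 0.05, 0.5):
    print(score, "->", route(score).action)
# 0.95 -> remove, 0.05 -> allow, 0.5 -> human_review
```

Routing only borderline cases to humans keeps the manual queue manageable while letting people, rather than algorithms, make the ambiguous judgement calls.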
Conclusions
In conclusion, platforms, users and governments are all responsible for stopping the spread of bullying, harassment, violent content, hate speech, pornography and other problematic content. Although some argue that platform self-regulation has drawbacks owing to the lack of technological sophistication, it is undeniable that self-regulation has advantages and can be strengthened by adding human review. Users can help reduce the presence and spread of violating content by regulating themselves and monitoring each other through reporting. Governments can intervene to a certain extent in special cases but need to avoid excessive intervention.
References
BroadbandSearch. (2022). All the Latest Cyberbullying Statistics 2022. https://www.broadbandsearch.net/blog/cyber-bullying-statistics
Ditch the Label. (2017). The Annual Bullying Survey 2017. https://www.ditchthelabel.org/research-papers/the-annual-bullying-survey-2017/
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2017). Governance by and through platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254–278). SAGE.
Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029
Holland, O. (2022). “Napalm Girl” at 50: The story of the Vietnam War’s defining photo. CNN. https://edition.cnn.com/style/article/napalm-girl-50-snap/index.html
Ingham, A. (2020). 7 Real Life Cyberbullying Horror Stories. Family Orbit Blog. https://www.familyorbit.com/blog/real-life-cyberbullying-horror-stories/
L1ght. (2020). L1ght Releases Groundbreaking Report On Corona-Related Hate Speech and Online Toxicity. https://l1ght.com/l1ght-releases-groundbreaking-report-on-corona-related-hate-speech-and-online-toxicity/
Yip, W. (2022). China’s social media platforms, including the country’s TikTok and its version of Instagram, are set to show users’ locations based on their IP addresses. Insider. https://www.insider.com/china-social-platforms-to-make-user-locations-visible-ip-addresses-2022-4