
“76.366 Freedom of Speech?” by HelenHates Peas is licensed under CC BY-NC 2.0.
Introduction
Article 19 of the Universal Declaration of Human Rights (1948) recognizes freedom of expression as an indispensable human right, while acknowledging that it may be limited when it conflicts with other rights. With the advent of Web 2.0, people have moved many of their offline conversations onto social media, and the open, participatory nature of online platforms lets personal expression reach a vast audience almost instantly. The same openness, however, exposes users to harmful content such as bullying, harassment, exposure of private information and pornography. Although such material is prohibited on most platforms, the line between freedom of expression and the dissemination of abusive or pornographic content remains blurred online, making regulatory responsibility difficult to assign. It therefore becomes particularly important to establish who is responsible for stopping the spread of this content. This paper first distinguishes freedom of expression from cyberbullying, and then uses that distinction to discuss what governments, internet technology companies and users each need to do to stop the spread of such content.
“Freedom of expression” and cyberbullying
Current internet platforms do not mandate real-name accounts, which allows users behind virtual identities to show sides of themselves they would hide among familiar people in real life, free of everyday moral constraints. The ‘comment culture’ of social media emerges as groups of users respond positively or negatively to each other’s posts, blurring the line between freedom of expression and cyberbullying in the comments and encouraging users to attack public figures they perceive as biased (Clucas, 2020).

In 2019, the South Korean actress and singer Sulli took her own life at her home after suffering from depression (Geier, 2019). During her lifetime, nearly every post she made attracted countless malicious comments, including abuse, sexual insults and death threats, and the psychological stress of this cyberbullying contributed to her depression and eventual death. The users who posted those comments hid behind anonymous accounts and labelled their abuse “freedom of speech”, paying no price for its consequences. Freedom of expression is no excuse for cyberbullying, and behavior on the internet needs to be regulated and taken seriously.

Cyberbullying exists in many forms, including hate speech, harassment and trolling, and can be identified by whether it constitutes “overt, deliberate judgement of others” (Xu & Trzaskawka, 2021). The consequences of bullying, harassment, violent content, hate and pornography spreading across digital platforms vary widely and can be life-threatening, which is why stopping the spread of these harmful messages has become particularly important in the age of the internet.
“Cyberbullying, would you do it?” by kid-josh is licensed under CC BY-NC-SA 2.0.
Who is responsible for stopping the distribution of this content and how?
The existence of multiple stakeholders on the internet complicates the maintenance of public order online, and if any one party neglects its role, efforts to stop the spread of harmful content risk failure. Responsibility can broadly be divided in three directions: the government, which is responsible for making laws; the platform companies, which must enforce them strictly; and the public users, who are responsible for abiding by them.
- Government
As the dominant player among these stakeholders, the government should use its power to take the lead in stopping the spread of harmful content on the internet. Harmful information online not only threatens individuals but can cause fear and unrest across society. In the public interest, and for the sake of a healthy personal online experience, governments should enact laws regulating users’ online behavior and, where necessary, take enforcement measures to remove or ban objectionable content. For example, in 2021 Australia passed the Online Safety Act, which allows people who have been cyberbullied to report the incident to the eSafety Commissioner, who can order the social media platform to remove the material within 24 hours or face a hefty fine (Taylor, 2022). In the US, although the federal government has not yet passed a national cyberbullying law, every state has bullying legislation protecting minors who are cyberbullied; these laws vary greatly from state to state, however, and passing a national cyberbullying law as soon as possible remains something the federal government needs to prioritize (Gordon, 2019). Germany has likewise passed strict speech laws imposing high fines on platforms that fail to remove hate speech in a timely manner (Flew et al., 2019). It is worth noting that government intervention must balance maintaining a healthy platform environment against the temptation to treat platforms as a stage for political performance: in the 2016 US election, for example, political actors used social media platforms to spread ‘fake news’ and manipulate voters (Flew et al., 2019).
- Internet Company platform
“networkunlockedcloseup.jpg” by CyberHades is licensed under CC BY-NC 2.0.
Social media platforms offer new opportunities for people to communicate, but they need to do a better job of regulating the information disseminated through them. Internet companies today prefer to present themselves as neutral intermediaries that merely distribute user-provided content, rather than as media companies (Flew et al., 2019). Because internet platforms were originally designed to provide a venue for communication open to everyone, their early development favored minimal regulation of speech. When Google conducted a basic survey of YouTube advertising, for example, it found ads being placed alongside terrorist content, and the company dealt with copyright violations far more quickly than with hateful or illegal content (Flew et al., 2019). Platforms that present their own governance as open, impartial and non-interventionist, yet react quickly whenever their own interests are at stake, are largely evading responsibility and accountability (Gillespie, 2019).
Self-regulation by the platforms alone is not enough; media companies need to maintain order on their platforms under the policies set by government in order to give every user a good online experience. The most critical issue platforms face in content review is how to identify and address harmful content, and YouTube provides an instructive example by partly leaving that judgement to users: in addition to AI review, a video or post that receives enough user flags or complaints enters a channel for manual review (Flew et al., 2019). Internet companies can also work with government departments, within their capacity, to adopt technology that reduces the spread of harmful material online. In 2009, Microsoft donated PhotoDNA, a technology it developed to combat child pornography on the internet, to the National Center for Missing and Exploited Children (NCMEC), and in nearly six years the technology reviewed some 30 million such images.
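The two-stage review process described above, automated screening first, then escalation to human review once a post accumulates enough user flags, can be sketched in a few lines. This is a hypothetical illustration, not YouTube’s actual pipeline: the threshold value, class names and keyword-based screen are invented for the example.

```python
from dataclasses import dataclass

# Assumed number of user flags that triggers manual review (invented value).
FLAG_THRESHOLD = 5

@dataclass
class Post:
    post_id: str
    flags: int = 0
    status: str = "visible"  # "visible", "pending_manual_review", or "removed"

def automated_screen(post: Post, banned_terms: set, text: str) -> None:
    """First-pass automated screen: remove clear violations outright.
    A real system would use ML classifiers; a keyword check stands in here."""
    if any(term in text.lower() for term in banned_terms):
        post.status = "removed"

def register_flag(post: Post) -> None:
    """Each user flag increments a counter; crossing the threshold routes
    the post to a human review queue rather than removing it automatically."""
    post.flags += 1
    if post.status == "visible" and post.flags >= FLAG_THRESHOLD:
        post.status = "pending_manual_review"
```

The design point the sketch captures is that user flags do not delete content directly; they only escalate it, so the final judgement on ambiguous material rests with human moderators rather than with whoever clicks “report” most often.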
- The user
“cafnr app_0031” by Mizzou CAFNR is licensed under CC BY-NC 2.0.
Social media users need to comply with their country’s laws and each platform’s community rules when using social media. Users should proactively ensure that the content they post contains no bullying, harassment, violence, hate or pornography, since such messages are likely to harm others and, as internet trolling shows, often rebound on the poster as well. Online, users can be divided into two simple categories: the cyberbullies, and the remaining ordinary users, who are automatically cast as potential targets. Users who see harmful information and turn a blind eye to it, neither flagging it nor informing platform administrators, are in effect enabling cyberbullying. Each user needs to exercise self-restraint when posting, imagine how the recipient of the information would feel (Marwick & boyd, 2010), and work to create a harmonious online environment rather than hiding behind an anonymous account and attacking others.
Conclusion
Overall, stopping the spread of bullying, harassment, violent content, hate, pornography and other problematic content on digital platforms requires a concerted effort by governments, internet media platforms and internet users; no one party can succeed alone. Governments need to intervene with appropriate powers, regulating platforms and enacting laws so that platform moderation has a legal basis. Social platforms need to stop avoiding responsibility and apply both law and technology to scrutinize content more strictly. Users should regulate their own online conduct and act promptly to stop the spread of harmful information. Only if these multiple stakeholders put aside narrow self-interest can a safer, healthier internet environment be created in the fully fledged digital age.
Reference List:
Clucas, T. (2020). ‘Don’t feed the trolls.’ In Violence and Trolling on Social Media (pp. 47–64). Amsterdam University Press. http://dx.doi.org/10.2307/j.ctv1b0fvrn.6
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gordon, S. (2019, March 16). Laying down the law for cyberbullying. Verywell Family. https://www.verywellfamily.com/cyberbullying-laws-4588306
Gillespie, T. (2019). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Geier, T. (2019, October 14). Sulli, Korean pop star and actress, dies at 25. TheWrap. https://www.thewrap.com/sulli-korean-pop-star-and-actress-dies-at-25/
Marwick, A. E., & boyd, danah. (2010). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114–133. https://doi.org/10.1177/1461444810365313
Microsoft and National Center for Missing & Exploited Children push for action to fight child pornography. (2009, December 15). Microsoft News. https://news.microsoft.com/2009/12/15/microsoft-and-national-center-for-missing-exploited-children-push-for-action-to-fight-child-pornography/
Taylor, J. (2022, January 22). How will new laws help stop Australians being bullied online? The Guardian. https://www.theguardian.com/media/2022/jan/23/how-will-new-laws-help-stop-australians-being-bullied-online
Universal declaration of human rights at 70: 30 articles on 30 articles – Article 19. (2018, November). OHCHR. https://www.ohchr.org/en/press-releases/2018/11/universal-declaration-human-rights-70-30-articles-30-articles-article-19
Xu, Y., & Trzaskawka, P. (2021). Towards descriptive adequacy of cyberbullying: Interdisciplinary studies on features, cases and legislative concerns of cyberbullying. International Journal for the Semiotics of Law, 34(4). https://doi.org/10.1007/s11196-021-09856-4