With the advent of the Web 2.0 era, the combination of technology and human imagination has created a vast, largely unrestricted network platform, a kind of digital Utopia where people can speak freely and share everything in a virtual world. But unlimited freedom also means that the dark side of free speech and open sharing is magnified, as seen in the flood of pornographic and violent material online and in wave after wave of cyberbullying driven by hate speech. As Gillespie (2018) says, social media was born out of the sheer chaos of the web. While the digital age has undeniably brought great convenience, it is equally undeniable that the online environment has worsened in recent years. Faced with this lousy network environment, many parties share the responsibility for preventing these phenomena. Three responses are worth discussing: effective content moderation of users' posts; deplatforming the users and accounts that most seriously harm a platform; and enacting and revising laws that protect the environment of Internet platforms. This essay will first discuss who should bear responsibility for preventing this phenomenon, and then examine these three methods for preventing the spread of harmful content on the Internet.
Everyone has the responsibility to stop harmful content
Everyone on the Internet's digital platforms has a responsibility to stop the spread of harmful content such as pornographic and violent material and cyberbullying. The first to bear the brunt are the tech giants such as Google, Reddit, Twitter, and Facebook, the iconic digital platforms of the era. Because so much pornographic, violent, and abusive content is sweeping across these platforms, their operators, as the leaders of the Internet industry, have a responsibility to stop it from spreading. Consider the "Napalm girl" photograph that circulated online: although it exposed the cruelty of war, it depicted the nudity and suffering of a minor, which led Facebook to delete the post (Gillespie, 2018). The deletion illustrates that platforms feel obliged to maintain a safe network environment, even though many people objected that the image carries the war's historical significance. Although the post was eventually reinstated, the episode shows how Facebook's decision-makers must respond positively when faced with this kind of problem, and how, even when it is difficult, the managers of these platforms keep looking for ways to maintain their online communities. Ordinary users caught up in the melee, for their part, must also stop circulating this lousy content. The unserious realm of digital media gives people freedoms not usually found elsewhere: an outlet, a space to play with norms, rules, and restrictions (Kuipers, 2006). The spread of harmful content is accelerated every time users like, retweet, or comment on pornographic, violent, or offensive posts. In the United States, individuals have been convicted of selling obscene material on the Internet; in other words, the law confirms that even small individual users can be held responsible for spreading harmful content on a platform, which only underscores that everyone on an Internet platform shares responsibility for preventing its spread. Consequently, from the big technology giants down to small individual users, it is everyone's responsibility to stop this lousy network environment from continuing, because only in this way can a good environment for Internet platforms be maintained effectively and stably.
Content moderators and online community content guidelines
There are three approaches worth discussing for curbing the spread of harmful content and cyberbullying on the Internet. The first is content moderation, which is now common on all major Internet platforms. For example, Facebook's most commonly used algorithmic tool automatically flags group join requests from suspected spam accounts, but the vast majority of moderation decisions are still made by users manually or through independently developed bots (Seering et al., 2019). This combination of manual and algorithmic review can reduce the most common kinds of harmful content: in the first quarter of 2019 alone, Facebook removed about four million pieces of content for hate speech (Gonçalves et al., 2021). Yet although content moderation is very common on digital platforms, many problems remain, because the artificial intelligence behind it is imperfect. Some users believe their content was removed because the platform or its algorithms were politically biased; one user surveyed by Myers West (2018), for instance, suspected that Facebook's moderation was hostile to feminists and minorities in general. When many users question a platform's moderation standards in this way, the combined algorithmic and human system loses legitimacy. To address this, research shows that increasing users' perceptions of justice and fairness can reduce future violations on social media platforms (Gonçalves et al., 2021). In other words, platforms should move from content review that relies on artificial intelligence alone toward community content guidelines that users perceive as fair and just and therefore willingly follow, guiding users to regulate their own online behavior at the root and thereby reducing the dissemination of violent and vulgar content.
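To make this division of labour between algorithm and human concrete, the minimal Python sketch below shows one possible way such a hybrid pipeline could be structured: an automated score decides clear-cut cases, while borderline posts are routed to human reviewers. The classifier, thresholds, and keyword list are all hypothetical stand-ins for illustration, not any platform's actual system.

# Minimal sketch of a hybrid moderation pipeline, assuming a hypothetical
# classifier: automation handles clear-cut cases, humans handle the rest.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str

def toxicity_score(post: Post) -> float:
    """Stand-in for a trained classifier; returns a score in [0, 1]."""
    flagged_terms = {"hate", "obscene"}  # placeholder keyword list
    words = post.text.lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / max(len(words), 1) * 3)

def route(post: Post, remove_above: float = 0.9, review_above: float = 0.5) -> str:
    """Auto-remove high-confidence violations; queue uncertain cases for humans."""
    score = toxicity_score(post)
    if score >= remove_above:
        return "removed"        # clear violation: acted on automatically
    if score >= review_above:
        return "human_review"   # uncertain: a human moderator decides
    return "published"          # likely fine: goes live

if __name__ == "__main__":
    for text in ["hate hate hate", "please stop posting hate here", "hello world"]:
        print(f"{text!r} -> {route(Post(0, text))}")

Even in this toy version, the design choice is visible: the thresholds encode how much the platform trusts its algorithm, and everything in between is deferred to people, which matches the observation above that most moderation decisions are still made by humans.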
Deplatforming
The second method is deplatforming, which means using the social networking platform itself to stop users who violate its policies. Each digital platform has its own policies and regulations, and when users post violent, pornographic, or hateful material, the platform applies a series of sanctions that can end in deplatforming. Typically, platforms first issue warnings by flagging or removing pieces of content, or temporarily suspend an account, before deciding to ban someone permanently. This is a very effective way to deal with policy violators and maintain the network environment: it deprives them not only of their voice and online identity in the digital world but also of the opportunity to pollute that world any further. For example, in 2021 former US President Donald Trump was permanently banned from Twitter, and his accounts on Facebook, Instagram, and YouTube were suspended (Van Dijck et al., 2021); earlier, Twitter had flagged one of his posts about the George Floyd protests for glorifying violence (Gonçalves et al., 2021). To a large extent, this kind of ruling can restrain those who promote pornographic and violent content or incite cyberbullying in the online world, because while the offending users are being deplatformed, the platform's publicized decisions also deter others who might want to break the rules.
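As a rough illustration of the graduated enforcement just described, the sketch below models the warning-to-ban ladder as a simple per-user counter. The ladder's steps and the one-violation-per-step escalation are assumptions made for illustration, not any platform's published policy.

# Illustrative sketch of graduated sanctions: warning, content removal,
# temporary suspension, then a permanent ban. Steps are assumed, not real policy.
from collections import defaultdict

LADDER = ["warning", "content_removal", "temporary_suspension", "permanent_ban"]

violations = defaultdict(int)  # user_id -> number of confirmed violations

def sanction(user_id: str) -> str:
    """Record a confirmed violation and return the next sanction on the ladder."""
    violations[user_id] += 1
    step = min(violations[user_id], len(LADDER)) - 1
    return LADDER[step]

if __name__ == "__main__":
    for _ in range(5):
        print(sanction("user_42"))  # escalates, then stays at permanent_ban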
Legislation on the online environment
Another method worth discussing is legislation to stop harmful content from persisting on online platforms. Over the years of the Internet's development, technology has evolved, and the laws governing the Internet have gradually been updated alongside it. Many countries are paying more and more attention to speech and content in the virtual world, and corresponding laws and regulations on pornographic and violent content, as well as on cyberbullying, constrain users to regulate their behavior on digital platforms. For example, in 2021 Australia passed the Online Safety Act 2021, a scheme to keep Australians safe online that includes mechanisms for removing seriously abusive and harmful content (Department of Infrastructure, 2021). While such law provides security for Internet users, it also means that those who violate it may be suspended from platforms and held responsible for their online behavior through prison sentences and heavy fines. For instance, Aydin C. received a sentence of nearly eleven years in Amsterdam after being convicted of online fraud and extortion for forcing his victims to perform sex acts in front of a webcam, and he faced additional cyberbullying charges for harassing Amanda Todd online ("Amanda Todd case," 2017). The weapons of the law can therefore also help the Internet block these harmful behaviors and content. Although many corners of the Internet's dark side still lie beyond the law's reach, it cannot be denied that the law plays a real role and that governments are actively taking the network environment seriously.
Overall, harmful content and bullying have become increasingly rampant on digital platforms, and the dark side of the Internet has continued to expand alongside the rapid development of technology. Everyone in this virtual world is responsible for maintaining this utopian space, whether the tech giants or the smallest individual Internet users. Content moderation combined with community guidelines, deplatforming, and legislation, though none of them is perfect, can help prevent the spread of harmful content and maintain a good environment on the Internet.
References
Amanda Todd case: Accused Dutch man jailed for cyberbullying. (2017, March 16). BBC News. https://www.bbc.com/news/world-us-canada-39295474
Department of Infrastructure, Transport, Regional Development and Communications. (2021, September 20). Current legislation. Australian Government. https://www.infrastructure.gov.au/media-technology-communications/internet/online-safety/current-legislation
Donald Trump: US President permanently banned from Twitter. (n.d.). CBBC Newsround. https://www.bbc.co.uk/newsround/55600246
Facebook. (n.d.). Facebook community standards. Transparency Center. https://transparency.fb.com/en-gb/policies/community-standards/
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029
Gonçalves, J., Weber, I., Masullo, G. M., Torres da Silva, M., & Hofhuis, J. (2021). Common sense or censorship: How algorithmic moderators and message type influence perceptions of online content deletion. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448211032310
Kleinman, Z. (2016, September 9). Fury over Facebook "Napalm girl" censorship. BBC News. https://www.bbc.com/news/technology-37318031
Kuipers, G. (2006). The social construction of digital danger: Debating, defusing and inflating the moral dangers of online humor and pornography in the Netherlands and the United States. New Media & Society, 8(3), 379–400. https://doi.org/10.1177/1461444806061949
Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059
Seering, J., Wang, T., Yoon, J., & Kaufman, G. (2019). Moderator engagement and community development in the age of algorithms. New Media & Society, 21(7), 1417–1443. https://doi.org/10.1177/1461444818821316
Van Dijck, J., de Winkel, T., & Schäfer, M. T. (2021). Deplatformization and the governance of the platform ecosystem. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448211045662
What is illegal and restricted online content? (n.d.). eSafety Commissioner. https://www.esafety.gov.au/report/what-is-illegal-restricted-content