Bullying, harassment, violent content, hate speech, pornography and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

“Self portrait – We all get sucked in” by MattysFlicks is licensed under CC BY 2.0

Digital platforms allow people to engage with one another frequently and connect them as Internet citizens (Gillespie, 2018). Yet while digital platforms offer many benefits, their risks have become more apparent as more people go online. A telling example is Watch People Die, a Reddit forum that let users casually watch videos of other people’s deaths. According to Dahl (2018), the forum received over 200,000 clicks and plays per day, meaning that 200,000 people watched the deaths of others on the platform every day and made a habit of it. Prolonged viewing of such violent, gory videos can desensitise viewers, which is not conducive to the development and harmony of society. This essay therefore argues that social media platforms, governments, and online users all share an obligation to moderate the distribution of harmful content: social media platforms must implement a suitable combination of manual and automated moderation and impose banning and blocking penalties; governments, as representatives of public authority, can enforce laws to control objectionable content; and users must be selective about what they post and monitor one another’s behaviour.

Social media platforms and necessary solutions

“Cyber Bullys” by Adam Clarke is licensed under CC BY-NC-SA 2.0

Digital platforms are tools for online information dissemination, and they have a responsibility and an obligation to stop the spread of harmful content. Content vetting is a crucial means for social platforms to regulate online information. Content that is not thoroughly vetted can rapidly deliver harmful material to users, whether abusive comments or pornographic and violent content, and can cause psychological damage to online users. Social media platforms therefore need to take responsibility for curating content and regulating users’ activities, for example by using manual and automated review to assess the quality of content on their services (Gillespie, 2018). However, each review method has its own problems. There is no exact gold standard for manual review: reviewers come from different cultural backgrounds, reliability varies significantly between them, and inconsistent judgments about borderline content can easily cause confusion and controversy on digital platforms (Webber, 2011). Automated moderation, on the other hand, applies overly strict and uniform standards that can infringe on users’ freedom of expression and privacy, and many people have protested against this type of speech-stifling censorship (OHCHR, 2021). Therefore, digital platforms must combine manual and automated review, which reduces vetting discrepancies and avoids stifling citizens’ speech while still addressing the dissemination of inappropriate content with as little controversy as possible.
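As a concrete illustration of how such a combined pipeline could work, the sketch below shows automated scoring that removes only high-confidence violations, allows clearly benign posts through, and routes uncertain cases to human reviewers. It is a minimal sketch, not any platform’s actual system: the keyword list, thresholds, and function names are all hypothetical.

```python
# Minimal sketch of a hybrid (automated + manual) moderation pipeline.
# The blocklist, thresholds, and toy classifier are hypothetical;
# real platforms use far more sophisticated models and policies.

from dataclasses import dataclass

BLOCKED_TERMS = {"gore", "hate-slur"}  # placeholder keyword list

@dataclass
class Post:
    post_id: int
    text: str

def automated_score(post: Post) -> float:
    """Toy classifier: fraction of words that match the blocklist."""
    words = post.text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BLOCKED_TERMS)
    return hits / len(words)

def moderate(post: Post, remove_above: float = 0.5, review_above: float = 0.1) -> str:
    """Route a post: auto-remove, queue for human review, or allow."""
    score = automated_score(post)
    if score >= remove_above:
        return "removed"       # high-confidence violation: act immediately
    if score >= review_above:
        return "human_review"  # uncertain: defer to a human moderator
    return "allowed"           # low risk: publish without intervention

if __name__ == "__main__":
    for p in [Post(1, "lovely sunset photo"), Post(2, "gore gore gore")]:
        print(p.post_id, moderate(p))
```

The design idea in this sketch is that automation handles the unambiguous bulk of decisions at scale, while scarce human judgment is reserved for exactly the contested borderline cases that make manual review unreliable in the first place (Webber, 2011).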

Furthermore, for online users who post violent or pornographic content, or who engage in abusive behaviour, social media platforms can set up systems to punish them, applying different levels of punishment according to the severity and frequency of the offence. For example, the Great Firewall, China’s Internet censorship system, blocks accounts that post content deemed anti-communist or detrimental to society (Wang, 2020). Although such direct punishment may look like a stifling of freedom of expression, it is among the most efficient ways to stop the spread of harmful information.
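To make the idea of graduated punishment concrete, here is a minimal sketch of an escalating sanctions ladder, where repeat violations attract progressively heavier penalties. The specific steps, names, and ordering are hypothetical, not any platform’s published policy.

```python
# Minimal sketch of graduated sanctions for repeat violations.
# The ladder below is illustrative only; real platforms define their
# own penalty tiers and appeal processes.

SANCTIONS = ["warning", "24h_mute", "7d_suspension", "permanent_ban"]

def next_sanction(prior_violations: int) -> str:
    """Return the penalty for a user's next confirmed violation,
    given how many confirmed violations they already have."""
    level = min(prior_violations, len(SANCTIONS) - 1)
    return SANCTIONS[level]

# A first offence earns a warning; a user with two prior confirmed
# violations is suspended for seven days on the third.
assert next_sanction(0) == "warning"
assert next_sanction(2) == "7d_suspension"
```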

Governments and regulatory solutions

‘Cyberbullying, would you do it?’ by kid-josh is licensed under CC BY 2.0

The government exists to maintain the stability of society, and it is responsible for controlling the spread of harmful information on digital platforms. Compared to the platforms, the government operates with greater authority and can address these problems more efficiently when it gets involved (Bowie & Jamal, 2006). For example, Megan Meier, a 14-year-old American teenager, committed suicide in her home after being bullied on the social networking site MySpace. The nationwide outrage that followed her death pushed U.S. lawmakers to introduce the Megan Meier Cyberbullying Prevention Act, intended to prevent more lives from being lost to cyber violence (Abera, 2020). This shows that government has a vital role to play in the content moderation of digital platforms, and that it needs to legislate so that online users pay more attention to the quality of online content. Although such practices run counter to the promotion of freedom of expression, that freedom can conflict with the public interest, and an unresolved imbalance remains between digital users’ rights and the public interest whenever governments enact content regulation policy.

In addition, to make full use of its authority, the government can cooperate with the platforms to jointly regulate users and reduce young people’s exposure to harmful material online. For example, French President Emmanuel Macron announced in 2021 that, given the threat that harmful online content and technology pose to young people, all digital platforms would be required to provide ‘parental controls’ for children (Kayali, 2021). Although this regulation has faced many questions, such mandatory approaches may be the only way for governments to safeguard the interests of young online users against the Internet’s ongoing threats.

“Inculcating digital citizenship in Vicenza, Italy” by USAG Italy is licensed under CC BY-NC 2.0

Online users and what they can do

Online users are the most central component of digital platforms: the data belongs to the individual rather than the platform, and it is users who choose what they access, so users themselves are the most crucial line of defence against the spread of harmful information (Yeoman, 2018). Many teenagers are vulnerable to verbal abuse on digital platforms such as the Q&A site Ask.fm, where users can receive large numbers of anonymous comments; Jessica, a 16-year-old girl, ultimately took her own life because of the verbal abuse she received on the site (Tait, 2017). Because abusive messages like ‘can you kill yourself’ contain no obviously flagged keywords, it is difficult for the platform to detect them automatically, and other users also failed to judge how harmful the speech was, which led to the tragedy. Given the massive number of Internet users, regulation by platforms and governments alone cannot be relied on to stop the spread of problematic content. Through relevant education and codes of Internet use, users should learn to be selective about the comments and content they post, thereby effectively reducing inappropriate content and behaviour.

On the other hand, online users should also monitor one another when using social media platforms. Because it is difficult for platforms to set a uniform standard for some inappropriate content, users, as the most direct observers, have both the right and the obligation to judge and report each other’s content (Kettemann & Tiedeke, 2020). Many digital platforms have reporting mechanisms: Facebook, for example, partnered with the UK Child Exploitation and Online Protection Centre to provide an emergency reporting button so that users can respond quickly to harmful content such as cyberbullying (Luxton et al., 2012). These implementations recognise the importance of mutual monitoring, which can help stop the spread of harmful content on digital platforms.
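A minimal sketch of how such a reporting mechanism could aggregate user judgments is shown below: independent reports on the same post accumulate, and once enough distinct users have reported it, the post is escalated for urgent human review. The threshold and data model are assumptions for illustration, not Facebook’s actual system.

```python
# Minimal sketch of a user-report queue: independent reports on the same
# post accumulate, and enough of them escalate it for urgent human review.
# The threshold and data model are hypothetical.

from collections import defaultdict

ESCALATION_THRESHOLD = 3  # distinct reporters required before escalation

class ReportQueue:
    def __init__(self) -> None:
        # post_id -> set of user ids who reported it
        self._reporters: dict[int, set[int]] = defaultdict(set)

    def report(self, post_id: int, reporter_id: int) -> str:
        """Record one user's report; duplicates from the same user are ignored."""
        self._reporters[post_id].add(reporter_id)
        if len(self._reporters[post_id]) >= ESCALATION_THRESHOLD:
            return "escalated_for_urgent_review"
        return "logged"

queue = ReportQueue()
queue.report(42, 1)
queue.report(42, 2)
print(queue.report(42, 3))  # -> escalated_for_urgent_review
```

Counting distinct reporters rather than raw reports is one simple way to keep a single angry user from escalating content on their own, while still letting genuine community consensus surface harmful posts quickly.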

In response to the massive dissemination of inappropriate content on the Internet, social media platforms, governments and online users should all take responsibility for stopping its spread. Social media platforms should strengthen their review processes by integrating automated and manual auditing to prevent the dissemination of problematic information, and enforce banning and blocking systems to regulate users’ speech. As the representative of public authority, the government should regulate digital platforms by law and cooperate with them to protect young users. Finally, online users should post content selectively and monitor other users for problematic behaviour.


Reference list:

Abera, A. (2020). Bullying gone technological: The tragic death of Megan Meier. StMU Research Scholars. https://stmuscholars.org/bullying-gone-technological-the-tragic-death-of-megan-meier/

Bowie, N. E., & Jamal, K. (2006). Privacy rights on the internet: Self-regulation or government regulation? Business Ethics Quarterly, 16(3), 323–342. https://doi.org/10.5840/beq200616340

Dahl, K. (2018). Exploitation on the internet? The morality of watching death online. The Guardian. https://www.theguardian.com/technology/2018/oct/12/reddit-r-watch-people-die

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Kayali, L. (2021). Macron pushes parental control for internet access. Politico. https://www.politico.eu/article/france-parental-control-internet-access-emmanuel-macron/

Kettemann, M. C., & Tiedeke, A. S. (2020). Back up: Can users sue platforms to reinstate deleted content? Internet Policy Review, 9(2), 1–20. https://doi.org/10.14763/2020.2.1484

Luxton, D. D., June, J. D., & Fairall, J. M. (2012). Social media and suicide: A public health perspective. American Journal of Public Health, 102(S2), S195–S200. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3477910/

OHCHR. (2021, July 23). Moderating online content: Fighting harm or silencing dissent? https://www.ohchr.org/en/stories/2021/07/moderating-online-content-fighting-harm-or-silencing-dissent

Tait, A. (2017). “We’re not freaks, we’re not weirdos”: The online community that watches people die. The New Statesman. https://www.newstatesman.com/science-tech/2017/09/we-re-not-freaks-we-re-not-weirdos-online-community-watches-people-die

Wang, Y. (2020). In China, the ‘Great Firewall’ is changing a generation. Human Rights Watch. https://www.hrw.org/news/2020/09/01/china-great-firewall-changing-generation

Webber, W. (2011). Re-examining the effectiveness of manual review. In Proceedings of the SIGIR Information Retrieval for E-Discovery Workshop (p. 2).

Yeoman, S. (2018). Will self-regulation fix the internet? Data Center Dynamics. https://www.datacenterdynamics.com/en/opinions/will-self-regulation-fix-the-internet/