Bullying, harassment, violent content, hate speech, pornography, and other problematic content circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

The proliferation of social media platforms and digital applications has brought with it harmful interactions and illegal content circulating online. People from varied socioeconomic and cultural backgrounds interact with one another through online conversations and the exchange of diverse ideas, videos, and photographs. Recent legal innovations have attempted to make provisions for online infractions, and in certain publicized cases offenders have been prosecuted. In a wider context, however, the regulation of such content is left largely to social media providers and users, who rely on built-in features for blocking, reporting, and removing content. The practical and political concerns associated with this self-regulatory process highlight the need for more concerted efforts to stop the spread of such content (Yar, 2018).


The rather lenient screening methods of these digital platforms have given children and young adults easy access, along with exposure to harmful interactions such as bullying and harassment and to illegal content such as violent videos, hate speech, and pornography. Although digital platforms provide a safe space for exchanging ideas and voicing opinions publicly, they have become infested with violent ideologies. Numerous academic studies and governmental reports focus on the experience of individuals in today’s digital age. It is notable that a majority of these publications fail to narrate a hopeful story of freedom of expression, personal autonomy, online security, or social media privacy (Jorgensen, 2019).


The concept of the Frankenstein Syndrome, introduced by Neil Postman (1994), captures the dilemma of digital technologies quite aptly: “once the machine is built, we discover – sometimes to our horror, usually to our discomfort, always to our surprise – that it has ideas of its own” (p. 21). This is highly relevant to today’s digital media technologies, which have produced numerous unforeseen problems such as cyberbullying, organized criminal activity, the promotion of violent content, cancel culture, and pornography. Sensitive, potentially triggering issues are openly discussed, promoted, and shared. Violence is another aspect of digital platforms that can be best understood through the concept of Internet banging: the phenomenon of “gang affiliates using social media sites such as Twitter, Facebook, and YouTube to trade insults or make violent threats that lead to homicide or victimization” (Patton et al., 2013). Internet banging is a cultural phenomenon stemming from increased social media usage and newly emerging digital trends.

The fast-paced dissemination of hate speech, violent content, and other information is characteristic of social media. Beyond these platforms, video games are another digital medium that promotes profane language and illicit content such as violence and lewd imagery. Gendered violence and racial hate speech are on the rise as “radical right populists and alt-right demagogues” use social media platforms to circulate racist and misogynous hate speech while effectively mobilizing supporters and forming online mobs (Saresma et al., 2021). The findings of the report led by Alex Chalk and conducted by two charities, The Children’s Society and YoungMinds, indicate the implications of digital media for children. According to the report, although social media platforms impose age restrictions, about 61% of young people joined social media at the age of 12 or younger. Moreover, cyberbullying was reported as a growing problem, resulting in mental health issues. The report concluded that social media platforms’ efforts to address these issues have remained “inconsistent and inadequate” (Chalk et al., 2017). Despite the evident problems stemming from the abuse of digital media platforms, regulating these outlets remains difficult. The problem stems from practical concerns about the extent of resources required to monitor online activities, and from political concerns such as extensive state surveillance and threats to free speech, especially in liberal democracies (Yar, 2018).


Protecting social media users’ right to free speech while preventing the spread of illicit content must be the primary responsibility of cyber safety authorities and service providers, and a joint effort between internet service providers and authorities is required to address this issue. Treating online abuse as a criminal offense rather than a minor misdemeanor is the first step toward ensuring that people comprehend the gravity of their online actions. In 1981, the Council of Europe enacted the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data. This treaty “protects the right to privacy of individuals, taking account of the increasing flow across frontiers of personal data undergoing automatic processing” (Council of Europe, 1981). It can serve as a model for the preservation of users’ rights for other governmental bodies around the world.


The United States Assembly asserts that social media networks should revise their internal regulations to improve the user experience while maintaining users’ rights to free speech and information access. A restructuring of the business model must also be considered, as the current model has enabled social media corporations to amass billions of users and huge revenues while permitting abuse of the technology. This model provides a win-win situation for everyone: users receive a free service, brand advertisers gain advertising reach, platforms earn revenue, and content creators receive funding. However, the exploitation of this engagement creates risks that outweigh the model’s market-efficiency advantages.


The promotion of Internet safety is not the responsibility of a single individual or organization; rather, it requires global efforts and large-scale collaboration. To this end, the Global Alliance for Responsible Media (GARM) was created to coordinate collaboration across the global media industry. GARM has also partnered with the World Economic Forum to improve the safety of the digital environment and to address misleading and harmful media while safeguarding brands and consumers (Lalani & Li, 2021).


https://www.youtube.com/embed/YTCWiLKGGJ0?feature=oembed


Video 1: What is the Global Alliance for Responsible Media?

(World Federation of Advertisers, 2019)

In conclusion, the fast-paced development and adoption of digital networking platforms have brought to the foreground a range of problems such as bullying, harassment, cancel culture, racial and gendered violence, pornography, and hate speech. The seriousness of these issues ranges from common misdemeanors to criminal offenses. In liberal democracies, the individual’s right to free speech (in the United States, the First Amendment) is held in high regard; however, measures should be taken to ensure that free speech does not shade into cyberbullying or hate speech. Currently, social media platforms rely on individual-level regulation through features such as content filtering, blocking, and reporting. However, the seriousness of the issue requires large-scale corrective efforts, ranging from government policymaking to a restructuring of the business model adopted by social media platforms. Only through combined efforts can these social ills be truly eradicated.


References

Chalk, A., Brennan, S., & Reed, M. (2017). Safety Net: Cyberbullying’s impact on young people’s mental health. The Children’s Society and YoungMinds.

Council of Europe. (1981). Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (No. 108).

Jorgensen, R. F. (2019). Human Rights in the Age of Platforms. MIT Press.

Lalani, F., & Li, C. (2021). How to help slow the spread of harmful content online. World Economic Forum. https://www.weforum.org/agenda/2020/01/harmful-content-proliferated-online/

Patton, D. U., Eschmann, R. D., & Butler, D. A. (2013). Internet banging: New trends in social media, gang violence, masculinity, and hip hop. Computers in Human Behavior, 29(5), A54–A59. https://doi.org/10.1016/j.chb.2012.12.035

Postman, N. (1994). The Disappearance of Childhood. Knopf Doubleday Publishing Group.

Saresma, T., Karkulehto, S., & Varis, P. (2021). Gendered Violence Online: Hate Speech as an Intersection of Misogyny and Racism. In M. Husso, S. Karkulehto, T. Saresma, A. Laitila, J. Eilola, & H. Siltala (Eds.), Violence, Gender and Affect: Interpersonal, Institutional and Ideological Practices (pp. 221–243). Springer International Publishing. https://doi.org/10.1007/978-3-030-56930-3_11

World Federation of Advertisers. (2019, July 3). What is the Global Alliance for Responsible Media? https://www.youtube.com/watch?v=YTCWiLKGGJ0

Yar, M. (2018). A Failure to regulate? The demands and dilemmas of tackling illegal content and behavior on social media. The International Journal of Cybersecurity Intelligence and Cybercrime, 1(1), 5–20. https://doi.org/10.52306/01010318RVZE9940