Who should be responsible for stopping the spread of problematic content and how?

Bullying, harassment, violent content, hate, porn, and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content and how?

"Content regulation on social media platform" by Sanskriti IAS is licensed under CC BY 2.0.

“2012 Green Heart Schools public speaking competition” by Brisbane City Council is licensed under CC BY 2.0.

People are born with freedom of speech. In the 18th century, France enshrined this in the Declaration of the Rights of Man and of the Citizen: “The free communication of ideas and opinions is one of the most precious of the rights of man. Every citizen may speak, write, and print with freedom, but shall be responsible for such abuses of this freedom as shall be defined by law.” Yet the freedom to raise one’s voice does not mean people can say whatever they want without considering the effect of their words. With the development of digital platforms, the reach of speech has grown enormously. As the saying goes, “bad news has wings”: harmful words and images can have unimaginably large impacts. Alongside the positive content created on these platforms, problematic content is becoming extremely widespread, and online bullying, harassment, violence, hate, and pornography are deeply detrimental to society.

Why does negative content on online platforms need to be eliminated immediately? Because of its harm to society. First of all, it damages people’s physical and mental health. “We have known since the first world war that continuous exposure to violent and disturbing images can cause chronic damage to people’s mental health” (Pikes). The Lancet has likewise reported that post-traumatic stress disorder, which emerges in response to traumatic life events, has a population prevalence of 1–8% and a prevalence of up to 50% in mental health facilities. More dangerously, online hate can lead directly to physical harm. When COVID-19 first broke out in China, Asian people faced widespread discrimination; some were attacked and even killed amid unverified claims that Asian countries were to blame for the global pandemic (Hong & Bromwich).

“Post Traumatic Stress Disorder” by Truthout.org is licensed under CC BY-NC-ND 2.0.

Secondly, inappropriate content such as violent or sexual material is not under proper control, and children and adolescents may consequently be harmed. In today’s era of digitalization, children encounter smart devices and online platforms at an early age, before they are fully aware of right and wrong, and they are very likely to imitate what they see. According to Browne and Hamilton-Giachritsis’s study of the influence of violent media on children and adolescents, young children who watch action movies behave aggressively or fearfully more often, especially boys. The authors also found that, even after controlling for socioeconomic status and various family and educational factors, children who had been exposed to violence tended to show antisocial behaviors in adolescence and early adulthood, such as aggression, assault, fights resulting in injury, or criminal offending.

“scream and shout” by mdanys is licensed under CC BY 2.0.

Finally, the spread of malicious online behavior can break relationships between people. Victims begin to lose trust in and commitment to others, and their social communication is impoverished (Quaglio & Millar). They build walls and barriers to isolate themselves from the hurt and sorrow they might suffer from cyberbullying and hostility on online platforms.

Problematic content has far too many adverse effects on society. Therefore, authorities and people with significant influence on digital platforms must stop its spread as soon as possible.

The first party responsible for this issue is the government. Digital platforms operate in every country in the world, so the governments of all nations must join hands to control the content posted on these sites. Policymakers should put forward stricter penalties for those responsible for creating chaos on the Internet, so that they cannot reoffend in the future, and laws addressing this issue must be put into effect as soon as possible. Many nations have already taken action and published security rules and regulations to protect Internet users and society.

“Ensuring National Cybersecurity: Protecting Critical Infrastructure (Kaspersky)” by ITU Pictures is licensed under CC BY 2.0.

Furthermore, children and vulnerable people are those most affected by harmful content on digital platforms, so ensuring their well-being is a fundamental principle that policymakers should emphasize. Society has to provide a safe environment for the young and for those who cannot protect themselves. As noted above, because children interact with media devices from an early age, media significantly affect their development, health, and well-being (Strasburger et al., 2010). On that account, authorities should ensure that children can only watch age-appropriate content with positive messages. To protect vulnerable people, who are unable to live fully independently because of limited physical or mental capacity, disturbing and violent content must be restricted so that they are not distressed. Media must pay more attention to vulnerable people’s well-being and to the ways they may be triggered by questionable content, manipulated by unscrupulous marketing campaigns, or humiliated by what others say about them on the Internet. Whether vulnerable people are cared for by family members or by caregivers, their media use should always be monitored to reduce their exposure to negative content. Most notably, policies must be explained carefully to ensure that vulnerable people understand all the information.

The media and people in crucial areas of public life should also take greater social responsibility for their speech and actions. As the media industry has developed and diversified, anyone involved in it owes greater responsibility to their audiences and readers. On social media, talk is rife with fake news, filter bubbles, misinformation, doxxing, trolls, electoral manipulation, and the online alt-right (Flew, 2019, p. 96). Those in the media must therefore avoid manipulating information that fuels controversy and a hostile environment on online platforms. In Custodians of the Internet, Gillespie recommends that platforms govern themselves to restrict illegal content and public controversies and thereby improve users’ experience of the online environment: “Platforms find that they must serve as setters of norms, interpreters of laws, arbiters of taste, adjudicators of disputes, and enforcers of whatever rules they choose to establish” (Gillespie, 2018, p. 5). People in the media must be the pioneers in erasing the negativity created by problematic content.

Famous people ought to understand how the content they create on digital platforms every day can affect viewers, and what they can do to stop problematic content. Even the slightest reaction from them can reach thousands of people. Tarana Burke started the #MeToo movement in 2006 to support survivors of sexual assault and harassment, particularly women. The campaign gained far more attention when Harvey Weinstein, a powerful Hollywood producer, was accused of sexual harassment and abuse by many actresses (Murphy), and celebrities began to join the movement. Audiences listened to famous people’s stories and were inspired to share their own experiences so that the same trauma would not happen to anyone else. Gradually, news of sexual assault and abuse may recede from the Internet.

#Metoo: how it’s changing the world

Digital platforms can themselves be tools for erasing problematic content. How to reduce violence against women is a challenging question to which people have long sought answers: women are vulnerable and easily affected by others, so they make up most of the victims of harassment and hate on digital platforms. Fraser and Enye evaluated the effectiveness of digital platforms in reducing VAWG (violence against women and girls) in Bangladesh and produced several interesting findings. They discuss DFID’s Voices for Change (V4C) program, which found that individuals participating in the Purple Academy and Purple online spaces (Purple E-Spaces) felt more willing to speak up against violence and VAWG and in favor of gender equality. In other words, accessing online platforms and being exposed to positive content on digital sites can slowly change people’s mindsets for the better.

“Women in Bangladesh. Photo by WorldFish, 2004” by WorldFish is licensed under CC BY-NC-ND 2.0.

Another important factor is Internet users themselves, a large part of society who are influenced by everything displayed on these platforms. They need to be aware of how harmful content on the Internet can spread negativity and hatred, so that they can join the government and people working in the media in the effort to end questionable content on online sites.

In conclusion, problematic content needs to be stopped as soon as possible, before its damage to people’s physical and mental health becomes impossible to control. The government and everyone involved in digital media should collaborate to protect society against cyberbullying, hatred, violence, and sexual content on online platforms.

REFERENCES

Flew, T. (2019). Guarding the gatekeepers. Griffith Review. Retrieved 9 October 2022, from https://www.griffithreview.com/articles/guarding-gatekeepers-trust-truth-digital-platforms/.

Burgess, J., Marwick, A., & Poell, T. (Eds.). (2018). The SAGE handbook of social media (pp. 254-278). SAGE Publications.

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press. https://doi.org/10.12987/9780300235029

Browne, K., & Hamilton-Giachritsis, C. (2005). The influence of violent media on children and adolescents: a public-health approach. The Lancet, 365(9460), 702-710. https://doi.org/10.1016/s0140-6736(05)17952-5

Murphy, M. (2019). Introduction to “#MeToo Movement”. Journal Of Feminist Family Therapy, 31(2-3), 63-65. https://doi.org/10.1080/08952833.2019.1637088

Pikes, C. (2021). Toxic online content harms moderators’ mental health. IT Supply Chain. Retrieved 23 September 2022, from https://itsupplychain.com/toxic-online-content-harms-moderators-mental-health/.

Strasburger, V., Jordan, A., & Donnerstein, E. (2010). Health Effects of Media on Children and Adolescents. Pediatrics, 125(4), 756-767. https://doi.org/10.1542/peds.2009-2563

Fraser, E., & Enye, C. (2018). Effectiveness of digital platforms to reduce VAWG (VAWG Helpdesk Research Report No. 224). London, UK: VAWG Helpdesk. https://assets.publishing.service.gov.uk/media/5c768fe1ed915d355558eac7/VAWG_Helpdesk_Report_224_Effectiveness_of_Digital_Platforms-conf.pdf

Picard, R. G., & Pickard, V. (2017). Essential principles for contemporary media and communications policymaking. Reuters Institute for the Study of Journalism, University of Oxford. https://reutersinstitute.politics.ox.ac.uk/our-research/essential-principles-contemporary-media-and-communications-policymaking

Hong, N., & Bromwich, J. E. (2021). Asian-Americans are being attacked. Why are hate crime charges so rare? The New York Times. Retrieved 23 September 2022, from https://www.nytimes.com/2021/03/18/nyregion/asian-hate-crimes.html.

Tworek, H. (2021). The dangerous inconsistencies of digital platform policies. Centre for International Governance Innovation. Retrieved 23 September 2022, from https://www.cigionline.org/articles/dangerous-inconsistencies-digital-platform-policies/.

Hoge, C., Riviere, L., Wilk, J., Herrell, R., & Weathers, F. (2014). The prevalence of post-traumatic stress disorder (PTSD) in US combat soldiers: a head-to-head comparison of DSM-5 versus DSM-IV-TR symptom criteria with the PTSD checklist. The Lancet Psychiatry, 1(4), 269-277. https://doi.org/10.1016/s2215-0366(14)70235-4

Maercker, A., Cloitre, M., Bachem, R., Schlumpf, Y., Khoury, B., Hitchcock, C., & Bohus, M. (2022). Complex post-traumatic stress disorder. The Lancet, 400(10345), 60-72. https://doi.org/10.1016/s0140-6736(22)00821-2

Napoli, P. (2019). What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. Federal Communications Law Journal, 70(1), 57–104. http://www.fclj.org/wp-content/uploads/2018/04/70.1-Napoli.pdf

Quaglio, G., & Millar, S. (2020). Potentially negative effects of internet use. EPRS | European Parliamentary Research Service. Retrieved from https://www.europarl.europa.eu/RegData/etudes/IDAN/2020/641540/EPRS_IDA(2020)641540_EN.pdf