Bullying, harassment, violent content, hate speech, pornography, and other problematic material circulate on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Introduction

Ongoing technological advancement has promised a digital society in which individuals form online communities. Although digital platforms enable fast social interaction, they have not remained untouched by negative aspects such as bullying and harassment. For this reason, the major digital platforms, Facebook, Amazon, Apple, Netflix, and Google, are facing a heated “techlash” from the public.

Social Media Giants

Digital platforms such as Google and Amazon have increased economic activity and social participation in the digital domain, and the public engages heavily with Facebook, Google, Apple, Amazon, and Netflix (Xu, 2021). Reports suggest that much of the public is angry with these platforms because of the lack of control and regulation. The growth of social problems online demands a quick response from contemporary digital platforms, which they have so far failed to provide. Unlike conventional media organizations, which edit and regulate the content they publish, digital platforms have evolved primarily as content circulators.

Major Issues with Digital Platforms

Because digital platforms are not guided by clear policies and strategies, the primary concern is the lack of regulation. This absence of regulation is one of the root causes of the other problems concerning digital platforms. The main issues with the circulation of content on digital platforms are:

1. Circulation of fake news

2. Online abuse, bullying, harassment, and prejudice

3. Privacy breaches and dataveillance

Circulation of Fake News

One of the primary reasons for public discontent is the widespread circulation of fake news. Fake news creates confusion and misinformation among the social groups that use digital platforms, and such misinformation can lead to serious social problems. This suggests that digital platforms are not sufficiently concerned with the sources of information. Conventional media, by contrast, rely on verified sources and provide credentials to justify the information they publish. The circulation of misinformation undermines the validation of sources, and fake news has therefore developed into a social issue in its own right. Whereas conventional media produce verified information, digital platforms give a voice to unprofessional journalists who create fake news, ignore the ethical codes of journalism, and focus on profit. On platforms such as Facebook and Twitter, false claims can be used to attack particular genders, races, and communities, and the same platforms also facilitate bullying and abuse.

Online Abuse, Harassment, Bullying, and Hate

Digital platforms have created space and opportunity for marginalized communities to express their demands and needs and to receive support. However, the same platforms also enable the activities of extremists, which spreads harmful content such as hate across the digital field. Research shows that racism spreads on digital platforms in disturbing ways (Matamoros-Fernández & Farkas, 2021), and the platforms make it easy for racists to amplify and target their racist content and statements. Because it is so easy to spread any content through digital media, every platform becomes a site of political and sociocultural conflict. For instance, “cultures stigmatizing females and other minority communities have been increasing on Reddit” (Xu, 2021). Likewise, bullying has become pervasive on digital platforms.

Similarly, sexual harassment is increasing on Twitter. Bullying and harassment are the most prominent forms of harmful content growing on digital platforms, while racism and prejudice have increased in subtler ways. Reports suggest that platform policies have allowed racism against marginalized communities to spread through digital weapons such as memes. In this context, the actor Sacha Baron Cohen criticized digital platforms as “the greatest propaganda machine in history” (Xu, 2021). He also argued that these platforms are designed to engage the public with harmful content by circulating it and triggering hatred.

Privacy Breaches and Dataveillance

The core business model of digital platforms is to use the data users generate while interacting on them to enable engagement with advertising companies. The technology behind these platforms collects and aggregates information instantaneously; algorithms then build preference profiles that are sold to companies for commercial promotion. Privacy breaches and dataveillance are therefore significant concerns, because data sold for commercial purposes can easily be misused. Every act of users is monitored, and their interactions within their regions and communities are tracked. Based on this monitoring, the platforms can infer individuals’ interests, and even their religious beliefs and political orientations. Dataveillance took a shocking turn when Christopher Wylie disclosed that data had been collected and analysed on Facebook without consent and later sold to third parties. This confirms that digital platforms are not only content circulators; they also have the power to manipulate individual interests and political processes.
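To make this mechanism concrete, the sketch below shows, in a simplified and entirely hypothetical form, how interaction logs could be aggregated into the kind of interest profile that is sold to advertisers. The event structure, topic labels, and engagement weights are illustrative assumptions, not a description of any real platform’s system.

```python
from collections import Counter

# Hypothetical interaction log: each event records what a user engaged with
# and a topic label assigned to that piece of content.
events = [
    {"user": "u1", "action": "like",  "topic": "fitness"},
    {"user": "u1", "action": "share", "topic": "politics"},
    {"user": "u1", "action": "view",  "topic": "politics"},
    {"user": "u1", "action": "like",  "topic": "travel"},
]

# Assumed weights: stronger engagement counts more toward the inferred interest.
WEIGHTS = {"view": 1, "like": 3, "share": 5}

def build_profile(events, user):
    """Aggregate a user's weighted engagement into a topic-interest profile."""
    profile = Counter()
    for event in events:
        if event["user"] == user:
            profile[event["topic"]] += WEIGHTS.get(event["action"], 0)
    return profile

# The resulting ranking of inferred interests is the kind of profile that
# could be packaged and sold for targeted advertising.
print(build_profile(events, "u1").most_common())  # [('politics', 6), ('fitness', 3), ('travel', 3)]
```

Even this toy version shows how quickly routine interactions reveal interests a user never stated explicitly, which is exactly why dataveillance raises privacy concerns.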

Responsibility of Digital Platforms

To control these issues, stakeholders must take action immediately. Responsibility for the content on digital platforms can be addressed through:

  1. Government Regulation
  2. Regulation through civil organizations
  3. Self-regulation

Government Regulation

Concerns such as bullying, harassment, and other negative content on digital platforms stem primarily from the lack of regulation of the stakeholders who run them. Government plays a central role in regulating such content. Its primary role is to set standards for policies and content distribution (Xu, 2021), and it can also set standards for the commercial activities performed on the platforms. “Online debates are caused due to extreme polarization among users” (Cinelli et al., 2021), and regulation can help temper this. Several countries have adopted regulatory policies suited to their socio-cultural and political environments. For instance, the European Union adopted the GDPR (General Data Protection Regulation), which supports individuals’ rights to protect their private data from breaches (Xu, 2021). Likewise, some South Asian jurisdictions apply strict liability, under which content promoted on digital platforms is subject to legal action and penalties can be imposed for inappropriate content (Xu, 2021). Under such regimes, digital platforms must take legal responsibility for the content published on them.

Similarly, the European Commission has adopted ‘the Code of Conduct for Countering Illegal Hate Speech’ to filter hate content spreading on digital platforms. However, government policies can limit such content only to an extent, because digital platforms are used globally while national rules stop at national borders.

Regulation through Civil Organizations

Regulating content on digital platforms requires not only national government policies but also global regulatory frameworks. For global regulation, civil organizations can be established to resolve conflicts between nations over how digital platforms should be governed. For instance, the Christchurch Call targets harmful content such as hate and extremism through a voluntary, consensus-based commitment, with a focus on eliminating terrorist and violent extremist content online (Ministry for Europe and Foreign Affairs, 2019). Such policies should aim to eliminate toxic content and establish transparency in how content is shared on digital platforms. Likewise, various nations have proposed that the International Telecommunication Union develop a global-scale framework for coordinating regulatory policy.

Self-regulation

Governmental policies and civil organizations alone cannot address the spread of harmful content on digital platforms; individuals and the platforms themselves must also take responsibility through self-regulation. Platforms such as YouTube, Google, and Facebook have deployed algorithms to flag fake news, but reports suggest that these algorithms cannot reliably detect and prevent harmful content. Self-regulation in what users create and share is therefore essential, and everyone should be aware of the consequences of online bullying, hate, fake news, and other harmful content on these platforms.
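To illustrate why automated detection falls short, the following is a deliberately naive, hypothetical keyword filter of the kind a platform might run as a first pass; the blocklist and example posts are invented for illustration. It flags a post that condemns abuse while missing a coded insult, which is the basic reason purely algorithmic moderation cannot replace human judgement and self-regulation.

```python
# A deliberately naive keyword filter, invented purely for illustration.
# Real platform classifiers are far more sophisticated, but they face the
# same underlying problem: harmfulness depends on context, not just words.
BLOCKLIST = {"idiot", "loser", "trash"}

def is_harmful(post: str) -> bool:
    """Flag a post if it contains any blocklisted word."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

# False positive: a post condemning abuse is flagged because it quotes it.
print(is_harmful('Saying "you are trash" to a teammate is bullying.'))  # True

# False negative: a sarcastic, coded insult passes untouched.
print(is_harmful("People like you really add so much value here..."))   # False
```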

Conclusion

In sum, digital platforms have rapidly amplified social issues such as hate, abuse, bullying, and harassment. Stakeholders, including individuals, governments, and civil organizations, must therefore act together to prevent harmful content from spreading. Governmental regulation, organizational approaches, and the platforms’ own policies are all essential to monitor and control content, and it is equally necessary to regulate access to the data that compromises people’s privacy (Hajli et al., 2020).

This work is licensed under a Creative Commons Attribution 4.0 International License

References

Ashworth, L., & Free, C. (2006). Marketing Dataveillance and Digital Privacy: Using Theories of Justice to Understand Consumers’ Online Privacy Concerns. Journal of Business Ethics, 67(2), 107–123. https://doi.org/10.1007/s10551-006-9007-7

Christchurch Call. (n.d.). Christchurch Call to eliminate terrorist and violent extremist content online. Retrieved September 28, 2022, from https://www.christchurchcall.com/

Cinelli, M., Pelicon, A., Mozetič, I., Quattrociocchi, W., Novak, P. K., & Zollo, F. (2021). Dynamics of online hate and misinformation. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-01487-w

Hajli, N., Shirazi, F., Tajvidi, M., & Huda, N. (2020). Towards an Understanding of Privacy Management Architecture in Big Data: An Experimental Research. British Journal of Management, 32(2), 548–565. https://doi.org/10.1111/1467-8551.12427

Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media, 22(2). https://doi.org/10.1177/1527476420982230

Ministry for Europe and Foreign Affairs. (2019, May 15). Christchurch Call to eliminate terrorist and violent extremist content online. Retrieved September 28, 2022, from https://www.diplomatie.gouv.fr/en/french-foreign-policy/digital-diplomacy/news/article/christchurch-call-to-eliminate-terrorist-and-violent-extremist-content-online

The Guardian. (2019, November 24). Read Sacha Baron Cohen’s scathing attack on Facebook in full: “greatest propaganda machine in history.” Retrieved September 28, 2022, from https://www.theguardian.com/technology/2019/nov/22/sacha-baron-cohen-facebook-propaganda

Xu. (2021, October 15). What’s ‘techlash’ and how can we handle this social dilemma? 2021 ARIN2610. Retrieved September 28, 2022, from https://2021.arin2610.net.au/2021/10/15/whats-techlash-and-how-can-we-handle-this-social-dilemma/