Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

"Automotive Social Media Marketing" by socialautomotive is licensed under CC BY 2.0.

Social Media and the Circulation of Problematic Content

“Social Media Mix 3D Icons” by Blogtrepreneur is licensed under CC BY 2.0.

Introduction

Humans currently rely heavily on digital technologies such as social media. Although social media use varies substantially with a person’s socio-cultural background and demographic profile, vast numbers of people now converse online, sharing audio-visual material and ideas. While these digital platforms give users a relatively safe and anonymous space to air their opinions and concerns in public, they also provide a powerful breeding ground for problematic content and for violent actions and ideologies, including bullying, harassment, hate, violence and pornography. Because such platforms are interactive and support enormous user bases, the problem has grown steadily more complex. Addressing it requires a strong multi-stakeholder framework, comprising individual users, internet service providers (ISPs), social media companies and government agencies, to design and enforce policies and practices that ensure the content available to users is moderated.

 

The Necessity of a Multi-Stakeholder Framework

Internet regulation is necessary to prevent the circulation of problematic content across digital platforms. Beneath the rising popularity and usage of social media platforms lies a ticking time bomb, driven primarily by the spread of problematic content and practices such as bullying, pornography, hate and harassment (Dalomba, 2022). The internet, embedded with a neoliberal ideology, was originally promised and designed to be free, and it was on this basis that the business model of social platforms was developed. As social media has grown, however, that promise has become a source of confusion and contention. The internet is both an opportunity to perform racial identity (Nakamura, 2002) and a forum that reproduces power relations and hierarchies (Daniels, 2009). Similarly, social media platforms, as the current mediators of most online sociability and creativity (van Dijck, 2013), are tools for both prosocial and antisocial uses. Noble (2018) noted how the interplay between humans and technology is continuously transforming society. Matamoros-Fernández and Farkas (2021) observed that social media firms play a key role in fueling the phenomenon, and that increasing attention has been directed to the role of companies like Facebook in mediating and exacerbating old and new forms of abuse, discrimination and hate, even as they present themselves as democratizing agents. Daniels (2013) found that race and racism operate in disturbing and fascinating ways on social media platforms. Other forms of problematic content, such as bullying, violence and pornography, are equally rife on the internet; one study found that practically all media texts contain visual violence (Yanıkkaya, 2009). Unfortunately, any user can fall victim to cyberbullying, so preventing such cycles is essential.

“Social media platforms” by Mark Kens is licensed under CC BY 2.0.

Preventing the circulation of such problematic content and cyberbullying will require individual and institutional remedies designed to moderate content. Social media violence is multi-faceted and operates independently of individuals and governments (Baban, 2012). Consequently, preventing the circulation of problematic content requires the cooperation of multiple stakeholders, including users, authorities, internet service providers (ISPs), lawmakers and social media companies. Social media companies, for example, need designs that make it easier to identify and flag such content, while lawmakers can encourage the design and adoption of strong and effective anti-cybercrime laws. The adoption of the Council of Europe’s revised Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, for instance, could help address the issue. It would also be essential for social media networks to revise their internal regulations and policies to protect users’ rights to access information and to free speech. Such a strategy would promote multiple perspectives and a wide range of information, prevent illegal content and material from circulating via user accounts, and give users greater ability to counter misinformation. It would also help companies and authorities police and moderate content for the greater good.

 

The Effectiveness Question

However, the complex nature of social media platforms threatens the viability of any measure intended to curb the circulation of such problematic content. The platforms can rapidly disseminate material about sudden, violent events such as conflicts, wars and epidemics, as well as pornographic and other harmful content. Such content typically spreads extremely quickly once introduced: the moment hate content is posted on Facebook, for instance, it is available for access worldwide. Postman (1994) dubbed this the Frankenstein Syndrome. It is very hard to control or oversee the spread of content and material on social media platforms; any post is almost instantly available to millions of people across the globe. Furthermore, the anonymity provided by internet platforms makes it very challenging for authorities to identify and apprehend offenders (UNODC, 2022). This gives any social media user an opportunity to disseminate illicit content across a wide area, and the damage is done even if such content is later censored, because it will already have reached its intended audience.

In addition, internet regulation and policing would strike at social media companies’ business models, to their disadvantage. Companies like Facebook rely on the data they collect about users and their social networks for practically limitless advertising opportunities and business practices. Regulating the internet thus poses a stark challenge to these business models, and no regulatory regime will ever be entirely effective. Despite attempts by social media companies to police user content, the problem persists in a variety of ways. Studies have shown that platforms like Facebook, YouTube, Twitter and Reddit have emboldened ‘platformed racism’ (Ganesh, 2021; Matamoros-Fernández, 2017) and toxic technocultures such as anti-feminist activism (Massanari, 2017) through their cultures: their technical affordances, design, policies and business models. Matamoros-Fernández (2017), for instance, has shown that the mediation of the booing of the Australian Football League Indigenous star Adam Goodes on social media featured elements of platformed racism. User practices of disguising and amplifying racist humor and abuse become entangled with platform features and algorithms that circulate overt and covert hate speech, while the distributed nature of the platforms’ editorial practices obscures the scope and type of such abuse. The current neoliberal business model of most social media platforms would allow the circulation of practically all kinds of content.

 

Content Moderation Is Mutually Beneficial

However, internet regulation and policing offer more benefits than problems to all stakeholders, despite the associated challenges. For users, moderation would ensure a healthy space in which everyone can participate and engage with others without fear of victimization. Given the significant influence that social media platforms have over the public, their regulation serves the public good. Platforms should also recognize that however much they want to support and expand the ceaseless flow of content, they remain indebted to the users who hold power over them, and should be wary of platform activity turning criminal (Gillespie, 2018). Consequently, platform and content moderation, especially when it balances the ‘social contract’ of public-interest obligations against democratic rights to freedom of expression and free speech, would be mutually beneficial to all (Flew, Martin, & Suzor, 2019; Gillespie, 2018). Social media platforms have increasingly taken on the responsibility of curating content and policing the activity of their users: not simply to meet legal requirements, or to avoid having additional policies imposed, but also to avoid losing offended or harassed users, to placate advertisers eager to associate their brands with a healthy online community, to protect their corporate image, and to honour their own personal and institutional ethics. Google and YouTube, for instance, have invested heavily in content moderation to keep their services free of violent extremist content. These tech giants, alongside Twitter, Facebook and TikTok, either employ or outsource thousands of content moderators, even though they have found it challenging to distinguish between content that should be moderated and what is protected as free speech. Users, too, must moderate their own use of social media sites and content, as this can substantially benefit their health. Moderation would ensure that only appropriate content circulates on the platforms, sustaining both the business and users’ wellbeing.

“Web 3.0 – Cloud Computing Platforms as a Service” by andrewpeterfogg is licensed under CC BY-SA 2.0.

 

Conclusion

The increased circulation of problematic content via digital platforms is a concern that threatens the health and wellbeing of all stakeholders, and necessary steps should be taken to address it. Social media platforms are multi-faceted, however, and the circulation of problematic content is a complex issue that no single measure or actor can resolve on its own. Addressing it will require a multi-stakeholder framework of individual and institutional remedies dedicated to moderating social media content and use. Platforms must, in some form or another, moderate: both to protect one user from another, or one group from its antagonists, and to remove the offensive, vile, or illegal, as well as to present their best face to current users, to their advertisers and partners, and to the public at large.

 

 

 

References

Baban, E. (2012). The influence of social media from the perspectives of McLuhan and Baudrillard: Captivity of expression, observed society, lost identity syndrome. In T. Kara & E. Özgen (Eds.), Sosyal Medya [Social Media] (pp. 57-79).

Dalomba, F. (2022, March 1). Pros and cons of social media. Lifespan.org. Retrieved October 14, 2022, from https://www.lifespan.org/lifespan-living/social-media-good-bad-and-ugly

Daniels, J. (2013). Race and racism in Internet studies: A review and critique. New Media & Society, 15(5), 695-719.

Daniels, J. (2009). Rethinking cyberfeminism(s): Race, gender, and embodiment. Women’s Studies Quarterly, 37(1/2), 101–124. http://www.jstor.org/stable/27655141

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy.

Gillespie, T. (2018). Chapter 1: All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven: Yale University Press. https://doi.org/10.12987/9780300235029-001

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6).

Matamoros-Fernández, A., & Farkas, J. (2021). Racism, hate speech, and social media: A systematic review and critique. Television & New Media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230

Nakamura, L. (2002). Cybertypes: Race, ethnicity, and identity on the Internet. New York: Routledge.

Noble, S. U. (2018). Algorithms of oppression. New York University Press.

Postman, N. (1994). The Disappearance of Childhood. New York, NY: Random House.

UNODC. (2022). Obstacles to cybercrime investigations. Retrieved October 14, 2022, from https://www.unodc.org/e4j/en/cybercrime/module-5/key-issues/obstacles-to-cybercrime-investigations.html

Van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford University Press.

Yanıkkaya, B. (2009). The representation of daily life: The fear of the others, visual violence and the media. In B. Çoban (Ed.), Media, nationalism, violence.