Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

Humans on phones

The internet has evolved over time into a set of digital technologies shaped by cultural, societal, and political perspectives. It influences every individual who comes into contact with it, and those individuals in turn shape the internet through their interactions. The internet can be viewed as a loop of communication, propelling information in every direction, actively fuelled by the people who use it; however, mass participation brings problematic chaos, which stands as a dominant feature of the internet's consuming nature. Because a diverse public sphere engages with the internet, differing perspectives arise from varying cultural norms, ultimately generating complexities in the human experience. Divergent ideologies can create tensions on digital platforms: these complexities bring together diverse opinions, religions, and politics, and often produce online interactions that are misinterpreted, inappropriate, or unwarranted due to opposing views.

Bullying, harassment, violent content, hate, pornography, and other problematic content circulate on digital platforms because of the internet's accessibility. These concerns are fuelled by individuals' perspectives and experiences; such issues have undoubtedly been present throughout human history and will most likely never be resolved. So who should be responsible for the spread of harmful or unwarranted information on the internet?

The introduction of 'Web 2.0' has seen the convergence of individuals and the internet, creating greater freedom and usability. The current web allows individuals to use the internet freely and participate on digital social media. However, as stated before, the public sphere is made up of people whose opposing views and experiences fuel their individual perspectives, which leads to bullying, harassment, and hate directed at those who do not share the same ideologies. The internet intensifies these issues because people behind the screen can voice their opinions anonymously and directly. 'Mob mentality' within cancel culture has derived from this online harassment and hate towards others. The issue is self-regulated: individuals post offensive content without administration or any consequence for their offensive behaviour, producing an inefficient 'mob mentality'. As Clark argues, cancel culture has become 'a last-ditch appeal for justice' that responds to prejudice in digital spaces, yet the online harassment it generates is 'limited both in scope and effectiveness by factors of structural power, time and access to resources' (Clark 2020: 88).

Such resources should be provided by the technical platform owners, with policy arrangements set in place to limit offensive behaviours while avoiding offensive rebuttal through 'mob mentality'. These 'platform responsibilities might differ from those of traditional publishers' (Picard and Pickard 2017: 6) but tend to give 'internet companies considerable freedom to self-regulate', as the internet's algorithmic processes do not tend to account for the reality of toxic mentalities and the creative destruction they cause on digital platforms.

The use of algorithms has been viewed negatively in the past, for example during the 'techlash', as internet companies harvest personal data in order to create and share relatable, engaging content with their users. In Facebook's case:

“The details of the curation algorithm are unknown to users and undergo frequent changes. What is known is that the algorithm favours information that fits the user’s profile of preferences and withholds information that does not. The resulting filter bubble presents largely consistent information that reinforces the user’s worldview and presents few challenges, leaving users confident that their own views are correct and the views of others at best mistaken and at worst malevolent.”

(The Psychology of Fake News 2020)

This algorithm inherently shelters individuals' perspectives and feeds them like-minded opinions in order to prevent disappointment with the app's interactions. This type of self-regulation, seen commonly within apps, maps onto the sender/message/receiver model of internet communication. The algorithms shaped by internet companies' self-regulation create a loop of communication that propels the platform's participation rates: the more information users put in, the more like-minded information they receive; the more they receive, the more they interact; the more they interact, the more information the algorithms collect; and the more is collected, the more tailored information users receive, and so on.
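The feedback loop described above can be sketched as a toy simulation. This is a purely illustrative model, not any platform's actual algorithm: the topic numbers, the engagement weights, and the `recommend` helper are all invented assumptions. It simply shows how recommending in proportion to past engagement narrows, round by round, the share of content a user sees from outside their preferred worldview.

```python
import random

# Toy model of the engagement feedback loop (illustrative assumptions only).
# Item "topics" are 0..4; the user privately prefers topic 0.
# The platform recommends topics in proportion to past engagement,
# so every click makes future recommendations more like past clicks.

random.seed(42)

PREFERRED_TOPIC = 0                     # the user's actual worldview (hypothetical)
engagement = {t: 1 for t in range(5)}   # platform's running profile of the user

def recommend():
    """Pick a topic weighted by the engagement profile (the 'algorithm')."""
    topics = list(engagement)
    weights = [engagement[t] for t in topics]
    return random.choices(topics, weights=weights)[0]

def diversity():
    """Share of the profile devoted to topics outside the preferred one."""
    total = sum(engagement.values())
    return 1 - engagement[PREFERRED_TOPIC] / total

before = diversity()
for _ in range(500):
    topic = recommend()
    if topic == PREFERRED_TOPIC:        # user engages only with agreeable content...
        engagement[topic] += 1          # ...which the profile then amplifies

after = diversity()
print(f"share of non-preferred topics: {before:.2f} -> {after:.2f}")
```

After a few hundred rounds the share of non-preferred topics collapses: the filter bubble is not programmed in explicitly, it emerges from the loop itself.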

Visual Representation of sender/message/receiver

This never-ending loop of information transactions is able to occur because of company policy regulation of algorithms. However, when individuals become aware of these algorithms, they protest the misuse of their information, creating self-regulation among users. Whether content is harmful or hateful, it remains easily accessible, and company measures such as trigger warnings and the banning of content, pages, or creators are only a small step towards preventing unwarranted content, because humans are complex and hold different values and opinions. How, then, do we reboot social media?

“My Ted Talk: How do we reboot social media?”

The internet “is the first modern communications medium that expands its reach by decentralising the capital structure of production and distribution of information, culture, and knowledge” (Benkler 2006). So who should be in charge of regulating the internet with respect to the issues at hand?

I do not believe the government has a place in regulating the individual by actively limiting and restricting internet policies. Applying "strict adherence to jurisdictional boundaries in the application" of such policies amounts to a political model that shapes the economic and political drive of digital cultures through heavy monitoring and surveillance. Rather than a 'mob mentality' online, such a regime could provoke real mobs in response to lost freedom and accessibility. Yet while freedom and accessibility are good, they come with opposing views that lead to hate, bullying, and harassment, as stated before, as well as fake news, as individuals refuse to conform to what others believe is the right way to act or think.

Co-regulation between companies and citizens can be viewed as the answer. Without individuals, social digital platforms would not exist; and without the platforms, the usability that attracts users would not exist, causing the platforms to become extinct. The never-ending loop of shared information fuels the internet and digital platforms alike, so creating a contract for the web creates positive co-regulation. Including individuals provides a variety of voices and opinions reflecting different world views. Halting algorithms' use of private data could widen users' experience on certain apps, but it could also cause unease if users are faced with content they do not deem relatable and hence find offensive, which can then lead to offensive online attitudes and behaviour. By self-regulating their data, individuals can pick and choose what they like or dislike, helped by self-regulation from internet companies, which can limit hate through trigger warnings, the blocking of accounts, and refusing to allow hateful voices to swarm their sites. Freedom of speech should be allowed, but harm to others should not; and where individuals lack self-control, the owner of the internet company should regulate them, with policies to remove harmful content. In conclusion, the best way to help prevent online harassment, bullying, and hate is to co-regulate digital platforms.

References:

Hall, S., Hobson, D., Lowe, A., & Willis, P. (Eds.). (1980). Culture, Media, Language: Working Papers in Cultural Studies, 1972–79. Routledge. https://doi.org/10.4324/9780203381182

The Psychology of Fake News: Accepting, Sharing, and Correcting Misinformation. (2020). United Kingdom: Taylor & Francis.

Eldridge II, S., & Franklin, B. (Eds.). (2018). The Routledge Handbook of Developments in Digital Journalism Studies (1st ed.). Boca Raton, FL: Routledge.

Clark, M. (2020). 'Drag Them: A brief etymology of so-called "cancel culture"'. Communication and the Public, 5(3–4), 88–92.

Mueller, M. (2017). Will the Internet Fragment? Sovereignty, Globalization and Cyberspace. Oxford: Polity Press. Accessed September 9, 2022. ProQuest Ebook Central.


Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press. https://doi.org/10.12987/9780300127232

Spider-Man image: https://www.news18.com/news/buzz/did-spider-man-no-way-home-recreate-the-spideys-pointing-at-each-other-meme-4581383.html

Contract for the Web website: https://contractfortheweb.org/principles/principle-8-build-strong-communities-that-respect-civil-discourse-and-human-dignity/