The internet is accessible anywhere, anytime. It is a place of seemingly infinite possibilities, where individuals can be whoever they please. It has afforded the world instant gratification, with communication and information moving at speed across geographically dispersed areas. By lowering barriers to communication and content sharing, the internet has created an interconnected global network and brought with it the rise of big tech and social media, which undoubtedly have a strong hold on the lives of many today.
Figure 1 ‘Online Marketing Secrets’ by Internet Marketing Secrets is licensed under CC BY-NC 2.0
The internet, however, has a dark side. Bullying and harassment, the circulation of violent content and hate, and the presence of pornography are rife, and they have created an unsafe environment for users exposed to such content. But who is responsible for this content? Who governs the internet? Early internet creators believed that the internet was a self-governing community and that governments had no need to be involved beyond a financial role. But what about the companies that own and operate social media platforms, a multibillion-dollar industry that may in fact profit from the circulation of this content? Should they be responsible for the content on their platforms?
Figure 2 F8 2016 Day One by Inky is licensed under CC BY-SA 2.0
The question remains: what is the solution?
Bullying and Harassment
What is bullying and harassment online? According to the US-based Pew Research Center, it includes any of the following:
- Name-calling
- Stalking
- Threats
- Sexual harassment
- Purposeful embarrassment
An article by Emily Vogels, published by the Pew Research Center, notes that stories of online harassment have been in the headlines of news publications for years. Beyond the severe cases of aggressive abuse, Pew says that a discourse of general hate, including name-calling, disdainful comments and belittling, has been normalised online (Vogels, 2021). In some extreme cases this has had devastating effects on individuals and has created a dangerous discourse on the internet.
Figure 3 Love Hate by Markus Grossalber is licensed under CC BY-ND 2.0
An article published by Forbes in 2022 noted that one of the worst cases of online harassment, bullying and cyberstalking was the anti-Amber Heard Twitter campaign, which began that year amid the highly publicised Johnny Depp vs Amber Heard defamation trial (Dellatto, 2022). Bot Sentinel, a non-partisan platform used to detect and track inauthentic accounts and internet trolls, released a report on how organised attacks like these thrive on Twitter, with trolls creating accounts dedicated to circulating violent hate towards Heard. The article notes that Heard and her female supporters were “attacked relentlessly through vulgar and threatening language”; in one case, a fake account used images of one supporter’s deceased children to attack her and her family members.
Figure 4 Black Mass by Gabbo T is licensed under CC BY-SA 2.0
But what did Twitter do to stop the spread of such malicious abuse towards Heard? Bot Sentinel noted that “It is our opinion Twitter didn’t do enough to mitigate the platform manipulation and did very little to stop the abuse and targeted harassment” (Bot Sentinel, 2022).
Twitter was undeniably responsible for this content being so widely circulated; censoring keywords such as Amber Heard’s name would have been an easy way to slow the ongoing hate. While the internet remains a largely ungovernable space, where is the line drawn, and when can platforms step in to stop widespread hate campaigns from taking over? In Heard’s case, the platform should have monitored hashtags and associated hate content far more closely, immediately removing accounts suspected of being trolls and warning others.
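As a purely illustrative aside, the sketch below shows what basic keyword and hashtag screening could look like in code. It is a minimal, hypothetical example: the blocked hashtags, the watched keyword, the report threshold and the screen_tweet function are all assumptions made for illustration, not a description of Twitter’s actual moderation systems.

```python
# Hypothetical sketch of keyword/hashtag screening; NOT Twitter's real moderation pipeline.
# The blocked hashtags, watched keyword and report threshold are illustrative assumptions.

BLOCKED_HASHTAGS = {"#examplehatetag1", "#examplehatetag2"}  # placeholder campaign hashtags
WATCHED_KEYWORDS = {"amber heard"}                           # mentions that trigger closer review
REPORT_THRESHOLD = 3                                         # prior harassment reports before review

def screen_tweet(text: str, author_report_count: int) -> str:
    """Return a moderation action for a tweet: 'hide', 'review' or 'allow'."""
    lowered = text.lower()
    hashtags = {word for word in lowered.split() if word.startswith("#")}

    # Hide tweets that use hashtags dedicated to a coordinated hate campaign.
    if hashtags & BLOCKED_HASHTAGS:
        return "hide"

    # Send mentions of a targeted person to human review when the author
    # has already been reported for harassment several times.
    if any(keyword in lowered for keyword in WATCHED_KEYWORDS) \
            and author_report_count >= REPORT_THRESHOLD:
        return "review"

    return "allow"

if __name__ == "__main__":
    print(screen_tweet("So sick of her lies #examplehatetag1", author_report_count=0))  # hide
    print(screen_tweet("Amber Heard should never work again", author_report_count=5))   # review
    print(screen_tweet("Watching the trial coverage tonight", author_report_count=0))   # allow
```

The point of the sketch is only that the levers discussed here, banning hashtags and flagging repeatedly reported accounts, are simple to express in code; the hard questions are about policy, scale and enforcement, not the filtering itself.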
Violent Content Circulation
The freedom and endless possibilities of the internet have allowed almost any content to be put online. A journal article by Maura Conway of Dublin City University examines the role of the internet in violent extremism and terrorism. Conway notes increasing concern that violent extremist content, which is so easy to access online, may have violent radicalising effects on internet users (Conway, 2016).
January 6, 2021 saw a violent attack on the Capitol in the United States, spurred on by tweets from Donald Trump inciting his supporters. Two days later, on January 8, Donald Trump was permanently banned from Twitter.
Figure 5 White House by Diego Cambiaso is licensed under CC BY-SA 2.0
There has always been a complicated relationship between internet freedom and politics, as the internet began as an ungovernable space where people were free to challenge authority and power. This is commonly referred to as “hacktivism”: the use of digital tools to carry out an attack, often politically motivated, which can be either ethical or unethical. In the extreme case of the January 6 riots, violence was incited by the former president for political ends. The attack led to more than 140 police officers being assaulted by rioters; one rioter died of a series of strokes, and two others died by suicide (Duignan, 2022). There is no question that Trump was responsible for the attack; however, Twitter is also at fault for not acting sooner on the provocative tweets. Trump’s permanent ban from Twitter two days after the attack may have prevented further violence, but had the platform moved faster, some of the harm might have been avoided altogether.
Pornographic Content and Illicit Images
The accessibility of the internet gives young people much the same access to web pages as adults. A journal article by Michele Ybarra and Kimberly Mitchell estimates that up to 90% of youth aged 12 to 18 have access to the internet, and growing concern has been raised about the accessibility of pornographic content, as it can have “potentially serious ramifications on child and adolescent sexual development” (Ybarra & Mitchell, 2005). Academics fear that children exposed to pornographic pop-ups and advertising may be more likely to intentionally seek out this content, and young people who do so are, cross-sectionally, more likely to report delinquent behaviour and substance use in the previous year.
Figure 6 Student on an iPad at School by Flickingerbrad is licensed under CC BY 2.0
Who is responsible for this content on the internet? Is there a way to limit young people’s exposure to inappropriate advertisements and pop-ups? Can the owners of pornography sites strengthen security to apply age restrictions to their content?
Governments could step in by restricting sites from advertising through browser pop-ups and, similarly, by enforcing age-restriction rules and online verification systems that block people under 18 from accessing content. However, these sites are international and difficult to monitor, given that content is produced by individuals and then uploaded.
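As a rough illustration only, the sketch below shows one way an age gate could be implemented on the server side. The 18-year threshold, the may_view_adult_content function and the assumption that a verified date of birth is even available are all hypothetical; real age verification is considerably harder, precisely because self-reported birthdates are easy to falsify.

```python
# Hypothetical server-side age gate; not a description of any real site's verification system.
from datetime import date
from typing import Optional

MINIMUM_AGE = 18  # assumed minimum age for adult content

def full_years_between(born: date, today: date) -> int:
    """Number of full years elapsed between a date of birth and today."""
    years = today.year - born.year
    # Subtract one if this year's birthday has not happened yet.
    if (today.month, today.day) < (born.month, born.day):
        years -= 1
    return years

def may_view_adult_content(date_of_birth: date, today: Optional[date] = None) -> bool:
    """Return True only if the (verified) date of birth meets the minimum age."""
    today = today or date.today()
    return full_years_between(date_of_birth, today) >= MINIMUM_AGE

if __name__ == "__main__":
    print(may_view_adult_content(date(2010, 5, 1), today=date(2022, 10, 1)))  # False: 12 years old
    print(may_view_adult_content(date(1990, 5, 1), today=date(2022, 10, 1)))  # True: 32 years old
```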
The internet is inescapable, responsible for breaking down geographical barriers to communication and information once thought impossible to overcome. The rise of social media in recent years has broken down the user/producer barrier: content, comments, likes and shares are open to anyone. This has given way to a very dark side of the internet, where bullying and harassment, violence and pornography exist and harm individuals and societies as a whole, creating frightening discourses of violence and hate online. But who is responsible for this dark side? Is it up to social media companies to better monitor users, block troll accounts and ban certain words from appearing on their sites, or is it up to individuals to monitor each other’s behaviour? Alternatively, should the government turn to monitoring us, going against the early internet creators’ idea that the internet should be a place free from governmental control?
References
Bot Sentinel Inc. (2022). Targeted Trolling and Trend Manipulation: How Organised Attacks on Amber Heard and Other Women Thrive on Twitter. https://botsentinel.com/reports/documents/amber-heard/report-07-18-2022.pdf
Carroll, L. (2018). Kids’ apps may have a lot more ads than you think. Reuters Health. https://www.reuters.com/article/us-health-kids-apps-ads-idUSKCN1N42BZ
Conway, M. (2016). Determining the Role of the Internet in Violent Extremism and Terrorism: Six Suggestions for Progressing Research. Studies in Conflict & Terrorism. https://doi.org/10.1080/1057610X.2016.1157408
Cusumano, M., Gawer, A., & Yoffie, D. (2021). Social Media Companies Should Self-Regulate. Now. Harvard Business Review. https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
Dellatto, M. (2022, July 18). Anti-Amber Heard Twitter Campaign One of ‘Worst Cases of Cyberbullying,’ Report Says. Forbes. https://www.forbes.com/sites/marisadellatto/2022/07/18/anti-amber-heard-twitter-campaign-one-of-worst-cases-of-cyberbullying-report-says/?sh=2f4355f47d64
Duignan, B. (2022). United States Capitol attack of 2021. Encyclopaedia Britannica.
Vogels, E. (2021). The State of Online Harassment. Pew Research Center.
Ybarra, M., & Mitchell, K. (2005). Exposure to Internet Pornography among Children and Adolescents: A National Survey. CyberPsychology & Behavior, 8(5), 473-486. https://doi.org/10.1089/cpb.2005.8.473