Cyberbullying and negative content online have run rampant across social media and internet platforms since their creation. As technology advanced, people found comfort behind their screens, which operate as a social shield against judgement and fear. Social media companies work to keep violent content and hate from spreading across their respective platforms, but this becomes difficult when there are billions of users to monitor. The blame for digital harassment, however, does not fall entirely on users to act more positively when major tech companies could be devoting more attention to software that does not allow for violence online, particularly when more of the focus behind our screens seems to go toward the collecting, trading, and selling of our personal information.
The internet opened doors to all kinds of harassment and completely changed how individuals communicate. How we interact with politics and public affairs, for example, has been entirely redesigned, and every person with a smartphone now has a public platform to speak their mind. More recently, with the rise of TikTok, it has never been easier to catch a viral wave, and this has allowed many creators to share their voices without needing a large following. That same freedom, however, also makes it easier to spread and project violent content and hate. In a world where technology advances faster than social and political equality, it is the responsibility of tech companies, and of the people who enable them, to put an end to the production of cyber harassment.
How do we decipher freedom of expression?
Negativity online is especially difficult to moderate in America because the First Amendment protects an individual’s right to freedom of speech from government intervention. Because the amendment restrains only the government, however, an internet company or platform remains free to regulate hate speech through the “terms of service” it presents at sign-up. Although users most often do not read the terms and conditions, by accepting them they agree to abide by the platform’s regulations against harassment and other forms of violent content. The responsibility does not rest entirely in the hands of internet companies, but they do play a major role if they are not actively working to stop the spread of online hate.
In 2006, a prominent court case between Yahoo! Inc. and La Ligue Contre Le Racisme et L’Antisemitisme centered on the internet provider’s allowing the sale of Nazi merchandise on its virtual storefront. Although the selling was carried out by a user in the United States, the French court deemed Yahoo! responsible for the acts of indirect violence (Banks, 2010). User-based policies and built-in “anti-hate” mechanisms within software are widely debated, as several social media platforms have come under fire for accidentally shadow-banning human rights and other activism content. Popular networks like Instagram, Snapchat, and TikTok have clear measures and consequences for violations of community guidelines, and these regulations have strengthened over time as the apps have grown.
The decentralized nature of these platforms allows strings of hate groups to form and thrive without limitation. In 2012, Google drew extreme controversy when a film that blatantly mocked the Muslim faith was blocked from YouTube in only seven countries. Meanwhile, the film caused a major disruption in Pakistan that resulted in rioting and the killing of 19 innocent people (Ring, 2013). On account of Google’s negligence in thoroughly assessing the film’s potential effect, the company is undoubtedly to be labeled responsible for this global impact. Google did not deem the film a blatant violation of its regulations because it did not contain any direct threats to the Muslim community. Instances like these reveal the corruption of, and the copious amounts of power given to, these internet social spheres to regulate damaging content. Because government interference is barred and there is a complete lack of federal practice regarding hate on digital platforms, the moderation of virtual harassment falls in the hands of these companies and their predetermined terms, conditions, and values.
How can ISPs truly regulate cyber harassment?
The role of Internet Service Providers is paramount in reducing the hate circulating online, particularly because of their ability to remove content from the inside. For example, when Instagram asks new users to accept its terms and conditions, that agreement grants it the power to remove posts, or in some cases an entire account, at any time if the content violates the agreed-upon limitations. In America, however, these providers are not necessarily liable for all problematic media on their platforms if it falls within the realm of freedom of speech rather than direct harassment. This unfortunately results in many popular Internet Service Providers’ attention to self-regulation being more performative than truly productive (Banks, 2010). The argument over who is to blame for the clouds of negativity online is an endless loop skating the fine line between free speech and the legitimate spewing of violence. Ultimately, it is rooted in the innate human desire to hate and put down others, a desire that immense technological advancement has only fueled.
Cyberbullying was scarcely an issue a generation ago, and those who grew up as part of Gen Z were the first to experience the harmful effects of childish bullying online. Bullying operated differently for Gen Z youth than for any generation prior, as a victim has no ability to immediately notice a perpetrator’s weakness. The screen provides a barrier that can heighten the extremity of a bully’s ill-minded actions; it is a safety net from face-to-face interaction that makes bullying require less courage (Turner, 2015). At the time, it was difficult to monitor the effects of cyberbullying, as the platforms it occurred on were still relatively new. Additionally, the 2010s popularized platforms like Ask.fm, which focused on anonymous questions and opinion forums and were often linked to other forms of social media. These kinds of applications allowed cyberbullying to thrive in the early digital age because of their guaranteed anonymity; Gen Z youth were the first to experience this kind of targeted hate without recourse.
An example of action was taken by the German Parliament in June 2017, when it passed the Network Enforcement Act, legally requiring all major social media platforms to delete illicit content. The act states that all providers with over 2 million registered users are given 24 hours to remove flagged content deemed illegal by the German Criminal Code. Its implementation is an evident leap toward less cruelty and hate online. Furthermore, the act’s effectiveness lies in its rigorous consequences for providers who do not abide: social networks are liable to fines of up to €50 million if they disregard or fail to comply with the management of their users’ disruptive content (Alkiviadou, 2019).
The primary issue when trying to regulate the internet is that it is nearly impossible to wrangle such a vast network of endless interconnecting webs. Content may be considered hate in one country but fly below the radar in another, so the solution platforms exercise is to block or remove posts in certain areas of the world. This becomes problematic, however, because of how boundless the internet is and how readily it reproduces content; blocking a post in certain countries feels like a band-aid when VPNs and the dark web exist. The internet is the perfect avenue for anyone to speak their truth and exercise their human right to expression, but the frequent abuse of that right online leads to destructive and damaging behavior with detrimental real-world consequences. It is the responsibility of Internet Service Providers to remove these kinds of actions from their platforms, and of other countries to consider following the German Parliament’s act on cyber violence to work toward a safer space online.
Alkiviadou, N. (2019). Hate speech on social media networks: Towards a regulatory framework? Information & Communications Technology Law, 28(1), 19-35. DOI: 10.1080/13600834.2018.1494417

Banks, J. (2010). Regulating hate speech online. International Review of Law, Computers & Technology, 24(3), 233-239. DOI: 10.1080/13600869.2010.522323

Fernandez, H. (2018, October). Curbing hate online: What companies should do now. American Progress. Retrieved October 8, 2022, from https://www.americanprogress.org/article/curbing-hate-online-companies-now/

Ring, C. E. (2013). Hate speech in social media: An exploration of the problem and its proposed solutions (Order No. 3607350). Available from ProQuest One Academic. (1491383571). http://ezproxy.library.usyd.edu.au/login?url=https://www.proquest.com/dissertations-theses/hate-speech-social-media-exploration-problem/docview/1491383571/se-2

Sengupta, S. (2012, September). On web, a fine line on free speech across the globe. The New York Times. Retrieved October 8, 2022, from https://www.nytimes.com/2012/09/17/technology/on-the-web-a-fine-line-on-free-speech-across-globe.html

Turner, A. (2015). Generation Z: Technology and social interest. The Journal of Individual Psychology, 71(2), 103-113. DOI: 10.1353/jip.2015.0021