The integration of communications and advanced technologies into daily life, which keeps many of us constantly interconnected, is a defining feature of today's digital society (Harwood et al., 2014). The emergence of digital technology and the internet has created limitless spaces for society to explore, with fluid boundaries that are difficult to control or supervise. Because the internet was built on a utopian ideal of free speech (Abbate, 2017), and because social media platforms struggle to monitor their users, some people have turned the internet to other purposes, which at times become threats to users' very existence.
Harassment on the internet encompasses a variety of harmful behaviours, including offensive comments, the spreading of harmful content, verbal abuse, and sexual and gender-based harassment (Powell et al., 2018). Social media platforms are taking steps to mitigate the problem; TikTok, for example, now has a community ban mechanism that prevents a creator from posting material for a set period if they breach specific guidelines (Content Violations and Bans, 2020). Such measures, however, do not prevent harmful content and other unethical behaviour on social media. This essay argues that social media platforms and their users should be held accountable for the problematic content that circulates online. It also discusses why platform regulation alone is insufficient to stop such content from spreading, emphasising the importance of government regulation as well as self-regulation.
Cyberbullying and Its Consequences
Cyberbullying is an outgrowth of a longstanding societal concern: bullying. New forms of communication such as online messaging and social media carry these harmful behaviours into online spaces. According to Gillespie (2018, p. 22), the internet offers the public a "platform" that "constitutes the public discourse, it is designed to invite and shape participation." By encouraging participation, social media platforms strengthen the human right to freedom of speech. However, instead of accepting collective responsibility, unethical users frequently interpret freedom of speech as a "right to hate" (Clucas, 2020). Data from the Pew Research Center bear this out: 41% of American adults have personally experienced some form of online harassment (Vogels, 2021). This shows how lightly regulated platform designs enable unethical users to make derogatory comments under posts and to create and spread harmful content online.
At first glance, this type of bullying may appear innocuous. However, cyberbullying differs from its real-life counterpart because of the internet's distinctive features: anonymity, the involvement of a broad audience, and the capacity to attack 24 hours a day. Constant exposure to attacks on social media has had real-world consequences for some users. In 2019, K-pop singer Goo Hara was found dead in her home alongside a handwritten note lamenting her life; she had been the target of vicious cyber harassment (Young, 2019). "It's rather simple with physical violence, as the victim can go see a doctor, but with cyber violence, there is no cure," says Jeon Min-Su, a cybercrime investigator with the Seoul Metropolitan Police (2019).
This case reveals the real-world social consequences of cyberbullying. Unfortunately, society did not appear to learn its lesson: only months before Goo Hara's death, another former K-pop singer, Sulli, took her own life as a result of cyberbullying (Young, 2019). These cases illustrate how deeply cyberbullying has affected our society, and it is distressing to hear of people dying by suicide or suffering severe psychological harm through the misuse of an internet designed to improve the quality of our lives.
Revenge porn, also known as image-based abuse, is the act of distributing, or threatening to distribute, nude or sexually explicit photos or recordings, often after a relationship has ended. Many websites were developed by individuals motivated by (or hoping to profit from) the freedom the internet promised (Gillespie, 2018). Unscrupulous people abuse this freedom to profit unethically, as seen in websites that hack women's accounts to obtain nude images and then blackmail the victims into paying a fee to have the images taken down (Laird, 2013). Hunter Moore launched the first revenge porn website, isanyoneup.com, in 2010. In 2011, the website received 10,000 picture submissions over a three-month period, earning Moore a considerable profit of up to $13,000 a month (Bates, 2017).
On 31 August 2014, a Google search for "celebrity photo leak 2014" yielded over two million results, including numerous online news pieces on how a long list of female celebrities was victimised by revenge porn in a single day (Bates, 2017). This phenomenon highlights the role of algorithms in the spread of harmful content: once an algorithm permits it, content rapidly reaches a larger audience. When millions of searches for leaked celebrity images occur within a short period, users on other platforms become aware of the event, and the content spreads even further.
So who should be held responsible? How can we stop this?
Social media tightly links users, governments and platform owners. Governments are in a privileged position to make decisions for platforms, as evidenced by China's Great Firewall and its banning of major social media platforms such as Facebook and TikTok (Barry, 2022). Government censorship and monitoring shape the kinds of content accessible to users. However, as norms shift with advancing technology, the traditional system of legal rules no longer maps cleanly onto the internet; these transformations require new laws suited to the culture of new media. Tasmania, for instance, has introduced new laws against image-based abuse such as revenge porn, yet they appear to have little impact, as rates of revenge porn have continued to rise (Bowes, 2022). One study suggested that one in three Australians between the ages of 16 and 64 has been a victim, compared with one in five three years earlier (Slattery, 2020). This indicates that government legislation alone is ineffective in controlling social media use.
Platforms have greater authority over who may use them and how they may be used. Some moderation and banning of violent remarks is necessary to ensure that a platform is safe for marginalised users and suitable for productive engagement (Gillespie, 2018). Twitter, for example, opens its published community guidelines with: "We believe that everyone should have the power to create and share ideas and information instantly… there are some limitations on the type of content and behaviour that we allow" (Twitter's Policy on Hateful Conduct, n.d.). This demonstrates that platforms are responsive to the digital problems emerging in today's society, problems that revive longstanding concerns of media and culture. However, platform regulation is insufficient to prohibit illicit content entirely. Twitter itself still hosts an estimated 10 million pornographic accounts (Griffin, 2015) and was recently found to be promoting child pornography (IANS, 2022).
Users play the most vital role in preventing harmful content from spreading. Since the internet has become part of our lives, one way to stop the problem from emerging is for users to self-regulate; as Gillespie (2018) argues, the most effective way to stop the spread is for users to refrain from publishing in the first place. Although algorithms aid content filtering, they can struggle to interpret culturally complex material, such as distinguishing the pornographic from the educational. Users, however, share a sense of communally constructed social values (Gillespie, 2018), and those values can serve as a guideline for self-regulation, as well as for reporting unethical content and stopping its spread.
The emergence of the internet brings with it a global issue that revisits traditional problems such as bullying and pornography. Governments, platforms and users should all be held accountable for the spread of malicious content that harms society, because each has the right and authority to prevent it. The three parties can act by producing strict regulations that firmly ban unethical content creators and, in the case of users, by self-regulating what they produce and share. All three sides must collaborate to reach a consensus on a safe and respectful internet.
Abbate, J. (2017). What and where is the Internet? (Re)defining Internet histories. Internet Histories, 1(1–2), 8–14. https://doi.org/10.1080/24701475.2017.1305836
Barry, E. (2022, January 18). These Are the Countries Where Twitter, Facebook and TikTok Are Banned. Time. https://time.com/6139988/countries-where-twitter-facebook-tiktok-banned/
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology. https://doi.org/10.1177/1557085116654565
Boseley, M. (2020, May 8). Revenge porn in Australia: the law is only as effective as the law enforcement. The Guardian. https://www.theguardian.com/society/2020/may/09/revenge-porn-in-australia-the-law-is-only-as-effective-as-the-law-enforcement
Bowes, N. (2022, August 16). Revenge Porn Offences (Tas). Go to Court. https://www.gotocourt.com.au/criminal-law/tas/revenge-porn-offences/
Clucas, T. (2020). Don’t feed the trolls: Social Media and the Limits of Free Speech. In S. Polak (Ed.), Violence and Trolling on Social Media: History, Affect, and Effects of Online Vitriol (pp. 47–64). Amsterdam University Press. https://doi.org/10.1515/9789048542048-004
Content violations and bans. (2020). Tiktok.com. https://support.tiktok.com/en/safety-hc/account-and-user-safety/content-violations-and-bans
Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Griffin, A. (2015, May 21). 10 million Twitter accounts could be deleted in porn purge to satisfy advertisers. The Independent. https://www.independent.co.uk/tech/twitter-could-delete-10-million-accounts-in-porn-purge-to-satisfy-advertisers-10261284.html
IANS. (2022, October). Twitter bans over 57K accounts for promoting child porn, nudity in India. The Times of India. https://timesofindia.indiatimes.com/business/india-business/twitter-bans-over-57k-accounts-for-promoting-child-porn-nudity-in-india/articleshow/94589378.cms
Laird, L. (2013). Striking Back at Revenge Porn: Victims are taking on “revenge porn” websites for posting photos they didn’t consent to. ABA Journal, 99(1), 45–52.
News, A. (2019). K-pop singer Goo Hara, 28, found dead in her apartment on Sunday [Video]. YouTube. https://www.youtube.com/watch?v=ucADkaDpnm0
Powell, A., Scott, A. J., & Henry, N. (2018). Digital harassment and abuse: Experiences of sexuality and gender minority adults. European Journal of Criminology, 17(2), 199–223. https://doi.org/10.1177/1477370818788006
Slattery, C. (2020). Study charts rising trend of image-based sexual abuse. Rmit.edu.au. https://www.rmit.edu.au/news/all-news/2020/feb/image-based-abuse
Twitter’s policy on hateful conduct. (n.d.). Twitter.com. Retrieved October 11, 2022, from https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy
Vogels, E. A. (2021, January 13). The State of Online Harassment. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/
Young, H. (2019, November 28). Cyber bullying, star suicides: The dark side of South Korea's K-pop world. Reuters. https://www.reuters.com/article/us-southkorea-kpop-idUSKBN1Y20U4