Emerging digital technology has connected people all over the world, making communication easier than ever before. However, it is not without its disadvantages. Because of the anonymity of social media, users do not have to face the people they talk to; this desensitizes the message and makes it easy to dehumanize or objectify the person behind the screen (Kowalski, Limber, & Agatston, 2008). As a result, it is very easy to use harmful, discriminatory, and even violent language toward people on the internet (Kowalski, Limber, & Agatston, 2008). More importantly, most hate speech and bullying online target ethnic, religious, and sexual minorities (Evolvi, 2018), which is even more problematic considering that these groups are also more likely to experience bullying in real life (Sparby & Reyman, 2019). Moreover, despite diligent regulation of sexually explicit content on social media platforms, unidentified, under-regulated explicit content persists (Spišák, 2016). Since many teenage users are on these platforms, if not their main target groups, the situation is deeply problematic. The questions are: who should be responsible for stopping the spread of this content, and how?
Cyberbullying is so salient in today's society that its types and tactics have been defined in academic settings: flaming, harassment, denigration, impersonation, outing and trickery, exclusion/ostracism, cyberstalking, and happy slapping (Kowalski, Limber, & Agatston, 2008). While most of these are easily understood from their names alone, happy slapping is a prank that combines the offline act of slapping strangers on camera with online shaming (Kowalski, Limber, & Agatston, 2008). It is often difficult to distinguish between opinion or free speech on the one hand and hateful messages or cyberbullying on the other, but I think Richardson (2021) offers a satisfactory criterion by addressing the message's intention: whether or not it was intended to harm or upset others.
Although it is not a sin or morally debauched for adults to look at porn, problematic pornographic content can be misleading or even horrifying for underage individuals (Brennan, 2016). Most porn is designed to please a male audience; therefore, it often ends with only male actors having orgasms, only male actors receiving oral sex, female actors being treated roughly during shooting, and so on (Brennan, 2016). This can be very problematic for people who have not yet experienced sex, leading them to think these norms are to be expected in real life (Spišák, 2016). Indeed, some teenage viewers of porn even exhibit violent or lewd behaviors in real life that they mimicked from porn (Spišák, 2016). Moreover, teenagers are more likely to develop an addiction to porn, which is damaging psychologically, physically, and sexually (Spišák, 2016). Furthermore, one report has shown that many women are so troubled by how their labia minora look that they get plastic surgery to remove part of them (PLASTIC PLANET, n.d.). They do this because most women in porn have extremely small labia minora, which has cultivated society into thinking that only small labia minora are attractive (Gunter, 2019). Social media already cultivates young girls into thinking there is only one acceptable body type; now imagine them thinking there is only one acceptable labia minora type.
There are also issues with child pornography, watched by adults, on Twitter and OnlyFans, which is even more problematic and ethically atrocious. TikTok has been a paradise for predators, since it is filled with teenagers dancing suggestively, and its algorithm works such that if you show interest in teens dancing, your home page fills with more teens dancing (Morning Express, 2021). Another notorious platform hosting problematic content is Reddit, known for extreme and illegal content such as the subreddit r/Jailbait, which featured incriminating sexualized images of underage women (Richardson, 2021). In fact, the CEO explained that the subreddit was banned not because of its ethically wrong nature but simply because it was against the law, since Reddit believes in total freedom of speech no matter how distasteful some content might be (Richardson, 2021).
The difficulty of regulating this problematic content is striking. Since most platforms do not require personal information such as your real name, phone number, or identification, it is easy to make up a random email address and become a predator online (Morning Express, 2021). Unless you are caught by law enforcement tracking you down through your IP address, you normally remain anonymous and safe from scrutiny even if you put out questionable content; the worst any platform can do is ban your account, and you can easily create another with another throwaway email address. It is a vicious circle, really. Moreover, because of the convenience of digital technologies, people from all over the world share the same platforms. Different countries have different laws and regulations regarding hateful content, which makes regulation across countries even more difficult (Sparby & Reyman, 2019).
Most social media platforms, like Twitter and YouTube, have self-regulating algorithms that can detect some forms of nudity and sexually explicit content and flag them (Sparby & Reyman, 2019). Twitter blurs some of this content so it does not appear on your timeline unless you click on it. But this is not nearly enough. Emerging technology hits new milestones almost every day, so it is not wishful thinking that social media algorithms will improve over time; nevertheless, current problems persist. What if you are underage, you lied about your age, and you clicked the content open? Should the platforms be responsible for some lying kids? OnlyFans even requires identification, but there are always ways around it: one 14-year-old OnlyFans creator used a grandmother's passport (Titheradge & Croxford, 2021). On the other hand, OnlyFans takes 20% of creators' earnings; should we not hold it to a stricter standard? Should its system at least match creators' faces against their identification? And what about platforms that are free to use, like Twitter and YouTube? Should they be held to such a high standard regarding problematic content? Or should parents take more notice, and consequently invade their kids' privacy, by monitoring the content they view online? Sparby and Reyman (2019) found that school surveillance is more efficient, and more acceptable to teens, than home surveillance; schools should therefore act as a medium to educate students on the ethics of digital technology and offer help should any student need counseling after being bullied.
We cannot rely on the platforms to do all the work. There are far more users than employees, and as far as our technology has come, the algorithms are not smart enough to detect everything wrong before people hit the send button. There are bound to be times when we notice something before it is taken down, if it is taken down at all. So as individuals, we need to take action and participate: if everybody chips in to build a safer digital environment, it will be far safer than if nobody cares or acts at all. What we can do is take screenshots of offending posts, comments, videos, or photos and report them according to the platform's rules; most platforms have a policy under which users who are blocked or reported many times, and who are found to have violated the user agreement, can be banned. A further step should be taken if the situation escalates: report to legal authorities and seek professional help. Last but not least, tell your own story. Whether someone is trying to impersonate you or falsify information about you, it may spread and people may believe it; just consider how many people believe and talk about celebrities based on what they see in the tabloids. What you need to do is tell your own story, make your voice heard, and be true to yourself (Richardson, 2021).
Society and governments should also play a part by instituting new laws regulating harmful content online and coordinating appropriate punishments across different countries and social media platforms in order to contain the malice (Vickery, 2017). Regulations and user agreements can only limit users' activities on a given platform, but laws can regulate citizens' behavior online, which gives them more incentive to behave when the alternative is jail time or a fine (Polak & Trottier, 2020).
Adorjan, M., & Ricciardelli, R. (2019). Cyber-risk and youth: Digital citizenship, privacy, and surveillance (1st ed.). Routledge. https://doi.org/10.4324/9781315158686
Brennan, J. (2016). Abuse porn: Reading reactions to Boys Halfway House. Sexuality & Culture, 21(2), 423–440. https://doi.org/10.1007/s12119-016-9397-3
Evolvi, G. (2018). Hate in a tweet: Exploring internet-based Islamophobic discourses. Religions, 9(10), 307. https://doi.org/10.3390/rel9100307
Gunter, J. (2019, August 24). Considering labiaplasty? Read this first (NSFW). Dr. Jen Gunter. https://drjengunter.com/2015/08/09/considering-labiaplasty-read-this-first-nsfw/
Kowalski, R. M., Limber, S., & Agatston, P. W. (2008). Cyber bullying: Bullying in the digital age. Blackwell.
Morning Express. (2021, October 17). The pandemic of child pornography on social media. https://morningexpress.in/the-pandemic-of-child-pornography-on-social-media/
PLASTIC PLANET. (n.d.). I got surgery to have a designer vagina [Video]. Vice. https://video.vice.com/en_us/video/i-got-surgery-to-have-a-designer-vagina-plastic-planet-labiaplasty/5c3f6978be40777d9f2d7492
Polak, S., & Trottier, D. (2020). Violence and trolling on social media: History, affect, and effects of online vitriol. Amsterdam University Press.
Richardson, L. (2021). Hate speech against women online: Concepts and countermeasures. Rowman & Littlefield.
Sparby, E., & Reyman, J. (2019). Digital ethics: Rhetoric and responsibility in online aggression. Taylor and Francis. https://doi.org/10.4324/9780429266140
Spišák, S. (2016). 'Everywhere they say that it's harmful but they don't say how, so I'm asking here': Young people, pornography and negotiations with notions of risk and harm. Sex Education, 16(2), 130–142. https://doi.org/10.1080/14681811.2015.1080158
Titheradge, N., & Croxford, R. (2021, May 26). The children selling explicit videos on OnlyFans. BBC News. https://www.bbc.com/news/uk-57255983
Vickery, J. R. (2017). Policies of panic: Porn, predators, and peers. In Worried about the wrong things. The MIT Press. https://doi.org/10.7551/mitpress/10653.003.0008