
"Social Media Logos" by BrickinNick is licensed under CC BY-NC 2.0 .
RE11 / Student Anqi Cai (ACAI4340) / Tutor: James Baguley
Social media platforms cannot offer complete “freedom.”

With the advent of the Web 2.0 era, more and more people use social media platforms to engage and interact with one another, expressing their opinions and connecting socially, whether as leading voices or as participants (Gillespie, 2018). The openness of the Internet brings real benefits: minorities can seek justice by giving themselves a voice online; knowledge can quickly reach children in areas with few educational resources; and cultures can be sustained through social platforms. But excessive ‘freedom of expression’ and the right to anonymity also carry dangers, as harmful content involving violence, abuse, obscenity, sexism, and racial hatred spreads widely across social media (Flew et al., 2019).
Nefarious dealings in the shadows of digital platforms – the shocking “Nth Room”
‘The Nth Room case: The Making of a Monster’ by The Korea Times. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=WGMgp8sup0w&t=1s
In 2019, the South Korean digital sex abuse case known as the ‘Nth room’ came to light, an incident whose damaging social media impact drew intense public attention. The ‘Nth room’ was a sexual exploitation ring, masterminded by a user operating under the name ‘Doctor,’ that sold degrading, violent, and sexually abusive images and videos of its victims (all women) through the encrypted cross-platform messaging app Telegram. On YouTube, the Korean outlet The Korea Times combed through the details of the ‘Nth room’ case, bringing to netizens’ attention how the right to anonymity had been turned against these victimized women: both the operators of the exploitation ring and the paying viewers were perpetrators. The outcry culminated in petitions signed by approximately five million people in South Korea demanding that the identities of the chatrooms’ users be made public, and the Korean government was forced to take action (Kim, 2020). A central cause of the incident was the platform’s inadequate regulation of social media content.
Who should stop the spread of harmful content?

The ‘Nth room’ case made it clear that social media platforms need to be regulated, that freedom without rules can harm innocent people, and that responsibility for stopping the spread of harmful content currently rests primarily on self-regulation. Self-regulation means that media organizations maintain internal regulatory frameworks that guide how their platforms monitor and govern users’ activities; the media organizations in question are typically the companies behind platforms such as Facebook, Google, and Twitter. As Gillespie writes, “platforms act as rule makers, interpreters of laws, adjudicators of disputes, and enforcers of rules” (Gillespie, 2019, p. 6); platforms sit in an intermediary position, and the responsibility to regulate user content is inescapable. Media companies police their platforms not only to maintain their corporate image but also to protect users, prevent undesirable events, and reduce the impact of malicious incidents (Flew et al., 2019). At the same time, policy has given the Internet ample space to regulate itself. Section 230 of the US Communications Decency Act, passed in 1996, provides Internet companies with a “safe harbor” provision that protects digital platforms in two ways: first, platforms are not treated as ‘publishers,’ so they are not punished when a user causes a malicious incident on the platform; second, platforms may remove and monitor user content in good faith without incurring liability (Flew et al., 2019). Under this arrangement, responsibility for curbing the dissemination of harmful content falls to self-regulation.
False content and hate speech are causing strife

Media platforms provide a venue for communication, but not all speech is friendly. Culturally, anti-racism is a hot topic discussed around the world; because of the severe tragedies that online hate speech has caused, UN Secretary-General António Guterres has stated that racism must be condemned wherever it is encountered (United Nations, 2022). In 2019 a gunman opened fire inside two mosques in Christchurch, New Zealand, and live-streamed the entire attack on Facebook (Sommer, 2019). The subsequent investigation revealed that the man, an Australian, had posted a manifesto on a fringe website called 8chan that was filled with fascist and white supremacist hate speech. The manifesto and the live video of the shooting had such a corrosive effect that racists edited the footage, glorified the shooter’s image, and re-uploaded it to YouTube to fuel further hatred. YouTube’s response was to remove the videos, but even with the video and live-stream content taken down, the fake news and conspiracy theories continued to grow, a challenge that self-regulation has to face.
At the political level, hate speech and exaggerated news are stoking the fires of war. In February 2022 Russian forces invaded Ukraine, sharply escalating the conflict, and hate speech about the Russo-Ukrainian war exploded on social media. Numerous posts on Twitter proposed violence against Russians (Coleman & Sardarizadeh, 2022), while false and exaggerated news on YouTube pushed viewers toward inaccurate or paranoid extremes.
When hate speech and false content flood the internet, no one is a mere bystander: uninformed audiences lose sight of the original truth, and the result is international political chaos built on falsehoods. The mass media is becoming a ‘weapon,’ and the herd mentality of the public makes people accomplices of those who publish hate speech online (Snowden, 2022). Twitter and YouTube try to remove and block posts containing grossly harmful content, but this is not easy to accomplish: harmful content is posted in a constant stream, and even algorithms struggle to identify every conflicting or violent post.
The means of self-regulation and the challenges currently faced

As the hate speech cases show, self-regulation usually amounts to setting rules and standards: if users violate the rules, their accounts and the content they posted are deleted or ‘disappeared’ (the account is banned from access). Self-regulation has distinct advantages. First, platform companies understand the technology and business of their own industry better than outsiders, so self-imposed guidelines are often more practical. Second, platforms can enforce their rules efficiently, addressing harmful content quickly and protecting users’ rights. However, self-regulation is not perfect; it has flaws that are difficult to correct, and these are the challenges it currently faces. First, media organizations (digital companies), whose primary motivation is to expand their interests through corporate growth, find it difficult to monitor themselves impartially. In the case of the Russian-Ukrainian conflict, Facebook, as a communication platform, should have controlled and restricted posts filled with violent speech, but Meta did not; instead it permitted them, announcing that because of the Russian invasion of Ukraine the platform would temporarily allow users to violate its rules on political speech, such as those against violent speech. This decision was widely seen as a serious loophole in self-regulation (Vengattil & Culliford, 2022). Second, the sheer volume of content to be reviewed is difficult for platforms to handle: research shows that around 400 hours of video are uploaded to YouTube every minute. YouTube copes by letting uploads that pass automated screening go live, and only escalates a video to human review once it attracts a certain number of user reports or a second round of automated screening flags harmful elements (Flew et al., 2019, p. 45). Third, geopolitics: within different national jurisdictions, platforms must account for different cultural backgrounds and comply with different national laws (Flew et al., 2019, p. 41), the most direct example being that the “safe harbor” provision does not apply in the EU. The economic and political consequences of self-regulation are far-reaching, and the challenges go well beyond these examples. Self-regulation alone is therefore not a complete answer to the dissemination of harmful content.
Everyone has a responsibility to maintain a ‘healthy’ digital platform

Maintaining a healthy digital platform is essential, and it requires effort from everyone. Digital media companies, which bear the primary responsibility for stopping the spread of harmful content, work hard to use self-regulation to maintain a good communication environment, but the truth is that for economic, geopolitical, and other reasons they cannot do so perfectly. So the final point that needs to be made is that content publishers themselves need to weigh what they post and address the spread of harmful material at its source. That seems a tricky proposition at the moment: the Internet was originally built to let users communicate freely, but freedom without restrictions is dangerous, and the harm it causes not only hurts others but also backfires on those who cause it. For a healthier digital platform, start by changing yourself.
References
- Charles, S. (2022). What is the Nth room? The horrific true story of “Cyber Hell: Destroy the Nth Chatroom.” Esquire. https://www.esquire.com/uk/culture/tv/a40018199/what-is-nth-room-true-story-cyber-hell-destroy-the-nth-chatroom-netflix/#
- Coleman, A., & Sardarizadeh, S. (2022). Ukraine conflict: Many misleading images have been shared online. BBC News. https://www.bbc.com/news/60513452
- Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
- Gillespie, T. (2019). All platforms moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
- Kim, L. (2020). Update: More celebs voice support for petitions to reveal “Nth room” sexual exploitation case suspects’ identities. Soompi. https://www.soompi.com/article/1389701wpp/celebrities-support-petition-to-reveal-identities-of-suspects-behind-nth-room-sexual-exploitation-case
- Picard, R. G., & Pickard, V. (2017). Essential principles for contemporary media and communications policymaking. Reuters Institute for the Study of Journalism, University of Oxford. https://reutersinstitute.politics.ox.ac.uk/our-research/essential-principles-contemporary-media-and-communications-policymaking
- Sky News. (2022). Ukraine war: “Distortions, dishonesty and disinformation” from Russia. YouTube. https://www.youtube.com/watch?v=9XRjpzT_hI8
- Snowden, C. (2022, April 5). Guns, tanks and Twitter: How Russia and Ukraine are using social media as the war drags on. The Conversation. https://theconversation.com/guns-tanks-and-twitter-how-russia-and-ukraine-are-using-social-media-as-the-war-drags-on-180131
- Sommer, L. (2019, March 18). How hate spreads on social media: Christchurch terrorist attack. StayHipp. https://stayhipp.com/news/how-hate-spreads-on-social-media-christchurch-terrorist-attack/
- The Korea Times. (2021). The Nth room case: The making of a monster [Documentary on online sex crime in Korea]. YouTube. https://www.youtube.com/watch?v=WGMgp8sup0w&t=1s
- United Nations. (2022). Fight racism. https://www.un.org/en/fight-racism
- Vengattil, M., & Culliford, E. (2022, March 11). Facebook allows war posts urging violence against Russian invaders. Reuters. https://www.reuters.com/world/europe/exclusive-facebook-instagram-temporarily-allow-calls-violence-against-russians-2022-03-10/