
Introduction
With the rise of Web 2.0, the exchange of information among the masses is no longer a one-way process (Baym, 2013). Platforms have emerged, developed and gradually integrated into people's daily lives (ibid). They have changed the public's ability to connect with one another, especially during the COVID-19 pandemic (John, 2021). As a result, people have more opportunities to communicate across time and space and to form a wider range of connections. Platforms were also initially framed by the positive, liberal ideal of information and ideas being freely disseminated and shared in this virtual space (Gillespie, 2018). However, as digital services play an increasingly important role in public life, platforms have not always moved in the expected direction; they are also misused, with many negative consequences, including illegal and harmful activity online, pornography and violence (ibid). In this situation, who should stop the spread of such harmful content in order to create a healthy online environment has become a point of discussion. This essay concentrates on online users, social media platforms and the government, analysing their respective roles in problematic content and how to prevent such incidents from happening again.
The dissemination of information is not caused by one person or media organisation alone, so when inappropriate content appears on a platform, it is difficult to place all the blame on one party. However, analysing the responsibilities of each party can go some way to reducing its appearance.

Online users
Online users bear responsibility for preventing inappropriate content from appearing. Online users are the individuals who participate in the network, and part of what causes these complicated problems is the users themselves. The online platform created by technology has, to some extent, accelerated the widespread dissemination of information and brought people closer together. The public can share their lives and opinions on social media and view or comment on content from others. However, precisely because the users are the general public, there is no guarantee of their literacy or level of education, and online platforms can produce the opposite effect. Virtual identities and free speech can become weapons, turning words into bullets fired without consequence or reflection, and this deepens the risk of online harm. Take the K-pop artist Sulli as an example: she ended her life in 2019 after facing sustained online abuse and hate (Hollingsworth & Seo, 2019). She caused controversy by going public with a relationship at the height of her career as a K-pop idol, which went against the unblemished image of Korean idols in the minds of their fans, and she publicly supported the "Free the Nipple" movement, which clashed with traditional Korean conservative values. Another example is Juwon Park, a Seoul-based entertainment journalist who questioned a musician working with the K-pop boy band BTS about violence against two women on social media. In the weeks that followed, she faced a torrent of abuse and insults; some fans even used the internet to obtain her personal information, impersonate her, attack her reputation and pressure her to resign. Who, then, really drove Sulli to her death, and who put an ordinary journalist in danger? It was the public themselves. Price (2021) argues that online harm is enabled by technology, but its roots lie in human behaviour. Whether in the case of Sulli or of the journalist Juwon Park, the platform was used as a weapon by K-pop fans to vent their negative feelings and "destroy" values and behaviours they disagreed with.
How to prevent it?
Self-improvement by online users is the most direct way to combat this phenomenon, because users are the creators, and therefore the source, of a platform's content (Gillespie, 2018). The public needs to behave in a civilised manner on the internet and strengthen its moral values online: improve your own civility and do not weaponise the platform; think first before posting opinions and content; do not go out of your way to harm others or society; and firmly resist text, images and video material that damage physical and mental health.

Social media platforms
While web users are the primary creators of online hate, platforms also need to be held accountable for problematic content on their services, partly because the platform provides the hosting that carries harmful content and partly because it provides the space for interaction (Price, 2021). First, hosting content is the very nature of a platform (Gillespie, 2018). When users generate dangerous content, the platform becomes, in a way, a catalyst for its spread, from personal verbal attacks that do limited harm to politically motivated manipulation of social media. For example, the "Pizzagate" conspiracy theory spread during the 2016 US election to inflame public sentiment against the rival candidate, Hillary Clinton, and help ensure that Mr Trump won. This harmful fake news affected not only the election but also safety offline, ultimately leading a gunman to open fire inside the Washington pizzeria at the centre of the theory (Miller, 2021). Secondly, platforms use algorithms and technical constructs to create the spaces in which users interact (Bucher, 2012), so platforms need to manage the security of those spaces. If not managed properly, a platform's online space can naturally produce harmful results because of its algorithms (ibid). Take the concept of the filter bubble: recommendation algorithms use data about each user to select precisely which content, and therefore which values and worldviews, that user sees (Shmeleva, 2019). This exacerbates polarisation while confining people within their own informational worlds (ibid).
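To make the filter-bubble mechanism concrete, the sketch below is a minimal, hypothetical illustration in Python: the posts, topic labels and the recommend() function are all made up, and no real platform works exactly like this. It shows how a recommender that simply ranks content by a user's past clicks keeps serving more of the same topic, narrowing what the user sees in the way Shmeleva (2019) describes.

```python
# A minimal, hypothetical sketch of an engagement-driven recommender.
# POSTS, recommend() and the topic labels are made up for illustration;
# no real platform works exactly like this.

from collections import Counter

# A tiny catalogue of posts, each tagged with a single topic.
POSTS = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-right"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "politics-left"},
    {"id": 5, "topic": "music"},
]

def recommend(click_history, posts, k=3):
    """Rank posts by how often the user clicked each topic before.

    The more a user engages with a topic, the more of it they are
    shown, so the feed gradually converges on one worldview.
    """
    topic_counts = Counter(click_history)  # e.g. {"politics-left": 3, ...}
    ranked = sorted(posts, key=lambda p: topic_counts[p["topic"]], reverse=True)
    return ranked[:k]

# A user who has mostly clicked one political topic...
history = ["politics-left", "politics-left", "sports", "politics-left"]
for post in recommend(history, POSTS):
    print(post)  # ...is shown a feed dominated by "politics-left" posts
```

Note that the narrowing here comes purely from the ranking rule: nothing malicious is coded, yet the feed still converges on one worldview, which is why an unmanaged algorithmic space can produce harmful results on its own.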
How to prevent it?
Platforms should comply with local laws and regulations and should monitor, regulate and remove content as needed to control the quality of content on the platform and create a healthy environment (Gillespie, 2018). For example, between January and June 2019, Twitter took action against 95,887 accounts that promoted terrorist, violent or extremist content (ibid). Meanwhile, platforms have developed automated, algorithm-driven tools to identify violations, which has significantly improved the efficiency of content review (ibid).
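As a rough illustration of how such automated screening can work in principle, the sketch below uses placeholder terms and function names; it is not Twitter's actual system, which pairs machine-learned classifiers with human reviewers. It shows simple rule-based flagging that holds matching posts for human review rather than publishing them directly.

```python
# A minimal, hypothetical sketch of rule-based automated flagging.
# BLOCKLIST and flag_for_review() are placeholders for illustration;
# real systems pair word lists with machine-learned classifiers.

import re

BLOCKLIST = {"attackplan", "recruitviolence"}  # placeholder terms

def flag_for_review(post_text: str) -> bool:
    """Return True if the post contains any blocklisted term."""
    words = set(re.findall(r"[a-z']+", post_text.lower()))
    return bool(words & BLOCKLIST)

posts = ["a harmless message", "join the attackplan tonight"]
for p in posts:
    status = "held for human review" if flag_for_review(p) else "published"
    print(f"{p!r} -> {status}")
```

Real systems are far more sophisticated, but the division of labour is similar: automation triages content at scale, and human moderators make the final decisions.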
Government
The government, as the highest authority in a country, needs to support the development of national digital technologies while suppressing the accompanying risks. Negative information on the internet threatens individuals and breeds public distrust of the state and its policies (John, 2021). To maintain user security and government authority, governments therefore need to develop more effective and up-to-date laws that place requirements on both platforms and individuals. For example, Australian legislation imposes criminal liability on platforms that fail to warn about and remove hateful content in a timely manner (Biddington, n.d.). Another example is the UK, which released a White Paper on online harms to signal its ambition to become the safest place in the world to be online, and later published a full response updating the proposals and addressing the controversies the White Paper had raised (John, 2021). At the same time, the British government has been active in supporting the Lucy Faithfull Foundation's work (Internet Watch Foundation, 2020). Under this project, internet companies actively participate and report data on illegal videos (ibid). As a result, at least 8.8 million attempts by UK internet users to access videos and images of child sexual abuse have been blocked (ibid).
Notably, the government also needs to strike a balance between the public interest and the rights of individuals, since greater control means less freedom and voice for the public. The UK government's initial White Paper on online harms itself caused controversy on exactly this point (Haggart & Tusikov, 2019).
Conclusion
As online users, social media platforms and governments are interconnected, stopping inappropriate content online requires a coordinated effort by all three. Individual users are the source of the problem and need to regulate their own online behaviour and refuse to engage with harmful content. Online platforms need to comply with local laws and regulations and use ever more accurate technology to identify and act on violations. Finally, governments need to enact more effective laws that hold platforms and individuals to account while balancing the public interest against individual rights. Ultimately, a healthy internet can be shaped by all parties negotiating and progressing together.
Reference List:
Baym, N. K. (2013). New relationships, new selves? In Personal connections in the digital age (pp. 140-169). Hoboken: Polity.
Biddington, M. (n.d.). Regulation of Australian online content: Cybersafety and harm. Parliament of Australia. Retrieved from https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BriefingBook46p/Cybersafety.
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164-1180.
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven, CT: Yale University Press.
Haggart, B., & Tusikov, N. (2019). What the U.K.'s Online Harms white paper teaches us about internet regulation. The Conversation. Retrieved from https://theconversation.com/what-the-u-k-s-online-harms-white-paper-teaches-us-about-internet-regulation-115337.
John, W. (2021). Regulating online harms. House of Commons Library. Retrieved from https://commonslibrary.parliament.uk/.
Hollingsworth, J., & Seo, Y. (2019). Death of K-pop star Sulli prompts outpouring of grief and questions over cyber-bullying. CNN. Retrieved from https://edition.cnn.com/2019/10/15/asia/kpop-sulli-death-aftermath-intl-hnk-scli/index.html.
Miller, M.E. (2021). Pizzagate’s violent legacy. The Washington Post. Retrieved from https://www.washingtonpost.com/dc-md-va/2021/02/16/pizzagate-qanon-capitol-attack/.
Internet Watch Foundation. (2020). Millions of attempts to access child sexual abuse online during lockdown. Retrieved from https://www.iwf.org.uk/news-media/news/millions-of-attempts-to-access-child-sexual-abuse-online-during-lockdown/.
Price, L. (2021). Platform responsibility for online harms: towards a duty of care for online hazards. The Journal of Media Law, 13(2), 238-261. DOI: 10.1080/17577632.2021.2022331.
Marubeni. (n.d.). Real events caused by fake news in the US. Retrieved from https://www.marubeni.com/en/research/potomac/backnumber/19.html.
Shmeleva, R. (2019). How Living in a “Filter Bubble” Fuels Political Polarisation. Medium. Retrieved from https://medium.com/social-media-writings/how-living-in-a-filter-bubble-fuels-political-polarisation-bcbfbcf4b55.