
Introduction
Information spreads quickly in the modern digital world. As platform audiences grow, so does the volume of unethical content published on digital platforms. The rise of bullying, harassment, violent content, hate speech, pornography, and other problematic material online has left the public increasingly worried and expecting that such content can be effectively curbed (Gillespie, 2018). The analysis that follows provides examples of offensive content on digital platforms, identifies the responsibilities and obligations of platforms, governments, and society in stopping the spread of illegal content, and suggests solutions grounded in a critical understanding of the history, politics, economics, and culture of Internet technology.

The problem with platforms
Illegal and harmful online content and communication are becoming an increasingly serious problem as social media platforms proliferate and are used more heavily (Yar, 2018). Cyberbullying on digital platforms has a significant negative impact on individuals’ mental and physical health, and media outlets are reporting more and more frequently on suicides linked to online harassment. For instance, the Australian teenager Dolly Everett took her own life in 2018 after experiencing cyberbullying (Abou El-Seoud et al., 2020).
Who should stop it, and how
1. Direct regulation by the government
The early Internet operated largely outside government control, particularly where online content and media policy were concerned. New technologies, however, should not be governed by outdated laws. As these technologies have developed, dissatisfaction with corporate self-regulation has grown, and new forms of government regulation have emerged.
Governments can introduce policy measures that curb offensive content and hold platforms more accountable for the online safety of their users. Australia’s expansive Online Safety Act, passed in 2021, for instance, allows victims of online abuse to lodge complaints with the eSafety Commissioner (Eggers et al., 2019). Furthermore, because government regulation is generally regarded as authoritative, it is a powerful strategy for fostering a harmonious online environment (Teets, 2017). Roskomnadzor, for example, the Russian federal executive authority responsible for control and supervision of mass media, communications technology, and information systems, is required to monitor digital information in accordance with the Law on Information (Demkina, 2022). These examples suggest that governments can effectively control the problematic content disseminated on digital platforms. Governments can also develop regulations for emerging technologies in order to keep pace with technological advances, a reasonable approach to balancing the promotion of innovation, the protection of consumers, and the management of any unexpected consequences of disruption (Constantinides et al., 2018).
2. Self-regulation by platforms
Although governments will inevitably become more involved in oversight, platforms should be more proactive in their self-regulation of problematic content on public networks. Self-regulation encompasses the actions that companies or platform associations take to substitute for or supplement government regulations and guidelines (Cusumano et al., 2021). Drawing on the history of self-regulation before and after the widespread adoption of the Internet, Cusumano et al. (2021) found that firms which disregard the interests of consumers or the industry as a whole in favour of their own short-term, individual ambitions risk irreparably damaging the conditions that allowed them to succeed in the first place, whereas firms that self-regulate appropriately stand to benefit substantially. Self-regulation could therefore be essential to preventing a potential tragedy of the commons for digital platforms.

Content moderation is an effective method of dealing with online harassment and other offensive content, since it removes prohibited and unlawful material and helps prevent public controversies (Jhaver et al., 2018). Although the Internet was initially envisioned as a “technology of freedom”, platformisation has made content moderation increasingly important. Platforms police content for moral and legal reasons as well as to improve the overall user experience, and as platforms have developed, their approaches to moderation have become more sophisticated and comprehensive. Twitter, for example, has developed, revised, and expanded a comprehensive policy framework over the years in an effort to combat online misinformation, hate speech, and incitement to violence, and has taken the lead in enforcing those policies against serious violations: in July 2016 it permanently suspended the right-wing provocateur Milo Yiannopoulos, and in September 2018 it permanently suspended the conspiracy theorist Alex Jones (Hussain & Contreras, 2022). To manage problematic content, then, platforms can regulate themselves, and content review of this kind remains the most common form that platform self-regulation takes.
3. Non-government organisations (NGOs)
In addition to companies and governments, a range of non-governmental actors, including civil society organisations and academic researchers, play a significant role in researching, advocating on, and supervising platform practices (Gorwa, 2019). The Global Network Initiative (GNI), for example, an accountability institution established to assist platforms in handling government requests for content removal and for user data, has contributed significantly to the history of NGO regulation of the network. Before the GNI, platforms had few mechanisms for exchanging guiding principles and frequently handled significant human rights issues on an ad hoc basis (Gorwa, 2019).

NGOs commonly develop guiding principles that offer platforms and governments key strategies, stimulate public discussion, and influence governance. For example, the “Santa Clara Principles for Content Moderation” (SCPs), which offer specific recommendations on how companies should handle appeals, user notifications, and moderation processes rather than guidance to governments, were put forward in 2018 by a small group of civil society organisations and researchers, including the Electronic Frontier Foundation (EFF). The principles were developed primarily by civil society, with little assistance from companies or governments (Goldman, 2021). To make platform governance fairer and more attentive to the interests of stakeholders, a larger number of NGOs should be involved in the development of platform policy. NGOs can thus be effective in preventing the publication of unlawful content on platforms, and they frequently establish guidelines that platforms and governments can follow.
Conclusion
As people become more reliant on the Internet and digital platforms, the harm they face online, including cyberbullying, harassment, violent content, hate speech, sexually explicit material, and other hazardous content, grows. To address this problem, governments can propose legislation and policies that curb offensive content on digital platforms, platforms can self-regulate and combat it through content moderation, and NGOs can offer guidance on how to minimise harmful speech (Zankova & Dimitrov, 2020).
References
ABC News. (2019). “Disturbing and brave”: Teen director’s cyberbullying ad lauded after Dolly Everett’s death. ABC News. https://www.abc.net.au/news/2019-09-19/teen-suicide-of-dolly-everett-sparks-new-ad-on-cyberbullying/11523028
Abou El-Seoud, S., Farag, N., & McKee, G. (2020). A Review on Non-Supervised Approaches for Cyberbullying Detection. International Journal of Engineering Pedagogy (IJEP), 10(4), 25. https://doi.org/10.3991/ijep.v10i4.14219
Constantinides, P., Henfridsson, O., & Parker, G. (2018). Platforms and Infrastructures in the Digital Age. Information Systems Research, 29(2), 381–400. https://doi.org/10.1287/isre.2018.0794
Cusumano, M. A., Gawer, A., & Yoffie, D. B. (2021). Social Media Companies Should Self-Regulate. Now. Harvard Business Review. https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
Demkina, A. V. (2022). Legal Regulation of Social Platforms (Network). The Platform Economy, 187–201. https://doi.org/10.1007/978-981-19-3242-7_13
Eggers, W., Turley, M., & Kishnani, P. K. (2019). The future of regulation. Deloitte Insights. https://www2.deloitte.com/us/en/insights/industry/public-sector/future-of-regulation/regulating-emerging-technology.html
Fai, M., Bradley, J., & Allan, N. (2022). Online safety to continue as a front-and-centre focus of Australian law. Gilbert + Tobin. https://www.gtlaw.com.au/knowledge/online-safety-continue-front-centre-focus-australian-law
Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
Goldman, E. (2021). Content Moderation Remedies. Michigan Technology Law Review, 28(1), 1–60. https://doi.org/10.36645/mtlr.28.1.content
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://policyreview.info/articles/analysis/platform-governance-triangle-conceptualising-informal-regulation-online-content
Hussain, S., & Contreras, B. (2022). Twitter was at the forefront of content moderation. What comes next? Los Angeles Times. https://www.latimes.com/business/technology/story/2022-04-27/twitter-elon-musk-content-moderation-free-speech
Jhaver, S., Ghoshal, S., Bruckman, A., & Gilbert, E. (2018). Online Harassment and Content Moderation. ACM Transactions on Computer-Human Interaction, 25(2), 1–33. https://doi.org/10.1145/3185593
Teets, J. (2017). The power of policy networks in authoritarian regimes: Changing environmental policy in China. Governance, 31(1), 125–141. https://doi.org/10.1111/gove.12280
Yar, M. (2018). A Failure to Regulate? The Demands and Dilemmas of Tackling Illegal Content and Behaviour on Social Media. International Journal of Cybersecurity Intelligence & Cybercrime, 1(1), 5–20. https://vc.bridgew.edu/ijcic/vol1/iss1/3/
Zankova, B., & Dimitrov, V. (2020). Social Media Regulation: Models and Proposals. Journalism and Mass Communication, 10(2). https://doi.org/10.17265/2160-6579/2020.02.002