
As more people participate in online culture, media platforms give users a broader space for social communication and new forms of interpersonal encounter, allowing them to express themselves fully. However, these platforms can also serve as channels for harmful information that may cause social unrest. For instance, users re-post violent or pornographic images online, and hashtag features let extremist users broadcast racist insults within a defined online space and interact with an audience there. For this reason, platforms, governments, and international internet organizations share a duty to prevent the spread of such overtly objectionable material.

There is no doubt that media platforms themselves should first consider the efficacy of self-regulation and take steps to curb harmful information when it becomes pervasive, such as bullying remarks and pornographic photos. Platform self-regulation comprises the automated removal and filtering of damaging content, together with analytic techniques that detect sensitive phrases. Twitter, one of the best-known social networking sites, has developed guidelines so that users can access the Internet more freely and safely: users must abide by rules that prohibit fostering hate speech or racism and that firmly reject material advocating child sexual exploitation. When the platform’s algorithm detected the sensitive phrases “self-harm” and “corpse” in a tweet, Twitter flagged the post and suspended Awsten’s account (Smith, 2018). Twitter’s safety team then reviewed the tweet, judged that it encouraged violence or self-harm, and upheld the suspension. Similarly, because the words “breast” and “penis” are frequently linked to pornographic images, the Weibo platform has marked them as forbidden words and screens topics containing them.
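To make the mechanism described above concrete, the following is a minimal sketch of keyword-based screening of the kind these examples suggest. The term list, matching rule, and outcomes are illustrative assumptions, not any platform’s actual configuration.

```python
import re

# Hypothetical sensitive phrases; real platforms maintain far larger,
# continuously updated term lists.
BLOCKED_TERMS = {"self-harm", "corpse"}

def screen_post(text: str) -> list[str]:
    """Return any blocked terms found in a post, matched as whole words."""
    normalized = text.lower()
    return [term for term in BLOCKED_TERMS
            if re.search(r"\b" + re.escape(term) + r"\b", normalized)]

def moderate(text: str) -> str:
    hits = screen_post(text)
    if hits:
        # A real platform might queue the post for human review
        # rather than act on the match automatically.
        return f"flagged for review (matched: {', '.join(hits)})"
    return "published"

print(moderate("look at this corpse painting I made"))  # flagged for review
print(moderate("new album out Friday!"))                # published
```

The whole-word matching here is deliberately simple; as the next paragraph argues, exactly this kind of crude matching is what makes it hard for automated systems to separate acceptable speech from genuinely harmful content.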
However, even where users broadly accept that certain terms are banned, automated systems struggle to draw a clear line between, say, a harsh term that users find acceptable and false news that needs to be filtered out. In this context, the drawbacks of platforms relying on digital technology to self-regulate are evident. According to Gillespie (2017), to avoid the cost and liability of judging every item before publication, most platforms, unlike Apple’s App Store with its pre-release review, allow content to go public immediately; automated censorship and removal occur only after a post has already received views. This after-the-fact approach means that offending content circulates to more users in the interval before it is caught. Therefore, in addition to self-regulation by platforms through code and design, relevant state authorities should also assume responsibility for regulating the Internet, to reduce the widespread dissemination of harmful content caused by weak enforcement by platforms.
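A toy model can make this “publish first, review later” pattern explicit. Everything here, the post structure, the review trigger, and the view count, is invented for illustration and does not describe any particular platform’s pipeline.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    views: int = 0
    removed: bool = False

feed: list[Post] = []

def publish(text: str) -> Post:
    # No pre-screening: the post becomes visible immediately.
    post = Post(text)
    feed.append(post)
    return post

def review_pass() -> None:
    # Moderation runs only after posts are live and accumulating views.
    for post in feed:
        if not post.removed and "corpse" in post.text.lower():
            post.removed = True
            print(f"removed after {post.views} views: {post.text!r}")

p = publish("corpse photos inside")
p.views += 1500   # exposure accrued before the review pass runs
review_pass()     # -> removed after 1500 views: 'corpse photos inside'
```

The gap between `publish` and `review_pass` is exactly the window in which harmful content reaches additional users, which is the weakness the argument above identifies.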

The traditional approach to regulating digital media calls for national governments and affiliated institutions to coordinate policy and build out the regulatory infrastructure for digital platforms. In Australia, government bodies are charged with stopping the spread of dangerous information online and are essential to preserving the security and steady growth of digital platforms. The main body responsible for Internet regulation and content censorship in Australia is the Australian Broadcasting Authority (ABA). Under the ABA’s regulatory framework, online content containing potentially harmful material, such as profanity or matter likely to cause public scandal, is deemed to be in violation of the rules. The ABA has the power to order content hosts to delete harmful material, and it refers matters to Australian law enforcement when unacceptable content crosses legal red lines, for example child pornography or libellous claims. The ABA’s complaints process also allows it to control the spread of harmful content efficiently (Internet Regulation in Australia, n.d.). For instance, in 2000 the ABA received a viewer complaint about a television program that repeatedly aired footage of Russell Crowe smoking a Marlboro cigarette during an interview. After receiving a warning from the authority, the network did not stop airing the reported footage; instead, it sought judicial review in the Federal Court in order to win approval to broadcast the interview. The review found that the network’s actions breached Australia’s federal prohibition on cigarette advertising; the Federal Court accordingly held that the authority had acted lawfully and that the network was liable for the appropriate fines (TCN Channel Nine Pty Ltd v. Australian Broadcasting Authority, n.d.).
The case above demonstrates that because national governments can punish the publication of unlawful content under law, they are able to effectively restrict the spread of damaging information on the Internet. However, as individual nations place more emphasis on direct regulation of digital platforms by their own agencies, the globalization of information inevitably fragments, and the resulting ‘splinternet’ opens gaps that no single jurisdiction can bridge in stopping the flow of harmful information across digital platforms. For instance, although China’s state-run Internet has built firewalls that block YouTube and Facebook domestically, it is powerless to prevent false assertions about China’s internal affairs from being posted on those platforms abroad (Hetler, 2022). It is therefore important to address the fragmentation that national control of the Internet can produce, and global Internet governance is a crucial instrument for preserving the steady growth of the global Internet.

Global internet governance, which typically pursues multi-stakeholder management from a global viewpoint, can contribute to debates on internet-related public policy and play a key role in the fight against the spread of harmful content online. To help shape cyberspace through the development of Internet rules and values, China has organized international exchange events such as the World Internet Conference. The forum aims to mobilize national initiatives toward a community with a shared future in cyberspace and to reach agreement on how online content should properly be regulated and overseen (Attrill & Fritz, 2021). As China’s cyberspace vision gradually materializes, the Internet will no longer be merely a potent tool for disseminating information; effective governance of the online community, and the suppression of harmful content on digital platforms, will depend on how that operational environment is shaped and managed globally.
Moreover, the Internet Governance Forum (IGF), a well-known example, brings together representatives from diverse cultural backgrounds to discuss how to maximize the Internet’s advantages and minimize its problems. At its 2015 summit, the forum, with the backing of many attendees, adopted resolutions supporting the sustainability and security of the Internet through increased digital oversight, prompt blocking of communication technologies used for terrorism and cyber-crime, and protection of Internet users’ freedom of expression (About the IGF, n.d.). Because it draws participation from a wide range of interest groups in the globalization process, the IGF can monitor online material broadly and help prevent the circulation of objectionable content, such as bullying and hate speech, on digital platforms. However, under a multi-participant governance paradigm it is difficult to find common ground across many cultural backgrounds. In addition, no single authority is in charge of global internet administration; this has the benefit of avoiding authoritarianism and tyranny, but it also has obvious flaws, above all weak enforcement and unresolved cultural differences.
In conclusion, digital platforms offer network users a larger communication space and a new style of communication. Platforms have a duty to implement self-regulation fully, through code and design, to halt the spread of harmful information such as harassment, pornographic imagery, and false statements, provided that users’ freedom of expression is not compromised. In addition, national governments and global Internet governance bodies should work in tandem, refining the relevant regulations and monitoring systems to better restrict dangerous information.
Works Cited:
About the IGF. (n.d.). Internet Governance Forum. Retrieved October 13, 2022, from
https://www.intgovforum.org/multilingual/tags/about
Attrill, N., & Fritz, A. (2021, November 29). China’s vision to shape global internet governance. The Strategist.
https://www.aspistrategist.org.au/chinas-vision-to-shape-global-internet-governance/
Gillespie, T. (2017). Governance by and through Platforms. The SAGE Handbook of Social Media, London: SAGE, pp. 254-278.
https://dx.doi.org/10.4135/9781473984066.n15
Hetler, A. (2022, June 7). The splinternet explained: Everything you need to know. TechTarget.
https://www.techtarget.com/whatis/feature/The-splinternet-explained-Everything-you-need-to-know
Internet Regulation in Australia. (n.d.). Australian Human Rights Commission. Retrieved October 13, 2022, from
https://humanrights.gov.au/our-work/publications/internet-regulation-australia#1
Smith, K. L. (2018, March 13). Twitter Is Deleting Accounts And These Are The Words That Might Get You Suspended. PopBuzz.
https://www.popbuzz.com/internet/social-media/twitter-account-suspension-trigger-words/
TCN Channel Nine Pty Ltd v. Australian Broadcasting Authority. (n.d.). Tobacco Control Laws. Retrieved October 13, 2022, from
https://www.tobaccocontrollaws.org/litigation/decisions/au-20020718-tcn-channel-nine-pty-ltd-v-aus