
The Internet enables faster and wider dissemination of ideas and allows users to express their beliefs. However, problematic content such as bullying, harassment, violence, hate speech, and pornography erodes and pollutes the online environment. The United Nations (2011) defines harmful speech as derogatory and discriminatory language that attacks individuals on the basis of colour, gender, sexual identity, or nationality. With the proliferation of harmful beliefs and expressions, regulating content without infringing on freedom of expression has become a critical issue. This essay critically explores the benefits and risks of the platform governance triangle (Gorwa, 2019), that is, the tensions and balances between direct government regulation, self-regulation by companies, and mutual monitoring by internet users, in mitigating extremist ideologies and hate speech online while protecting freedom of expression.
Firstly, it is crucial to specify what content needs to be regulated. In its simplest form, regulation exists to monitor user-generated posts by applying a set of pre-arranged rules and guidelines (Heldt & Dreyer, 2021). Regulation policies vary across platforms, but all aim to ensure users have a positive experience. The controversy is that there are no universal standards, making it difficult to distinguish acceptable from unacceptable content. Facebook's removal of the "Napalm Girl" photograph is an excellent example (Wong, 2016). Facebook deleted the post on the grounds that it was a graphic and disconcerting image of a naked child screaming in pain, a decision the Norwegian government and press protested (Gillespie, 2018).

On the contrary, Facebook users believed the photograph was not only historically and emotionally significant but also such a disturbing indictment of humanity that many felt compelled to look at it, because Kim Phúc's suffering and nudity make the horrors of the napalm attack clear. Facebook argued that it is difficult to distinguish between photos of naked children that are allowed in one context and those that are not in another. After widespread criticism from the global press and media, the photo was reinstated. This is not to suggest that regulation itself is wrong; rather, the example shows that the Internet is a place of confrontation over competing values. Regulation by any one actor alone is not enough, and it is critical to ensure regulation by government, platforms, and users together.
The advantage of government intervention is that it carries the force of law: acts such as incitement to hatred and other illegal activities can be sanctioned, which helps the state to understand and manage public opinion. In addition, it not only minimises the chances of people receiving false information but also reduces the possibility of conflict among people. However, legal traditions constrain what governments can regulate. The free speech principle expressed in the First Amendment allows only minimal content regulation (Napoli, 2018), and the Constitution (2022) protects all types of speech in America, including offensive content. Australian provisions regulating online racial vilification under the Racial Discrimination Act (1975) are likewise considerably sparse.
Moreover, governments are often perceived as understanding little about digital technologies, so their involvement may inhibit innovation. If the government holds monopolistic power over content regulation, it can also control what is created, which could harm economic growth and scientific advancement. A government could also abuse internet regulation to silence its critics and present only the information it wants people to see, creating an echo chamber (Napoli, 2018). This abuse of power can lead to severe problems, such as the loss of democracy or civil liberties; in North Korea, for example, the government tightly controls all information on the Internet (O'Hara & Hall, 2021). At the same time, the effectiveness of government regulation is declining because of the rapid and interactive nature of the Internet's transformation. Former Google CEO Eric Schmidt (2011) observed that high tech moves nine times faster than government. According to O'Hara and Hall (2021), the North Korean government requires four times the human and financial resources of South Korea to delete harassing content circulating on digital platforms, and the financial cost is rising sharply. The use of the Internet to commit crime across regional boundaries has also significantly increased the burden and cost of enforcement.
While depending solely on the government seems impractical, entrusting all power to self-regulation by companies is also inadvisable. It is true that platform regulation increases productivity and efficiency: it succeeds most directly by influencing hate speakers and deterring them from posting inappropriate content, or by simply banning their accounts. However, excessive vetting can leave users feeling overly restricted and patrolled (Gillespie, 2017), triggering conflict between users and platforms. Furthermore, if platforms monopolise regulatory resources, regulation could be misused to collect user data for political or financial purposes. During the 2016 US presidential election, for example, Cambridge Analytica used proprietary psychometric models to analyse user behaviour, helping the Trump campaign tailor everything from political positions to campaign slogans and place digital advertisements precisely, somewhat reversing his poor image and helping him win the election. The scheme might not have succeeded if the General Data Protection Regulation (GDPR) had been in force in the US: under the GDPR, European data subjects, that is, platform users, have the right to withhold personal information and must approve the use of their data.

Another issue that content regulation must consider is scale. The Internet has roughly 4.2 billion users and generates a vast amount of user-generated content daily (Heldt & Dreyer, 2021). While governments work to formulate laws, automated moderation systems deployed by companies on digital platforms such as YouTube and Twitter assist in making regulation effective. These systems use filters that detect banned and harmful words and tags, determine whether imagery or videos show pornographic content, and match, report, and remove copyrighted material (Gillespie, 2018). Platforms' automated audits, combined with law enforcement by the government, remove large volumes of illegal content promptly.
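To give a sense of how the simplest layer of such filtering works, the following is a minimal, hypothetical Python sketch; the blocklists, the flag_post function, and the example post are all invented for illustration, and production systems rely on far larger, continually updated lists plus machine-learning classifiers and hash-matching for images and video.

```python
import re

# Hypothetical blocklists for illustration only; real platforms
# maintain much larger, continually revised lists.
BANNED_TERMS = {"slur1", "slur2"}          # placeholder banned words
BANNED_TAGS = {"#extremistpropaganda"}     # placeholder banned hashtag

def flag_post(text: str) -> list[str]:
    """Return the reasons a post should be queued for human review."""
    # Tokenise into words and hashtags, normalised to lowercase.
    tokens = {t.lower() for t in re.findall(r"#?\w+", text)}
    reasons = []
    if tokens & BANNED_TERMS:
        reasons.append("contains banned term")
    if tokens & BANNED_TAGS:
        reasons.append("contains banned tag")
    return reasons

# Flagged posts are routed to human moderators rather than silently
# deleted, mirroring the report-and-review pipelines described above.
print(flag_post("Join us today #ExtremistPropaganda"))
# -> ['contains banned tag']
```

Even this toy version shows why automation matters at scale, and why it cannot stand alone: a keyword match cannot judge the context that made the "Napalm Girl" photograph acceptable, which is why flagged items still need human and legal review.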
In addition to cooperative regulation by governments and platforms, regulation by internet users is essential. Freedom of expression relates not only to the ability to speak but also to the ability to listen and to allow diverse views. According to the UN Human Rights Committee (2011), freedom of expression implies media that can "comment on public issues without censorship or restraint and inform public opinion." Such censorship and restraint come not only from governments but also from users. The Internet and social media are public domains where people are free to express their thoughts and opinions globally, and among the vast mass of Internet users there will inevitably be illegal content that escapes government and platform regulation. Freedom of expression is a fundamental human right, but mutual regulation among Internet users is necessary because some exploit this right to disseminate dangerous worldviews that harm others, especially minorities (Gillespie, 2018). Meanwhile, the Australian government can learn from the EU's 2016 Code of Conduct on Countering Illegal Hate Speech Online and from other countries' experience, implementing limited censorship to minimise the impact of extremist ideologies and the spread of hate speech online.
Tech companies that comply with the EU's 2016 Code of Conduct on Countering Illegal Hate Speech Online may be able to control their content better. When community guidelines are implemented, social media may become a safer place because users can report content that violates those guidelines. However, implementing this policy alone may not be enough to stop the spread of dangerous ideologies and hate speech: by merely "censoring and reporting," these tech giants apply a band-aid solution to a much larger and more complex problem. Governments and social media companies need to implement further policies to regulate hate speech and its spread online, raise awareness of media education, and promote internet literacy and critical thinking.
The vision of an "open" platform with a genuine notion of community and democracy is a fantasy (Gillespie, 2018). A framework of "Five Actions for the Way Forward" for companies and states argues that the focus should be on improving the content review process rather than adding content-specific restrictions. Platforms can and must do a better job of self-regulation, while government should play a role in steering companies in the right direction. Civil society and experts should be involved in the design and evaluation of regulations, and users should be able to appeal any review decisions they feel are unjust. Companies should also be more transparent about how they review content: Gorwa (2019) suggests that the content review process should be made visible enough for users and experts to understand the governance model they are subject to. Finally, government-imposed laws should be transparent, necessary, proportionate, and non-discriminatory. The proper shape of the Internet environment is still being explored and requires mutual discipline and balance among users, digital platforms, and government agencies.
References
Flew, T. (2019, April 26). Guarding the gatekeepers: Trust, truth and digital platforms. Griffith Review. https://www.griffithreview.com/articles/guarding-gatekeepers-trust-truth-digital-platforms/
Racial Discrimination Act 1975 (Cth). Federal Register of Legislation. https://www.legislation.gov.au/Details/C2016C00089
GDPR.eu. (2018, November 7). What is GDPR, the EU's new data protection law? https://gdpr.eu/what-is-gdpr/
Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.
Gillespie, T. (2017). Governance by and through platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media. SAGE.
Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Heldt, A., & Dreyer, S. (2021). Competent third parties and content moderation on platforms: Potentials of independent decision-making bodies from A governance structure perspective. Journal of Information Policy, 11, 266–300. https://doi.org/10.5325/jinfopoli.11.2021.0266
Napoli, P. M. (2018). What if more speech is no longer the solution? First Amendment theory meets fake news and the filter bubble. Federal Communications Law Journal, 70(1), 55+. https://link.gale.com/apps/doc/A539774158/AONE?u=usyd&sid=bookmark-AONE&xid=1f0a3a5c
O'Hara, K., & Hall, W. (2021). Four internets: Data, geopolitics, and the governance of cyberspace. Oxford University Press. http://dx.doi.org/10.1093/oso/9780197523681.001.0001
UN Human Rights Committee. (2011). General comment No. 34: Article 19, freedoms of opinion and expression (CCPR/C/GC/34). OHCHR. https://www.ohchr.org/en/treaty-bodies/ccpr
Wong, J. C. (2016, September 9). Mark Zuckerberg accused of abusing power after Facebook deletes “napalm girl” post. The Guardian. https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war