

We enjoy the convenience the Internet brings, but harmful online content is a problem that must also be considered. Bullying, harassment, violent content, hate speech, pornography, and other problematic material circulate frequently on digital platforms, so online regulation should be taken seriously. This essay argues that governments and media platforms must share responsibility for regulating online content: governments should assist digital platforms and encourage media self-regulation rather than rely on direct government regulation. However, every country holds a different definition of and model for regulation, and it would be impossible, and unrepresentative, to cover them all. This essay therefore focuses on the Australian regulation model. Furthermore, as a media giant, Facebook is one of the most representative social media platforms, so an investigation of Facebook's self-regulation system is also included.
The essay then turns to the Christchurch terrorist attack of 2019, analysing the impact of this live-streamed violent content, exploring who was involved in regulating that content during and after the event, and examining what actions and methods were taken.
Who should take responsibility?
First, it helps to work out who should take responsibility for online content regulation by examining who actually takes action when harmful content appears. The Christchurch terrorist attack of 2019 illustrates regulation by both media platforms and government. The attack, which killed 51 men, women, and children worshipping at mosques, caused global shock and revulsion (BBC News, 2020). The attacker live-streamed his crime on Facebook, yet the video was not removed until New Zealand police alerted the company. Doubts were raised about Facebook's censorship; Facebook acknowledged that its imperfect monitoring system had been too slow to catch the violent content and pledged to improve its detection technology. Facebook's response shows that the digital platform was involved in stopping the spread of harmful content. The Christchurch attack is regarded as one of the most representative cases of violent content spreading on the Internet, and many countries responded, including Australia. Governments and tech giants came together at the Christchurch Call to Action Summit to discuss how to regulate social media promotion of terrorism and violent extremism (Willsher, 2019). The establishment of the Christchurch Call reflects national governments' responsibility to regulate harmful content on the Internet.
Furthermore, governments can manage vicious incidents by revising laws, issuing policies, and pressuring media companies to strengthen self-regulation. The Australian Government, for example, changed its laws after the incident. In 2019 the Criminal Code Act 1995 was amended so that hosts or service providers of violent Internet content can be held criminally liable: executives of social media giants could face up to three years in prison for failing to meet their obligations to stop the spread of violent material, and platforms could be fined 10% of annual turnover (Fingas, 2019). To put that in perspective, a platform turning over AU$1 billion a year could in principle face a AU$100 million fine. These changes put unprecedented pressure on media platforms. The Government thus takes responsibility for facilitating social media's self-regulation.

How to regulate harmful content? (Government)
Consider the Christchurch terrorist attack again and the responses of the Australian Government and Facebook. Firstly, according to Christian Porter, then Australia's Attorney-General, the Christchurch tragedy brought the issue of violent online content to a head (Griffiths, 2019). As mentioned, the Australian Government issued and amended relevant criminal law in response to the incident, so amending the criminal code is one method the Australian Government uses. The Parliamentary Library's briefing on the regulation of Australian online content describes the broader process: Parliament first conducts an inquiry, for instance through the Senate Environment and Communications References Committee, under the country's enabling legislation, collecting relevant evidence, facts, and materials; the results are then provided to Australian government commissioners, who undertake further research based on the report. For example, in November 2016 the Senate Environment and Communications References Committee reported on the harm done to Australian children through access to pornographic content on the Internet, recommending a dedicated study of the exposure of Australian children and young people to online pornography and other pornographic material (Biddington, 2022). Next, the Government sets up dedicated working groups to prepare proposals for the problems identified and to crack down on the corresponding harmful online content. The Australian Government has also established the eSafety Commissioner, who administers the online content scheme for user complaints (Biddington, 2022). In summary, the management measures taken by the Australian Government are:
- Parliament collects evidence on existing problems and submits it to government commissioners for further study.
- Working groups are established within the Government to prepare proposals.
- The eSafety Commissioner manages complaints from network users.
Media self-regulation (Facebook)
Media platforms also responded to the Christchurch terror incident. Take Facebook as an example: Facebook, Microsoft, Twitter, Google, and Amazon signed the Christchurch Call to Action and committed to a nine-point plan addressing the misuse of technology to disseminate terrorist and violent extremist content (Meta, 2019). The plan offers a glimpse of the kind of governance Facebook now applies. First, Facebook explicitly changed its terms of use to prohibit the spread of terrorist and violent extremist content. Second, it established a visible, easy-to-use reporting mechanism for users to report violations and accelerated the review and removal of reported content. Third, it strengthened investment in technology, such as artificial-intelligence audits, to improve online monitoring and removal of violent content. The primary web-content management scheme used by media platforms can therefore be summarized in three parts:
- Updated terms of use, including community standards, codes of conduct, and acceptable-use policies (users are notified in advance of what content may be deleted or censored, so that freedom of speech is not infringed)
- User reporting function
- High-end monitoring technology, such as artificial-intelligence monitoring (a simplified sketch of such a pipeline follows this list).
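To make the second and third points concrete, here is a minimal sketch, in Python, of how a moderation pipeline might combine user reports with an automated classifier score to reach a removal decision. This illustrates the general technique only, not Facebook's actual system: the `score_violence` stub, the thresholds, and the report-count trigger are all assumptions invented for the example.

```python
from dataclasses import dataclass

# Illustrative thresholds; a real platform would tune these empirically.
AUTO_REMOVE_SCORE = 0.90   # classifier confidence above which content is removed outright
REVIEW_SCORE = 0.50        # confidence above which content is queued for human review
REPORT_THRESHOLD = 3       # user reports needed to force a human review regardless of score

@dataclass
class Post:
    post_id: str
    text: str
    reports: int = 0  # number of user reports received so far

def score_violence(post: Post) -> float:
    """Stand-in for an ML model estimating the probability that a post
    contains violent extremist content. A real system would call a trained
    classifier here; this stub only flags two obvious phrases."""
    keywords = ("terror attack footage", "livestream shooting")
    return 0.95 if any(k in post.text.lower() for k in keywords) else 0.05

def moderate(post: Post) -> str:
    """Combine automated scoring with user reports, mirroring the two
    reporting channels listed above."""
    score = score_violence(post)
    if score >= AUTO_REMOVE_SCORE:
        return "remove"        # high-confidence violation: take down immediately
    if score >= REVIEW_SCORE or post.reports >= REPORT_THRESHOLD:
        return "human_review"  # uncertain, or repeatedly reported: escalate
    return "keep"

print(moderate(Post("p1", "Terror attack footage reposted here")))  # -> remove
print(moderate(Post("p2", "Holiday photos", reports=4)))            # -> human_review
print(moderate(Post("p3", "Holiday photos")))                       # -> keep
```

Even this toy version shows why the first point, the terms of use, matters: the automated rule is only as legitimate as the published policy defining what counts as a violation.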
Regulation problems

Firstly, consider the problems with social media censorship. Social media companies are reluctant to remove offensive posts or close accounts because of freedom of speech (Lowe, 2019), so freedom of speech is the biggest obstacle to social media censorship: it is hard to reduce harmful content while preserving free speech. Media companies therefore need government legislation to define illegal speech or content. In the case of the Christchurch attack, for example, the Australian Government made providing or disseminating violent content, such as extremist and terrorist material, a crime; Facebook could then change its content management policy and review and remove the relevant content.
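To illustrate that hand-off from legislation to platform policy, the hypothetical snippet below encodes prohibited-content categories as machine-readable policy entries, each recording the legal basis it implements. The category names, the dictionary structure, and the enforcement actions are all invented for this sketch; only the statute and the Christchurch Call are real.

```python
from enum import Enum

class ProhibitedCategory(Enum):
    """Hypothetical policy categories; a platform would derive these from
    the legal definitions of each jurisdiction in which it operates."""
    ABHORRENT_VIOLENT_MATERIAL = "abhorrent_violent_material"
    TERRORIST_PROMOTION = "terrorist_promotion"

# Each entry records the legal source it implements, so that a moderation
# decision can be traced back to a specific statute or commitment.
CONTENT_POLICY = {
    ProhibitedCategory.ABHORRENT_VIOLENT_MATERIAL: {
        "legal_basis": "Criminal Code Amendment (Sharing of Abhorrent "
                       "Violent Material) Act 2019 (Australia)",
        "action": "remove_and_report",  # remove content and notify authorities
    },
    ProhibitedCategory.TERRORIST_PROMOTION: {
        "legal_basis": "Christchurch Call to Action commitments",
        "action": "remove",
    },
}

def action_for(category: ProhibitedCategory) -> str:
    """Look up the enforcement action a moderator should apply."""
    return CONTENT_POLICY[category]["action"]

print(action_for(ProhibitedCategory.ABHORRENT_VIOLENT_MATERIAL))  # -> remove_and_report
```

Structuring policy this way makes the dependency explicit: when the law changes, the platform updates the policy table, and enforcement follows.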
Furthermore, there are also problems on the legislative side. Regarding the Australian Government's policy restricting content related to extremism and terrorism, this paper argues that its impact may be positive but is also potentially problematic. On the positive side, directly criminalizing the hosting of such content pressures media platforms to strengthen their supervision of extremist and terrorist material. However, the Australian legislation may touch on questions of democracy, because the liability it creates is very broad: the Law Council of Australia said the legislation could have serious unintended consequences and lead to censorship of the media, which is unacceptable (Griffiths, 2019). The Government should instead cooperate with the media and encourage self-regulation, as with the safe-harbor protections the United States created for media platforms. There, media platforms are treated as intermediaries that need not monitor what users say and do, because they are seen as providers of the Internet or other web services; the aim is to let media platforms stay out of the way, encouraging self-regulation (Gillespie, 2018) as well as technological innovation.
Conclusion
In conclusion, to supervise harmful content on the Internet, government and media should regulate in a spirit of clearly shared responsibility and cooperation. Rather than relying on complete government regulation, the government should assist media platforms in self-regulating, which also encourages technological innovation. Protecting democracy and freedom of expression while regulating online content remains, unavoidably, a delicate balancing act.
word count: 1390
References:
Gillespie, T. (2018). Governance of and by platforms. In The SAGE handbook of social media (pp. 254–278). SAGE.
Biddington, M. (n.d.). Regulation of Australian online content: Cybersafety and harm. Parliament of Australia. Retrieved October 11, 2022, from https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BriefingBook46p/Cybersafety
Sabbagh, D. (2021, October 29). Facebook trained its AI to block violent live streams after Christchurch attacks. The Guardian. https://www.theguardian.com/technology/2021/oct/29/facebook-trained-its-ai-to-block-violent-live-streams-after-christchurch-attacks
Lowe, D. (2019, April 21). The Christchurch terrorist attack, the far right, and social media: What can we learn? The New Jurist. https://newjurist.com/christchurch-terrorist-attack.html
Griffiths, J. (2019, April 4). Australia passes law to stop spread of violent content online after Christchurch massacre. CNN. https://edition.cnn.com/2019/04/04/australia/australia-violent-video-social-media-law-intl/index.html
Fingas, J. (2019, March 30). Australian bill could imprison social network execs over violent content. Engadget. https://www.engadget.com/2019-03-30-australia-laws-could-imprison-internet-execs.html
BBC News. (2020, August 27). Christchurch mosque attack: Brenton Tarrant sentenced to life without parole. https://www.bbc.com/news/world-asia-53919624
Meta. (2019, May 15). Facebook joins other tech companies to support the Christchurch Call to Action. https://about.fb.com/news/2019/05/christchurch-call-to-action/
Willsher, K. (2019, May 15). Leaders and tech firms pledge to tackle extremist violence online. The Guardian. https://www.theguardian.com/world/2019/may/15/jacinda-ardern-emmanuel-macron-christchurch-call-summit-extremist-violence-online
Abdo, B. A. (n.d.). A safe harbor for platform research. Knight First Amendment Institute. Retrieved October 12, 2022, from https://knightcolumbia.org/content/a-safe-harbor-for-platform-research