Who should be responsible for stopping the spread of bullying, harassment, violent content, hate, pornography, and other problematic content on digital platforms, and how?

“Instagram and other Social Media Apps” by Jason A. Howie is licensed under CC BY 2.0.


The continuous development of the Internet has made it ever easier for people to access all kinds of information. At the same time, a huge amount of harmful and negative content, such as hate speech, pornography, and harassing messages, has spread through the Internet and its platforms. These messages may be politically misleading or provocative, and violent or sexual content can damage the mental health of minors (Schlesinger, 2020). More frequently, there are ongoing incidents of cyberbullying, assault, and harassment that can happen to any user. These phenomena need to be managed and controlled.


One of the most prominent forms of online abuse is cyberbullying: the repeated and deliberate use of hurtful words and behaviour, by a person or group using technology, to cause someone distress or harm. As of June 2010, Internet usage in Australia was highest among 14–17-year-olds, with 91% going online every week, and it is estimated that at least one in ten Australian students has been a victim of cyberbullying (Cotter et al., 2022).

The Australian media has increasingly reported on problems related to Internet censorship, including the monitoring of online child pornography networks, demands to take down racist websites, court orders to remove Facebook hate pages targeting alleged criminals, and calls to regulate bullying and abuse. Discrimination that occurs “offline” in everyday life can also occur “online” (Cotter et al., 2022). The Commission’s legal responsibility for discrimination and human rights protection requires it to focus on Internet-related behaviours such as cyberbullying, cyber racism, sexual harassment, sexism, and homophobia.

“cyberbullying” by paul.Klintworth is licensed under CC BY-NC 2.0.

Online Racism   

There are many examples of online racism, from individual racist Facebook posts to group pages created specifically for racist purposes. One example reported in the media is a Facebook page consisting of various Aboriginal images with racist captions. Facebook categorized the page as “controversial humor,” even though the page described the entire Aboriginal population as “gasoline-sniffing and welfare-dodging inferior alcoholics.” The page’s creators eventually removed the content, but Facebook did not remove the page itself, which remained listed as “controversial humor.”

“Facebook Screenshot” by codemastersnake is licensed under CC BY 2.0.

Who should regulate extremist content online?   

As liberal democracies struggle to cope with the evolution of political extremism online, social media companies, Internet infrastructure firms, and governments increasingly find themselves having to decide who can use their platforms and what people can say online (Schlesinger, 2020). This raises the question explored in this paper: who should control online content involving bullying, hate, and harassment?

Regulation and self-regulation   

The delineation of responsibilities between government regulation and self-regulation differs from country to country, depending on the local legal framework and regulatory bodies. At the heart of content regulation is a balance of rights: protecting users from extremist content while preserving freedom of expression and freedom of public communication. These decisions profoundly affect public communication, public safety, freedom of expression, and social norms. In most nations, however, a regulatory framework defines what content is unlawful and should be removed from the web, such as content from banned terrorist organizations, hate speech, or child pornography. These laws are the foundation on which social media companies censor content (Schlesinger, 2020). As a result, these companies have established large institutions to monitor and regulate online speech on their platforms.

Government Solutions  

In the context of digital platforms, governments must play a key role and enact regulations that encourage Internet users to pay more attention to the quality of online content. Such practices can conflict with freedom of expression, and freedom of expression can in turn conflict with the public interest; government regulation of content alone does not resolve the imbalance between the rights of digital users and the public interest (DeNardis & Hackl, 2015). Governments, as policymakers and enforcers, have a duty to regulate illegal content on digital platforms. Because digital media is easy to redistribute, illegal content can be shared by multiple users across different online platforms, which means that censoring content on only one platform cannot completely eliminate the distribution of illegal information (Cotter et al., 2022). Using the law as a framework for managing and censoring online content, with the government as regulator, can therefore help improve the overall governance of the online environment. In 2019, for example, the Australian government passed legislation requiring media companies to remove violent content from their online platforms.

What is the public function of social media? What tasks should it perform in the digital public sphere?   

Social media platforms exercise stewardship by regulating the speed and scope of content delivery, as well as by removing and reorganizing content. They offer new opportunities for citizens to communicate and interact directly, organizing them into online audiences. While the benefits are clear and sometimes seem utopian, pornography, obscenity, violence, illegality, abuse, and hate are increasing (All Platforms Moderate, 2018). Most user-uploaded content requires human intervention to filter properly, especially videos and images, and human moderators use a range of advanced cognitive and cultural skills to determine whether material is appropriate for a site or platform. Platforms need to be able to identify offensive users and take responsibility for content management and policing in order to avoid losing harassed users (DeNardis & Hackl, 2015). In short, social media plays multiple roles in the digital public sphere.

First of all, social media companies are important players in many different types of regulation. The platforms may not set out to shape public discourse, but they do shape it. Their role needs to be examined rather than portrayed as either omnipotent or merely instrumental. Individuals need to recognize platforms’ impact on public participation, and the complex dynamics of that impact, without overestimating their ability to control it; moderation and its operation should be opened up to scrutiny (All Platforms Moderate, 2018). Platforms need effective moderation both to protect users by removing what is offensive, despicable, or illegal, and to present their best face to new users, advertisers, partners, and the public.



This implies that social media platforms must moderate their content. The goal of regulation is not to achieve an illusory neutrality in content moderation, but to shape the sector’s organization and incentives to better meet public goals. Social media regulation is motivated by our assessment of the digital public sphere: social media companies are the most important players in the digital public sphere they have created. It is essential to keep that sphere vibrant and healthy, thereby promoting the fundamental goals of freedom of expression, political democracy, cultural democracy, and the development and dissemination of knowledge. Achieving these goals requires trusted intermediaries under appropriate regulation (All Platforms Moderate, 2018). Regulation should aim to make social media companies accountable in the digital public sphere. When regulating detrimental online content, governments and media companies should divide responsibilities and work together in a clear and appropriate manner (Gillespie, 2018). Governments should support self-regulation, rather than abandoning regulation altogether, so as to encourage innovation on media platforms.


References

Schlesinger, P. (2020). After the post-public sphere. Media, Culture & Society, 42(7–8), 1545–1563. https://doi.org/10.1177/0163443720948003

All Platforms Moderate. (2018). In T. Gillespie, Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029

Cotter, K., Kanthawala, S., & Foyle, K. (2022). Safe from “harm”: The governance of violence by platforms. Policy & Internet. https://doi.org/10.1002/poi3.290

DeNardis, L., & Hackl, A. M. (2015). Internet governance by social media platforms. Telecommunications Policy, 39(9), 761–770. https://doi.org/10.1016/j.telpol.2015.04.003

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029