Internet content governance: Who should take responsibility?

“Social Media Logos” by BrickinNick is licensed under CC BY-NC 2.0.

Regulating illicit content on digital media has been a central issue of media policy ever since the rise of the internet. The global and interactive nature of the internet allows online content to break through the limitations of space and spread widely in a short period. As online platforms become deeply embedded in various forms of public activity, they have also generated a number of potential harms to public interest and safety, such as cyberbullying, hate speech, violent content, and many other forms of illicit content. At the same time, the existing laws and regulations governing traditional mass media content services have proved inadequate for managing public speech and illicit content amid these technological changes (Gillespie, 2017, p. 255). Hence, as the volume of online content grows, effective strategies are needed to strengthen online management and improve the legal and regulatory system so as to minimize the negative impacts of such problematic content and behavior.


Questions about who should take responsibility and liability for constraining illicit content on platforms are amplified as such content spreads spontaneously. National governments are devoted to creating new laws, while online platforms are launching self-regulatory frameworks to restrict harmful media content (Flew et al., 2019). This article suggests that stakeholders across the online platform ecology are integral to governing the internet and stopping the dissemination of illicit content. To support this argument, it considers the corresponding obligations of platforms, national governments, and individual online users in governing the online platform environment.


Responsibility of governments

Considering the possible threats to the public good posed by harmful online behavior and content, governments, as policymakers and law enforcers, are responsible for regulating illicit content on digital platforms. Because of the shared nature of digital media communication, illicit content can be circulated by large groups of users across different platforms. This means that content moderation by individual platforms is insufficient to completely eradicate the spread of illicit information (Gorwa, 2019). Thus, having governments as regulators and laws as a regulatory framework to manage and censor online content sets a standard for the governance of the internet ecology as a whole (Helberger et al., 2018, p. 8). Recently, an increasing number of laws have been created and enforced to prevent the diffusion of illicit content by requiring platforms to remove or censor contentious material (Flew et al., 2019, p. 35). For example, in 2017 Germany passed a law against online hate speech: the Network Enforcement Act forced social media companies to quickly take down hate speech, fake news, and other related content from their platforms (BBC News, 2018). Similarly, the Australian government passed legislation, which came into effect in 2019, compelling media companies to remove violent content from online platforms (Karp, 2019). The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 provides harsh penalties of fines or imprisonment for any internet service provider who fails to report abhorrent violent material to the Australian Federal Police within a reasonable time after becoming aware of it, or fails to ensure that the content is promptly removed (Australian Government Attorney-General’s Department, 2019).
Moreover, government regulation of illicit online content is also needed because platform companies are mostly driven by financial interests, and therefore cannot be fully trusted with a self-regulatory framework (Helberger et al., 2018, p. 8). In other words, platforms do not govern themselves primarily to safeguard the public interest and public safety. From this perspective, the responsibility for governing illegal content on the internet needs to be shared among stakeholders, creating a state of mutual restraint and maintaining a relative balance.

Responsibility of platforms

Furthermore, internet companies themselves are another important constituent of the shared responsibility to prevent the circulation of illegal content and behavior. Platforms tend to claim that they merely host user-generated content, which places them in an ambiguous position with respect to governing that content. However, platforms are more than just intermediaries; they are constitutive of participatory culture (Gillespie, 2018). In response to legal measures from governments, as well as increasing public concern about the harm done by the diffusion of illicit content in public communication, corporations in the internet industry have committed to self-regulatory content moderation on their platforms (Flew et al., 2019). Moderation is a regulatory strategy, based on published policies, for removing or hiding illegal content from online platforms (Flew et al., 2019, p. 40). There are many instances of platforms conducting content moderation: in 2020, for example, Facebook removed a post by then US President Donald Trump containing a false claim about the coronavirus, which was considered harmful misinformation. The same false claim led to Trump’s content being removed from Twitter for violating its content regulations (Wong, 2020). These removals are enforcement actions against the spread of illegal content on social media platforms in the context of platform self-regulation. Without such regulatory methods, the spread of illegal content would not be effectively managed.


Responsibility of individual users

Finally, individuals should take a share of the responsibility for stopping the circulation of harmful content. Essentially, regulating illicit content means managing user behavior (Gillespie, 2018, p. 256). As both producers and consumers of online content, organizations and individuals have an obligation to report illegal information and behavior they find harmful (Helberger et al., 2018, p. 2). Online harassment, for example, is strictly prohibited by platform policies as well as national laws, yet it is difficult for platforms to moderate. Instead, platform tools empower users to block and filter illicit messages themselves (O’Brien & Kayyali, 2015). On the internet, everyone is not simply a recipient of information but also a producer of content, both a potential beneficiary and a potential aggressor; therefore, every user has a responsibility to stop the spread of illicit content.



To sum up, involving the institutional sector of state governments, the technological sector of platforms, and the human sector of users is essential to shared responsibility for regulating contentious online content. However, it is important to acknowledge the difficulty of allocating this cooperative responsibility, since the stakeholders contribute in different ways within the internet ecology. From the government’s perspective, for instance, too much state regulatory interference can threaten the public’s right to freedom of expression (Flew et al., 2019, p. 42). Accordingly, it is important to foster a democratic, transparent, and inclusive process of public discourse on what constitutes harmful content and how to regulate it (Helberger et al., 2018, p. 8). On the other hand, the limitations of platform self-regulation must also be considered: platforms may exploit their technological and informational resources to act to the detriment of the public interest. These determinations strongly impact public safety, the nature of public communication, and freedom of speech, and should not be dictated solely by governments, online platforms, or individual users.


This work is licensed under Creative Commons Attribution-NonCommercial 4.0 International Public License.



Australian Government Attorney-General’s Department. (2019). Abhorrent Violent Material Act Fact Sheet.


BBC News. (2018, January 01). Germany starts enforcing hate speech law.


Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50.


Gillespie, T. (2017). Regulation of and by Platforms. In The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications.


Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14.


Karp, P. (2019, April 04). Australia passes social media law penalising platforms for violent content. The Guardian.


O’Brien, D., & Kayyali, D. (2015, January 08). Facing the Challenge of Online Harassment. Electronic Frontier Foundation.


TODAY. (2020, August 07). Facebook Removes Trump Campaign Post For Spreading Coronavirus Misinformation | TODAY [Video]. YouTube.