
1. Online abuse: Digital platforms and problematic content
The advent of social media has made it simpler than ever to keep in touch with loved ones, make new friends, and express one’s opinions and ideas to the world at large. Unfortunately, abusers also make extensive use of social media. Victims of internet abuse can be of any age, gender, sexual orientation, socioeconomic status, race, or culture.
Cyberbullying
Cyberbullying is bullying carried out with digital technology: repeated behaviour on social media, instant messaging platforms, gaming platforms, mobile phones, and similar channels, intended to intimidate, anger, or humiliate others.

For example:
- Spreading false statements about someone or posting embarrassing photos of them on social media.
- Sending hurtful or threatening messages through instant messaging platforms.
- Impersonating someone and sending malicious messages in their name.
Cyber violence can involve various forms of harassment, invasion of privacy, sexual abuse and exploitation, and bias crimes against social groups or communities. It causes, promotes, or threatens violence against individuals, resulting in physical, sexual, psychological, or economic harm or suffering.
Internet hate speech
Hate speech is expression that advocates or incites harm against a target on the basis of the target’s social and/or demographic characteristics, encompassing discrimination, hostility, radicalisation, and verbal and/or physical violence. It may include, but is not limited to, statements that advocate, threaten, or encourage violent acts.

Content censorship and internet governance
Over the last decade, the focus has shifted from debates about the Internet itself to debates about content censorship, data flows, and competition, moving governance ‘of the internet’ towards governance ‘on the internet’ and adding new political tensions (de La Chapelle and Fehlinger, 2020). When platforms are called upon to intervene in the content and behaviour that take place in their digital spaces, important questions of value, public interest, and public policy must be part of the discussion.
2. Internet platform regulation challenges
Problematic content is difficult for internet platforms to manage, largely because of the positions of those who publish content and those who steward it. From the publishers’ point of view, free speech on the internet has given rise to a growing volume of problematic content, so much of the discussion of content censorship and online harm has centred on freedom of speech and expression. First, Western countries have long defended the sovereign right of free individuals to hold opinions, even unpopular ones (individuals); second, a free media is essential to holding those in power to account (media); and third, Habermas suggests that truth and knowledge can be gained through the free exchange of differing opinions in the public sphere (platforms).
As for the stewards of content (governments, laws, state institutions), countries’ differing political and philosophical traditions shape the environment in which problematic content arises and is regulated. The Declaration of the Rights of Man and of the Citizen in France, for example, proclaims citizens’ freedom of expression while also stressing that citizens are responsible for their own speech. The First Amendment to the United States Constitution, by contrast, emphasises the absolute freedom and rights of citizens: neither the government nor Congress has the right to interfere with civil liberties.
3. Who should stop it? How to stop it?
Internet governance is constrained by the freedom of expression of content publishers and the open nature of the public domain, while the cross-border character of problematic content means that no single national actor can stop its dissemination alone. Multistakeholder governance, involving civil society, the private sector, academia, and the technical community, is therefore currently the most workable approach to problematic content.
A multistakeholder model of Internet regulation has emerged in the form of online content regulation (Douek, 2021). The platform governance triangle presented by Gorwa (2019) identifies the main parties responsible for internet governance – the state, NGOs, and firms – and each region operates a unique or hybrid regulatory framework.
Two regulation types are most common: self-regulation and co-regulation. Self-regulation aims to strengthen an actor’s self-governance, i.e., its independence from, and accountability to, a higher authority. The platformisation of the Internet has increased the importance of content moderation: platforms censor content on legal and ethical grounds, but also to improve the overall user experience (Gorwa, 2019).
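To make “content moderation” concrete, the short Python sketch below shows the shape of an automated moderation check. It is a minimal illustration under stated assumptions: the FLAGGED_TERMS scores, the REVIEW_THRESHOLD, and the moderate() function are hypothetical, not any platform’s actual policy or API, and real platforms layer machine-learning classifiers, human review queues, and appeal processes on top of rules like these.

# Minimal, hypothetical sketch of rule-based content moderation.
# FLAGGED_TERMS, REVIEW_THRESHOLD, and the actions are illustrative
# assumptions, not any platform's actual policy.
from dataclasses import dataclass

FLAGGED_TERMS = {"threat_of_violence": 1.0, "slur": 0.9, "harassment": 0.6}
REVIEW_THRESHOLD = 0.5  # assumed cut-off for escalating to human review

@dataclass
class ModerationDecision:
    post_id: str
    score: float
    action: str  # "allow", "human_review", or "remove"

def moderate(post_id: str, labels: list[str]) -> ModerationDecision:
    """Score a post from upstream classifier labels and choose an action."""
    # Take the most severe label attached to the post.
    score = max((FLAGGED_TERMS.get(label, 0.0) for label in labels), default=0.0)
    if score >= 1.0:
        action = "remove"        # clear policy violation
    elif score >= REVIEW_THRESHOLD:
        action = "human_review"  # borderline case: escalate to a moderator
    else:
        action = "allow"
    return ModerationDecision(post_id, score, action)

print(moderate("post-123", ["harassment"]))
# ModerationDecision(post_id='post-123', score=0.6, action='human_review')

Even this toy version shows where the governance questions enter: someone must decide which labels are flagged, where the threshold sits, and who reviews the borderline cases.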
Facebook, too, strikes a delicate balance between free speech and content moderation. On the one hand, Facebook established its Community Standards with the goal of creating a space for expression and giving people a voice (Gillespie, 2018); on the other, those same Community Standards spell out exactly what is and is not permitted on the Facebook app. Facebook has also formed an Oversight Board (FOB) of up to 40 prominent, independent international members to review selected content rulings, such as the decision to ban former US President Donald Trump from the network. That decision also prompted Trump to sue the platforms.
Video: “Trump suing Facebook, Twitter, and Google over right-wing censorship claims” (Global News).
Co-regulation: this style of regulation attempts to integrate the ‘best’ capabilities of all three actors, with the goal of strengthening shared governance among stakeholders. Accountability arrangements vary, but in most circumstances they take the form of regular transparency reports, audits, and consequences when no action follows notification (Gorwa, 2019).
In 2008, the Social Networking Task Force convened a meeting focused on multi-stakeholder collaborative governance. Regulators, safety organisations, and social networking sites including Facebook were involved in formulating regulatory policy to address online regulatory issues relating to child welfare, education, and government policy; this process led to the “Safer Social Networking Principles for the EU” (Livingstone et al., 2013).
In 2010, the “Clean IT” project was established to develop “general principles and best practices” for combating terrorist content and “other illegal uses of the internet through a bottom-up process where the private sector will be in the lead”. Its members agreed to a “Code of Conduct on illegal online hate speech”, which places “privatised enforcement” obligations on firms to promptly remove terrorist material and other problematic content (Coche, 2018).
4. Critical thinking on the multi-stakeholder model
The Internet economy pushes content producers to create whatever content maximises commercial returns, and the reach of digital platforms means that problematic content such as cyberbullying, violent content, and hate speech challenges Internet regulation. This is also the outcome of the contest between free speech and content moderation, and the continued development of Internet technology makes regulation all the more necessary. The stakeholder engagement approach is used in many fields and has become an internationally recognised norm, and many people speak of the “multistakeholder model” as if it were a single solution. But no single model works everywhere or for every problem. Although the governance triangle is a useful paradigm for identifying stakeholders, some stakeholders are difficult to place in a single category, and grouping them on a merely “spatial” basis is reductive.
Papaevangelou (2021) proposes an “expansion of the platform governance triangle”, which can serve as a baseline model for further multistakeholder internet governance.
Stakeholder participation in decision-making is responsible, sustainable, and effective. As the Internet continues to evolve, and with it the digital economies and societies that depend on it, the multistakeholder approach must continually adapt to new challenges.
Reference list
de La Chapelle, B., & Fehlinger, P. (2020). Jurisdiction on the Internet: From legal arms race to transnational cooperation. In Oxford Handbook of Online Intermediary Liability. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198837138.013.38
Coche, E. (2018). Privatised enforcement and the right to freedom of expression in a world confronted with terrorism propaganda online. Internet Policy Review, 7(4). https://doi.org/10.14763/2018.4.1382
Douek, E. (2021). Governing online speech: From “posts-as-trumps” to proportionality and probability. Columbia Law Review, 121(3), 759–834.
Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. In SocArXiv. https://doi.org/10.31235/osf.io/tgnrj
Livingstone, S., Ólafsson, K., & Staksrud, E. (2013). Risky social networking practices among “underage” users: Lessons for evidence-based policy. Journal of Computer-Mediated Communication, 18(3), 303–320. https://doi.org/10.1111/jcc4.12012
Papaevangelou, C. (2021). The existential stakes of platform governance: a critical literature review. Open Research Europe, 1, 31. https://doi.org/10.12688/openreseurope.13358.2