
Digital platforms breed problematic content
Media freedom, freedom of expression, and freedom of information have long been recognized as Internet governance principles. Internet governance is the development and application, by governments, the private sector, and civil society in their respective roles, of shared principles, norms, rules, decision-making procedures, and programs that shape the evolution and use of the Internet (de Bossey, 2005). The profit-seeking nature of the Internet economy also gives problematic content room to survive: platforms and many publishers weigh content primarily by its capital gains and commercial value. With the rise of Web 2.0, more individuals have adopted social media platforms to engage and communicate with one another. On these platforms, users may voice their thoughts and connect socially in either a dominant or participatory manner (Gillespie, 2019). Culturally influenced by the dominant ideology of free speech supremacy, many technology companies took a laissez-faire approach to extreme content (Leiner et al., 2009).

At the same time, much Internet content is politically motivated, which allows hate speech, partisan struggles, and terrorist propaganda to spread online and take on a transnational character. In this context, the transformation of the Internet has raised many social and cultural issues. The speed of the Internet has enabled richer cultural exchange and development, and many areas that lack educational resources can now access quality resources online. However, excessive “freedom of speech” and the right to anonymity carry risks: they allow digital platforms to propagate negative information that promotes bullying, harassment, violence, hate, pornography, and other problematic content.
Examples of problematic content
- Molly Russell’s death
- Donald J. Trump’s “Chinese virus” tweet
- ISIS video

In 2019, Ian Russell chose to go public with the story of his daughter Molly Russell’s death. When technicians unlocked Molly’s phone, they found that she had saved, liked, or shared 16,300 Instagram posts in the six months before her death, including 2,100 related to suicide, self-harm, and depression. When Molly followed an account, Twitter’s algorithms would have recommended similar ones: the more a user interacts with content on a particular subject, the more of that kind of content is suggested. In the end, the coroner ruled that she died from an act of self-harm while suffering from depression and the negative effects of online content (Crawford & Smith, 2022). The inquest pointed to the platforms’ flawed content policies as a major contributing factor, and it required representatives of the companies involved, Meta and Pinterest, to travel to the UK and give evidence under oath in person. Pinterest’s representative readily admitted the platform’s defects, while Meta’s spokesperson said the coroner’s report would be carefully considered in the future (Crawford & Smith, 2022).

In 2020, then-President Donald J. Trump tweeted about “the Chinese virus,” and the volume of coronavirus tweets that included anti-Asian hashtags rose. Research from UC San Francisco revealed that about 20% of the almost 500,000 hashtags accompanying #covid19 displayed anti-Asian prejudice, whereas 50% of the more than 775,000 hashtags accompanying #chinesevirus revealed anti-Asian bias (Kurtzman, 2021). The hashtag #chinesevirus was widely used, and those who used it were far more likely to combine it with explicitly racist hashtags.
Hate speech and news also fan the flames of conflict at the political level. In 2014, ISIS posted a video showing the beheading of a man it claimed was U.S. journalist James Foley, in response to American airstrikes in Iraq. The terrorists uploaded the footage to YouTube to incite hatred, and it provoked a strong backlash (CNN, 2014). YouTube’s response was to remove the video.
“ISIS video appears to show beheading of U.S. journalist” by CNN. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=rSpXadFAgmw
The state, firms, and NGOs should be responsible for stopping the spread of this content through government regulation, self-regulation, and firm-NGO co-regulation
Hate speech and false information flood the Internet. Uninformed bystanders lose sight of the real story, resulting in “false” political upheaval on a global scale. The public’s tendency to follow the herd compounds the problem, as those who spread hate speech online turn the mass media into a “weapon” (Snowden, 2022). Deleting and blocking posts that contain egregiously damaging information on Twitter and YouTube is difficult; even algorithms fail to detect all confrontational and violent posts because of the continual stream of damaging material being submitted. It is therefore difficult for a single state or actor to control and regulate such content, so multistakeholder governance is needed: states, firms, and NGOs should share responsibility for stopping its spread through government regulation, self-regulation, and firm-NGO co-regulation.
Government regulation
To develop regulatory norms for different circumstances, the government should take the lead in setting standards and carefully balancing the competing objectives of public rights, freedom of speech, juvenile protection, and anti-racism. The proliferation of hate speech can have a variety of harmful effects and potentially jeopardize social security in the broader online environment, so states should be responsible for stopping its spread through government regulation. For instance, under Germany’s Network Enforcement Act (NetzDG), which went into effect on January 1, 2018, online platforms are subject to fines of up to €50 million for persistently failing to remove unlawful content (The impact of the German NetzDG law, 2019).
Self-regulation
Because of the Internet’s immense volume, technological complexity, and other factors, direct government control is highly challenging (Trudel, 2009). At the same time, firms, which seek to expand their interests through corporate growth, find it challenging to monitor themselves impartially. Self-regulation refers to the internal regulatory frameworks inside media firms that direct how platforms monitor and control the actions of their users. According to Gillespie (2019, p. 6), platforms serve as intermediaries, translators of laws, arbitrators of conflicts, and enforcers of regulations; the obligation to moderate user material is therefore inevitable. Firms should be responsible for stopping the spread of this content through self-regulation. Self-regulation involves establishing guidelines and expectations: if users breach the rules, their accounts and any submitted information can be terminated or “disappeared” (the account is banned from access). Moreover, a rating system or a real-name policy might be implemented for problematic content. Through such measures, digital platforms can promptly address dangerous material, uphold consumer rights, and effectively enforce the rules. Firms maintain their codes through user complaints, keyword searches, user agreements, and content regulations. Platforms must also take account of different cultural backgrounds and adhere to national laws across various jurisdictions (Flew et al., 2019, p. 41). For instance, the Facebook Oversight Board (FOB) is made up of forty prominent, independent members from around the world and makes judgments about some Facebook content moderation decisions, such as whether to remove former US President Donald Trump from the network (Klonick, 2020). This demonstrates that industry self-regulation can be more important than direct government action.
Firm-NGO co-regulation
Around the world, non-governmental organizations (NGOs) and businesses co-regulate various functions. They also play an increasingly significant role in shaping policy by balancing political, commercial, and community motivations (“Non-Governmental Organizations (NGOs),” 2003). Even when a video or live broadcast is removed, false information and conspiracy theories continue to circulate. NGOs should therefore help prevent the spread of such content through co-regulation with firms. Under the Christchurch Call, online service providers have voluntarily committed to “take transparent, specific actions trying to prevent the upload of terrorist and violent extremist information and to prevent its spread on social media and similar content-sharing platforms” (Christchurch Call, n.d.). For example, the Internet Watch Foundation is a nonprofit organization whose more than 70 staff, working across various sectors, assist sexually abused children by locating and eliminating abusive videos disseminated online; its operations are financed by community donations (Internet Watch Foundation, n.d.).
Everyone is accountable for keeping a “healthy” digital platform
In conclusion, each regulatory approach has its limitations, and it is difficult for a single state or actor to control and regulate problematic content alone. Future regulation of problematic content should weigh the advantages and disadvantages of each approach under multistakeholder governance, in which the state, firms, and NGOs share responsibility through government regulation, self-regulation, and firm-NGO co-regulation.
Reference List
Christchurch Call. (n.d.). Retrieved October 14, 2022, from https://www.christchurchcall.com/
CNN. (2014). ISIS video appears to show beheading of U.S. journalist [Video]. YouTube. https://www.youtube.com/watch?v=rSpXadFAgmw
Crawford, A., & Smith, T. (2022). In her own words – Molly Russell’s secret Twitter account. BBC. https://www.bbc.com/news/uk-62892636
de Bossey, C. (2005). Report of the Working Group on Internet Governance.
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2019). All platforms moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Klonick, K. (2020). The Facebook Oversight Board: Creating an independent institution to adjudicate online free expression. Yale Law Journal, 129, 2418.
Kurtzman, L. (2021). Trump’s ‘Chinese virus’ tweet linked to rise of anti-Asian hashtags on Twitter. University of California San Francisco. https://www.ucsf.edu/news/2021/03/420081/trumps-chinese-virus-tweet-linked-rise-anti-asian-hashtags-twitter
Leiner, B. M., Cerf, V. G., Clark, D. D., Kahn, R. E., Kleinrock, L., Lynch, D. C., Postel, J., Roberts, L. G., & Wolff, S. (2009). A brief history of the Internet. Computer Communication Review, 39(5), 22–31. https://doi.org/10.1145/1629607.1629613
Non-Governmental Organizations (NGOs). (2003). In Introduction to the International Human Rights Regime (pp. 257–263). Brill | Nijhoff.
Snowden, C. (2022). Guns, tanks and Twitter: How Russia and Ukraine are using social media as the war drags on. The Conversation. https://theconversation.com/guns-tanks-and-twitter-how-russia-and-ukraine-are-using-social-media-as-the-war-drags-on-180131
The impact of the German NetzDG law. (2019, August 2). CEPS. https://www.ceps.eu/ceps-projects/the-impact-of-the-german-netzdg-law/
Trudel, P. (2009). Web 2.0 regulation: A risk management process. SSRN Electronic Journal, 7(243). https://doi.org/10.2139/ssrn.1573980