Moderating problematic content – who should be in charge?

“Social Media.” by Lauren Coleman is licensed under CC BY 2.0.


Bullying, hate speech, pornography, and other harmful content have circulated online, often with ease, since the internet's inception, and the problem has only grown with its exponential expansion. However, whose role is it to filter this content and stop its spread? Numerous groups may be involved in this process, including governments, technology companies and their social media platforms, and individual internet users. This multi-stakeholder approach allows varying methods of control to be combined, forming the most effective protection against problematic online content.

Why should we moderate content?

The online environment fosters the rapid dissemination of information and easy access to content, both anonymously and publicly. Posting is made effortless by the design of social media platforms, where a simple 'post' button allows anyone to share online. This can provide an easy environment for sharing hatred towards minority groups, individualised attacks, and disturbing footage, which may be viewed by thousands of people within minutes. A defining instance in the growth of problematic online content was the sharing by ISIS of the execution video of James Foley, an American journalist. The video spread rapidly through Twitter, and numerous measures were employed to slow its dissemination. This was a turning point in the conversation about the need to regulate harmful content online at a faster pace (Kang, 2014). Although this example concerns violent content, it effectively highlights the need for moderation of all problematic content.


Government regulation

Since the internet's inception, governments have sought control over its capabilities and ever-growing usage. Their motivations are driven by a need to remain in control and to protect citizens from viewing harmful content. Some more autocratic governments wish to limit content to ensure no political dissent or hate speech can spread. In democratic countries, government regulation is often minimised to allow the internet to remain largely independent (Topornin et al., 2021). However, governments' role in moderating the information their citizens can access is constrained by the global nature of online platforms, whose content spreads without national security checks (Gorwa, 2019). Governments can nevertheless attempt to moderate the spread of problematic content through laws and initiatives, and by cooperating with technology companies (Busch et al., 2018). Previously, governments could assert "command and control" regulation, threatening financial and legal penalties if technology companies did not comply with regulations (Gorwa, 2019). With the growing power of these companies, however, governments can no longer use these tactics as effectively. Even so, Australia has implemented several laws, such as the Online Safety Act 2021, which allows the eSafety Commissioner to order the removal of inappropriate and problematic content across a range of social media platforms (Campbell, 2021).

Getting specific – The Chinese Communist Party

China is infamous for its restrictive internet and regulation of harmful content. The 'Great Firewall' is a combination of legislation and proxies filtering content before it reaches Chinese citizens. Although it limits their freedom to access social media, it targets content deemed harmful to 'moral norms', such as violence or political dissent (Taneja & Wu, 2014). It restricts access not only to social media sites such as YouTube and Facebook, but also to search engines like Google and to domestic websites spreading harmful content. Albeit an extreme route, it prevents the vast majority of individuals from accessing such harmful content (Taneja & Wu, 2014). However, the power of this governmental control is limited, as VPNs and proxies can be used to reach blocked sites and their content.
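The blocklist-style filtering described above can be sketched in a few lines: a request's domain is checked against a blocklist before being forwarded. This is a minimal illustration only; the domain list and matching logic are assumptions for the sketch, not the Great Firewall's actual mechanism.

```python
# Illustrative sketch of domain-blocklist filtering (hypothetical logic,
# not the firewall's real implementation).
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"youtube.com", "facebook.com", "google.com"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://www.youtube.com/watch?v=abc"))  # → True
print(is_blocked("https://example.cn/news"))              # → False
```

As the section notes, checks at this level are easily circumvented: a VPN tunnels traffic so the blocked hostname never reaches the filter, which is why such controls are only partially effective.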

Self-regulation: internet companies

Internet companies play a key role in moderating problematic content online via self-regulation. Social media platforms are highly influential over the content shared and disseminated online. Since their establishment, many of these platforms have faced numerous public scandals arising from indecent content that their monitoring measures failed to catch (Flew, 2019). All major social media platforms have inbuilt regulation measures used to filter and pre-empt harmful content (Cusumano et al., 2021). Zuckerberg, the CEO of Facebook, indicated at a US Congress hearing that he personally believed in the regulation of social media sites, stating that "platforms should be required to demonstrate they have systems in place for identifying unlawful content and removing it" (Watney, 2022, p. 198). This opinion is valuable because it shows that the architects of these large technology companies recognise both the weight of their power in disseminating harmful content and their power to control it. Platforms also voluntarily moderate problematic content for a variety of self-advantageous reasons, including their desire to remain independent from government intervention and to avoid the public controversies that often stem from the release of objectionable content (Ozimek & Förster, 2021). The inclusion of self-guided methods for users to report and flag content is also important, allowing the users the platform was designed for to have a say in what content they deem acceptable. Examples include the report buttons on YouTube and Instagram, and Facebook's 'Community Standards'.
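The user report-and-flag mechanism described above can be sketched as a simple threshold rule: once enough users report a post, it is pulled from public view pending human review. The class, field names, and threshold below are hypothetical illustrations, not any platform's actual implementation.

```python
# Hypothetical sketch of a user "report/flag" mechanism. The threshold
# value and data model are illustrative assumptions.
from dataclasses import dataclass, field

REPORT_THRESHOLD = 3  # assumed cut-off before a post is escalated

@dataclass
class Post:
    post_id: str
    content: str
    reports: list = field(default_factory=list)  # reasons given by users
    hidden: bool = False

def report(post: Post, reason: str) -> None:
    """Record a user report; hide the post once the threshold is reached."""
    post.reports.append(reason)
    if len(post.reports) >= REPORT_THRESHOLD and not post.hidden:
        post.hidden = True  # removed from public view pending human review

post = Post("p1", "example content")
for reason in ["hate speech", "harassment", "hate speech"]:
    report(post, reason)
print(post.hidden)  # → True
```

The design choice worth noting is that flagging merely escalates: the threshold hides content provisionally, and a human moderator (or appeals body) still makes the final call, as the following sections describe.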

“INTERNET” by lecasio is licensed under CC BY-NC-ND 2.0.
Logan Paul incident

The need for social media platforms to self-regulate is exemplified by the 2018 incident involving YouTuber Logan Paul, who filmed the body of a man who had recently died by suicide in Aokigahara, a sacred Japanese forest. The video reportedly passed YouTube's review and reached the platform's top 10 trending videos. Over 6.5 million users saw it before it was taken down, demonstrating the rapid dissemination of content online and the need for more thorough screening ("YouTube punishes Logan Paul", 2018).

Commercial content moderation

A large part of self-regulation for online platforms is commercial content moderation, alongside other entities providing guidelines for acceptable online content. Commercial content moderation is the practice of screening content before, or soon after, it is published on a given platform (Roberts, 2019). This screening results from users or external parties 'flagging' disturbing content. The moderators are people trained to follow set criteria, such as the intent or nature of the content, to approve it or remove it from the platform (Roberts, 2019). This method is reliable for obviously negative content; however, mainstream platforms receive such a sizeable volume of content daily that intensive screening of every item is impossible, so much content passes without significant checks. Hence, when content is problematic, it has most likely already been displayed to the public before its removal (Roberts, 2019). Human intervention also brings the challenge of the trauma and emotional labour of reviewing harmful content, which can trigger secondary trauma and eventual burnout (Steiger et al., 2021). Consequently, this method of moderation may be ethically unsustainable, even though human judgement remains a practical way to target such a broad issue.
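The review step described above, where a moderator applies set criteria such as the intent and nature of flagged content, can be sketched as a simple decision function over a queue. The category labels, intent values, and rule below are illustrative assumptions, not any platform's real criteria.

```python
# Hypothetical sketch of a commercial content moderation queue: flagged
# items wait for review, and a criteria-based rule (standing in for a
# trained human moderator) approves or removes each one.
from collections import deque

BANNED_NATURES = {"violence", "hate", "sexual"}        # assumed category list
PERMITTED_INTENTS = {"news", "awareness"}              # assumed exemptions

def review(item: dict) -> str:
    """Remove if the content's nature is banned and its stated intent
    is not an exempt purpose (e.g. newsworthiness or awareness-raising)."""
    if item["nature"] in BANNED_NATURES and item["intent"] not in PERMITTED_INTENTS:
        return "remove"
    return "approve"

queue = deque([
    {"id": 1, "nature": "violence", "intent": "shock"},
    {"id": 2, "nature": "sexual", "intent": "awareness"},  # cf. the awareness-post case below
])

decisions = {item["id"]: review(item) for item in queue}
print(decisions)  # → {1: 'remove', 2: 'approve'}
```

The sketch also shows why context matters: the same nature of content yields different outcomes depending on intent, which is exactly the kind of judgement that makes human review hard to automate away.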

“Social Media Keyboard” by Shahid Abdullah is licensed under CC0 1.0.

Independent entities

The existence of independent entities is vital in providing a separate standpoint and advice that social media and other online platforms can follow for content moderation. The Facebook Oversight Board is a body of individuals who independently review flagged content whose place online is deemed controversial. The Oversight Board publishes its decisions publicly, giving reasoning for each. A specific example is the Board's decision to overturn Facebook's deletion of a Breast Cancer Awareness post that displayed a female nipple (Oversight Board overturns original decision, 2021). Although such content would usually be removed, the post was made to raise breast cancer awareness and was not sexual in context. This demonstrates the importance of the Board moderating content for the public benefit, showing the positive side of revisiting content formerly seen as problematic.


Conclusion

It is clear that a multi-stakeholder approach is needed to combat the spread of problematic content across various platforms. Through this approach, legislative measures, inbuilt design features, and human intervention together provide an extensive methodology for screening and removing problematic content online.

References

Busch, A., Theiner, P., & Breindl, Y. (2018). Internet censorship in liberal democracies: Learning from autocracies? In J. Schwanholz, T. Graham & P. Stoll (Eds.), Managing Democracy in the Digital Age (1st ed., pp. 11-29). Springer. Retrieved 8 October 2022, from https://doi.org/10.1007/978-3-319-61708-4

BuzzFeed [@BuzzFeed]. (2018, January 3). YouTube responded to Logan Paul's dead body video controversy and people aren't here for it [Tweet]. Twitter.

Campbell, M. (2021). Tightening the law around online content: Introduction of the Online Safety Act 2021 (Cth) – Security – Australia. Mondaq. Retrieved 13 October 2022.

Economy, E. (2022). The great firewall of China: Xi Jinping's internet shutdown. The Guardian. Retrieved 14 October 2022.

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2).

Kang, J. (2014). Should Twitter have taken down the James Foley video? The New Yorker. Retrieved 13 October 2022.

Oversight Board overturns original Facebook decision: Case 2020-004-IG-UA. (2021). Oversight Board. Retrieved 13 October 2022.

Ozimek, P., & Förster, J. (2021). The social online-self-regulation-theory. Journal of Media Psychology, 33(4), 181-190.

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33-73). Yale University Press. Retrieved 13 October 2022.

Steiger, M., Bharucha, T., Venkatagiri, S., Riedl, M., & Lease, M. (2021). The psychological well-being of content moderators. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 8-13.

Taneja, H., & Wu, A. (2014). Does the Great Firewall really isolate the Chinese? Integrating access blockage with cultural factors to explain web user behavior. The Information Society, 30(5), 297-309.

Topornin, N., Pyatkina, D., & Bokov, Y. (2021). Government regulation of the Internet as instrument of digital protectionism in case of developing countries. Journal of Information Science, 1(1).

Watney, M. (2022). Regulation of social media intermediary liability for illegal and harmful content. European Conference on Social Media, 9(1), 194-201.

YouTube punishes Logan Paul over Japan suicide video. (2018). BBC News. Retrieved 4 October 2022.