Halting the Circulation of Problematic Content on Digital Platforms


There is no doubt that social media has been infiltrated by harmful content, including violence, harassment, bullying, and hate speech. Digital platforms have activated various mechanisms and approaches to moderate content, protect users, and eliminate abuse from their sites (Jung et al., 2021). Nevertheless, because social media platforms are accessible globally, the social values and behavioural norms needed to regulate both the platforms and their content have become highly diverse, and achieving sufficient control over the enormous volume of content has become extremely challenging for stakeholders. For example, Facebook, among the largest social media platforms, has approximately 2.7 billion monthly active users worldwide. The platform supports 111 languages yet employs only about 15,000 moderators against billions of users (Jiang et al., 2021), and it relies on the Facebook Community Standards to govern moderation. Because users come from such diverse backgrounds, it is difficult for Facebook and other digital platforms to control the circulation of harmful content. I argue that halting the circulation of harmful content in digital spaces requires collaborative effort involving users, governments, and platform owners.
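To make the scale of this imbalance concrete, the figures cited above imply a striking moderator-to-user ratio. The following is illustrative arithmetic only, using the approximate values from the text rather than exact platform statistics:

```python
# Illustrative arithmetic using the approximate figures cited in the text
# (Jiang et al., 2021); these are not exact platform statistics.
monthly_active_users = 2_700_000_000  # ~2.7 billion monthly active users
human_moderators = 15_000             # ~15,000 moderators

users_per_moderator = monthly_active_users // human_moderators
print(users_per_moderator)  # 180000 -> each moderator covers ~180,000 users
```

On these figures, each moderator is notionally responsible for roughly 180,000 users, which helps explain why manual review alone cannot keep pace with user-generated content.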


Role of Platform Owner in Halting Problematic Content

Social media platforms, including YouTube, Facebook, Instagram, TikTok, and Twitter, are central to information sharing and are now integral to people’s social lives worldwide. Although they do not produce the shared content themselves, the platforms store, organize, and distribute massive amounts of user-generated content (Jiang et al., 2021). The platforms claim to act impartially and are reluctant to control speech, but most have incentives to moderate. Beyond complying with regulatory and policy requirements, they must avoid losing users to harmful content, maintain a positive corporate image, and act on internal ethical commitments. Chandrasekharan et al. (2017) established that moderation can work, improving behaviour even in diverse communities that had already turned toxic. Notably, the platforms regulate users through two principal instruments: community guidelines and terms of service. The terms of service form the legal contract binding users and platforms, specifying the obligations and liabilities of each. Scholars have also argued that policy work should be integrated into current research, on the view that policy is entangled with practice and design.

Community guidelines, by contrast, use plain language to explain how the platforms expect users to conduct themselves. Users are more likely to notice, read, and comprehend community guidelines when they need to consult the platform’s rules (Jiang et al., 2021). The guidelines spell out acceptable and unacceptable behaviour and expect users to engage within that framework. Illegal behaviour, such as child exploitation and human trafficking, is prohibited, as is non-illegal but harmful content such as harassment and commercial spam that lures users off the channels. Studies have demonstrated that such rules are critical in deterring the circulation of harmful content on the platforms (Jung et al., 2021). The guidelines also define what constitutes abusive behaviour, articulate norms specific to each platform, and help construct a governance model that smaller platforms can follow. Platforms with large user bases also refine their policies and rules in response to one another; for example, Twitter banned political ads shortly after Facebook declined to remove or fact-check them. These rules, particularly where platforms differ, demonstrate how various digital platforms position themselves as umpires of cultural values.

As these problems grow, the volume of content grows at an alarming rate, complicating any attempt to halt circulation and posing a serious challenge to platforms’ content moderators. Proactive moderation becomes infeasible at such scale; in other words, moderators cannot fact-check content before it reaches the platform. The only remaining option is reactive content moderation, in which content is published before it is reviewed (Roberts, 2019). The moderator then removes unacceptable content after users have reported it and the facts have been checked. In this reactive scenario, malicious content harms online users for however long it remains before a moderator filters it. Good ex-post moderation requires platforms to deploy large numbers of people. Facebook employs approximately 15,000 permanent moderators across the globe whose main job is to filter or remove harmful content, yet they cannot review the enormous traffic generated by billions of users (Jiang et al., 2021). Harmful content is therefore likely to remain unseen for many days, since moderators lack the capacity to manage such volume. Further, lexical variation allows malicious users to circumvent moderation, complicating moderators’ work. The platforms have also recently deployed algorithms to filter harmful content, but algorithmic moderation still has limited accuracy in identifying abusive content. Much therefore remains to be done, and refining these algorithms may offer the most viable solution.
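The lexical-variation problem described above can be sketched in a few lines. This is a hedged, minimal illustration, not how any real platform's filter works: `badword` is a placeholder for any blocklisted term, and production systems are far more sophisticated than string matching.

```python
import re

# Minimal sketch of why lexical variation defeats naive keyword filtering.
# "badword" is a hypothetical placeholder for a blocklisted term.
BLOCKLIST = {"badword"}

def naive_filter(text: str) -> bool:
    """Flag text only when a whole token matches the blocklist exactly."""
    return any(token in BLOCKLIST for token in text.lower().split())

def normalised_filter(text: str) -> bool:
    """Strip punctuation and collapse repeated letters before matching,
    catching simple obfuscations such as 'b.a.d.w.o.r.d' or 'baaadword'."""
    cleaned = re.sub(r"[^a-z]", "", text.lower())   # drop punctuation/spacing
    cleaned = re.sub(r"(.)\1+", r"\1", cleaned)     # collapse letter repeats
    return any(term in cleaned for term in BLOCKLIST)

print(naive_filter("this contains badword"))             # True: exact match
print(naive_filter("this contains b.a.d.w.o.r.d"))       # False: variant evades it
print(normalised_filter("this contains b.a.d.w.o.r.d"))  # True after normalising
```

Note that the normalised version trades one failure mode for another: substring matching over cleaned text will also flag innocent words that happen to contain a blocklisted term, which illustrates the limited accuracy of algorithmic moderation that the text describes.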

The Role of Governments and Their Actions

Digital platforms are now used globally, meaning that many local laws intersect in the regulation of online interaction and of the platforms themselves. For instance, the European Union has stipulated rules governing different categories of hate speech. In some jurisdictions, platforms need only commit to their own rules for filtering harmful content; in others, such as Germany and Australia, removing abusive content is a legal requirement (Jiang et al., 2021). Another challenge is the diversity of online users, who come from distinct backgrounds with distinct values and norms. It is quite challenging for platforms that rely on a single set of rules to regulate the behaviour of users from different cultural backgrounds, and some platform decisions have sparked global controversy. For instance, Facebook’s decision to exempt politicians from fact-checking when they shared false information proved controversial. Arguably, unified rules are required if digital institutions are to control harmful content on their platforms.

Government can play a critical role in regulating online platforms. Despite platforms’ efforts to reduce the circulation of harmful content, the challenge persists because content removed from one channel can simply be reposted on another; an ecological perspective is therefore necessary. Government should assist in regulating online content by creating a framework that shares responsibility among core societal stakeholders (Helberger et al., 2017). The United States and European nations have explored regulations to compel digital media firms to remove harmful content and fake news. Nevertheless, this approach risks removing content that might be useful, since some content may simply be misunderstood by some audiences, and heavy-handed intervention in content moderation can curtail freedom of speech rather than protect public safety. Studies have shown that governments tend to regulate public communication with reference to freedom of speech. It is also difficult to trust social media firms to promote the public interest, since they are profit-oriented, and users may lack the knowledge to self-monitor or refrain from posting harmful content (Jung et al., 2021). Therefore, government should play a balancing role, incorporating the concerns of all parties when developing laws to control harmful content. Further, laws and regulations should be extended to digital platforms as they have been to public service media, although regulation should take new approaches since the dynamics differ. Users should act within these regulations by ensuring that their online content adheres to ethical standards or else face the law (Jiang et al., 2021). All stakeholders must play their part in eliminating harmful content from the platforms.


Conclusion

Indeed, a collaborative effort is required to stop the continued circulation of harmful content on online platforms. The government’s principal role is to set laws governing social media firms and online user behaviour, while the platforms need to strengthen their rules of engagement with online users to deter the posting of harmful content. So far, the platforms have moderated content by relying on human moderators and algorithms, but both approaches remain inadequate: people cannot manage the enormous traffic, and algorithms are still inaccurate. Users, governments, and social media companies all have essential roles to play in ending the circulation of harmful content; careful consideration of the impact of rules can help halt the menace without compromising freedom of expression or interfering with business.



References

Chandrasekharan, E., Pavalanathan, U., Srinivasan, A., Glynn, A., Eisenstein, J., & Gilbert, E. (2017). You can’t stay here: The efficacy of Reddit’s 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW), 1–22. doi: 10.1145/3134666

Helberger, N., Pierson, J., & Poell, T. (2017). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. doi: 10.1080/01972243.2017.1391913

Jiang, J., Scheuerman, M., Fiesler, C., & Brubaker, J. (2021). Understanding international perceptions of the severity of harmful content online. PLOS ONE, 16(8), e0256762. doi: 10.1371/journal.pone.0256762

Jung, P., Shim, W., Youn, J., Kang, M., & Jeon, H. (2021). Self-regulation of foreign Internet service providers on illegal and harmful Internet content: The status quo and improvements. European Constitutional Law Association, 35, 141–168. doi: 10.21592/eucj.2021.35.141

Roberts, S. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, Connecticut: Yale University Press.