From Micro to Macro: A Peek into the Governance of Content on Digital Platforms

Bullying, harassment, violent content, hate speech, pornography and other problematic content still haunt digital platforms

The world has made the leap into the digital era, where exchanges increasingly take place online. With the rise of Web 2.0, interactivity, user-generated content, accessibility and availability have become the hallmarks of the current times (Goerge & Scerri, 2019).

Amid accelerating technological development and digital transformation, digital platforms have become the new spaces in which these exchanges take place. Social media has gained great popularity over time, with more than 4.26 billion users worldwide (Dixon, 2022). Among the most popular platforms are Twitter, Facebook, Instagram and YouTube.


Yet despite this digital transformation, bullying, harassment, violent content, hate speech, pornography and other problematic content still haunt these platforms. As of 2018, 52% of teenagers in the United States had encountered racist hate speech on social media, 52% sexist, 46% anti-religious and 52% homophobic hate speech (Statista, 2022).

Given the technological affordances of Web 2.0, such problematic content spreads online at greater scale and speed (Brake, 2014). This raises the question of who should be responsible for curbing its spread, and how. To address this issue, this media piece moves from the micro to the macro level, examining and identifying the stakeholders responsible for managing problematic content.


From a Micro Perspective: The Governing of the Self

Social media platforms are built on Web 2.0: online spaces that allow, facilitate and encourage easy use, user-generated content, participatory culture and interaction. Unlike Web 1.0, where contribution was tightly controlled, Web 2.0 hands that control to users (Fuchs, 2010). This gives users greater control and rights, making them a key stakeholder in stopping the spread of problematic content.



Facebook has approximately 3 billion users, YouTube approximately 2.5 billion and Instagram approximately 1.5 billion (Datareportal, 2022). Social media thus commands an enormous user base, with the largest platform alone counting some 3 billion active users per month (ibid.).

These platforms depend on their users to contribute content. User-generated content refers to the videos, text, audio, images and other media that users post on social media (Kim & Song, 2017), and it is what makes social media vibrant and content-rich. Moreover, social media users have become prosumers: they not only produce content but also consume it. This makes users an important stakeholder in stopping the spread of problematic content.



Educating users is therefore important so that they understand the negative impact of problematic content such as harassment, violent content, hate speech and pornography. Understanding builds awareness that posting and spreading such content harms both the individual and society. Since digital platforms are fed by content contributed by users, aware users will refrain from contributing such content, and as prosumers they will likewise refrain from spreading it (Ritzer, 2015). They will also report such content when they see it online. Through education and awareness creation, users can do their part: by not contributing problematic content, by not amplifying it through likes, reposts and comments, and by helping create a healthy online space through vigilance and active reporting.




From a Meso Perspective: Self-regulation of Digital Platforms


Self-regulation of digital platforms is becoming the way forward


Digital platforms can be a double-edged sword: they enable easy and fast exchange, yet they also encourage and harbour bad behaviour and bad actors. Platforms therefore face a dilemma between serving the greater social good and accelerating the distribution of extreme content, which drives user engagement and interaction and in turn translates into billions in revenue (Cusumano et al., 2021).

Digital platforms should therefore engage in greater self-regulation now. When companies pursue short-term self-interest rather than the greater interest of society, they run a greater risk of creating a "tragedy of the commons" (Ghosh, 2021). Hence the importance of holding platforms to a bottom line: serving the good of society and governing the bullying, harassment, violent content, hate speech, pornography and other problematic content that circulates.

Self-regulation means that companies set their own rules and guidelines to govern their operations, allowing them to monitor themselves. For digital platforms, self-regulation means putting in place algorithms and guidelines to identify problematic content online and take the necessary action against it.

This self-regulation can be written into a platform's terms of service, which then allow the platform to delete unethical, illegal and harmful content. Many social media platforms, including Facebook, Instagram and Twitter, have terms of service that reserve the right to remove content (Medzini, 2021). Algorithms and other mechanisms are also in place that prevent certain words and phrases from being posted and spread. It thus falls to the platforms themselves to self-regulate, ensuring that bullying, harassment, violent content, hate speech, pornography and other problematic content does not circulate on their services.
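To illustrate the keyword-blocking mechanism described above, here is a minimal sketch in Python. The blocklist and matching rules are hypothetical simplifications: real platforms rely on far larger, continuously updated term lists combined with machine-learning classifiers and human review.

```python
import re

# Hypothetical blocklist for illustration only; real platforms maintain
# much larger, continuously updated lists plus ML-based classifiers.
BLOCKED_TERMS = {"slur1", "slur2", "threat"}

def violates_policy(post: str) -> bool:
    """Return True if the post contains any blocked term as a whole word."""
    words = re.findall(r"[a-z0-9']+", post.lower())
    return any(word in BLOCKED_TERMS for word in words)

def moderate(post: str) -> str:
    """Reject posts that match the blocklist; otherwise publish them."""
    if violates_policy(post):
        return "rejected"
    return "published"
```

Even this toy version shows the basic trade-off of automated filtering: matching whole words avoids blocking innocent substrings, but determined bad actors can evade simple lists with misspellings, which is why platforms layer additional mechanisms on top.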


From a Macro Perspective: Government Regulation

Even with micro- and meso-level approaches to governing problematic content, government should remain the anchoring body, providing legislation, laws and policies with binding effect (Jakubowicz, 2019). Government continues to be an important stakeholder in the control and regulation of problematic content: as the governing body of a nation, it can enact enforceable regulations and legislation and exercise oversight over the online space.


Racist abuse of Black English football players on Facebook and Twitter has prompted calls for stronger laws to govern problematic content


In Australia, a code of conduct on disinformation and misinformation has been drafted (Gelber, 2021), with the Australian Communications and Media Authority taking responsibility for ensuring the code's efficacy. A "Safety by Design" framework is also being developed, under which unethical, harmful and illegal content is prohibited in the online environment.

In Germany, the government has put in place one of the toughest laws against online hate speech, fining social media companies up to €50 million if they fail to delete unlawful content (McGoogan, 2017). France has also passed a law requiring social media companies to delete unlawful content reported by users. In the United Kingdom, the Online Safety Bill incorporates a "new framework to tackle harmful content online" (ibid.).


Reference list

Brake, D. (2014). Are We All Online Content Creators Now? Web 2.0 and Digital Divides. Journal of Computer-Mediated Communication, 19, 591-609.


Cusumano, M., Gawer, A., & Yoffie, D. (2021). Social Media Companies Should Self-Regulate. Now.

Dixon, S. (2022). Number of global social network users 2018-2027.

Fuchs, C. (2010). Labor in informational capitalism and on the Internet. The Information Society, 26(3), 179-196. doi: 10.1080/01972241003712215

Ghosh, D. (2021). Are We Entering a New Era of Social Media Regulation?

Gelber, K. (2021). A better way to regulate online hate speech: require social media companies to bear a duty of care to users.

Goerge, C., & Scerri, J. (2019). Web 2.0 and User-Generated Content: Legal Challenges in the New Frontier. Journal of Information, 2.

Medzini, R. (2021). Enhanced self-regulation: The case of Facebook's content governance. New Media & Society, 24(10).

McGoogan, C. (2017). Germany to fine Facebook and YouTube €50m if they fail to delete hate speech.

Ritzer, G. (2015). The "New" World of Prosumption: Evolution, "Return of the Same," or Revolution? Sociological Forum, 30(1), 1-17.

Statista. (2022). U.S. teens encountering hate speech on social media 2018, by type.

Kim, M., & Song, D. (2017). When brand-related UGC induces effectiveness on social media: The role of content sponsorship and content type. International Journal of Advertising, 37(1), 105-124.

Jakubowicz, A. (2019). 6 actions Australia’s government can take right now to target online racism.