
Rise of platforms
The term “platform” has become one of the most widely used and dominant words in the fields of media and the internet. Its meaning has broadened from “infrastructure to build applications on” to “something computational, political and figurative” (Gillespie, 2010). A platform is a place with multiple functions, and platforms have changed the way our economy, politics and society work. As a result, platformization has become a global trend engulfing almost every aspect of society, including cultural production, education, finance, socializing and entertainment (Nieborg & Poell, 2018; Westermeier, 2020; Kerssens & van Dijck, 2021; Zhang, 2021). Many websites have transformed into platforms accordingly. For example, Facebook began to authorize third-party developers to access users’ information in 2006 and officially launched the Facebook Platform in 2007 (Helmond, 2015).
To a large extent, platformization embodies the main characteristics of Web 2.0 technologies. Bruns (2008) argued that these technologies give users more opportunities to participate in content production and blur the boundary between production and consumption. Thus was born what Jenkins and Deuze (2008) called the “participatory culture” of the Internet. Of all the content on platforms, user-generated content (UGC) has become dominant: the term describes any content on platforms produced by individuals rather than brands (Beveridge, 2022). Thanks to advanced technologies, users can post text, videos and images on their own rather than wait to receive information passively. In a metaphorical sense, this mode of content generation signals a degree of democracy and empowerment (Beer, 2009, p. 986): users are empowered to speak for themselves and to have their voices heard by as many people as possible.


Inappropriate content on platforms
However, this empowerment is a double-edged sword, for what users generate is not always positive or useful. People interact and exchange ideas online in much the same way as they converse offline, except that on the Internet they rely more on written words and often communicate anonymously. Moreover, interactions on platforms are often asynchronous: users may not respond immediately once they have received a message. These typical traits of platform communication, despite their advantages, can also lead to serious problems. One problem that has received enormous attention in both academic and public debate is that digital platforms are becoming breeding grounds for bullying, harassment, violence, hate speech, pornography and other problematic content.
Take cyber-bullying as an example. It refers to the repeated and intentional use of hurtful words or actions against a person through technological means. In most cases, cyber-bullying damages the victim’s mental and physical health; in the worst cases, victims may even resort to self-harm or suicide. The Amanda Todd case is one of the most serious cyber-bullying cases in the history of the Internet. Amanda met a stranger online who lured her into exposing her breasts on camera. She was later blackmailed, and her topless photo began to circulate on the Internet. In 2012, weeks after posting a video on YouTube telling her story of being insulted and bullied, Amanda took her own life at home.
As this case shows, cyber-bullying can have devastating consequences, and it is only one of many forms of inappropriate content on digital platforms.

Who should be responsible
People who use digital platforms to express themselves are not flawless saints; they are ordinary people with their own shortcomings. And since technological development has made platforms ever more convenient to use, negative and unpleasant content on the Internet is inevitable.
It should therefore be the responsibility of authorities and internet service providers to ensure a safe and peaceful internet environment in which users can enjoy freedom of speech without running the risk of encountering bullying, harassment, violence, hate speech, pornography and other problematic content on digital platforms.
From a macro perspective, authorities, mainly central and local governments, are responsible for making policies and monitoring their implementation, while internet service providers are chiefly responsible for regulating the content on their platforms, whether through technical means or community rules.
Apart from governments and internet service providers, platform users should also be held responsible for their own behavior, provided they have full civil capacity.
How to be responsible
As noted above, authorities, internet service providers and users all share the responsibility of stopping the spread of offensive and potentially harmful content on platforms.
- Responsibility of authorities. Authorities have at least two main tasks. First, they should exercise the state power conferred by the constitution: the government should legislate to protect citizens from potential harm on the Internet and formulate reasonable corresponding policies to enforce the law. For example, Australia passed its Online Safety Act in 2021, which empowers the eSafety Commissioner, allowing individuals to report cyber-bullying incidents to the Commissioner and to ask that bullying content be removed by social media companies (Taylor, 2022). Second, the government should take responsibility for humanizing and civilizing its citizens through education, the arts, literature, music and other public activities.
- Responsibility of internet service providers and social media companies. Internet service providers are organizations that offer technical services; they often cooperate with the owners of digital platforms and provide the services those platforms need. Platform companies should draw up reasonable community rules to regulate their users’ behavior and create a safe and peaceful internet environment, and internet service providers should use technology to help platform companies achieve that goal. Facebook, for instance, has its own community standards, which describe what content is identified as harmful or inappropriate and what should be done with such content. Accordingly, Facebook provides several mechanisms to enforce these standards, including a reporting page, a published standards page and reviewers’ authority to take down any content they consider harmful or illicit.
- Responsibility of users. Users with full civil capacity should of course be responsible for their own words and behavior online. Fostering empathy can go a long way here: whenever you feel like judging, scolding or doing something that may hurt others mentally or physically, think about the consequences you may cause and how others will feel. Always treat people the way you want to be treated. Likewise, when someone is creating or spreading bullying, harassment, violence, hate speech, pornography or other problematic content on digital platforms, do what you can to stop its spread instead of joining in. As for children, their legal guardians are supposed to supervise their words and behavior on digital platforms and correct them whenever they create or spread such content.
In conclusion, maintaining a safe and harmonious internet environment while ensuring the constitutionally conferred freedom of speech is a formidable task. Every party, including authorities, internet service providers, digital platform companies and platform users, shares the responsibility of stopping the spread of bullying, harassment, violence, hate speech, pornography and other problematic content on digital platforms; the absence of any one of them would make the problem even more difficult to resolve. Yet because authorities and platform companies hold relatively more power, visible or invisible, over users, regulation can sometimes violate freedom of speech. How to resolve this dilemma remains the subject of intense debate among scholars and policy-makers.
References
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985–1002.
Beveridge, C. (2022, January 13). What is user-generated content? And why is it important? Hootsuite.
Bruns, A. (2008). Blogs, Wikipedia, Second Life, and beyond: From production to produsage (Vol. 45). Peter Lang.
De Kloet, J., Poell, T., Zeng, G., & Chow, Y. F. (2019). The platformization of Chinese society: Infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256.
Gillespie, T. (2010). The politics of ‘platforms’. New Media & Society, 12(3), 347–364.
Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2), 2056305115603080.
Jenkins, H., & Deuze, M. (2008). Convergence culture. Convergence, 14(1), 5–12.
Kerssens, N., & van Dijck, J. (2021). The platformization of primary education in The Netherlands. Learning, Media and Technology, 46(3), 250–263.
Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11), 4275–4292.
Srnicek, N. (2017, August 30). We need to nationalise Google, Facebook and Amazon. Here’s why. The Guardian.
Taylor, J. (2022, January 22). How will new laws help stop Australians being bullied online? The Guardian.
Westermeier, C. (2020). Money is data – the platformization of financial transactions. Information, Communication & Society, 23(14), 2047–2063.
Zhang, Z. (2021). Infrastructuralization of Tik Tok: Transformation, power relationships, and platformization of video entertainment in China. Media, Culture & Society, 43(2), 219–236.