With the widespread adoption of the internet, more and more teenagers are going online, and more problems are emerging on digital platforms. The governance of media platforms has therefore become a topic of intense concern, and people are beginning to ask who is responsible for the sensitive content (e.g. bullying, violence, pornography, hate speech and other problematic material) that appears on these platforms. How to curb such problems has become a central goal of research on digital platforms. This article is divided into four sections that explore why digital platforms and governments are responsible for this phenomenon and how it can be addressed by strengthening platform moderation and introducing government regulation.

- Who should be held responsible?
Digital platforms should be responsible
When problems that cannot be ignored are identified on digital platforms, there is no doubt that the platforms themselves should respond. Addressing the problems that arise on digital platforms is itself essential to the development of those platforms. As Massanari (2017) argues, it is important to understand how toxic technocultures create, maintain, and exploit platforms; however distasteful, these occurrences should be studied by new media researchers and activists because they can offer insight into alternative designs or tools that could stop their spread. The emergence of such sensitive content is evidence of loopholes in the rules set by the platform. Those who post it use socio-technical platforms, and the seemingly herd-like, amoral quality of their communities, as a conduit for coordination and harassment. Members of these communities frequently show off their technical prowess by engaging in morally dubious behaviour, such as aggregating public and private content about their targets and exploiting platform policies that place a premium on attracting large audiences while offering little protection to those being harassed (Massanari, 2017). Rule-making is subjective and reflects the biases and worldviews of the rule-makers (Gerrard & Thornham, 2020), and it is the gaps in these subjective rules that allow publishers of sensitive content to slip past platform moderation and achieve their aims. Moderation systems are the most common way in which digital platforms deal with such material: when a user posts content, the platform's review determines whether the information the user wishes to disseminate is made public. In this process, platform moderation works alongside traditional negative media regulation to safeguard people, especially vulnerable audiences such as children, from offensive, illegal, obscene, or potentially harmful content, and to reinforce positive behaviours such as respect for others' privacy; it also tries to prevent the posting of anything unlawful or dangerous (Flew, Martin & Suzor, 2019, p. 40). This includes the platform deciding what speech is allowed and what speech is prohibited. Platforms build complex apparatuses for content review, including increasingly elaborate terms of service and content rules, regulatory algorithms, and the people who enforce those policies (Riemer & Peter, 2021, p. 7). Digital platforms are, at root, the distribution channels for such sensitive information and bear an unshakable responsibility for the appearance of this questionable content.

The state should be in charge
When one thinks of the regulation of digital platforms, state oversight is one of the first mechanisms that comes to mind, and when sensitive information appears on digital platforms the state also bears an unshakable responsibility. Schlesinger (2020, p. 1559) notes that the influence and power of the multinational digital media corporations are being questioned more and more frequently by governments, who are looking for ways to control their operations and content. In capitalist democracies, the shortcomings of democratic systems in designing a social order of equity and solidarity mean that the arena of the state is necessarily a zone of conflict (Schlesinger, 2020, p. 1558). The operation of the regulatory process, and the principles that underpin it, are closely linked to the forms of state and economic relations prevalent in any social order; state policy on the regulation of digital platforms is therefore an essential part of the development of the internet environment. While opponents of state regulation of the media argue that state control over digital platforms infringes on people's freedom of expression, the impact of digital platforms on society is so widespread that the state cannot leave them unchecked. Abe (2004, p. 216) suggests that, overwhelmed by the perceived danger of undermining public security, neither the media nor public opinion can remain rationally critical of increasing surveillance, and the governmental implementation of surveillance policy has met with little opposition, even where it threatens the fundamental rights of citizens through the constant regulation of everyday life. The state, as the party that sets the standards for digital platforms, therefore has fundamental control over them, which makes it responsible for the appearance of sensitive information on those platforms.
- How to control?
Media platforms should establish control standards and increase scrutiny
- The presence of sensitive content on digital platforms calls for more efficient moderation to ensure that similar content is controlled. In addition, the uncertainty of the criteria by which platforms judge some content is a part of the problem that cannot be ignored. Currently, social media content review takes two main forms: (1) automated and (2) manual (Gerrard & Thornham, 2020). Because manual review is inherently subjective, improving the efficiency of automated review is the most effective way to reduce the presence of sensitive information on digital platforms. Examples include the use of computational tools to filter user-generated material, such as automated bans on text search terms and "skin filters" that check whether a significant percentage of a picture or video contains bare flesh (de Kloet et al., 2019, p. 37); a minimal illustrative sketch of such automated checks is given after the summary below.
- The criteria for deciding whether content may be posted should also be examined in more depth. Social media platforms have historically been reluctant to restrict speech, yet there is an ongoing debate about what content should be allowed (Riemer & Peter, 2021, p. 7). For example, in a September 2016 article reflecting on photographs that changed the history of war, Norwegian journalist Tom Egeland included "The Terror of War" by Associated Press photographer Nick Ut, an iconic image of war. Facebook's moderators removed Egeland's post, presumably because of some combination of its graphic violence and child nudity. Yet there is frequently no discernible difference between violent or nude photographs that have historical and/or global importance and those that do not (Gillespie, 2018, p. 3). Even if there were a clear standard, sorting through millions of posts each week would remain challenging, since a photograph may be objectionable in one region of the world but acceptable in another.
In summary, greater efficiency in the automated review of digital platforms, together with clearer criteria for judging posted content, could reduce the presence of sensitive information on platforms.
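To make the automated side of this process concrete, the sketch below illustrates, in Python, the two kinds of checks mentioned above: a banned-term text filter and a naive "skin filter" that flags an image when the share of skin-toned pixels exceeds a threshold. It is a minimal illustration only; the term list, the RGB heuristic, and the 40% threshold are assumptions made for this example and do not describe any real platform's system, which would rely on much larger rule sets and machine-learned classifiers.

```python
# Minimal sketch of two automated moderation checks: a banned-term text filter
# and a naive "skin filter". All terms, thresholds, and the RGB heuristic are
# illustrative assumptions, not any real platform's rules.

# Hypothetical banned-term list; real platforms maintain far larger,
# frequently updated lists alongside trained classifiers.
BANNED_TERMS = ["bannedterm1", "bannedterm2"]  # placeholders


def fails_text_filter(text: str) -> bool:
    """Flag a post whose text contains any banned term (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)


def _looks_like_skin(r: int, g: int, b: int) -> bool:
    """Very rough RGB skin-tone heuristic (assumption for illustration only)."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15


def fails_skin_filter(pixels, threshold: float = 0.4) -> bool:
    """Flag an image if the share of skin-toned pixels exceeds `threshold`.

    `pixels` is an iterable of (r, g, b) tuples; the 40% default is arbitrary.
    """
    pixels = list(pixels)
    if not pixels:
        return False
    skin = sum(_looks_like_skin(r, g, b) for r, g, b in pixels)
    return skin / len(pixels) > threshold


def moderate(post_text: str, image_pixels=None) -> str:
    """Route a post: hold it for human review if any automated check fires."""
    if fails_text_filter(post_text):
        return "hold for review"
    if image_pixels is not None and fails_skin_filter(image_pixels):
        return "hold for review"
    return "publish"


# Example: a text-only post with no banned terms passes the automated checks.
print(moderate("A harmless caption about a holiday photo"))  # -> "publish"
```

In practice, a post flagged by either check would typically be routed to human moderators rather than removed outright, reflecting the combination of automated and manual review described above.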

State regulation
As mentioned above, blurred boundaries make the criteria for judging content on digital platforms very ambiguous, and state regulation and policy are also an authoritative way to reduce sensitive information on digital platforms. Flew, Martin & Suzor (2019, p. 43) note that addressing national differences in content regulation and publishers' cultural expectations is also key to the future of platform governance. Many significant Western internet platforms are governed by US law, giving them practically total freedom to establish and uphold their own regulations. When the commercial internet first emerged, the biggest worry was that nations with very different standards for acceptable content would try to impose their norms on the rest of the globe. International clarity about what content cannot be displayed on digital platforms is therefore one of the most powerful ways to shape the platform environment. Turkey's strong control over digital platforms is an example: Turkey has practised media censorship since its founding, although its approach has changed over time (Wilson & Hahn, 2021). Applying state policy to digital platforms in order to clear them of sensitive information is thus a forceful instrument of governance.
In conclusion, digital platforms, and the state as the author of their regulations, bear an inescapable responsibility for the sensitive information that appears on digital platforms. Platforms can reduce the occurrence of such information by increasing the efficiency of automated review and by setting clearer standards for publication, and the state, as the regulator of digital platforms, can further reduce this phenomenon by establishing regulations.
Reference list:
Abe, K. (2004). Everyday policing in Japan: Surveillance, media, government and public opinion. International Sociology, 19(2), 215–231. https://doi.org/10.1177/0268580904042901
A third of children have adult social media accounts – Ofcom. (2022). BBC News. Retrieved 14 October 2022, from https://www.bbc.com/news/technology-63204605
de Kloet, J., Poell, T., Guohua, Z., & Yiu Fai, C. (2019). The platformization of Chinese society: Infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. https://doi.org/10.1080/17544750.2019.1644008
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media's sexist assemblages. New Media & Society, 22(7), 1266–1286. https://doi.org/10.1177/1461444820912540
Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029
Leitch, S. (2022). Clickbait extremism, mass shootings, and the assault on democracy – time for a rethink of social media? The Conversation. Retrieved 14 October 2022, from https://theconversation.com/clickbait-extremism-mass-shootings-and-the-assault-on-democracy-time-for-a-rethink-of-social-media-187176
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Riemer, K., & Peter, S. (2021). Algorithmic audiencing: Why we need to rethink free speech on social media. https://doi.org/10.1177/02683962211013358
Schlesinger, P. (2020). After the post-public sphere. Media, Culture & Society, 42(7–8), 1545–1563. https://doi.org/10.1177/0163443720948003
The importance of government social media – contentgroup. (2022). Retrieved 14 October 2022, from https://contentgroup.com.au/2017/03/importance-social-media-government/
Use of social media. (2022). BBC Editorial Guidelines. Retrieved 8 July 2019, from https://www.bbc.com/editorialguidelines/news/use-of-social-media
Wilson, J., & Hahn, A. (2021). Twitter and Turkey: Social media surveillance at the intersection of corporate ethics and international policy. Journal of Information Policy, 11, 444–477. https://doi.org/10.5325/jinfopoli.11.2021.0444