Introduction
Inappropriate content on social media is becoming a growing social problem. Such content includes, but is not limited to, bullying, harassment, violent content, hate speech, and pornography. Problematic content can negatively affect internet users and cause psychological and physical harm. A reported 56% of teens have seen inappropriate content on digital platforms, and such material can be more harmful to children than to adults. Stopping problematic content on social media is a social issue that needs to be taken seriously. This article explores who is responsible for halting this content and how it can be stopped.
- Governments, platforms, and users all share responsibility for stopping problematic content on digital media, each through different measures and approaches.
“I found the internet!” by Abraham.Williams is licensed under CC BY-SA 2.0.
Government
Government regulation is necessary to stop the spread of questionable content on digital media, but the appropriate extent of that control is debatable. Government control of social media can be seen as the internet’s bottom line, prohibiting content that can cause serious harm to users. To combat misinformation and questionable, harmful content on digital platforms, at least 17 countries have passed or proposed laws restricting social media content (Chung & Wihbey, 2022).
However, government involvement may curtail free speech on social media and even lead to the manipulation of information on the internet. Under excessive government intervention, social media can become a political vehicle, producing outcomes that are neither democratic nor conducive to free speech (Lin & Van Alstyne, 2022).
Therefore, governments need to respect the boundaries of digital platform control. They should not interfere too heavily with digital platforms or technology companies; government regulation should set only the bottom line for online content rather than act as the routine means of control.
“8 12 09 Bearman Cartoon Freedom of Speech” by Bearman2007 is licensed under CC BY-NC-ND 2.0.
Platform

Platform regulation is essential to stop the spread of harmful information on digital platforms. Self-regulation by platforms is more important than government intervention: to prevent excessive government interference, platforms should be more active in regulating themselves (Cusumano et al., 2021). Content moderation is a significant part of platform regulation; it includes removing problematic content and thereby reducing the harm done to other users. Removing content is a behaviour supported by both governments and the public, and most users expect platforms to remove problematic content faster and more transparently (Riedl et al., 2022). Much of this content is discovered and removed automatically through means such as keyword and image recognition, while some is found and removed through manual review by technology companies. Tech companies have a duty to protect both their users and the wider public from objectionable content (Ballard, 2019).
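As a rough illustration of the keyword-based screening mentioned above, the short Python sketch below flags posts containing blocked terms. It is a minimal, hypothetical example: the BLOCKED_KEYWORDS list, the flag_post helper, and the sample posts are all invented here, and real platform moderation relies on far more sophisticated machine-learning models, image recognition, and human review.

```python
import re

# Placeholder terms invented for this illustration; a real blocklist would be
# curated and combined with machine-learning classifiers and human review.
BLOCKED_KEYWORDS = {"bullyword", "hateword", "violentword"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocked keyword (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(word in BLOCKED_KEYWORDS for word in words)

posts = [
    "A perfectly ordinary holiday photo caption",
    "An example post containing hateword that should be flagged",
]

for post in posts:
    action = "flag for review/removal" if flag_post(post) else "allow"
    print(f"{action}: {post}")
```

Because a static keyword list misses context, automated checks of this kind can only be a first pass, which is why manual review and the user reporting discussed below remain necessary.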
However, some tech companies avoid self-regulation in pursuit of greater profits. Such behaviour destroys users’ trust in the platform and leads to more significant consequences (Cusumano et al., 2021). Furthermore, if most platforms do not actively self-regulate, governments may intervene with stricter laws or policies, which will cost tech companies profits and also affect their users. Excessive government intervention would end the freedom of information, so a regulation-based system of self-regulation by tech companies and platforms is necessary (Fengler et al., 2015).
User self-regulation

Self-regulation by users is also crucial in stopping the distribution of questionable content on digital platforms. According to Chung and Wihbey (2022), people tend to overestimate their ability to recognise misinformation and may even assume false information to be true. Because users may lack the ability to recognise misinformation, regulation by platforms and governments remains necessary. However, users can readily identify content that makes them uncomfortable and take action against it. One of the most effective measures is to report and block problematic content on sight and hand it over to the platform for review. Platforms and artificial intelligence cannot filter all questionable information, so user reports help prevent similar content from appearing.
Excessive government interference can have many consequences, and the law is only the bottom line for content on digital media. As a result, much of the content on digital platforms is ‘lawful but awful’: it causes discomfort to users but may not violate the relevant laws or platform rules. In such cases, users can screen out such content themselves, for example by blocking the users who post it.
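As a hypothetical sketch of how user-side blocking keeps ‘lawful but awful’ material out of a personal feed, the example below filters posts against a blocklist. The Post structure, the filter_feed helper, and the sample accounts are invented for illustration and are far simpler than any real platform’s feed logic.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def filter_feed(feed: list[Post], blocked_authors: set[str]) -> list[Post]:
    """Drop posts whose authors the user has blocked."""
    return [post for post in feed if post.author not in blocked_authors]

feed = [
    Post("friendly_account", "Photos from the weekend"),
    Post("unpleasant_account", "A lawful-but-awful rant"),
]
blocked = {"unpleasant_account"}  # accounts this user has chosen to block

for post in filter_feed(feed, blocked):
    print(f"{post.author}: {post.text}")
```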
Age Rating Policies
Age ratings are an effective measure for keeping inappropriate content on digital platforms away from minors. Traditionally, age-rating measures have applied mainly to film and television, where children are barred from violent and pornographic content, but they are now also a standard control measure on mainstream digital platforms. Age ratings can effectively prevent underage children from gaining early access to age-inappropriate online content. However, implementing age-rating measures requires shared responsibility between the platform and the user.
In most developed countries, film classification boards help regulate and categorise the content of films shown in cinemas, and legislation restricts access to violent and pornographic content for under-18s. However, because children are active on social networks, the problematic content they may encounter is not limited to violence and pornography; they may also be exposed to gambling, drug use, racism, or even self-harm or suicide-related content. Platforms need to strictly prohibit such problematic content from appearing on social media in the first place and to classify content that is appropriate for adults but inappropriate for minors. Parents should also monitor their children’s use of digital media, guide them in the proper use of the internet, and set their children’s mobile devices to teenage mode.
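The child or teenage mode mentioned above can be thought of as a simple age-gating check. The sketch below is hypothetical: the rating labels, minimum ages, and can_view function are invented, and real platforms combine such checks with account age verification and parental controls.

```python
# Minimum viewer age for each (made-up) content rating label.
MINIMUM_AGE = {
    "general": 0,
    "teen": 13,
    "mature": 18,
}

def can_view(viewer_age: int, rating: str) -> bool:
    """Allow content only if the viewer meets the rating's minimum age.

    Unknown ratings default to adult-only as a cautious fallback.
    """
    return viewer_age >= MINIMUM_AGE.get(rating, 18)

print(can_view(12, "teen"))    # False: a 12-year-old in child mode would not see teen-rated posts
print(can_view(15, "teen"))    # True
print(can_view(15, "mature"))  # False
```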
Conclusion
Overall, governments, platforms, and users all need to take responsibility for stopping the spread of problematic content on digital platforms. The roles of these three parties reinforce one another, and the effort may be less effective if any one of them is absent. Governments need to put measures in place to limit problematic content on social media, and certain material, such as violent or criminal content, should never be allowed on digital media. Nevertheless, governments must not interfere too much with digital media content, as this could stifle online speech and turn digital media into part of a political strategy. Platforms need to self-regulate on top of formal regulation, through measures that include but are not limited to removing content, applying age ratings, and running reporting systems. Platforms should regulate themselves more strictly than governments do in order to minimise the spread of harmful content and the harm it causes to users. Platform regulation should aim to stop ‘lawful but awful’ content and create a better online environment for users. Users, for their part, first need to learn how to block and report questionable content on social media. Technology companies cannot always identify problematic content immediately, and they can take it down more quickly once other users have reported it.
Furthermore, parents should monitor their children to make age-rating systems work, as some content on digital platforms is appropriate for adults but not for children; pornographic or violent messages and films, for example, may be acceptable for adults but not for minors. To prevent children from accessing questionable content before they are old enough to recognise misinformation, it is essential that parents supervise them and set their mobile devices to child or teenage mode.
References
Ballard, J. (2019, April 29). Most conservatives believe removing content and comments on social media is suppressing free speech. YouGov. Retrieved from:
https://today.yougov.com/topics/technology/articles-reports/2019/04/29/content-moderation-social-media-free-speech-poll
Chung, M., & Wihbey, J. (2022). Social media regulation, third-person effect, and public views: A comparative study of the United States, the United Kingdom, South Korea, and Mexico. New Media & Society, 14614448221122996.
Cusumano, M. A., Gawer, A., & Yoffie, D. B. (2021). Social media companies should self-regulate. Now. Harvard Business Review, 15.
Lin, H., & Van Alstyne, M. (2022). Should the government regulate social media? Divided We Fall. Retrieved from:
https://dividedwefall.org/should-the-government-regulate-social-media/
Internet Matters. (2021). Learn about inappropriate content. Retrieved from:
Riedl, M. J., Whipple, K. N., & Wallace, R. (2022). Antecedents of support for social media content moderation and platform regulation: the role of presumed effects on self and others. Information, Communication & Society, 25(11), 1632-1649.
RAINN. (2021). How to filter, block, and report harmful content on social media. Retrieved from:
https://www.rainn.org/articles/how-filter-block-and-report-harmful-content-social-media
SWG TV. (2021). Report harmful content – Reporting legal but harmful content online. YouTube. Retrieved from:
Safe, Secure, Online. (2022). Report harmful content. Retrieved from:
https://swgfl.org.uk/services/report-harmful-content/#:~:text=In%20simple%20terms%2C%20harmful%20content,an%20issue%20by%20someone%20else.