Harmful and problematic content circulates on digital platforms. Who should be responsible for it, and how should it be controlled?

"Content regulation on social media platform" by Sanskriti IAS is licensed under CC BY 2.0.


The internet has evolved rapidly in recent years, and digital platforms of all kinds have become enormously popular: social media sites, video platforms, short-video apps, and more. The number of users keeps growing, and countless pieces of information circulate and spread across these platforms; people can produce or receive all types of information, anywhere, at any time. With this openness comes a flood of harmful content, including bullying, harassment, violence, hate speech, pornography, and other problematic material.

This is a serious problem, one that especially affects young people and shapes the values and worldviews of many users. Teenagers now spend much of their time online, and when they encounter violent, bullying, or pornographic material, their immaturity and curiosity can draw them in deeply, harming their growth and mental health; underage users in particular may imitate what they see, with damaging effects on their development. Beyond individuals, a large volume of violent and bullying content can cause social disruption and unease, and pornographic and harassing messages can provoke ever more heated gender confrontation and conflict. It has therefore become essential to limit, regulate, and control the spread of such content. So who should be responsible, and how should it be done? In my opinion, the two crucial pillars are regulation by the platforms themselves and government intervention.

Platform regulation

"Automotive Social Media" by socialautomotive is licensed under CC BY 2.0.

With the rapid development of the internet industry, digital platforms have an increasingly large impact on society: they attract countless users, enormous capital, and enormous profits. In the age of the information explosion, these platforms, as producers and disseminators of information, therefore bear the most important responsibility for controlling the spread of harmful content. Monitoring and management by the platform itself is without doubt the most important layer. As venues for publishing and disseminating information, platforms should establish strict rules and procedures to control harmful content, and they should also take data security seriously. Facebook, for example, has had problems with lax data security (Weissmann, 2019, p. 58): it provided Cambridge Analytica with data intended only for academic use but could not ensure it was used that way, opening the door to data leaks, privacy breaches, and other harms.

Therefore, in addition to detailed and complete rules, platforms must also improve their technology. For harmful content, platforms typically use basic AI filters to restrict posts containing certain words. But AI processing is flawed, and a great deal of harmful content still slips through, so relying on AI alone is not enough. Human moderation should work together with AI: the AI acts as the first line of defence, screening out restricted keywords, after which the remaining judgment calls are a human job. Platforms should treat manual moderation as a core function and invest more budget and staff in it, since it is human moderators who ultimately screen out, remove, and control harmful content. Beyond this, platforms should take user reports of harmful content seriously and offer incentives or rewards for accurate reports, which would greatly improve the effectiveness of moderation.
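To make the division of labour concrete, here is a minimal sketch of such a two-stage pipeline. Everything in it is hypothetical for illustration: the keyword list, the `ModerationPipeline` class, and the report threshold are my own placeholder choices, not any real platform's implementation.

```python
# Illustrative sketch of AI-first, human-second moderation.
# All names and thresholds here are hypothetical, not a real platform's API.
from dataclasses import dataclass

BLOCKED_KEYWORDS = {"slur1", "slur2"}  # placeholder restricted terms


@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0


class ModerationPipeline:
    def __init__(self, report_threshold: int = 3):
        # Posts held here await review by human moderators.
        self.review_queue: list[Post] = []
        self.report_threshold = report_threshold

    def submit(self, post: Post) -> str:
        """Stage 1: automated keyword screen, the 'first line of defence'."""
        words = set(post.text.lower().split())
        if words & BLOCKED_KEYWORDS:
            self.review_queue.append(post)  # hold for human review
            return "held"
        return "published"

    def report(self, post: Post) -> None:
        """User reports feed the same human-review queue once enough accumulate."""
        post.reports += 1
        if post.reports >= self.report_threshold and post not in self.review_queue:
            self.review_queue.append(post)
```

The design choice worth noting is that both paths, the automated filter and accumulated user reports, converge on one human-review queue, so a keyword filter that misses a post is not the end of the line.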

Government intervention

"Government" by Nick Youngson is licensed under CC BY-SA 3.0. Retrieved from: https://www.picpedia.org/highway-signs/g/government.html

As information on the internet becomes more and more complex, the government should also be involved in managing it and bears a responsibility to clean up the online environment, starting with comprehensive oversight of the internet and digital platforms. More and more government departments have joined social media platforms to publish information there (Bertot et al., 2012, p. 30). At the same time, government bodies, especially the police and regulatory authorities, should take on the responsibility of managing the online environment and monitoring information on the platforms. Studies show that internet users spend nearly three hours a day on social media, which has become a platform for public discussion and public voice (Smith & Niker, 2021, p. 613). For this very reason, the government should step in and act as a guide, preventing the growing volume of harmful information from misleading the public and spreading rumours that disturb the social order.

The first priority, in my view, is the development of laws and policies. In today's internet age, people prize freedom of expression and resist restraint or interference, and as a result a great deal of harmful information is generated. Well-developed laws and regulations are therefore fundamental: the best results come only when online information falls under the rule of law, because producers and distributors of harmful content will then weigh whether it is worth doing and whether they are breaking the law. The complication is the definition and identification of harmful information. Online content is so varied and complex that it is difficult to set a single standard for what counts as harmful. Therefore, beyond the law itself, the government should work with the platforms to develop more detailed, situation-specific policies and rules, and listen to suggestions from the public. Another priority is the cyber police, whose role has become vital in the internet age. The government should increase investment and support in this area, the cyber police should focus on controlling and handling harmful information, and serious producers and distributors should face legal penalties.

However, it is important to note that the government should regulate bullying, harassment, violent content, hate speech, pornography, and other problematic content, not control and direct all content. Government involvement should stay within the control of harmful information and criminality, not extend to direct control of online speech or to controlling speech for political purposes (Lin et al., 2022). As noted above, it is difficult to define content as harmful; it may be false, or it may be wrong, but it is not necessarily harmful. What is needed is targeted regulation by government departments and platforms, not all-encompassing government control and guidance.


Overall, digital platforms should cooperate and communicate with the government, each side taking responsibility for creating a better online environment. Platforms should not relax controls on harmful information for the sake of short-term profit and popularity; strict control of such content is the foundation of any platform, and it is what attracts more users and builds a better service. Manual and AI moderation, and the handling of user reports, should be carried out consistently, and the mechanisms themselves should be continuously improved. Government departments, for their part, should develop and keep refining laws, regulations, and policies to address the proliferation of harmful information.

Reference list:

Weissmann, S. (2019). How Not to Regulate Social Media. The New Atlantis, 58, 58–64. https://www.jstor.org/stable/26609117

Smith, L., & Niker, F. (2021). What Social Media Facilitates, Social Media should Regulate: Duties in the New Public Sphere. The Political Quarterly, 92(4), 613–620. https://doi.org/10.1111/1467-923x.13011

Bertot, J. C., Jaeger, P. T., & Hansen, D. (2012). The impact of polices on government social media usage: Issues, challenges, and recommendations. Government Information Quarterly, 29(1), 30–40. https://doi.org/10.1016/j.giq.2011.04.004

Lin, H., & Van Alstyne, M. (2022, May 18). Should the Government Regulate Social Media? Divided We Fall. https://dividedwefall.org/should-the-government-regulate-social-media/