In today’s world of rampant harmful content online, what’s the best way to regulate it?


As technology develops, the Internet takes up more and more of people’s time and attention, and the range of situations in which people use it keeps growing. Naturally, this has both advantages and disadvantages. Today’s Internet is full of negative content: online violence, hate speech, racism, pornography, and more. The nature of the Internet makes its content extremely complex and difficult to regulate. Several regulatory mechanisms exist: government regulation, self-regulation by platforms, and regulation by online intermediaries. Each of these approaches has its advantages and disadvantages. At present, after weighing all factors, self-regulation by platforms appears to be the most suitable way forward. At the same time, the government needs to assume a certain role, not necessarily as an implementer, but at least as a facilitator.

Social media’s self-regulation?

Social media, as intermediary platforms where users express their opinions, have a responsibility to regulate undesirable information. At the same time, self-regulation is one of the fastest, most effective, and cheapest ways to regulate online content (Cusumano et al., 2021). Different social media platforms self-regulate in different ways: review can be automated by an algorithm or done manually. A few years ago, when content regulation was beginning to be taken seriously but the algorithms were not yet mature, review was conducted mainly by humans and relied largely on reports from other users. Each method has its advantages and disadvantages, and the proportion of reviews handled automatically by algorithms is now steadily increasing (Cobbe, 2020). Algorithms can review all information efficiently, 24/7, and they avoid the bias introduced by human subjective judgment (Gillespie, 2018). From July to December 2019, TikTok removed 49 million rule-breaking videos. This speaks volumes about the importance of algorithms; no amount of manual human review could cover such a huge volume of video.

Cyberbullying, licensed under CC-BY 2.0

However, self-regulation also brings drawbacks. Social networking platforms are commercial companies, and commercial companies naturally put their commercial interests first. When maximizing their own interests conflicts with the public or social interest, they usually choose their own interests. Fake news, conspiracy theories, and stories that invert or exaggerate the facts are often more appealing to viewers, which leads to more views and therefore more revenue (Cusumano et al., 2021). As a result, if social media companies are left to regulate themselves, they often do not do their best to curb harmful information (Cusumano et al., 2021).

After Russia invaded Ukraine, Facebook changed its rules against hate speech, allowing users to post hate speech against Russians solely because of their nationality, even if they were not involved in the invasion. This behavior is profit-oriented: the platform hopes to profit from the discussion such extreme statements generate. Also, self-regulation does not have to happen only within a single firm; different firms in the same market can negotiate shared standards in search of a more equitable solution. The vast majority of industries have their own recognized regulatory schemes, such as tobacco, alcohol, and movies.

Would government intervention be better?

Under Surveillance, licensed under CC-BY 2.0

Government involvement is also needed in content regulation. As in the other industries mentioned earlier, industry regulations are often improved with government intervention. However, different governments hold very different views on how, and to what extent, regulation should apply. Western countries, mainly the United States, often see content regulation as being in tension with free speech, so they are very cautious about regulating it. A typical example is Section 230 of the Communications Decency Act, which provides that platforms are not treated as the publishers of content posted by their users, largely shielding social media platforms from liability. The existence of this law gives social media companies discretion over what is said on their platforms. By contrast, in EU countries and in Asia, government control is significantly stricter, especially in China. In recent years, the Chinese government has fined many social media platforms for hosting illegal or incorrect information. In late 2021, Weibo was fined 3 million yuan ($470,000) for repeatedly publishing illegal information. This clearly shows the difference between the United States and China regarding speech regulation. The difference also reflects, to some extent, the two countries’ systems, with the United States placing more emphasis on the individual and China placing more emphasis on the group.

This vast difference between governments makes a government-led regulatory system unrealistic, especially for multinational giants like Facebook or TikTok. Many countries have criticized the U.S. communications giants for not doing a good job of combating misinformation (Gillespie, 2018). This leaves communication intermediaries caught between two masters, with no way to satisfy both, because the two sides differ fundamentally in their underlying values.

The previous section shows that it is unrealistic for a government to act as the regulator, but that does not mean governments cannot help strengthen the regulation of online content. As noted in the discussion of corporate self-regulation, self-regulation alone is not enough: there is no way to confirm how willing these giant companies are to strengthen their own discipline and oversight, especially when doing so comes at the expense of their interests. Government restraint and supervision are therefore needed, and companies must make their own judgments about the scale of regulation in different countries.

In countries with stricter standards, such as China, harmful information on the Internet can be minimized, and a system that holds companies responsible for the content their users post forces companies to do their best to police wrongful speech whether they want to or not. What this brings is damage to freedom of expression: for fear of being penalized, companies are likely to delete or block any account or content that might cause trouble.

In contrast, the more freedom-focused system in the West does guarantee freedom of expression, but it also allows the dissemination of false information involving violence, harassment, or conspiracy theories. How to balance the two has become a problem for social media platforms, and it is a major challenge in managing harmful information on the Internet today.


To conclude, both social media platforms and governments have the responsibility and obligation to stop offending speech, and only by working together will they be able to do so. The scale of government intervention is another issue worth discussing. Today the United States, EU countries, and China can be said to intervene to three different degrees, with the United States the lightest and China the heaviest. Each level of intervention brings its own advantages and disadvantages; the key question is whether the focus should be on removing and preventing offending content or on guaranteeing the greatest possible freedom of expression. It is undeniable that China’s tougher controls have led to a significant reduction in offending content, but freedom of speech is currently one of the most valued things, and overly strict controls make it extremely difficult for the underprivileged to make their voices heard. Yet the overly lax oversight in the U.S. is equally unacceptable internationally, and the public would prefer a combination of the two. This is why self-regulation by platforms is by far the most viable approach.


Cobbe, J. (2020). Algorithmic Censorship by Social Platforms: Power and Resistance. Philosophy & Technology.

Cusumano, M. A., Gawer, A., & Yoffie, D. (2021). Can Self-Regulation Save Digital Platforms? SSRN Electronic Journal.

Gillespie, T. (2018a). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1–23). New Haven: Yale University Press.

Gillespie, T. (2018b). Regulation of and by Platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications Ltd.

BBC News. TikTok deleted 49 million ‘rule-breaking’ videos.

Reuters. Facebook allows war posts urging violence against Russian invaders.

CNN Business. Weibo fined by Chinese regulator for publishing illegal information.