Social platforms’ obligation to remove hate speech and illegal speech

The Internet and social networks are a new world of opportunities and menaces. Social networks emerged in the 21st century and have come to play an irreplaceable role in people’s lives. They enable borderless communication, allow the expression of diverse ideologies, and carry news of every kind. In this process, the central tension lies in regulating hate speech and illegal speech while still guaranteeing freedom of speech. The Council of Europe Committee of Ministers’ Declaration on freedom of communication on the Internet points out that ‘freedom of communication on the Internet should not prejudice the human dignity, human rights and fundamental freedoms of others, especially minors.’ In this era, public consciousness is built in online editorials and web forums instead of streets and parks (Alkiviadou, 2019). It can be said that the Internet constitutes an ideal environment for hate speech to develop. One reason is the sheer number of users: such speech finds a broad audience. At the same time, the anonymity of the Internet allows speakers to take no responsibility for their words. Because of these characteristics, hate speech will always exist online and may harm its victims. Most IT companies, such as Twitter and Facebook, have reached a consensus with the European Union to ensure that online platforms do not give illegal online hatred the opportunity to spread virally (Code of Conduct on Countering Illegal Hate Speech Online, 2016). However, despite these platforms’ supervision, online hate speech is still on the rise.


Hate speech and illegal speech online

Each platform defines hate speech differently. YouTube’s terms of service prohibit ‘speech which attacks or demeans a group based on race or ethnic origin, religion, disability, gender, age, veteran status, and sexual orientation/gender identity.’ Similarly, Facebook’s community standards do not allow ‘content that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity or serious disabilities or diseases.’ In short, hate speech is offensive content aimed at a specific individual or group of people.

From the end of last year until this year, people worldwide have experienced an enormous catastrophe: the coronavirus. In the process of fighting the virus, Chinese people seem to have encountered significant hostility, especially on social media platforms. Hate speech against China on Twitter rose by 900% (Mehta, 2020). Users attributed the virus to China and created many hateful hashtags such as #chinaliepeopledied and #Chinavirus. During that period, many people developed anti-China sentiments out of fear of the virus. As a Chinese person, I often felt panic and anxiety because of these words on the Internet. There were even demonstrations in various places to boycott Chinese people. This is my personal experience, and I know how much impact hate speech has on individuals. Women are another group heavily vilified on the Internet; they are often harassed, slandered, and even threatened. Hate speech arises partly from the baser side of human nature and partly from the amplification of the Internet. Its impact on individuals or groups is enormous, and it may even trigger conflicts and struggles. Social media is even used as a weapon in war. The emergence of the terrorist organization ISIS made people aware of the potential dangers of social media: it enhanced ISIS’s ability to recruit and to spread panic online. Inflammatory pictures, specially arranged videos, and footage of gun battles all attracted a large audience (Brooking & Singer, 2016). This is detrimental to the entire social environment, so the situation cannot be allowed to develop unchecked, and platform supervision is necessary.


Obligations of the platform

To know what platforms should do, we must first discuss why hate speech provokes such a response. Nowadays, the audience is no longer a group that passively receives information; everyone is a creator, collector, and distributor of it (Brooking & Singer, 2016). There are big differences between online and offline speech. The first is the anonymity of the Internet, which can make people’s speech more cruel, annoying, or hateful than their real-life behavior (Brown, 2018). Not being traceable lets people avoid being held accountable and speak unscrupulously. At the same time, the attacked party may respond with indiscriminate counterattacks, causing conflict, while the attacking party suffers no physical harm. Face-to-face hate speakers run the risk of being beaten by the person they verbally abuse or by others on the scene (Brown, 2018); removing that risk fills people with courage. Online hate speech also lacks the ordinary social and psychological cues of compassion and blame that tend to curb harmful or antisocial behavior. Online communication usually means that the direct impact of a speech act is invisible to the offender: if people cannot see the emotional harm caused by online hate speech, they are likely to underestimate its importance (Brown, 2018). People are more likely to resort to inflammatory behavior when social cues, such as facial expressions, are absent (Citron, 2014). Online hate speech can also cause an echo chamber effect, in which users are exposed to more speech that shares their own ideas. They then believe their views are widely endorsed and become more radical in order to consolidate their arguments. These problems are difficult for users to solve themselves and can only be managed through platform supervision. Most platforms now address them by reviewing content and removing what is inappropriate.
Platforms differ in how they address hate speech and formulate their rules, but their measures mainly consist of advance guidance, content review, and removal and related steps after the fact (Gillespie, 2017). When people know that everything about them is exposed to everyone’s eyes, they constrain some excessive behaviors, thereby reducing the production of such content. Directly removing hate speech also prevents people from accessing this type of information, reduces the number of such speakers, and protects victims. Removing hate speech is what platforms must do; it is their obligation.



Actual situation in Australia


[Image: the logo of the social media video-sharing app TikTok displayed on a tablet screen in Paris, November 21, 2019 (Lionel Bonaventure/AFP via Getty Images).]

Recently, TikTok also joined the European Commission’s Code of Conduct on Countering Illegal Hate Speech Online. According to a report from the Anti-Defamation League, TikTok had already removed 1,000 accounts during the year for violating its hate speech policies and had taken down hundreds of thousands of videos under the same guidelines. In its U.S. newsroom post, TikTok said it had banned more than 1,300 accounts for hateful content or behavior, removed more than 380,000 videos for violating its hate speech policy, and removed over 64,000 hateful comments (Perez, 2020). It also proposed that, after content is deleted, users can request a review of the action. This is an example of transparency in moderation.



With the continuous development of network technology, hate speech and illegal speech are likely to increase. People have become increasingly dependent on social platforms and vent their emotions there. This places a significant obligation on social platforms to maintain a healthy environment in our society and to protect various groups from harm. Perhaps the contradiction between freedom of speech and the management of speech cannot be fully resolved, but we can reduce it as much as possible, find an appropriate balance, and establish better mechanisms, as many platforms are already doing. Through continuous improvement, such speech will gradually decrease. It is also to be hoped that people will spontaneously resist it rather than relying solely on platform management: when most people realize that this content is wrong, the situation will start to get better.






Brooking, E. T., & Singer, P. W. (2016). War goes viral. The Atlantic Monthly, 318(4). ISSN: 1072-7825.

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 197–199). New Haven: Yale University Press. ISBN: 030023502X, 9780300235029.


Alkiviadou, N. (2019). Hate speech on social media networks: Towards a regulatory framework? Information & Communications Technology Law, 28(1), 19–35.

Brown, A. (2018). What is so special about online (as compared to offline) hate speech? Ethnicities, 18(3), 297–326.

Gillespie, T. (2017). Governance of and by platforms. In The SAGE handbook of social media (pp. 254–278).

Mehta, I. (2020, March). Twitter sees 900% increase in hate speech towards China because of coronavirus. [News]

Perez, S. (2020, October 22). TikTok details how it’s taking further action against hateful ideologies. TechCrunch. [News]

Harris, T., Fennell, M., Inman Grant, J., Ford, M., Steele, J., Rugg, S., & Brewer, J. (2020, October 19). Social disconnect [Video].