The Obligation of Social Media Platforms to Remove Hate Speech and Illegal Speech

Say NO to Hate Speech and Illegal Speech

Introduction

Social media platforms have introduced new ways for people from different parts of the world to connect, communicate instantaneously, and share information. For the most part, these platforms have had a positive influence on society because they make communication easy (Bailey, Cao, Kuchler, Stroebel, & Wong, 2018). However, they have also created an avenue through which people can easily spread hate and cause harm. The ability to reach a mass audience has changed the way people engage with politics, social issues, and one another. Platforms such as Facebook and Twitter have become venues where people can send hateful messages and incite violence against specific groups.

Regulating hate speech on these platforms requires evaluating what counts as legitimate freedom of speech and what crosses the line into hate speech. This distinction matters because, in a majority of countries, freedom of speech is protected by the constitution. Even so, social media platforms should remove any content that can be considered hate speech or illegal speech (Bailey, Cao, Kuchler, Stroebel, & Wong, 2018). They can achieve this by implementing strategies to ensure that what is posted on their platforms is appropriate at all times. In particular, social media platforms need to fully implement the Code of Conduct practices to reduce the number of hate-speech posts on these platforms.

Scope of the Problem

Hate speech incidents on social media platforms are on the rise, largely because a majority of people today communicate via social media. This has led individuals inclined towards racism, homophobia, and misogyny to use these platforms as avenues to reinforce their views. It also provides individuals inclined towards violence with an opportunity to publicize their actions. A study conducted by the Pew Research Center (2015) found that the share of people who felt it was acceptable to publicly make statements offensive to particular groups varied considerably across continents.

Social scientists have found a correlation between hateful social media posts and incidents of violence. In Germany, for instance, researchers established a correlation between anti-refugee Facebook posts, mainly from the Alternative for Germany (AfD) party, and violent attacks on refugees in the country. Müller and Schwarz (2020) observed spikes in anti-refugee graffiti, assaults, arson attacks on refugee homes, and other violent incidents between 2015 and 2017 whenever anti-refugee posts appeared on the AfD Facebook page. In the United States, hate speech posted on platforms such as Facebook has preceded targeted violence. For instance, the Charleston church shooter, who killed nine African American worshippers in June 2015, had engaged in a process of online self-radicalization that led him to believe that the ultimate goal of white supremacy was violent action against minority populations (Laub, 2019). Likewise, the 2018 Pittsburgh synagogue shooter was a frequent user of Gab, a social media network used mainly by extremists who have been banned from platforms such as Facebook and Twitter for their ideologies. Through Gab, he was exposed to the conspiracy theory that Jews were bringing immigrants into the United States with the intention of making Whites a minority population. This led him to kill 11 people at a refugee-themed Shabbat service (Laub, 2019).

Such incidents are a clear indication that extremists increasingly use social media platforms to push their hateful agenda. Notably, platforms such as Facebook are designed to earn revenue by enabling advertisers to target specific audiences based on their interests (BBC, 2017). Extremists have exploited this same targeting machinery to promote hateful content and find people receptive to their views.

Germany and European Union Efforts to Regulate Hate Speech Online

The European Union's 28 member states have enacted different legislation governing hate speech posted on social media, but they adhere to common principles. Their laws regulate not only speech that incites violence, but also speech that incites hatred and speech that denies or trivializes crimes against humanity (European Commission, 2020). In the recent past, EU member states have undertaken stringent efforts to control hate speech, largely in response to the increase in refugees and migrants arriving in European countries. To ensure that social media platforms adhere to these laws, the European Union and major technology companies agreed to a code of conduct under which the companies are responsible for reviewing all posts flagged by users and taking down those that violate EU standards within 24 hours (European Commission, 2020). Germany, in particular, enacted legislation that took effect in 2018 requiring social media platforms to take down posts deemed to promote hate in any form.

Efforts By Social Media Platforms to Enforce these Laws

Social media companies have devised different strategies to enforce the laws and regulations issued by the European Union and individual countries such as Germany (Müller & Schwarz, 2020). These organizations rely on a combination of artificial intelligence, user reporting, and content moderators to ensure that only appropriate content is posted. However, content moderators face the challenge of dealing with a high volume of disturbing content. In addition, enforcement has been ineffective in several geographical regions because of the small number of content moderators fluent in local languages, artificial intelligence that is poorly adapted to those languages, and differences in what national laws consider hate speech (Müller & Schwarz, 2020). Notably, because Facebook has largely adopted European standards of what counts as hate speech, it has censored groups fighting for their rights, such as activists in the Palestinian territories and Crimea.

Need for Application in Australia

The Code of Conduct on countering illegal hate speech needs to be applied in Australia. As in any other region, there are incidents of one form of discrimination or another that may lead to hate speech being directed at a particular group. Implementing the Code of Conduct would mean that hateful or illegal speech is removed by social media companies within 24 hours, reducing the number of incidents in which people or groups are targeted by extremists. Notably, Australia, unlike other developed countries such as the United States and Germany, does not take hate crimes seriously: there have been only 21 hate-crime convictions in the country, and in regions such as Tasmania, racial vilification is not even considered a crime. However, social media companies have an obligation to all their users to ensure that they are not subjected to any form of attack because of their differences. This means the companies should take it upon themselves to ensure that hate speech does not become a common issue in Australia.

Conclusion

Social media platforms should be mandated to remove hate speech and illegal speech within 24 hours. The Code of Conduct provides these platforms with the criteria for determining which posts can be deemed hateful or illegal. The need for such stringent measures follows from the consequences of allowing these posts to spread: promoting hate, in a majority of cases, contributes to an increase in violence against the targeted groups. It is the responsibility of social media platforms to ensure that the services they provide are used only for the right purposes, which means undertaking stringent measures to ensure that their platforms are not used to spread hate. Even though the Code of Conduct has been found to contribute to activists in oppressed regions being banned or having their posts taken down, this affects a small population, and other measures can be undertaken to reduce the likelihood of hate or illegal speech being posted on these platforms.

References

Bailey, M., Cao, R., Kuchler, T., Stroebel, J., & Wong, A. (2018). Social connectedness: Measurement, determinants, and effects. Journal of Economic Perspectives, 32(3), 259-280. doi:10.1257/jep.32.3.259

BBC. (2017). Social Media Warned to Crack Down on Hate Speech.

Becker, S. O., & Pascali, L. (2019). Religion, division of labor, and conflict: Anti-Semitism in Germany over 600 years. American Economic Review, 109(5), 1764-1804. doi:10.1257/aer.20170279

European Commission. (2020, June 22). Press corner. Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_1135

Gelber, K. (2019). The precarious protection of free speech in Australia: The Banerji case. Australian Journal of Human Rights, 25(3), 511-519. doi:10.1080/1323238x.2019.1690833

Laub, Z. (2019, April 11). Hate speech on social media: Global comparisons. Retrieved from https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons

Müller, K., & Schwarz, C. (2020). Fanning the flames of hate: Social media and hate crime. Journal of the European Economic Association. doi:10.1093/jeea/jvaa045

 

 
