Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

RE13 / Yunhao Li / Tutor: Mahsa

Introduction

As the Internet continues to develop, it becomes ever easier for people to access all kinds of information. At the same time, a large amount of harmful and negative content spreads through the Internet and the platforms it supports, including hate speech, sexual content and harassing messages. Such content can mislead people politically or provoke conflict, and violent and sexual material can damage the mental health of minors. Even more common is the long-standing problem of bullying, personal attacks and harassment online, which can happen to any user. There is no doubt that these phenomena urgently need management and supervision.

"Bad language on social media. Slander is written on a message board. Attacked on social networking sites" by Tarokichi is licensed under CC BY 2.0. Retrieved from: https://www.dreamstime.com/slander-image193028523

Harmful information on the Internet causes negative effects

There is still a great deal of harmful information on the Internet and on social media platforms. Although each platform has relevant rules and a moderation system, the environment of the Internet and social media remains worrying because of the huge number of users.

Take hate speech, for example. Hate speech is almost everywhere online, and posting it costs nothing. Someone who is jealous or resentful of another person, even a stranger, can launch a barrage of hate speech and even personal attacks against them through social media. An adult who is verbally abused online may only need a moment to calm down and conclude that it is not worth dwelling on. For young people, however, the impact of such events is hard to gauge. Victims of online hate speech may feel that their personal dignity has been violated. They feel lost and isolated, as if the whole world is against them; their social development stagnates, and some even develop social anxiety (SELMA HACKING HATE, 2019). When hate speech escalates into personal attacks or online racism, the damage is no longer only psychological. Personal attacks and online racism can have a serious negative impact on a particular group, race or ethnicity, and this effect is extremely difficult to undo. Worse, it can be elevated to the political level, creating discontent with the community and the government among certain groups in one or several regions. Harmful information on the Internet may therefore look like nothing more than irresponsible speech by strangers, but once its negative impact spreads, it can cause a series of terrible consequences.

"Cyberbullying haters online bullying internet trolling and hate speech tiny girl put dislikes" by marta1995k is licensed under CC BY 2.0. Retrieved from: https://www.freepik.com/premium-vector/cyberbullying-haters-online-bullying-internet-trolling-hate-speech-tiny-girl-put-dislikes_24450606.htm

Who should manage harmful speech and information on the Internet and social media platforms? And how?

Social media platforms

Platforms must moderate in some form, both to protect one user from another, or one group from its antagonists, and to remove offensive, mean-spirited or illegal content, all while presenting their best face to advertisers and partners (Gillespie, 2019). It seems, then, that social media platforms have the most to do to curb the spread of harmful information. Although platform operators are reluctant to take on the project of content moderation, platforms find themselves acting in multiple roles, such as rule-maker, whether they want to or not (Gillespie, 2019). So what approach should a social media platform take? Take Facebook, the platform that lets more than 2 billion people express themselves freely in dozens of languages. Based on feedback from users and advice from experts in technology, public safety and human rights, the Facebook team took great care to develop as comprehensive a set of standards as possible for what can and cannot be done on Facebook (Meta, n.d.). These standards do cover the details as well as can be expected, and it is largely because of them that Facebook's environment remains relatively healthy; their development drew on a wide range of suggestions and considerations. However, even the best standards cannot completely eliminate negative information on Facebook. They cannot resolve genuinely controversial topics and posts, and may only generate more controversy. This also means that social media operators alone cannot completely block harmful information.

Who decides what is acceptable speech on social media platforms? by PBS NewsHour. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=m_wNvtKEOd4

Joint efforts of various parties

Since social media platforms alone cannot stop the spread of harmful information, it would be feasible to introduce multi-stakeholder forms of governance that include governments. The traditional government approach to controlling such companies is command and control, in which the government uses the threat of legal and economic penalties to secure compliance (Gorwa, 2019). But heavy industry lobbying to protect commercial interests, and the fact that these companies are often multinational and located in different jurisdictions, have made the traditional approach ineffective. The newer governance model therefore involves at least two actors drawn from groups such as states, NGOs, corporations, and international organisations like the United Nations, distributed in a pluralistic manner so that no single actor makes unilateral decisions (Gorwa, 2019). This shows that effective control of the Internet and social media platforms cannot come from one party's effort alone; it is better to have multiple parties involved in governance.

Meanwhile, as the most important link in the operation of the Internet and social media, users also need to participate in its governance. Users can contribute to management and supervision, if only in a limited way, for instance by reporting harmful content. But even if users band together and publish principles that might guide governments or companies, in reality only the institutional actors can decide whether to implement those principles (Gorwa, 2019). This is undoubtedly contradictory: users are the foundation of the Internet's operation, yet they have no power to directly influence rule-making. On the other hand, putting that power entirely in users' hands could throw the Internet into chaos. It is therefore a balancing act: how to give users more influence over the rules without letting the rules of the Internet descend into a mess.

Social Media and Hate Speech: Who Gets to Decide? by The Atlantic. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=bghTL5gU6fs

Conclusion

Every stakeholder connected to the Internet should share responsibility for preventing problematic content from spreading on platforms and for making the online environment better. At present, multi-stakeholder governance appears to be an effective way to manage the Internet and social media platforms. There are still inadequacies in some respects, for example, users have little or no say in making the rules, but over time and through concerted effort this situation can be improved.


Reference list

Gillespie, T. (2019). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Marta1995k. (n.d.). Cyberbullying haters online bullying internet trolling and hate speech tiny girl put dislikes. Freepik. Retrieved October 11, 2022, from https://www.freepik.com/premium-vector/cyberbullying-haters-online-bullying-internet-trolling-hate-speech-tiny-girl-put-dislikes_24450606.htm

Meta. (n.d.). Facebook Community Standards. Meta. Retrieved October 10, 2022, from https://transparency.fb.com/policies/community-standards/

PBS NewsHour. (2022, October 11). Who decides what is acceptable speech on social media platforms? [Video]. YouTube. https://www.youtube.com/watch?v=m_wNvtKEOd4

SELMA HACKING HATE. (2019, April 8). The consequences of online hate speech—a teenager's perspective. https://hackinghate.eu/news/the-consequences-of-online-hate-speech-a-teenager-s-perspective/

Tarokichi. (n.d.). Bad language on social media. Slander is written on a message board. Attacked on social networking sites. Dreamstime. https://www.dreamstime.com/slander-image193028523

The Atlantic. (2018, August 24). Social Media and Hate Speech: Who Gets to Decide? [Video]. YouTube. https://www.youtube.com/watch?v=bghTL5gU6fs