Who is responsible for the spread of bullying, harassment, violence, hate speech and other issues on digital platforms, and how can they be addressed?

"Social Media Keyboard" by Shahid Abdullah is marked with CC0 1.0.

Introduction

In recent years, super digital platforms have become one of the biggest variables in national governance and the global order: Facebook, Twitter, and China's Weibo and Baidu have emerged as Internet super-platforms with enormous active user bases and powerful capabilities for social mobilisation and the shaping of social order. On digital platforms, people can create an identity completely different from the one they hold in the real world (Marwick, 2013), and this anonymity weakens users' sense of social morality and legal awareness. Some users treat the comments sections of online platforms as a place to vent their emotions, while others, with the mentality that the law cannot punish everyone when everyone is an offender, boldly follow suit and express irrational opinions of their own.

The French sociologist Durkheim proposed the concept of anomie, a state of chaos and disorder in which the regulation of individual desires and behaviour is weak, poorly institutionalised and unintegrated (Dearden et al., 2021, p. 313). Anomie on online platforms can be understood as an environment in which people can express themselves fully, free of the constraints of traditional values, but in which some turn the internet into an outlet for negativity; self-restraint is greatly reduced, giving rise to cyberbullying, hate speech and the violation of others' rights, and making cybercrime easier to commit. The question of whether the spread of online verbal violence and hate speech on digital platforms should be regulated by the platforms themselves, or whether governments should intervene to help manage it, is therefore worth serious consideration.

“Cyberbullying, would you do it?” by kid-josh is licensed under CC BY-NC-SA 2.0. Retrieved from: https://creativecommons.org/licenses/by-nc-sa/2.0/?ref=openverse.

 

#Facebook – Lack of government supervision

Social activity has shifted to the internet on a large scale in recent years, and the use of social media platforms is accompanied by a wide range of expression, among which cyberbullying, harassment, hate speech and other harms have long plagued platform providers. Platforms are routinely criticised for failing to remove hate speech, such as incitement to crime, sexism, religious attacks and racial slurs (Gillespie, 2018, p. 256). Facebook claims that its AI systems now proactively detect 94.7% of the hate speech that is ultimately removed from the platform (Shead, 2020).

“Chart of the Day” by Statista is licensed under CC BY-ND 3.0. Retrieved from: https://www.statista.com/chart/21704/hate-speech-content-removed-by-facebook/

 

But removing user posts and advertisements with offensive content is a tricky task that never ends. Part of the reason is that the difference between an artistic nude painting and an exploitative photograph is not straightforward for everyone to judge, and AI is no exception; likewise, words and images that seem innocent can be genuinely offensive to some groups in certain contexts. Moreover, although digital platforms present themselves as public spaces open to every user, they are commercial operations by nature, and when the public interest conflicts with their own interests, they tend to optimise for the latter, for example by making more money. In October 2021, former Facebook employee Frances Haugen revealed in a 60 Minutes interview in the US that Facebook's algorithms amplify hate speech because doing so is profitable (Pelley, 2021). Seen in this light, the AI technology that identifies and removes hate speech is less a realistic solution to the problems arising on the platform than a response to social and moral criticism, deployed to defend Facebook's corporate image. Given that it amplifies hate speech, helps spread fake news, infringes racial rights and damages individuals' mental health, Facebook should be subject to greater regulation and oversight (Bucher, 2012, p. 1165).
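To see why context defeats automated moderation, consider a deliberately minimal keyword-based filter. This is a toy sketch, not Facebook's actual system, which relies on machine-learning classifiers trained on labelled examples; the blocklist tokens and sample posts below are invented purely for demonstration.

```python
# A toy keyword filter -- a deliberately naive stand-in for automated
# moderation. Real platform systems use ML classifiers; this sketch only
# shows why context-blind matching fails. BLOCKLIST entries are
# placeholder tokens, not real terms.

BLOCKLIST = {"slur1", "slur2"}

def naive_filter(post: str) -> bool:
    """Flag a post for removal if it contains any blocklisted token."""
    tokens = {word.strip(".,!?\"'").lower() for word in post.split()}
    return bool(tokens & BLOCKLIST)

# False positive: an academic or quoted mention of a term is flagged.
print(naive_filter("The article documents the history of slur1."))  # True

# False negative: hostility phrased in 'innocent' words passes through.
print(naive_filter("People like you should not exist."))            # False
```

The same gap between surface features and intent is what makes the artistic-versus-exploitative distinction described above so hard for software to draw.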

 

#Weibo – Government involvement in regulation

“Government” by Nick Youngson is licensed under CC BY-SA 3.0. Retrieved from: https://www.picpedia.org/highway-signs/g/government.html

 

As a Chinese microblogging platform, Sina Weibo, like Facebook, gives a large number of users a space to discuss a wide range of topics, and it is inevitable that different users will hold different opinions. However, when disagreements or other conflicts lead someone to organise and incite large numbers of users on the platform to attack others, or to pursue victims onto other platforms or offline in ways that violate their rights and disrupt social order, great harm is done to the platform and to the online environment as a whole. The difficulty in governing the Weibo community lies in the lack of rules and means for managing such group behaviour: the problem is not merely the propagation of hatred, but the confrontation it escalates into. This is the core concern of the Operation Qinglang ("clear and bright") campaign launched by the central government in conjunction with the Cyberspace Administration of China (CAC), which aims to improve the Chinese internet environment, maintain order and prohibit the dissemination of gory violence, bizarre stories, vulgarity and other content contrary to society's core moral values. Weibo responded to Operation Qinglang by issuing a clearly defined announcement on the promotion of hatred and antagonism on the platform and strictly enforcing the two-month campaign (Global Times, 2020). The CAC has stated that it will continue to deepen the special operation and supervise platforms in fulfilling their primary responsibility for information content management (de Kloet et al., 2019, p. 252): strengthening daily monitoring, strictly punishing violations, publicising typical cases as warnings, and guiding netizens to consciously resist harmful content so as to build a clean and upright cyberspace (Global Times, 2022).

 

Conclusion

Strictly policing and cracking down on the bullying, violence, harassment, hate speech and other disorder that appears on digital platforms does not amount to a wholesale rejection of those platforms; on the contrary, it is one manifestation of the progress of a society in which digital platforms connect people and link the world together. But this does not mean that their development can be left to grow unchecked; it needs to be subject to certain regulations and restrictions. A friendlier, more harmonious online environment requires rectification of appropriate intensity, neither letting offences slip through the net nor resorting to overkill, and such rectification should be a continuous effort: a collaborative, platform-based governance model that combines the self-regulation of platforms with the supervision of national governments and other actors. Following the model of online information dissemination, the subjects of regulation can be divided into content producers, content service platforms and content service users, with rights and obligations assigned according to their respective roles; relevant laws and regulations should be strengthened and refined, and platform clean-up operations organised from time to time, so as to jointly build a healthy cyberspace and maintain a good network ecology.

 

 

“Hate speech spreading on social media” by Scripps National News. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=Vz8yvXAkqek

 

 

References

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. https://doi.org/10.1177/1461444812440159

 

Dearden, T. E., Parti, K., & Hawdon, J. (2021). Institutional anomie theory and cybercrime—Cybercrime and the American dream, now available online. Journal of Contemporary Criminal Justice, 37(3), 311–332. https://doi.org/10.1177/10439862211001590

 

de Kloet, J., Poell, T., Guohua, Z., & Yiu Fai, C. (2019). The platformization of Chinese society: Infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. https://doi.org/10.1080/17544750.2019.1644008

 

Gillespie, T. (2018). Governance by and through platforms. In J. Burgess, A. E. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254–278). SAGE Publications.

 

Global Times. (2020, July 24). Weibo to rectify behavior in entertainment, sports, gender equality. https://www.globaltimes.cn/page/202007/1195503.shtml?id=11

 

Global Times. (2022, August 24). Harmful online information removed in nationwide campaign to crack down on cyber violence. https://www.globaltimes.cn/page/202208/1273719.shtml

 

Marwick, A. E. (2013). Online identity. In J. Hartley, J. Burgess, & A. Bruns (Eds.), A companion to new media dynamics (pp. 355–364). Wiley-Blackwell.

 

Pelley, S. (2021, October 4). Whistleblower: Facebook is misleading the public on progress against hate speech, violence, misinformation. CBS News. https://www.cbsnews.com/news/facebook-whistleblower-frances-haugen-misinformation-public-60-minutes-2021-10-03/

 

Shead, S. (2020, November 19). Facebook claims A.I. now detects 94.7% of the hate speech that gets removed from its platform. CNBC. https://www.cnbc.com/2020/11/19/facebook-says-ai-detects-94point7percent-of-hate-speech-removed-from-platform.html