Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content and how?

“Internet Open” by Blaise Alleyne is licensed under CC BY 2.0.

Introduction

The widespread use of digital platforms, driven by the Internet’s continual expansion, is the clearest evidence that the information era has arrived. However, the rise of digital platforms has had some unfavorable repercussions alongside the convenience it brings to the lives of the general public. This blog uses examples of cyberbullying and the spread of hate speech to demonstrate the necessity of regulating digital platforms. It will also cover who should oversee the policing of digital platforms, how the spread of harmful material can be prevented, and how China supervises such platforms.

Why do digital platforms need regulation?

“Stop Bullying” by Nilufer Gadgieva is licensed under CC BY-NC 2.0.

Cyberbullying, the use of electronic communication technologies such as phones, instant messaging, and social media to commit harmful, embarrassing, hostile, or provocative behaviors against individuals rather than groups, is one of the most prevalent issues on digital platforms (Zhou, 2021). Adolescence is a period when impulsiveness peaks and self-esteem is at its most fragile, and many teenagers become depressed or even suicidal as a result of cyberbullying. Because adolescents are still immature in controlling their emotions and behavior, the growth of the worldwide Internet has created new opportunities for online platforms to be abused for adolescent cyberbullying, while also raising significant public concern (Zhou, 2021).

For example, a Chinese teenager named Liu Xuezhou gained widespread attention on social media after posting a video on a digital platform requesting help in locating his biological parents (BBC News, 2022). That Liu Xuezhou was able to find his biological parents with the aid of the online community should have been happy news, but soon afterwards he posted a suicide note on Weibo and decided to end his life. His biological parents had chosen to cut off communication with him after the reunion, once he showed a need for financial assistance, and then accused him online of merely seeking a house and money rather than actually wanting to live with them. Liu Xuezhou could not endure the cyberbullying that followed; some claimed he had only used the digital platform to make money and win the public’s compassion (BBC News, 2022). He was just 17 years old, and the cyberbullying he experienced was unbearable. Only after he took his own life were his social media profiles flooded with supportive remarks and rage against the online bullies. Who is to blame for Liu Xuezhou’s death? Who turned the digital platform hostile?

“BrickArms Lewis Gun prototype (clip version)” by Andrew Becraft is licensed under CC BY-NC-SA 2.0.

In addition, the spread of extremism and hatred is one of the issues that appears most frequently on internet platforms. Platforms are a component of the online media ecosystem in which hate speech is amplified and distributed, with digital platforms used as weapons (Costello et al., 2016). For example, the 2019 mosque shootings in Christchurch, New Zealand, exposed to the world how YouTube and Facebook can disseminate hate speech. The shooter streamed his attack live on Facebook and made a variety of comments expressing his animosity toward Muslims. Even after the perpetrator was apprehended, numerous Internet users were still able to upload the shooting recordings and hate speech to YouTube. Despite YouTube’s assurances that it was making every effort to delete the violent video, the material had already spread widely and rapidly, and such hateful content is likely to encourage more people to commit hate crimes (Timberg et al., 2019). In response, the New Zealand Prime Minister launched a global initiative to combat hate speech online, calling on digital platforms to do their best to reduce and restrict the publication of hate speech and on the general public to refrain from using it (Menon, 2020). These two real cases serve as a reminder of the importance of regulating digital platforms, and of the fact that preventing the spread of harmful information requires the cooperation of several different stakeholders.

Who should oversee digital platform regulation?

How can the spread of harmful information be stopped?

The traditional model of a single regulator has struggled to cope with the huge volume of content on digital platforms, and co-regulation and shared governance between platforms and governments is currently the most effective approach (Donovan, 2019). Firstly, the global span of digital platforms makes platform regulation more complex and creates a whole new challenge, one that requires coordination between national governments to solve (Australian Government, 2022). Platform regulators are unable to fulfil this duty on their own; instead, they must coordinate with the platform management agencies of other national governments to carry regulation through. Accordingly, many global platforms can collaborate to establish digital platform regulatory codes, share data and information, and interact with counterparts around the world to improve regulatory policy (Australian Government, 2022). Second, governments can innovate in the regulatory domain by empowering platform regulators to “establish general norms and laws” and by collaborating with platforms to keep an eye on platform content. Platforms can then outline their own policy initiatives to address particular challenges (such as the automated removal of hate speech content) while working with the tech sector to implement regulation based on artificial intelligence (Popiel, 2022). Governments, in turn, can lobby for technological solutions that support these efforts and hasten the platforms’ content review. Removing content during this review process is the most effective strategy to prevent the publication of offensive or problematic information (Popiel, 2022). Furthermore, when objectionable content does slip through and gets posted, platform regulators should be the first to check whether it has caused public discussion and had a negative impact, and should delete it as quickly as possible.
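To make this review process more concrete, here is a minimal sketch in Python of how automated removal and human review might be combined. The blocklist, watchlist, and routing logic are invented for illustration and do not reflect any real platform’s policy: clear violations are removed automatically, borderline posts are queued for human moderators, and everything else is published.

```python
# Hypothetical sketch of a two-stage content review pipeline:
# an automated pre-filter removes clear violations, borderline
# posts go to a human moderation queue, the rest are published.
# All phrases and thresholds below are illustrative assumptions.

from dataclasses import dataclass

BLOCKLIST = {"kill yourself", "go die"}   # assumed examples of banned phrases
WATCHLIST = {"hate", "attack"}            # assumed terms that trigger human review

@dataclass
class Decision:
    action: str   # "remove", "review", or "allow"
    reason: str

def review_post(text: str) -> Decision:
    lowered = text.lower()
    # Stage 1: automatic removal of clear violations.
    for phrase in BLOCKLIST:
        if phrase in lowered:
            return Decision("remove", f"matched banned phrase: {phrase!r}")
    # Stage 2: uncertain content is routed to human moderators.
    for term in WATCHLIST:
        if term in lowered:
            return Decision("review", f"flagged term: {term!r}")
    # Stage 3: everything else is published immediately.
    return Decision("allow", "no flags")

if __name__ == "__main__":
    for post in ["Have a nice day", "I will attack them", "go die"]:
        print(post, "->", review_post(post))
```

In practice the keyword stages would be supplemented or replaced by machine-learning classifiers, but the basic division of labour, automation for clear cases and humans for borderline ones, is the same one described above.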

For example, in China the government is the primary regulator of digital platforms, and the platforms are forced to take on “primary responsibility” for the content they host. The Chinese government first controls online discourse on digital platforms by establishing sensitive keywords and particular terms that allow content to be suppressed more quickly (Wang, 2020). Additionally, the rules governing digital platforms are continually updated and expanded by the Chinese government, which means platforms must not only abide by the government’s regulatory approach and content controls but also create their own rules to better enforce them. Moreover, unlike in Western nations, China’s digital platforms enforce a real-name policy, requiring users to submit accurate information about their identities before posting any content, which also lets the platforms “legally” collect massive amounts of user data (Wang, 2020). This fundamentally deters Internet users from publishing objectionable content. However, excessive cooperation between digital platforms and government watchdogs has drawbacks, such as limiting freedom of expression online.
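As an illustration of how such a real-name gate might work, the following Python sketch blocks posts from unverified accounts and ties every accepted post to a verified identity. The field names, in-memory audit log, and verification flow are all assumptions made for illustration, not any Chinese platform’s actual implementation.

```python
# Hypothetical sketch of a real-name posting gate: accounts cannot
# publish until identity verification is complete, and every accepted
# post is recorded against the verified identity for later tracing.
# All names and structures here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    verified_id: str | None = None   # set once identity documents are checked

@dataclass
class Platform:
    audit_log: list[tuple[str, str]] = field(default_factory=list)

    def publish(self, account: Account, text: str) -> bool:
        # Real-name policy: unverified accounts cannot post at all.
        if account.verified_id is None:
            return False
        # Every post is tied to a legal identity, enabling later tracing.
        self.audit_log.append((account.verified_id, text))
        return True

if __name__ == "__main__":
    platform = Platform()
    anon = Account("throwaway123")
    named = Account("liu_fan", verified_id="ID-4402")
    print(platform.publish(anon, "hello"))   # False: blocked, no verified identity
    print(platform.publish(named, "hello"))  # True: logged under ID-4402
```

The deterrent effect described above comes from the audit trail: because every post is traceable to a legal identity, users self-censor before posting.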

“Black and white portrait of a man with mask with text Stop COVID-19” by Jernej Furman is licensed under CC BY 2.0.

For example, COVID-19 exacerbated the Chinese government’s restrictions on freedom of expression on digital platforms. Li Wenliang, a doctor from Wuhan, was detained for posting messages to social media alerting his friends to the virus’s spread, and was ordered to sign a statement promising to stop engaging in behavior that “gravely disturbs social order” (Moynihan & Patel, 2021). We can infer from this case that the government disregarded the digital platform’s own rules on content regulation and review and governed speech entirely on its own. Dr. Li’s death prompted demands for online freedom of speech, but the government only temporarily eased its over-regulation of digital platforms. Joint governance and co-regulation between government and platforms should therefore be impartial and include checks and balances on both sides: when one party holds too much regulatory authority, regulation can end up either less effective than desired or excessive.

Conclusion

In conclusion, the rise of digital platforms has given us powerful network effects, creating unprecedented new functions and value, but at the same time we must also deal with the negative impacts it brings. The cyberbullying suffered by Liu Xuezhou and the New Zealand mosque shootings show that digital platforms must be kept clean through regulation. Furthermore, platforms and governments should regulate digital platforms jointly, achieving co-regulation, shared governance, and mutual checks and balances so that platforms are managed from a neutral position.

 

References:

Australian Government. (2022, June 29). Digital Platform Regulators Forum communique. Office of the Australian Information Commissioner. https://www.oaic.gov.au/updates/news-and-media/digital-platform-regulators-forum-communique

 

Costello, M., Hawdon, J., & Ratliff, T. N. (2016). Confronting online extremism. Social Science Computer Review, 35(5), 587–605. https://doi.org/10.1177/0894439316666272

 

Donovan, J. (2019, October 28). Navigating the tech stack: When, where and how should we moderate content? Centre for International Governance Innovation. https://www.cigionline.org/articles/navigating-tech-stack-when-where-and-how-should-we-moderate-content/

 

Moynihan, H., & Patel, C. (2021, March 17). China’s domestic restrictions on online freedom of expression. Chatham House. https://www.chathamhouse.org/2021/03/restrictions-online-freedom-expression-china/chinas-domestic-restrictions-online-freedom

 

BBC News. (2022, January 25). Liu Xuezhou: Outrage over death of “twice abandoned” China teen. https://www.bbc.com/news/world-asia-china-60080061

 

Popiel, P. (2022). Digital Platforms as Policy Actors. In Digital Platform Regulation: Global Perspectives on Internet Governance. Springer Nature.

 

Timberg, C., Harwell, D., Shaban, H., Tran, A. B., & Fung, B. (2019, March 15). The New Zealand shooting shows how YouTube and Facebook spread hate and violent images — yet again. The Washington Post. https://www.washingtonpost.com/technology/2019/03/15/facebook-youtube-twitter-amplified-video-christchurch-mosque-shooting/

 

Wang, J. (2020, June 30). Regulation of Digital Media Platforms: The case of China. FLJS. https://www.fljs.org/regulation-digital-media-platforms-case-china

 

Zhou, S. (2021). Status and risk factors of Chinese teenagers’ exposure to cyberbullying. SAGE Open, 11(4), 215824402110566. https://doi.org/10.1177/21582440211056626

This work is licensed under a Creative Commons Attribution 4.0 International License.