Responsibility for platform content regulation

"Hate Speech stoppen_05.06.2019_0013" by campact is licensed under CC BY-NC 2.0 .

Introduction

Digital platforms have become indispensable in people’s daily lives. However, some people abuse the freedom of speech these platforms afford by posting offensive and insulting content on social media. Users, digital platforms and governments therefore all need to take responsibility for stopping harmful content. Users are responsible for their own words and actions through self-regulation. However, not every user has the self-awareness to self-regulate, so platforms must create effective measures to review and regulate content. And because platform measures alone are not always enough, governments still need to develop policies that monitor platform content and stop the spread of undesirable content.

 

Who is responsible for stopping the spread of undesirable content?

Users

When some users encounter unpleasant situations in their daily lives, they use their right to freedom of expression as a tool for verbal attack, posting violent, offensive and insulting content and comments on social media to vent their frustrations. Many users post their opinions on various topics online, but some post inappropriate comments that insult individuals (Yenala et al., 2017). Celebrities encounter abusive content too: Years & Years’ lead singer Olly Alexander, for example, has said he avoids looking at social media after singing on TV so that he does not see insulting comments, and some users have even directed rape jokes at him (BBC, 2022). While users need to be aware of their responsibility to self-regulate, some cannot, and they post offensive content without recognising the harm abusive language does to others. It therefore falls to platforms to take measures that stop the spread of undesirable content from users who cannot regulate themselves.

 

Platforms

“facebook (do we) connect?” by MrTopf is licensed under CC BY-NC 2.0.

Platforms need to take further responsibility when individual users lack a sense of self-responsibility and the ability to self-regulate. Because of the speed and scope of distribution on social media, it is the platform’s responsibility to stop the spread of undesirable content and so maintain users’ experience and online safety. Social media platforms are answerable for the behaviour of users who post objectionable content and must take steps to stop it (Gillespie, 2018). Platforms are therefore responsible, to a certain extent, for providing measures that intervene to halt objectionable content. Yet even where platforms accept this responsibility, their regulation does not always have a visible effect, so governments also need to take responsibility and provide assistance.

 

Government

Stopping the spread of undesirable content should not depend entirely on the regulatory responsibility of the platforms; governments should also be responsible. Platform measures sometimes lack authority and so have no substantial effect on some users, which is why governments can provide more deterrent and authoritative policies to support platform regulation. Government has a vital role in cyber security and law enforcement, as it can use the internet to learn about content that threatens public safety and take action (O’Hara & Hall, 2018). When government is involved in maintaining platform safety, joint regulation by the platform and the government carries more authority than platform self-regulation alone, and its effect is more significant.

 

Measures to stop the spread of undesirable content

Users improve self-regulation

Users can self-regulate through inner speech: talking to one’s inner self allows people to reflect on their actions and words, become aware of their effects, and adjust them (de Rooij, 2022). Everyone needs a sense of responsibility and accountability, taking ownership of their actions and monitoring their own behaviour. When people are educated about and aware of their responsibility to self-regulate, they can avoid posting undesirable content on platforms and so help stop it from spreading.

 

User hashtag campaigns on platforms

Through hashtag campaigns on different platforms, users post their experiences of online abuse and use their collective power to stop the spread of harmful content, calling on platforms and even governments to take that harm seriously. Many platforms give women a space to have a voice, and when women find users posting content that insults women, they are emboldened to call those people out publicly (Mendes et al., 2018). The #MeToo movement is an example.

“#MeToo: how it is changing the world” by The Economist. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=ATYK2svJ6eM

Women posting their experiences under the #MeToo hashtag, on platforms and in the real world, resonated with many users and made platforms and governments aware of how seriously harmful content spreads online. In this way, users themselves take action to stop the spread of undesirable content by fighting it directly.

Automatic content review and filtering on platforms

Undesirable content inevitably appears on any platform, and manual measures cannot always catch it the moment it is posted, so platforms need automatic content review and filtering. Automatically reviewing and filtering out inappropriate content and comments helps improve the quality of users’ experience (Yenala et al., 2017). When content is automatically reviewed and filtered, the platform stops the spread of inappropriate content and improves user satisfaction and experience; as it retains and attracts more users, the platform grows financially.
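To make the idea of automated review concrete, the sketch below shows, in Python, how a screening pipeline could score incoming comments and block, flag, or publish them. It is a minimal, hypothetical illustration only: the training examples, the review_comment helper and its thresholds are invented for this post, and a toy TF-IDF classifier stands in for the far larger deep learning models that Yenala et al. (2017) describe.

```python
# A toy moderation pipeline (illustrative only): train a simple text
# classifier on a handful of labelled comments, then screen new comments
# before they are published. Real platforms train deep models on millions
# of examples; this sketch just shows the review-filter-publish flow.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = inappropriate, 0 = acceptable.
comments = [
    "You are worthless and everyone hates you",
    "I will find you and hurt you",
    "Nobody wants to see your disgusting face",
    "Great performance last night, congratulations!",
    "I disagree with this policy, but it is well argued",
    "Thanks for sharing, this was really helpful",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a deliberately simple
# stand-in for the deep learning detectors in Yenala et al. (2017).
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

def review_comment(text: str, block_at: float = 0.8, flag_at: float = 0.5) -> str:
    """Block high-confidence abuse, flag borderline cases for human
    review, and publish everything else. Thresholds are invented."""
    p_abusive = model.predict_proba([text])[0][1]
    if p_abusive >= block_at:
        return "blocked"
    if p_abusive >= flag_at:
        return "flagged"  # routed to a human moderator
    return "published"

for post in ["You are worthless", "Loved the new album!"]:
    print(post, "->", review_comment(post))
```

Even in this toy form, the “flagged” branch matters: automated filters are unreliable at the margins, so platforms typically pair classifiers with human review queues rather than blocking on the model’s score alone.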

Conversely, when undesirable content hurts user satisfaction, the number of users falls and the platform’s economy suffers. Platforms fear that users overwhelmed by offensive material such as pornography and violence will be frightened into leaving, and as users gradually leave, the platform loses money (Gillespie, 2018). The demise of MySpace is an example (Sivakumar, 2020).

“What Killed MySpace? (It Was not Facebook).” by ColdFusion. All rights reserved. Retrieved from: https://www.youtube.com/watch?v=Xs5bOyNTPLw

Hate speech remains evident on MySpace, and users are more likely to encounter objectionable content when using the site, hence its deteriorating reputation. As a platform’s reputation deteriorates, its user numbers fall and its economic performance suffers. Automatic review and screening of content therefore not only protects user satisfaction and experience, and with them the platform’s financial viability, but also goes some way towards stopping the spread of undesirable content.

 

“Freedom of Expression” by littlestar19 is licensed under CC BY-NC 2.0.

Some users feel that platforms’ regulatory measures restrict their freedom of expression. Patterns of speech restriction have indeed emerged on the internet, and the protection of users’ speech suffers when platforms outwardly define themselves as open and fair while in fact operating as closed, private spaces (Pasquale, 2016). However, other users feel it is necessary to remove content that harms the public environment in order to keep that environment healthy, and for most users, removing content that conflicts with the platform’s standards does not amount to a restriction on freedom of expression (Jørgensen & Zuleta, 2020). Platforms must therefore weigh their regulation so that it stops the spread of objectionable content without infringing users’ right to freedom of expression.

 

European Commission

“European Flags” by Xavier Häpe is licensed under CC BY-NC 2.0.

Regulations set by governments play a vital role in maintaining public safety on platforms and stopping the spread of undesirable content. Government’s purpose is to promote the safe use of digital media and to urge platforms to recognise their responsibility to regulate the content they host (Asia News Monitor, 2020). The European Commission (2017), for example, has issued recommendations and measures for platforms to deal with illegal content. In them, the Commission describes digital platforms as a driver of the digital economy whose business models bind them closely to their users, notes that the emergence of undesirable content online has become a critical problem, and proposes improving dialogue with the sector to make platforms more vigilant about content review.

 

Conclusion

Users, digital platforms and governments should all take responsibility for stopping the spread of undesirable content. Users are responsible for raising their own awareness; however, user self-regulation alone does not make a substantial difference. When some users remain undisciplined, platforms can stop the dissemination of undesirable content through automated content detection and screening, and authoritative regulations issued by governments and commissions help prevent its spread effectively.

 

Reference List

BBC. (2022). 7 stars who have personal experiences of online bullying. Retrieved from https://www.bbc.co.uk/programmes/articles/3QcD9W13Dr0bxmt4CMWVkGk/7-stars-who-have-personal-experiences-of-online-bullying

de Rooij, A. (2022). Speaking to your inner muse: How self-regulation by inner speaking influences confidence during idea evaluation. Creativity Research Journal. Advance online publication. https://doi.org/10.1080/10400419.2022.2124356

European Commission. (2017, September 28). Tackling illegal content online. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52017DC0555

Gillespie, T. (2018). Regulation of and by platforms. In J. Burgess, A. E. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254–278). SAGE Publications.

Jørgensen, R. F., & Zuleta, L. (2020). Private governance of freedom of expression on social media platforms: EU content regulation through the lens of human rights standards. Nordicom Review, 41(1), 51–67. https://doi.org/10.2478/nor-2020-0003

Mendes, K., Ringrose, J., & Keller, J. (2018). #MeToo and the promise and pitfalls of challenging rape culture through digital feminist activism. European Journal of Women’s Studies, 25(2), 236–246. https://doi.org/10.1177/1350506818765318

Norway: Norway has responded to the European Commission concerning the regulation of digital platforms. (2020). Asia News Monitor.

O’Hara, K., & Hall, W. (2018). Four internets: The geopolitics of digital governance (No. 206). Centre for International Governance Innovation. https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance

Pasquale, F. (2016). Platform neutrality: Enhancing freedom of expression in spheres of private power. Theoretical Inquiries in Law, 17(2), 487–513. https://doi.org/10.1515/til-2016-0018

Sivakumar, B. (2020). Does Myspace still exist? | Why did Myspace fail? Feedough. Retrieved from https://www.feedough.com/does-myspace-still-exist-why-did-myspace-fail/

Yenala, H., Jhanwar, A., Chinnakotla, M. K., & Goyal, J. (2017). Deep learning for detecting inappropriate content in text. International Journal of Data Science and Analytics, 6(4), 273–286. https://doi.org/10.1007/s41060-017-0088-4