Dissemination of harmful content on online platforms: who should be responsible and how?

"Social Media Icons Color Splash Montage - Landscape" by Blogtrepreneur is licensed under CC BY 2.0.

In today’s social media era, the anonymity of online platforms and the opacity of their communication channels allow harmful speech to proliferate, while the interactive, social nature of those platforms gives such speech room to spread. This paper argues that harmful speech carries economic, political, and social hazards, and that platforms, governments, and users must act, individually and jointly, to address it.

Platforms and self-regulation

To begin with, social media platforms are responsible for stopping the spread of harmful content because they act both as intermediary services that allow users to post harmful content online and as creators of the online social spaces in which that content spreads quickly (Price, 2022). In an online survey of U.S. adults conducted in August 2021, more than 90 percent of respondents said that social media platforms make it easier for people to make comments online that they would not make offline, to harass or threaten others, and to spread false information. For a healthy online environment, therefore, platforms need to limit the reach of such speech by establishing regulatory rules.

First, platforms need to make the processes by which they exercise their power more transparent, for example by informing users of the specific rules they have violated before removing their posts and by giving them an opportunity to explain and contest the enforcement (European Commission, 2022). This matters because some platforms, such as Facebook, claim that the boundaries of harmful content are sometimes difficult to define and that judging online language requires contextual understanding, which makes consistently correct enforcement hard.

“University of Maryland and Sourcefire Announce New Cybersecurity Partnership” by Merrill College of Journalism Press Releases is licensed under CC BY-NC 2.0.

Second, if platforms cannot comprehensively regulate harmful content once it is published, they can make improvements at the technical level that target content distribution. Because platform recommendation algorithms are driven by big data, platforms can develop AI review tools that screen and filter harmful content before the algorithms recommend it to users (Gorwa et al., 2020). This serves users’ personalized needs while resisting the spread of harmful content, and because it operates inside the platform’s own systems, it can respond quickly to the platform’s complexity and pace of innovation (a toy sketch of such a screening step appears at the end of this section).

Finally, it is undeniable that platform self-regulation still has many shortcomings. For example, the platform business model of targeted advertising depends on a large and active user base to generate revenue (Gillespie, 2017), so the incentive to retain users sometimes pushes platforms to compromise on moral questions. To better manage harmful content on online platforms, therefore, appropriate government intervention and regulation are needed.
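To make the screening step described above concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: real platforms use trained machine-learning classifiers rather than word lists (Gorwa et al., 2020), and the term set, threshold, and function names are placeholders, not any platform’s actual API.

```python
# Toy screening step: score candidate posts before the recommender surfaces
# them, and hold risky ones for human review. Placeholder logic only.
BLOCKED_TERMS = {"slur_1", "slur_2"}   # hypothetical term list
REVIEW_THRESHOLD = 0.5                 # hypothetical risk threshold

def risk_score(text: str) -> float:
    """Crude stand-in for a classifier: share of words on the blocked list."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in BLOCKED_TERMS for w in words) / len(words)

def screen_for_feed(posts: list[str]) -> tuple[list[str], list[str]]:
    """Split candidate posts into those safe to recommend and those held for review."""
    safe, held = [], []
    for post in posts:
        (held if risk_score(post) >= REVIEW_THRESHOLD else safe).append(post)
    return safe, held

safe, held = screen_for_feed(["hello world", "slur_1 slur_2"])
print(safe, held)  # ['hello world'] ['slur_1 slur_2']
```

The design point is where the filter sits: scoring candidates before recommendation targets distribution, not merely publication, which is exactly the lever discussed above.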

Government and co-regulation

Because harmful speech on the Internet can infringe citizens’ rights, disturb public order, and undermine national security, the government needs to participate in platform regulation to stop the spread of harmful content. Self-regulation by platforms alone is insufficient; it must be complemented by co-regulation with government, in which the state provides a broader legal framework and coordinates and oversees platform regulation while technology companies retain substantial decision-making power (Eva et al., 2005).

"government" by Mike Lawrence is licensed under CC BY 2.0
“government” by Mike Lawrence is licensed under CC BY 2.0

First, the government can legislate how online platforms must deal with harmful content and impose punitive measures. Germany’s Network Enforcement Act (NetzDG), for example, applies to all platforms with more than two million registered users in Germany and requires them to investigate complaints thoroughly and remove manifestly illegal content within 24 hours, on pain of fines of up to 50 million euros. The law has achieved strict control over the spread of harmful information online, and by imposing greater accountability it motivates the Internet industry to comply with the public interest of its own accord. But it remains controversial, because many believe it restricts freedom of expression; a toy encoding of its core obligations follows below.
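For illustration only, the law’s two headline obligations can be encoded as a toy check in Python. The function names and data layout are invented, and the statute itself is more nuanced (less clear-cut illegal content, for instance, gets a longer window), so this sketches the logic rather than NetzDG itself.

```python
# Toy encoding of two NetzDG obligations: coverage by platform size, and the
# 24-hour removal window for manifestly illegal content. Illustration only.
from datetime import datetime, timedelta

USER_THRESHOLD = 2_000_000              # platforms above this size are covered
REMOVAL_DEADLINE = timedelta(hours=24)  # window for manifestly illegal content

def platform_is_covered(registered_users_in_germany: int) -> bool:
    return registered_users_in_germany > USER_THRESHOLD

def removal_overdue(reported_at: datetime, removed_at: datetime | None,
                    now: datetime) -> bool:
    """True if a complaint about manifestly illegal content missed the window."""
    deadline = reported_at + REMOVAL_DEADLINE
    return (removed_at or now) > deadline

# Example: a complaint filed at noon is still unresolved 30 hours later.
filed = datetime(2022, 10, 1, 12, 0)
print(removal_overdue(filed, None, filed + timedelta(hours=30)))  # True
```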

“Facebook” by chriscorneschi is licensed under CC BY-SA 2.0.

Second, the government can review whether platforms have sound algorithmic systems for checking harmful content and suggest improvements (Zuckerberg, 2019). Since January 2018, the French government has sent teams of senior civil servants to assess the effectiveness of Facebook’s content governance. This approach tempers the platform’s power over content while giving the state a modest role in content auditing: the state participates indirectly, by shaping the technology company’s procedures and operating model, rather than taking part in content review directly. This reduces the concern that government will unduly impede freedom of expression and helps balance the fundamental right to free expression against the public interest (Eva et al., 2005). In any case, legal foundations and government involvement can go some way toward protecting platform users from the rampant spread of harmful content. But this requires regulation not only by governments and platforms but also by users of their own behavior.

Users and conscious management

Finally, individual users bear some responsibility for the dissemination of harmful content on platforms. In today’s Internet age, platforms own the content while users create it. Users need to be fully aware of how harmful content damages social development and the physical and mental health of other users, and they need to take responsibility for their own speech. A 2017 Pew Research Center study reported that 41 percent of Americans had experienced online harassment, up from 35 percent in 2014. Similarly, in a 2018 Anti-Defamation League survey, 53 percent of Americans said they had experienced hate speech or harassment online. Such harmful speech is not confined to private messages; more often it takes the form of subjective statements spread publicly on the platform that leave their targets feeling violated and uncomfortable.

“Algorithmic Contaminations” by derekGavey is licensed under CC BY 2.0.

First, to stop the spread of harmful content, users need to recognize and step out of the ways platforms manipulate them. The platform “filter bubble”, for example, is an algorithmic mechanism that narrows users’ exposure to information, makes differing viewpoints hard to reach, and drives social fragmentation (Flew et al., 2019). Because platforms recommend large amounts of information that matches people’s existing worldviews, users are misled into believing that their own views are mainstream and correct (a toy sketch of this feedback loop appears at the end of this section). Users should appreciate the importance of diversity in online society and recognize that extreme expression of their views is undesirable. In addition, users should actively report harmful content on the platforms. Many nonprofit organizations maintain accounts on major platforms, such as Report Harmful Content on Twitter, which regularly posts guidance and links to help users report harmful content online; by following them, users can learn how to deal with harmful content they encounter.
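Here is the toy sketch of the filter-bubble feedback loop promised above. The topics, the user model, and the scoring rule are all invented; real recommender systems are far more sophisticated, but the loop itself, engagement feeding back into ranking, is the mechanism at issue (Flew et al., 2019).

```python
# Toy "filter bubble": rank items by how often their topic already appears
# in the user's history, then feed the user's clicks back into that history.
from collections import Counter

def recommend(history: list[str], catalogue: list[str], k: int = 3) -> list[str]:
    """Rank catalogue items by how often the user has already engaged with them."""
    topic_counts = Counter(history)
    return sorted(catalogue, key=lambda topic: -topic_counts[topic])[:k]

history = ["politics_A", "politics_A", "sports"]
catalogue = ["politics_A", "politics_B", "sports", "science"]

for step in range(3):
    feed = recommend(history, catalogue)
    history.append(feed[0])  # the user clicks the top recommendation
    print(step, feed)
# The feed converges on "politics_A": clicks feed back into ranking, so
# dissenting topics ("politics_B", "science") rarely surface.
```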

People therefore need greater digital literacy and active involvement in cleaning up the online environment if harmful content is to be stopped from spreading further on platforms.


To conclude, stopping the spread of harmful content on online platforms is urgent for the healthy development of today’s Internet, and platforms, governments, and users should all take responsibility for it. However, the variety of channels through which information spreads online, together with the Internet’s speed and reach, makes harmful-content governance difficult for platforms. It is a complex, long-term process that requires participation at the technical level, at the legal level, and from the industry itself, among other parties, to build a civilized and healthy network environment. Much work remains on the road of network governance, and many issues still need further exploration, such as the legal positioning of platforms and the balance between business models and platform regulation.


References

European Commission. (2022, June 7). Illegal content on online platforms. Shaping Europe’s digital future. Retrieved October 15, 2022, from https://digital-strategy.ec.europa.eu/en/policies/illegal-content-online-platforms

Eva, L., Jos, D., & Patrick, R. (2005). The co-protection of minors in new media: A European approach to co-regulation. ResearchGate, 97–151.

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2017). Regulation of and by platforms. In The SAGE handbook of social media (pp. 254–278). SAGE.

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 205395171989794. https://doi.org/10.1177/2053951719897945

Price, L. (2022). Platform responsibility for online harms: Towards a duty of care for online hazards. Journal of Media Law, 13(2), 238–261. https://doi.org/10.1080/17577632.2021.2022331

Zuckerberg, M. (2019, April 1). Opinion | Mark Zuckerberg: The internet needs new rules. Let’s start in these four areas. The Washington Post. Retrieved October 15, 2022, from https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html