
The spread of problematic content: Who is responsible, and how can it be solved?

Bullying, harassment, violent content, hate, porn and other problematic content circulates on digital platforms. Who should be responsible for stopping the spread of this content, and how?

 

 

“Social media” by Jason Howie is licensed under CC BY 2.0. Retrieved from:
https://commons.wikimedia.org/wiki/File:Social_media.png

 

By Yanglijing Chen

Arin2610

—-

Introduction

Over the past decade, digital platforms have made communication and connection between people unprecedentedly convenient, but this convenience has also opened the door to the rampant spread of problematic content. Hate speech, harassment, cyberbullying, pornography, terrorist material and other harmful content circulate widely on digital platforms; Duggan’s (2017) report found that approximately 40% of Americans have personally experienced online harassment. In this essay, I will first examine the two main actors at play in preventing the spread of this problematic content: platforms and governments. I will then discuss the primary measures digital platforms and governments adopt to limit the dissemination of harmful content.

 

Why should digital platforms be held accountable for the spread of problematic content?

There have been public appeals for digital platforms to take on the responsibility of preventing the spread of problematic content. The reasons for these appeals are manifold, but chief among them is that digital platforms are service providers. Specifically, digital platforms allow users to generate, upload, share and search for content, and they disseminate this user-generated content quickly to the public. In addition, with the advent of Web 2.0, digital platforms have become a primary source of information for many people, which means that any piece of misinformation or disinformation can easily affect them and cloud their judgement. These harms are amplified when such content is disseminated widely across digital platforms.

 

“Pinterest + iPad = Love” by Bunches and Bits is licensed under CC BY 2.0. Retrieved from https://www.flickr.com/photos/37825841@N04/6137341753

One example is Pinterest, a user-generated platform designed for sharing images and discovering information, which has been criticised for allowing misinformation about the efficacy and side effects of vaccinations to spread. Pinterest’s embedded search function makes it a primary source of information for many users making significant decisions. Moreover, as a user-generated platform, Pinterest allows any user to post their ideas and present themselves as authoritative, and this uncertain credibility has posed a risk to public health. Harmful misinformation about vaccine safety and measles misled the public, convincing many people that vaccinations were hazardous. The resulting “vaccine hesitancy”, which the WHO has named one of the top threats to global health, underscores Pinterest’s responsibility to surface authoritative vaccine information and to stop the spread of harmful content (The Lancet Child & Adolescent Health, 2019). It follows that, as a service provider hosting user content, Pinterest bears key responsibility for the harmful consequences that flow from hosting problematic content.

Hosting harmful content poses problems beyond public health. Violent, extremist and illegal content, pornography, hate speech and harassment circulate on the internet every day (Price, 2021). Taking responsibility for moderating problematic content not only improves user retention but also helps platforms avoid copyright breaches, controversy and criticism (Gillespie, 2018). The nature of the service digital platforms provide therefore determines their responsibility for stopping the spread of harmful content.

How can digital platforms beat back the distribution of problematic content?

The main digital platforms have taken a combined approach of human moderators and algorithms to remove harmful content from their platforms. This combined approach can be divided into several categories; the main ones are discussed below:

Automated moderation

Automated moderation is one of the major tools used in the content moderation process. It uses artificial intelligence and machine learning to process user-generated content and to replace, block or flag misleading content according to defined rules. The most basic form of automated moderation is the word filter, whose lists are set up by experts in moderation and industry trends. A word filter holds a list of disallowed words, and the tool will either filter out the questionable word, replace it with a defined alternative, or block the content altogether. The benefit of automated moderation is that it can address problematic content quickly and cost-efficiently, as demonstrated by CSAM (child sexual abuse material) detection technology, which can identify a wide spectrum of harmful content within microseconds.
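
To make the word-filter idea concrete, the following is a minimal sketch in Python. It is an illustration only, not any platform’s actual system: the banned-word list, the replacement token and the two-word threshold are all hypothetical, and real filters sit alongside expert-curated lists and machine-learning classifiers.

```python
import re

# Hypothetical banned-word list and replacement token, for illustration only.
BANNED_WORDS = {"badword1", "badword2"}
REPLACEMENT = "***"

def moderate(post: str) -> tuple[str, str]:
    """Return a moderation action ('allow', 'replace' or 'block') and the text to publish."""
    hits = [w for w in re.findall(r"\w+", post.lower()) if w in BANNED_WORDS]
    if not hits:
        return "allow", post
    if len(hits) <= 2:
        # A few banned words: mask them but keep the rest of the post visible.
        cleaned = re.sub(
            r"\w+",
            lambda m: REPLACEMENT if m.group().lower() in BANNED_WORDS else m.group(),
            post,
        )
        return "replace", cleaned
    # Many banned words: block the whole post.
    return "block", ""

print(moderate("this contains badword1 once"))  # ('replace', 'this contains *** once')
```

Even this toy example hints at why the approach is cheap and fast yet brittle: it matches the surface form of words and cannot judge context, which is exactly the limitation discussed next.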

However, automated moderation has limitations: algorithms sometimes fail to detect problematic content, or mistakenly remove legitimate content instead. Because automated moderation is imperfect, there are frequent calls for more humans to engage in the content moderation process and provide more accurate judgements (Gillespie, 2018). This leads to a second type of content moderation: manual moderation.

 

“Tweet up” by MDGovpics is licensed under CC BY 2.0. Retrieved from:https://www.flickr.com/photos/64018555@N03/6756406881

Manual moderation

Over the past few years, social media platforms such as Facebook, YouTube, and Twitter have hired tens of thousands of people to moderate the user-generated content on their platforms. Pre-moderation and post-moderation are two major methods adopted by these big tech companies.

  • Manual pre-moderation

With manual pre-moderation, a human moderator screens each piece of user-generated content and decides whether it should be removed, replaced or revised before it becomes visible to other users. The benefit of manual pre-moderation is that undesirable content is kept out of the visible community sections of a platform. It is particularly effective against online child sexual exploitation and grooming behaviour.

  • Manual post-moderation

With manual post-moderation, content is published instantly and reviewed by content moderators afterwards. This provides a better user experience and a faster-paced community, since user interactions occur in real time.

 

 

“Computer Science Technology” by Vaniercollege is licensed under CC BY 2.0. Retrieved from:https://www.flickr.com/photos/12930265@N04/7725405016

 

Reactive moderation

Reactive moderation relies on users to flag content they consider problematic. In this way, users hold each other accountable to the rules defined by the platform. For instance, Twitter users can flag questionable content to moderators simply by clicking the report button. Reactive moderation is a highly useful supplement to both manual and automated moderation.
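
As an illustration of the flagging workflow (a sketch under assumptions, not Twitter’s actual system), the Python snippet below collects user reports and escalates a post to a human moderator once a hypothetical threshold of distinct reporters is reached.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical escalation threshold; real platforms tune such signals continuously.
REVIEW_THRESHOLD = 3

@dataclass
class ReportQueue:
    """Collects user reports and flags heavily reported posts for human review."""
    reports: dict = field(default_factory=lambda: defaultdict(set))

    def report(self, post_id: str, reporter_id: str) -> bool:
        """Record a report; return True when the post should go to a moderator."""
        self.reports[post_id].add(reporter_id)  # at most one report counted per user
        return len(self.reports[post_id]) >= REVIEW_THRESHOLD

queue = ReportQueue()
queue.report("post_42", "user_a")
queue.report("post_42", "user_b")
print(queue.report("post_42", "user_c"))  # True: escalate to a human moderator
```

Counting distinct reporters rather than raw clicks is one simple way to blunt coordinated abuse of the report button, a known weakness of purely reactive schemes.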

 

“The End Of The Government Shutdown 2013” by Stephen D. Melkisethian is licensed under CC BY 2.0. Retrieved from:https://www.flickr.com/photos/stephenmelkisethian/10320387414/in/photostream/

 

Why are governments responsible for regulating harmful content online?

Even though digital platforms have implemented a series of rules and regulations to regulate their platforms in the past few years, harmful content has continued to grow online. Scandals involving harassment, election interference, misinformation, fake news, and violations of privacy call into question the self-regulation of digital platforms.

Section 230 of the United States’ Communications Decency Act is a powerful law that shields digital platforms from liability for user-generated content (Electronic Frontier Foundation, 2019). This means digital platforms are not legally responsible for regulating and removing harmful content. As a result, the self-regulation model has allowed digital platforms to prioritise users’ “free speech” over the safety of their platforms. For example, Twitter’s refusal to regulate user-generated content has turned it from an internet hero into an internet villain as problematic content on its platform has continued to grow (Klonick, 2018). Concern about losing users is another reason why digital platforms are reluctant to police content: because support for freedom of expression is one of their main attractions, harsh moderation may restrict users’ opportunities for interaction and erode their trust in the platform. Furthermore, although digital platforms have adopted some measures to stop the spread of harmful content, the moderation decisions they make are opaque and vague (Brown, 2021): millions of pieces of content posted by users from all over the world are moderated against general and imprecise rules. Digital platforms therefore do not do enough to moderate content on their own, and more effective, targeted government intervention is needed to ensure they do not abuse their positions.

 

“Misinformation” by Jimjarmo is licensed under CC BY 2.0.

 

How can governments limit the spread of harmful content?

Although each country has its own national circumstances, government regulation is the common method that nations worldwide use to regulate digital platforms and limit the dissemination of problematic content. Government regulation is also known as “command and control” regulation, in which governments threaten digital platform firms with legal and financial penalties to compel them to adhere to the rules (Gorwa, 2019). For instance, Germany’s Network Enforcement Act (NetzDG) compels platform companies to take down content that incites racial hatred, and companies that fail to delete illegal content face fines of up to 50 million euros (Echikson, 2019). Australia has likewise enacted the News Media and Digital Platforms Mandatory Bargaining Code, a mandatory code that addresses the bargaining power imbalance between Australian news businesses and the digital platforms that benefit from it.

Conclusion

In conclusion, digital platforms and governments around the world share responsibility for limiting the dissemination of problematic content. Digital platforms have adopted a series of self-regulation approaches to moderate user-generated content. However, owing to weak external regulation and the lure of profit, they fall short in their efforts to remove harmful content. Digital platforms therefore need to be regulated by governments to ensure they do not take advantage of their power.

 

 

 

—————————————————————-

References

Brown, N. (2021). Regulatory Goldilocks. Texas A&M Law Review, 8(3), 451–494. https://doi.org/10.37419/lr.v8.i3.1

Duggan, M. (2017, July 11). Online harassment 2017. Pew Research Center. https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/

Echikson, W. (2019, August 2). The impact of the German NetzDG law. CEPS. https://www.ceps.eu/ceps-projects/the-impact-of-the-german-netzdg-law/

Electronic Frontier Foundation. (2019). Section 230 of the Communications Decency Act. Electronic Frontier Foundation. https://www.eff.org/issues/cda230

Everything in Moderation. (n.d.). New America. https://www.newamerica.org/oti/reports/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/how-automated-tools-are-used-in-the-content-moderation-process/

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, CT: Yale University Press.

Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Grimes-Viort, B. (2010). 6 types of content moderation you need to know about. Social Media Today. https://www.socialmediatoday.com/content/6-types-content-moderation-you-need-know-about

Hirsh, J. (2019, September 11). Why Social Platforms Are Taking Some Responsibility for Content. Centre for International Governance Innovation. https://www.cigionline.org/articles/why-social-platforms-are-taking-some-responsibility-content/

Klonick, K. (2018). The new governors: The people, rules, and processes governing online speech. Harvard Law Review, 131(6), 1598+.

Price, L. (2021). Platform responsibility for online harms: towards a duty of care for online hazards. Journal of Media Law, 13(2), 238–261. https://doi.org/10.1080/17577632.2021.2022331

Rochefort, A. (2020). Regulating Social Media Platforms: A Comparative Policy Analysis. Communication Law and Policy, 25(2), 225–260. https://doi.org/10.1080/10811680.2020.1735194

Strandell, J. (2016, March 15). 5 Moderation Methods You Should Understand. Besedo. https://besedo.com/knowledge-hub/blog/5-moderation-methods-you-should-understand/

The Lancet Child & Adolescent Health. (2019). Vaccine hesitancy: a generation at risk. The Lancet Child & Adolescent Health, 3(5), 281. https://doi.org/10.1016/s2352-4642(19)30092-6