The Role of Content Moderation in Addressing Online Harms

Identification and Removal of Harmful Content, Combating Misinformation and Fake News, and Protecting Children and Underage Users

Online harassment lit review by Wikimedia Commons is licensed under CC BY-SA 4.0.

Introduction

The rapid expansion of the internet and social media platforms has transformed how we communicate, exchange information, and engage with others. However, the digital revolution has also brought a host of new difficulties, including the spread of online harms such as hate speech, cyberbullying, disinformation, and the exploitation of minors and underage users. Research shows that 15% of Americans experienced severe harassment in 2014, including physical threats, stalking, sexual harassment, and persistent harassment. This figure increased to 18% in 2017 and reached 25% by 2021. Furthermore, many people report experiencing several types of severe harassment at the same time (Xiao et al., 2023). By detecting and removing harmful content, combating disinformation, and safeguarding vulnerable users, content moderation plays a critical role in addressing these online harms. This essay examines the role of content moderation in three key areas, as well as the obstacles involved and potential solutions.

Identification and Removal of Harmful Content

Content moderation involves the identification and removal of harmful or inappropriate content from online platforms. This process is critical for ensuring a safe and positive user experience. Hate speech, cyberbullying, harassment, and criminal activity are all commonplace on the internet, and content moderation can help limit their impact. By quickly recognizing and deleting hate speech, which can promote violence and prejudice, platforms can help prevent its spread. Content moderation also aids the fight against cyberbullying and harassment by providing a safe environment in which users can express themselves without fear of being targeted or attacked. Furthermore, content moderation is critical in upholding legal and ethical norms, ensuring that illicit activities such as child exploitation, terrorism, and drug trafficking are not enabled or encouraged on online platforms.

Online harassment of women journalists by Wikimedia Commons is licensed under CC BY-SA 4.0.

To identify harmful content, content moderation teams use a variety of techniques. They employ automated tools and algorithms to flag potentially harmful material, which is then evaluated by human moderators. These moderators are trained to interpret and enforce community standards consistently and equitably, without bias or censorship, and they are critical in finding and removing content that breaches those standards, safeguarding users from harm. Artificial intelligence services are also common tools: AI content classifiers can recognize sexual, violent, hateful, and self-harm content with fine granularity, and severity scores indicate how serious the harmful content is. Moderators may use a risk-scoring engine to quickly uncover unknown violations and prioritize those that pose the greatest risk, and trusted flaggers are relied upon to detect potentially harmful content that platforms must remove (Schneider & Rizoiu, 2023). Platforms have also historically depended on users to help them detect harmful content, and most platforms provide buttons for reporting it. While this strategy has led to the removal of large amounts of prohibited content, the content may have already spread uncontrollably by the time it is reported. Furthermore, people may report material simply because they disagree with it; this misuse of the feature can harass or suppress others while also clogging the moderators' queueing system (Paul & Reininger, 2021).
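The combination of classifier scores, severity thresholds, and user reports described above can be pictured as a simple triage policy. The sketch below is a minimal, hypothetical illustration in Python; the category names, the 0–6 severity scale, and the thresholds are assumptions chosen for illustration and do not correspond to any particular platform's or vendor's API.

```python
from dataclasses import dataclass

CATEGORIES = ("sexual", "violence", "hate", "self_harm")

@dataclass
class ModerationResult:
    scores: dict           # category -> severity score (0 = benign, 6 = severe)
    user_reports: int = 0  # number of users who flagged this item

def triage(result: ModerationResult) -> str:
    """Route content based on the worst category score and user reports."""
    worst = max(result.scores.get(c, 0) for c in CATEGORIES)
    if worst >= 5:
        return "auto_remove"   # clear, severe violation: remove immediately
    if worst >= 2 or result.user_reports >= 3:
        return "human_review"  # ambiguous or repeatedly reported: escalate
    return "allow"             # no action, but keep accepting reports

# Example: a post with a moderate hate score and several user reports
# is escalated to human moderators rather than removed automatically.
post = ModerationResult(scores={"hate": 3, "violence": 1}, user_reports=4)
print(triage(post))            # -> human_review
```

A policy like this reflects the trade-off discussed above: automation handles the clearest and most severe cases quickly, while ambiguous or user-reported material is routed to human reviewers rather than deleted outright.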

Combating Misinformation and Fake News

Fake News – Computer Screen Reading Fake News by Wikimedia Commons is licensed under CC BY 2.0.

In the digital age, the spread of disinformation and fake news has become a major concern, and content moderation is critical in the fight against it. Content moderators help preserve the integrity of information shared online by identifying and removing misleading or fraudulent content. By fact-checking and validating information, content moderation limits the spread of false or misleading content that can have real-world consequences. Content moderation teams work with fact-checking organizations to discover and flag disinformation using AI-powered methods. They examine reported material to determine its veracity and reliability. If material is found to be inaccurate or misleading, it is either removed or labelled as such, giving users the context they need to make informed decisions.
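As a rough illustration of the remove, label, or review decision described above, the sketch below checks a claim against a small, made-up fact-check table. The table, the verdict labels, and the actions are hypothetical assumptions; real pipelines match claims against fact-checking partners' databases rather than a hard-coded dictionary.

```python
# Hypothetical fact-check verdicts keyed by a normalised claim string.
FACT_CHECKS = {
    "5g causes covid-19": "false",
    "the covid-19 outbreak was staged": "false",
    "masks reduce transmission": "supported",
}

def moderate_claim(claim: str) -> dict:
    """Return the action taken and the context label shown to users, if any."""
    verdict = FACT_CHECKS.get(claim.strip().lower())
    if verdict == "false":
        # Demonstrably false and potentially harmful: remove or down-rank it.
        return {"action": "remove", "label": None}
    if verdict is not None:
        # Checked and not harmful: keep it and attach the fact-check context.
        return {"action": "keep", "label": f"Fact check: {verdict}"}
    # Not yet checked: keep it visible but queue it for human review.
    return {"action": "queue_for_review", "label": "Unverified claim"}

print(moderate_claim("5G causes COVID-19"))  # -> {'action': 'remove', 'label': None}
```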

For example, social media companies used content moderation to curb disinformation and fake news during the COVID-19 pandemic. Social media played a critical role in informing people about the outbreak, shaping how information was viewed, transmitted, and shared online, especially while physical distancing measures were in place. While these tools enabled people to participate in public conversations about COVID-19, the narratives shared on social media were also marked by doubt, disagreement, and incorrect or misleading advice (Baker et al., 2020). Most of us have seen false stories about COVID-19: that the illness is caused by 5G networks, or that the entire outbreak was staged and France's hospitals are empty. They may be amusing, but such assertions have serious consequences, as both the WHO and the EU have warned (Griffin, 2020). To address these concerns, global technology corporations implemented new content moderation standards based on the notion of harm to combat the spread of disinformation and misinformation online (Baker et al., 2020). However, the measures taken against fake news about COVID-19 also endanger free expression (Griffin, 2020).

Protecting Children and Underage Users

Circles of care by DFID is licensed under CC BY 2.0.

Content moderation is critical in safeguarding children and young people from online dangers. Online platforms must make their systems safe and appropriate for young users. Content moderators play a key part in detecting and removing content that is dangerous or unsuitable for minors, such as sexual or violent material, and they work to prevent child grooming, exploitation, and other types of online abuse. Content moderation teams use a combination of automated algorithms and human review to safeguard children: they detect and remove obscene or inappropriate material using image recognition technologies, they monitor user interactions and flag any suspicious or dangerous behavior, and age restrictions are in place to prevent minors from viewing unsuitable content.

Online hazards for children include inappropriate content, cyberbullying, predatory online grooming, and technology addiction (ICMEC, 2023). The UK Online Safety Bill aims to safeguard children and adults in the UK who use the internet, and the EU's Digital Services Act, the UK's Online Safety Bill, and the frequently amended US Children's Online Privacy Protection Act (COPPA) have all made child protection a moral and legal duty. COPPA imposes specific obligations on operators of websites or online services directed at children under the age of 13. Online age verification can be accomplished using a variety of methods, collectively referred to as age assurance technologies (GOV.UK, 2022).
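To make the COPPA-style rule concrete, the sketch below implements a simple age gate: users under 13 are only admitted when verifiable parental consent is recorded. The age cutoff is taken from COPPA, but the `parental_consent` flag and function names are illustrative assumptions, not a compliance recipe or any platform's actual implementation.

```python
from datetime import date

COPPA_AGE = 13  # COPPA applies to children under 13

def age_on(birth_date: date, today: date | None = None) -> int:
    """Whole years elapsed between birth_date and today."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def can_use_service(birth_date: date, parental_consent: bool = False) -> bool:
    """Admit under-13 users only when verifiable parental consent is recorded."""
    if age_on(birth_date) >= COPPA_AGE:
        return True
    return parental_consent

print(can_use_service(date(2015, 6, 1)))                         # False: under 13, no consent
print(can_use_service(date(2015, 6, 1), parental_consent=True))  # True
```

In practice, self-declared birth dates are easy to falsify, which is why the age assurance technologies mentioned above (such as document checks or age estimation) are used to strengthen gates like this one.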

Conclusion

By detecting and removing harmful content, combating disinformation and fake news, and safeguarding minors and underage users, content moderation plays a critical role in mitigating online harms. It helps preserve a safe and healthy online environment by limiting the spread of hate speech, cyberbullying, harassment, and criminal activity. By combating disinformation and fake news, content moderation also protects the integrity of information shared online. Furthermore, it shields minors and underage users from inappropriate content and online abuse.

Ninja, R. (2021). Content moderation job – Description, qualification, what does it take to be one? [Video]. YouTube. https://www.youtube.com/watch?v=na9kL7hWpi8

However, content moderation is a difficult undertaking that must be approached with care in order to strike a balance between safeguarding free expression and preventing harm. Transparency, accountability, and coordination among platforms, users, and regulators are required to address online harms effectively through content moderation. Continued research into and development of AI-powered solutions can improve the efficacy of content moderation in combating online harms.

Reference List

Azure AI content safety – AI content moderation. (n.d.). Microsoft Azure. Retrieved October 5, 2023, from https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety

Baker, S. A., Wade, M., & Walsh, M. J. (2020). The challenges of responding to misinformation during a pandemic: Content moderation and the limitations of the concept of harm. Media International Australia, 177(1), 103–107. https://doi.org/10.1177/1329878X20951301

Bhatt, H. (2022, August 3). The Digital Services Act (EU): What is it and how to comply with it? Spectrum Labs. https://www.spectrumlabsai.com/the-blog/european-union-dsa-compliance

Bhatt, H. (2023, February 2). The UK Online Safety Bill and its requirements for compliance. Spectrum Labs. https://www.spectrumlabsai.com/the-blog/uk-online-safety-bill

Bhatt, H. (2023, April 13). COPPA Regulations: Ensuring children’s online privacy and staying compliant. Spectrum Labs. https://www.spectrumlabsai.com/the-blog/coppa-compliance-spectrum-labs

GOV.UK. (2022, December 16). A guide to the Online Safety Bill. GOV.UK. https://www.gov.uk/guidance/a-guide-to-the-online-safety-bill

Griffin, R. (2020, July 9). Social media and content moderation in times of Covid-19. Sciences Po. https://www.sciencespo.fr/public/chaire-numerique/en/2020/07/09/social-media-and-content-moderation-in-times-of-covid-19/

ICMEC. (2023, May 16). Protecting children online: The importance of content moderation. Medium. https://icmec.medium.com/protecting-children-online-the-importance-of-content-moderation-5e096d9df369

Paul, C., & Reininger, H. (2021, July 20). Platforms should use algorithms to help users help themselves. Carnegie Endowment for International Peace. https://carnegieendowment.org/2021/07/20/platforms-should-use-algorithms-to-help-users-help-themselves-pub-84994

Schneider, P. J., & Rizoiu, M.-A. (2023). The effectiveness of moderating harmful online content. Proceedings of the National Academy of Sciences, 120(34). https://doi.org/10.1073/pnas.2307360120

Xiao, S., Jhaver, S., & Salehi, N. (2023). Addressing interpersonal harm in online gaming communities: The opportunities and challenges for a restorative justice approach. ACM Transactions on Computer-Human Interaction, 30(6), 1–36. https://doi.org/10.1145/3603625

The Role of Content Moderation in Addressing Online Harms © 2023 by Keyi Tang is licensed under CC BY-NC 4.0.