The Internet’s Guardians: Balancing Automation and Human Oversight

“Social Media” by MySign AG is licensed under CC BY 2.0.

Introduction

  • The Growth of the Global Web
  • Concerns about Hazardous Information

In the contemporary era, the global web has grown into a vast collection of knowledge, ideas and relationships. As a consequence of this growth, there is considerable concern about the spread of potentially hazardous information on web-based platforms, which has led to debate about the most suitable approaches for controlling content. Monitoring objectionable content is difficult, since it can range from misinformation to open discrimination (Gorwa et al., 2020). Although technical breakthroughs have introduced automated solutions for content moderation, the crucial role that societal engagement plays in this field is being recognised more and more (Jhaver et al., 2019). This essay argues that addressing online harms requires the community to play a critical role in content moderation, and it emphasises the beneficial effects of community monitoring programmes, community initiatives, and multi-stakeholder participation.

“Dialogue With American Civil Society on Civil and Political Rights in the United States” by US Mission Geneva is licensed under CC BY-ND 2.0.

Societal Engagement in Content Moderation

  • The Value of Online Communities
  • Community Oversight and Reporting Systems
  • Success Stories: Wikipedia and Reddit

In the first place, societal engagement, which includes the combined efforts of online communities, is essential for addressing online harms because it provides a depth of knowledge and contextual awareness that automated solutions often miss (Dias Oliva et al., 2020). Central to the ethos of societal participation are the intertwined concepts of community oversight and reporting systems. Community monitoring involves more than moderation alone; it also requires fostering a culture of responsibility. Users are encouraged to take part in moderating, curating, and even shaping content, ensuring that platforms not only enforce a set of rules but also evolve with the collective understanding of their audiences (Sander, 2020). Reporting systems complement this by empowering users to be the eyes and ears on the digital ground (Hoque et al., 2021). Users can alert platforms to inappropriate or dangerous information, enabling rapid responses while giving platforms valuable signals about emerging online hazards.

The success of Wikipedia, a massive collection of knowledge with millions of articles in many languages, is proof of the value of societal engagement (Vincent & Hecht, 2021). Its community-based content control has proven remarkably effective: volunteers around the world review edits, correct inaccuracies, and ensure content neutrality, united by a shared belief in the free exchange of knowledge. Notwithstanding predictable disagreements, Wikipedia’s success is a reminder of the power of a well-organised community in maintaining online material. Furthermore, Reddit, a website renowned for its many communities, or “subreddits,” offers another perspective on the value of societal engagement. Each subreddit, governed by its own set of rules, relies on both moderators and community members to uphold content standards. For instance, a subreddit dedicated to scientific discussions will have stringent rules against misinformation, and its members actively debunk false claims, ensuring the integrity of discussions (Krishnan et al., 2021). This granularity in moderation, tailored to specific community needs, showcases the adaptability and context-awareness of societal participation.

“Policy Talks @ the Ford School: Ruth C. Browne, CEO of the Arthur Ashe Institute” by University of Michigan’s Ford School is licensed under CC BY-ND 2.0.

Collaboration for Effective Content Moderation

  • Multi-Stakeholder Approach
  • Educational Programming
  • YouTube’s NGO Partnership
  • Initiatives like “Safer Internet Day”

Furthermore, collaboration among platforms, civil society organisations, and governments emerges as an important approach to dealing effectively with online harms in the changing environment of the digital world (Cortesi et al., 2020). More precisely, multi-stakeholder collaboration and public education are the core principles of this approach, which recognises that organisations ranging from major technology companies to community-based groups share responsibility for shaping the digital environment. In addition, educational programming focuses on equipping individuals with the knowledge and skills they need to use the internet safely and responsibly (Caena & Redecker, 2019).

A significant example of collaboration in addressing misinformation is the partnership between YouTube and various non-governmental organisations (NGOs). YouTube has enlisted the help of NGOs skilled in fact-checking and media literacy because automated algorithms struggle to recognise the nuances of disinformation (Abuín-Penas, 2023). To improve the accuracy of content management, these organisations review flagged content and provide insights into potentially misleading information, assisting in the development of educational materials that show users how to recognise reliable sources (Burgess et al., 2016). This partnership makes users more discerning about the information they consume while also improving the accuracy of content filtering, reinforcing the argument that community involvement in moderation is efficient and beneficial. Similarly, initiatives such as “Safer Internet Day,” in which technology companies, academic institutions, and even governmental groups take part, promote a more secure internet and ethical participation in the digital world. Through an array of workshops, campaigns, and resources, the importance of digital literacy, online decorum, and awareness of potential online threats is accentuated (Nishina et al., 2005). As technology evolves, the synergy between multi-stakeholder collaboration and public education serves as a guiding light, helping to keep the digital domain secure and accessible.

“Robot building” by Bill Selak is licensed under CC BY-ND 2.0.

Automated Content Moderation Technologies

  • The Power of Machine Learning and AI
  • Twitter’s Automated System
  • Scalability vs. Human Judgment

While societal participation offers depth and nuance, automated content moderation technologies, driven by advances in machine learning and artificial intelligence, offer scalable solutions to the vast expanse of digital content (Gorwa et al., 2020). In practice, machine learning and AI have become fundamental components of modern content moderation, helping platforms rapidly filter through massive volumes of data to find and flag content that may violate platform standards (Gongane et al., 2022).

Twitter, with its millions of tweets generated daily, leverages automated systems to monitor and manage content (Gorwa et al., 2020). Its models, trained on massive datasets, can detect and flag harmful content ranging from hate speech to disinformation. This automation enables real-time moderation, helping to ensure that harmful information is removed as quickly as possible, frequently before it gains traction. Because of this efficiency and scalability, many scholars advocate automated systems, arguing that given the sheer volume of content generated every minute, manual or societal moderation alone is insufficient. Platforms such as Facebook or Instagram, with billions of posts, comments, and images uploaded daily, would require an impractical number of human moderators to oversee content effectively (Cook et al., 2021). YouTube’s Content ID system, which automatically checks uploaded videos against a database of copyrighted material, further demonstrates the possibilities of automation (Gray & Suzor, 2020): it allows copyright holders to manage their content at scale, something that would be nearly impossible through manual means. While automated content moderation systems have limits, their ability to analyse large volumes of data rapidly and effectively should not be underestimated, which makes them an important part of the content moderation landscape (Gorwa et al., 2020).
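
To make this scalability argument concrete, the sketch below shows a minimal, hypothetical triage loop in Python. A scoring function, here a crude keyword ratio standing in for a trained classifier, assigns each post a risk score; confident violations are removed automatically, borderline cases are queued for human review, and the rest are allowed. The term list, thresholds, and function names are invented for illustration and do not describe the internal workings of any platform mentioned above.

    # Toy illustration of automated content triage, not any platform's real pipeline.
    # The keyword scorer stands in for a trained classifier; thresholds are arbitrary.

    from dataclasses import dataclass

    # Hypothetical terms a trained model might associate with policy violations.
    FLAGGED_TERMS = {"scam", "hate", "hoax"}

    @dataclass
    class Decision:
        post: str
        score: float
        action: str  # "remove", "human_review", or "allow"

    def score(post):
        """Stand-in for a model score: share of words matching flagged terms."""
        words = post.lower().split()
        if not words:
            return 0.0
        return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)

    def triage(post, remove_at=0.4, review_at=0.15):
        """Auto-remove confident violations, escalate uncertain cases to humans."""
        s = score(post)
        if s >= remove_at:
            action = "remove"
        elif s >= review_at:
            action = "human_review"
        else:
            action = "allow"
        return Decision(post, s, action)

    if __name__ == "__main__":
        for post in ["Classic hoax and scam, pure hate.",
                     "Looks like a hoax to me.",
                     "Lovely photo of the harbour at sunset."]:
            print(triage(post))

The middle band is the key design choice in such a sketch: it is where automated scale hands over to the human judgement discussed in the next section.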

“Made In Deptford (15)” by ljcybergal is licensed under CC BY-ND 2.0.

Limitations and the Need for Human Judgment

  • Errors in Automated Content Moderation
  • Unforeseen Effects and Moral Dilemmas
  • The Essential Role of Human Moderators

Although automated content moderation tools offer outstanding scalability and real-time responsiveness, they are not without limitations, which frequently result in unintended consequences. Any critique of automated systems must consider errors of judgement, ethical dilemmas, and the lack of transparency in AI-driven decisions (Anagnostou et al., 2022). While these programmes are sophisticated, they typically lack the contextual understanding that human judgement provides, which leads to content moderation errors. A notable instance of such misjudgement is YouTube’s automated system, which, in its zeal to combat copyright infringement, has mistakenly taken down legitimate content (Choi et al., 2022). Independent creators, educators, and even official channels have faced unwarranted content removals or demonetisation because the system cannot discern between genuine violations and fair use or other legitimate uses (Anagnostou et al., 2022). Such mistakes illustrate the limits of relying solely on automated approaches. Although automated tools have unquestionable benefits in terms of speed and scalability, human judgement is frequently needed to handle the complexities of content filtering (Endsley, 2016). This underlines the essential role that human moderators and increased public participation play in creating an equitable and balanced digital ecosystem.
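
The copyright example can be made more concrete with a small, hypothetical sketch of exact fingerprint matching, the general family of techniques behind systems such as Content ID (the real system is far more sophisticated and is not reproduced here). Every short run of words in an upload is hashed and compared against the fingerprints of a registered work; a reviewer who quotes a brief excerpt for commentary, a classic fair-use pattern, still overlaps and is flagged, because the matcher has no concept of quotation or critique. The windowing scheme and all names below are invented for illustration.

    # Deliberately simplified fingerprint matcher, illustrating why exact matching
    # produces false positives on fair-use material. This is not YouTube's Content ID.

    import hashlib

    def fingerprints(text, window=5):
        """Hash every run of `window` consecutive words as a crude fingerprint."""
        words = text.lower().split()
        return {
            hashlib.sha256(" ".join(words[i:i + window]).encode()).hexdigest()
            for i in range(len(words) - window + 1)
        }

    # A registered, copyrighted work (toy example).
    registered = fingerprints(
        "the quick brown fox jumps over the lazy dog every single morning"
    )

    # A critic quotes a short excerpt to comment on it: a classic fair-use pattern.
    review = "In my view the line 'the quick brown fox jumps over the lazy dog' is overrated."

    if fingerprints(review) & registered:
        print("Flagged: upload shares material with a registered work.")
        # The matcher sees overlap but has no notion of commentary, parody,
        # or quotation, so a human appeal process is still required.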

Conclusion

In conclusion, while automated systems, with their scalability and real-time capabilities, provide part of the answer, they are not without shortcomings, frequently lacking the nuanced understanding that human judgement provides. The collective responsibility and active participation of individuals therefore remain essential to a reliable and more accessible internet.

Bibliography:

Abuín-Penas, J. (2023). Fact-checking en YouTube en España: Tipología de verificaciones en vídeo en 2021 [Fact-checking on YouTube in Spain: A typology of video verifications in 2021]. Index Comunicación, 13(1), 247–269. https://doi.org/10.33732/ixc/13/01factch

Anagnostou, M., Karvounidou, O., Katritzidaki, C., Kechagia, C., Melidou, K., Mpeza, E., Konstantinidis, I., Kapantai, E., Berberidis, C., Magnisalis, I., & Peristeras, V. (2022). Characteristics and challenges in the industries towards responsible AI: a systematic literature review. Ethics and Information Technology, 24(3). https://doi.org/10.1007/s10676-022-09634-1

Burgess, S., Bingley, S., & Banks, D. A. (2016). Blending Audience Response Systems into an Information Systems Professional Course. Issues in Informing Science and Information Technology, 13, 245–267. https://doi.org/10.28945/3488

Caena, F., & Redecker, C. (2019). Aligning teacher competence frameworks to 21st century challenges: The case for the European Digital Competence Framework for Educators (Digcompedu). European Journal of Education, 54(3), 356–369. https://doi.org/10.1111/ejed.12345

Choi, D., Lee, U., & Hong, H. (2022). “It’s not wrong, but I’m quite disappointed”: Toward an Inclusive Algorithmic Experience for Content Creators with Disabilities. CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491102.3517574

Cook, C. L., Patel, A., & Wohn, D. Y. (2021). Commercial Versus Volunteer: Comparing User Perceptions of Toxicity and Transparency in Content Moderation Across Social Media Platforms. Frontiers in Human Dynamics, 3. https://doi.org/10.3389/fhumd.2021.626409

Cortesi, S. C., Hasse, A., Lombana, A., Kim, S., & Gasser, U. (2020). Youth and Digital Citizenship+ (Plus): Understanding Skills for a Digital World. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3557518

Dias Oliva, T., Antonialli, D. M., & Gomes, A. (2020). Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online. Sexuality & Culture, 25. https://doi.org/10.1007/s12119-020-09790-w

Endsley, M. R. (2016). From Here to Autonomy. Human Factors: The Journal of the Human Factors and Ergonomics Society, 59(1), 5–27. https://doi.org/10.1177/0018720816681350

Gongane, V. U., Munot, M. V., & Anuse, A. D. (2022). Detection and moderation of detrimental content on social media platforms: current status and future directions. Social Network Analysis and Mining, 12(1). https://doi.org/10.1007/s13278-022-00951-3

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Gray, J. E., & Suzor, N. P. (2020). Playing with machines: Using machine learning to understand automated copyright enforcement at scale. Big Data & Society, 7(1). https://doi.org/10.1177/2053951720919963

Hoque, M. A., Ferdous, M. S., Khan, M., & Tarkoma, S. (2021). Real, Forged or Deep Fake? Enabling the Ground Truth on the Internet. IEEE Access, 9, 160471–160484. https://doi.org/10.1109/access.2021.3131517

Jhaver, S., Birman, I., Gilbert, E., & Bruckman, A. (2019). Human-Machine Collaboration for Content Regulation. ACM Transactions on Computer-Human Interaction, 26(5), 1–35. https://doi.org/10.1145/3338243

Krishnan, N., Gu, J., Tromble, R., & Abroms, L. C. (2021). Research note: Examining how various social media platforms have responded to COVID-19 misinformation. Harvard Kennedy School Misinformation Review, 2(6). https://doi.org/10.37016/mr-2020-85

Nishina, A., Juvonen, J., & Witkow, M. R. (2005). Sticks and Stones May Break My Bones, but Names Will Make Me Feel Sick: The Psychosocial, Somatic, and Scholastic Consequences of Peer Harassment. Journal of Clinical Child & Adolescent Psychology, 34(1), 37–48. https://doi.org/10.1207/s15374424jccp3401_4

Sander, B. (2020). Freedom of expression in the age of online platforms: The promise and pitfalls of a human rights-based approach to content moderation. Fordham International Law Journal, 43(4), 939–1006. https://heinonline.org/HOL/Page?handle=hein.journals/frdint43&div=29&g_sent=1&casa_token=&collection=journals

Vincent, N., & Hecht, B. (2021). A Deeper Investigation of the Importance of Wikipedia Links to Search Engine Results. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–15. https://doi.org/10.1145/3449078