Vetting techniques and public participation: addressing the challenges of manipulation, discrimination and bias in online content vetting

The ideals of ‘open’ and ‘free’ platforms that resonate with democratic and utopian principles are fallacious; platforms must somehow manage content (Gillespie, 2018).

Introduction

This paper proposes a research topic, “Moderation and Public Social Engagement in Content Review Technologies”, which aims to address the current challenges of mass media manipulation, gender and racial discrimination, and algorithmic bias (machine automation) in the content review environment. In the Internet age, how effectively online content moderation can combat rampant misinformation and extremist content has become a familiar and highly controversial question.

With the rapid growth of the internet, “the shift of more and more public discourse, cultural production, and social interaction online, and the continued growth of large-scale, privately owned digital intermediaries in economic and cultural power” (Gillespie, 2018) has been accompanied by the spread of misinformation, breaches of personal privacy, data misuse, and increasing online hate speech and harassment on platforms. These issues confront the academic field of content review with pressing challenges in policy development, and in turn create opportunities for content auditing to promote diversity, inclusion, and fairness on online social media platforms. This research contributes to a deeper understanding of key issues in the field of content review and offers recommendations for policy development and practice.

“social media” by Sean MacEntee is licensed under CC BY 2.0.

Responsibility for content auditing

Content auditing, broadly defined as “the construction of governance mechanisms for community engagement to promote co-operation and prevent abuse” (Grimmelmann, 2015), is more than simple information filtering; it is a comprehensive community management and co-operation facilitation mechanism.

Content auditing is, in principle, the responsibility of the community: community members usually work together to uphold the rules and values of the community and to ensure the quality and compliance of content. However, as content has grown in scale and complexity, responsibility has expanded to social media platforms and beyond. As Gerrard and Thornham (2020) argue, the power and policy of social media content auditing lie not only in its processes and outcomes, but also in decisions about what to audit and why, decisions that shape the overall environment and influence of online social media. Yet platforms tend to rely on “deliberately blunt language” (Gillespie, 2018) when reweighing their vetting algorithms.

Gender and Cultural Challenges of Social Media Content Review

Gillespie argues that the ‘voice’ of a platform’s community norms often aligns with its ‘character’, and that the strength of a vetting system is heavily influenced by the culture of the community, that is, the culture, values and worldview of its employees (Gillespie, 2018). According to Gillespie (2018), “most full-time employees of social media platforms are overwhelmingly white, overwhelmingly male, overwhelmingly educated, overwhelmingly liberal or libertarian, and overwhelmingly technologically skilled and worldly”. This can lead to moderation systems that reflect a male-dominated perspective, meaning that inappropriate content may bypass censorship or continue to circulate on the fringes of mainstream platforms through further ‘softened’ rhetoric and cryptic images.

The problem is particularly acute on non-mainstream online forums such as Incels.co and 4chan, where a range of derogatory, defamatory and discriminatory comments are made against specific groups. Incel terms such as “stacys”, “noodlewhores”, “femoid”, and “meme gender” are prevalent there. This vocabulary allows misogynistic ideas to be digested, understood, and recirculated to a wide audience (Won, 2021), further exacerbating gender inequality in society.

“Fight for a world without sexism” by ctrouper is licensed under CC BY 2.0.

In terms of future policy responses and platform actions, therefore, a more diverse approach to recruiting and screening employees should be adopted, focusing on issues such as employee demographics, work practices and policies, identity markers and labour issues, as Gerrard and Thornham (2020) suggest. This would reduce the contentious problems of gender and racial inequality on online platforms and increase the fairness and inclusiveness of vetting.

Limitations of AI review technology
With the massive volume of content generated on online platforms, the enormous base of text, images, and videos cannot be handled by manual review alone. Public attention to content review has increased dramatically, catalysed by a number of high-profile events: the 2016 US presidential election, Gamergate, and random killings on the streets. The workload of content auditing can no longer be carried by traditional manual methods alone, and alternatives are needed to alleviate these inherent problems.

Under pressure from public opinion and governments, “AI algorithmic review systems are increasingly being used by major platforms such as Facebook, YouTube, and Twitter to conduct large-scale content audits of user-generated content” (Gorwa et al., 2020). During the Covid-19 pandemic, most reviewers were unable to work as they had in the past, and “Twitter and many mainstream social media outlets were forced to move to automated review altogether” (Gillespie, 2020).

Automated vetting uses computers to automatically identify and analyse the text, images, audio and video distributed on the web, and to prevent the posting of illegal and illicit content. While these intelligent tools accelerate the review process and reduce the burden of manual review, they have limitations.
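To make this concrete, the sketch below shows a deliberately simplified, hypothetical vetting step: a rule-based blocklist check stands in for the machine-learning classifiers real platforms deploy, and the terms and decision rule are invented purely for illustration.

```python
# A minimal, hypothetical sketch of automated text vetting.
# The blocklist and decision rule are illustrative stand-ins for the
# machine-learning classifiers that real platforms deploy at scale.

BLOCKLIST = {"scamlink", "hatefulslur"}  # placeholder terms, not a real policy


def vet_post(text: str) -> str:
    """Decide whether a post is published or blocked before it appears."""
    tokens = text.lower().split()
    if any(token in BLOCKLIST for token in tokens):
        return "blocked"
    return "published"


if __name__ == "__main__":
    print(vet_post("lovely weather today"))           # published
    print(vet_post("click this scamlink right now"))  # blocked
```

Even this toy example hints at the scale problem: a single rule set must make the same call across millions of posts with no sense of context.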

“Artificial Intelligence & AI & Machine Learning” by mikemacmarketing is licensed under CC BY 2.0.

The shift to AI for large-scale censorship should be subject to careful and critical scrutiny. AI detection technologies struggle to identify inappropriate or offending content because they have difficulty interpreting context and subtle connections when distinguishing between complex aspects of content. This can result in the unnecessary removal and blocking of legitimate content, harming the user experience and infringing on freedom of expression.

In addition, “the need for cultural contextualisation in detection systems is a widely recognised limitation, as there are no all-encompassing algorithms that can be applied to different contexts” (Udupa et al., 2023). As a result, hate groups continue to evade AI detection through satire, humour, coded language, puns and cryptic wording, and keep posting harmful information online. For example, when trying to post text and images containing sensitive or offending content on platforms such as Weibo, PostBar, and Douban, users can blur, obscure, or rotate the sensitive parts, making it difficult for the AI system to determine whether the images contain harmful information. Of course, as AI detection technology advances, malicious users will continue to find new ways to bypass it.
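The coded-language problem can be illustrated with a small, hypothetical example: a naive keyword filter misses a simple character-substitution variant, and even a normalisation step only catches the substitutions it already knows about; the banned term and substitution table below are invented for illustration.

```python
# Illustrative only: why naive keyword matching misses coded language.
# The banned term and the substitution table are hypothetical examples.

BANNED = {"hate"}
LEET_MAP = str.maketrans({"4": "a", "3": "e", "1": "i", "0": "o", "5": "s"})


def naive_filter(text: str) -> bool:
    """Flag a post only if a banned term appears verbatim."""
    return any(term in text.lower() for term in BANNED)


def normalised_filter(text: str) -> bool:
    """Undo simple character substitutions before matching."""
    return any(term in text.lower().translate(LEET_MAP) for term in BANNED)


post = "so much h4te in this thread"
print(naive_filter(post))       # False: the coded spelling slips through
print(normalised_filter(post))  # True: normalisation catches this variant,
                                # but new codes appear as soon as old ones are learned
```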

Therefore, when using automated review tools, their limitations need to be carefully considered. As Gillespie (2020) points out, “automated tools are best suited for identifying the majority of cases, leaving less obvious or more controversial identifications to human reviewers.” Combining AI review with manual review in this way handles complex, controversial situations that involve ethical judgement, ensuring a balance between freedom of expression and effective content management while the accuracy and intelligence of AI technology continue to improve.

“Not to replace repetitive human judgement, but to augment the intelligence of the facilitator and the user community” (Gillespie, 2020)
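A minimal sketch of this division of labour, assuming a hypothetical classifier that returns a harm score between 0 and 1, might route posts as follows; the thresholds are invented for illustration and are not drawn from any real platform.

```python
# A sketch of hybrid triage: automate the clear-cut cases, escalate the rest.
# The classifier and thresholds are hypothetical, not any platform's policy.

REMOVE_THRESHOLD = 0.95  # confident enough to remove automatically
ALLOW_THRESHOLD = 0.05   # confident enough to leave the post alone


def triage(post_id: str, harm_score: float) -> str:
    """Route clear-cut cases automatically; queue ambiguous ones for humans."""
    if harm_score >= REMOVE_THRESHOLD:
        return f"{post_id}: removed automatically"
    if harm_score <= ALLOW_THRESHOLD:
        return f"{post_id}: published"
    return f"{post_id}: queued for human review"


for pid, score in [("a1", 0.99), ("a2", 0.01), ("a3", 0.60)]:
    print(triage(pid, score))
```

The thresholds effectively decide how much work lands on human reviewers: widening the middle band improves accuracy on hard cases but also raises the human workload.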

Public social engagement: safeguarding freedom of expression and acting as content moderation watchdogs

While platforms conduct content review through manual and AI technologies, public participation in the review of online content is also crucial. In the digital age, the internet has become the primary platform for expressing and sharing opinions, and users, as the direct recipients of content, are able to quickly identify and report issues.

“‘Peace is the only battle worth waging’ (51927833964)” by Alisdare Hickson from Woolwich, United Kingdom is licensed under CC BY-SA 2.0.

According to research, social media platforms offer the opportunity to inform a wide range of people about issues and invite users to participate in a variety of ways (US EPA, 2016). This empowers people, rather than leaving them subject to platform authority alone, to become watchdogs of the platforms. When users actively report false and sensitive content, platforms can combine manual and automated censorship techniques to identify and tag the relevant keywords, quickly stifling misinformation at its source, safeguarding the public’s freedom of expression more effectively, and curbing its further dissemination and proliferation.
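As a rough sketch of such a crowd-sourced reporting mechanism, assume a hypothetical rule that a post is escalated to human moderators once enough distinct users have reported it; the threshold and data structures below are illustrative only.

```python
# A sketch of crowd-sourced reporting: escalate once enough distinct users agree.
# The threshold and storage are hypothetical simplifications.

from collections import defaultdict

REPORT_THRESHOLD = 3
reports: dict[str, set[str]] = defaultdict(set)


def report(post_id: str, user_id: str) -> str:
    """Record a report; escalate once distinct reporters reach the threshold."""
    reports[post_id].add(user_id)
    if len(reports[post_id]) >= REPORT_THRESHOLD:
        return f"{post_id}: escalated to moderators"
    return f"{post_id}: report recorded"


for user in ("u1", "u2", "u2", "u3"):  # u2 reports twice, counted once
    print(report("post42", user))
```

Counting distinct reporters rather than raw reports is one simple way to blunt coordinated mass-reporting, itself a known abuse of such systems.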

By encouraging the public to actively participate in content review, more people can be involved in monitoring social media platforms through crowd-sourcing mechanisms, effectively counteracting potential abuses of power and censorship. As Sharma (2022) notes of Twitter users’ reactions to India’s COVID-19-related migrant crisis, “Whenever an image of a distraught citizen appears in the mass media, an angry public urges the authorities to take strong corrective action.” When social media users collectively scrutinise how mainstream events are moderated, they can amplify moral outrage on social media and, acting as a whole, push platforms to take action. This drives continuous improvement of censorship mechanisms, making them fairer and more transparent.

Conclusion

In the era of Internet platformisation, content moderation has become a core issue. Different Internet platforms have different management rules, but all must operate within the bounds of legal compliance and observe criminal laws and regulations.

Meeting the challenge of content appropriateness in the era of Internet platformisation requires comprehensive consideration of factors such as gender, culture, AI technology and public participation. Only through collaborative effort can we find the balance that safeguards freedom of expression while preserving equity and diversity in society, and so rise to the challenges of the evolving digital age.

References

Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media’s sexist assemblages. New Media & Society, 22(7), 1266–1286. https://doi.org/10.1177/1461444820912540

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. https://doi.org/10.1177/2053951720943234

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945. https://doi.org/10.1177/2053951719897945

Grimmelmann, J. (2015). The Virtues of Moderation. Yale Journal of Law and Technology. https://openyls.law.yale.edu/handle/20.500.13051/7798

Udupa, S., Maronikolakis, A., & Wisiorek, A. (2023). Ethical scaling for content moderation: Extreme speech and the (in)significance of artificial intelligence. Big Data & Society, 10(1), 20539517231172424. https://doi.org/10.1177/20539517231172424

Sharma, N. (2022). Digital moral outrage, collective guilt, and collective action: An examination of how Twitter users expressed their anguish during India’s Covid-19 related migrant crisis. Journal of Communication Inquiry, 01968599221081127. https://doi.org/10.1177/01968599221081127

US EPA. (2016, August 22). Public participation guide: Social media. www.epa.gov. https://www.epa.gov/international-cooperation/public-participation-guide-social-media

Won, Y. B. (2021, April 6). Male Supremacism, Borderline Content, and Gaps in Existing Moderation Efforts. GNET. https://gnet-research.org/2021/04/06/male-supremacism-borderline-content-and-gaps-in-existing-moderation-efforts/