In response to increasing pressure from governments and the public, digital platforms are actively seeking technological solutions for content moderation, particularly for problems such as hate speech and misinformation. In recent years, the growing number of social media users has amplified the need for effective content moderation, and as more users contribute content, the importance of this process will only grow. Artificial intelligence (AI) plays a crucial role here because it helps platforms cope with an ever-expanding user base and content volume. Prominent social media platforms, such as Facebook and YouTube, have adopted a hybrid approach, combining technology with human reviewers to moderate user-generated content at scale. This essay explores AI's current advantages and disadvantages in content moderation, focusing on Facebook and YouTube. Artificial intelligence accelerates content moderation and reduces costs; however, it has difficulty understanding context and is susceptible to bias.
Artificial Intelligence accelerates content moderation
One of the advantages of artificial intelligence is its remarkable speed in promptly identifying and eliminating harmful content, including hate speech, harassment, and graphic violence, minimising users' exposure to dangerous material. AI teams build learning models that identify image content and analyse the text of posts. These models can then determine whether content complies with the platform's rules and whether action should be taken to remove it or reduce its distribution. For example, a platform might build an AI model to detect whether a piece of content contains gore. As a result, artificial intelligence can be trained to spot objectionable content before anyone reports it, detect and remove content that violates the platform's community standards, and remove, block, or filter it before it is even uploaded (Elkin-Koren, 2020).
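The classify-then-act flow described above can be sketched as follows. This is a minimal illustration, not any platform's actual system: the function name, labels, and threshold values are assumptions chosen for the example, and a real pipeline would use a trained model's output in place of the raw score.

```python
# Hypothetical sketch of the moderation flow: a trained classifier scores a
# piece of content, and the platform maps that score to an action.
# Thresholds and action labels are illustrative assumptions only.

def moderate(violation_score: float,
             remove_threshold: float = 0.9,
             reduce_threshold: float = 0.6) -> str:
    """Map a model's violation score (0.0-1.0) to a moderation action."""
    if violation_score >= remove_threshold:
        return "remove"   # high confidence: block before distribution
    if violation_score >= reduce_threshold:
        return "reduce"   # uncertain: limit reach, queue for human review
    return "allow"        # no action needed

print(moderate(0.95))  # remove
print(moderate(0.70))  # reduce
print(moderate(0.10))  # allow
```

The middle "reduce" band reflects the hybrid approach the essay describes: confident decisions are automated, while borderline cases are routed to human reviewers.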
Several examples demonstrate the importance of timely removal of harmful content for keeping online communities safe. On March 15, 2019, a terrorist live-streamed on Facebook as he entered the Al Noor Mosque in Christchurch, New Zealand, and killed more than 50 people inside (Gorwa et al., 2020). Only a few hundred Facebook users initially saw the gory footage before Facebook removed the video (Gorwa et al., 2020). However, before it was deleted, the video had been copied and re-posted on a number of platforms, including Facebook, YouTube and Twitter (Gorwa et al., 2020). Facebook representatives said that versions of the video had been uploaded at least 1.5 million times in the first 24 hours, and that about 80 per cent of them were automatically blocked at upload once artificial intelligence was applied to content moderation (Gorwa et al., 2020). Hence, the accelerated processing capability offered by artificial intelligence is vital for maintaining online communities.
Artificial Intelligence reduces costs
Artificial intelligence has been instrumental in significantly decreasing the cost of content moderation for online platforms. There is an up-front cost of training an AI model and an ongoing cost of maintenance and updating; however, once a model is successfully deployed for content review, it remains more cost-effective than manual review (Gillespie, 2020). The labour-intensive, repetitive process of examining and regulating user-generated content can be automated, so platforms no longer need a sizeable team of human moderators, saving substantially on salaries, benefits, and overhead. Moreover, unlike human moderators, AI models do not need to be hired or retrained when content volumes rise; this scalability keeps costs down even during periods of high demand. AI can also work 24/7 without shifts or overtime pay, and its algorithms process content faster, allowing platforms to handle larger volumes of content with fewer resources and lower operational costs. For example, after the Christchurch massacre was live-streamed, Facebook handled re-uploads through extensive use of artificial intelligence (Kniazieva, 2023): each video is hashed, compared against a database, and blocked if a match is found (Kniazieva, 2023). In addition, YouTube reports that 98 per cent of violent extremist videos are now flagged by AI algorithms (Gorwa et al., 2020). As a result, artificial intelligence effectively analyses user-generated content to find and remove offensive or unsafe material at lower cost.
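The hash-and-match blocking mentioned above can be illustrated in a few lines. This is a simplified sketch, not Facebook's actual system: production systems use perceptual hashes that survive re-encoding and cropping, whereas this example uses an exact SHA-256 digest purely to show the database-lookup step, and the sample byte strings stand in for real video files.

```python
import hashlib

# Minimal sketch of hash-based upload blocking. An exact cryptographic hash
# only matches byte-identical copies; real platforms use perceptual hashing.

def digest(video_bytes: bytes) -> str:
    """Return a fingerprint of the video's raw bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

# Database of fingerprints of known violating videos (illustrative).
known_violations = {digest(b"attack-livestream-copy")}

def should_block(upload: bytes) -> bool:
    """Block an upload if its fingerprint matches a known violation."""
    return digest(upload) in known_violations

print(should_block(b"attack-livestream-copy"))  # True
print(should_block(b"holiday-video"))           # False
```

Because hashing and set lookup are cheap, this check can run on every upload before distribution, which is what makes large-scale automated blocking economical.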
Artificial Intelligence has difficulty understanding context
Although artificial intelligence is learning human language, images and culture, it can still miss sarcasm, humour or context that humans grasp easily, leading to the incorrect removal or approval of content. According to the Financial Times, YouTube's over-reliance on AI review led to many videos being removed in error (Tech, 2020). From April to June 2020, AI removed approximately 11 million videos from YouTube, double the previous rate of removals (Tech, 2020). Moreover, according to YouTube, about half of those videos were restored on appeal after deletion (Tech, 2020). Furthermore, it has been suggested that many AI models struggle to recognise certain boundaries. For instance, the famous "Napalm Girl" photograph, a brutal image from the Vietnam War, shows a nude Kim Phuc, whose back, neck and arms were burned by napalm (Gillespie, 2018). Facebook removed this image, judging it to violate its nudity rules (Gillespie, 2018). There is often no clear line between images of nudity or violence and those with global and historical significance. In summary, artificial intelligence has difficulty making hard contextual decisions about unclear concepts, so large-scale automated systems can make hundreds or thousands of erroneous deletions every day.
Bias and Fairness
Artificial intelligence models can exhibit bias and unfairness in content review. Within AI, algorithms are a crucial part of implementing intelligent behaviour: an algorithm is an ordered collection of operations used to perform a specific computational task, and AI, the broader concept, is implemented through algorithms. AI models can inherit biases present in their training data, leading to uneven content review (Gorwa et al., 2020). This bias can disproportionately affect certain groups or viewpoints, leading to allegations of censorship or discrimination (Gorwa et al., 2020).
In one widely reported incident, Facebook's image recognition software mistakenly categorised Black men as 'primates'. The stance of the engineers who write an algorithm therefore shapes the stance of the resulting AI. Likewise, because of the position of Facebook's content policy employees, the rules of content moderation were written in a way that failed to account for the vastly disproportionate impact of racial discrimination in much of the Global North, as well as the intersectional nature of disadvantage, and therefore failed to establish hate speech protections for specific 'subcategories' (Gorwa et al., 2020). As a result, bias and fairness problems exist on social media platforms such as Facebook and YouTube.
Continuous learning and updating
While artificial intelligence has many weaknesses, it excels at learning and updating. AI systems can analyse large amounts of data related to content moderation, including flagged content, user reports and decision outcomes. This data provides valuable insight into emerging trends, the evolving strategies of malicious actors, and user behaviour. AI systems can also integrate feedback from humans: as human moderators review and correct AI decisions, the models are retrained to make better judgements in the future. This continuous feedback loop enables constant learning and improvement.
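The feedback loop described above can be sketched as follows. This is an illustrative toy, not any platform's real pipeline: the function and variable names are invented for the example, and an actual system would feed the collected examples into a model retraining job rather than a plain list.

```python
# Sketch of the human-in-the-loop feedback cycle: each moderator decision
# becomes a labelled training example, and cases where the human overruled
# the model are counted, since those are the most informative for retraining.

training_examples = []   # (content, human_label) pairs for future retraining
disagreements = 0        # reviews where the human corrected the model

def record_review(content: str, model_label: str, human_label: str) -> None:
    """Log a human review; the human's label is treated as ground truth."""
    global disagreements
    training_examples.append((content, human_label))
    if model_label != human_label:
        disagreements += 1

record_review("borderline satire post", model_label="remove", human_label="allow")
record_review("threatening comment", model_label="remove", human_label="remove")
print(len(training_examples), disagreements)  # 2 1
```

Tracking disagreements separately reflects the point in the text: it is precisely the corrections, not the confirmations, that drive the model to make better judgements over time.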
Recently, Facebook and YouTube have made notable efforts to make their review practices more transparent and fair (Gorwa et al., 2020). AI systems can be used to identify and address bias in content review decisions: by analysing patterns of discrimination in past decisions, platforms can reduce discriminatory outcomes and increase fairness.
In conclusion, artificial intelligence in content moderation on platforms like Facebook and YouTube offers significant benefits alongside notable limitations. AI provides speed and reduces costs. However, these advantages come with drawbacks: AI can remove legitimate content or fail to catch harmful content, and the technology struggles with contextual understanding, humour, and cultural nuances, often missing subtleties that humans grasp quickly. Bias and discrimination can also creep into AI models if the training data contains biases, raising concerns about fairness and inclusivity. Nevertheless, AI algorithms can adapt and improve through continuous learning, making them valuable tools for addressing evolving challenges. In light of these benefits and limitations, a balanced approach that combines the strengths of AI with human oversight is crucial. By leveraging AI's efficiency and scalability while preserving human moderators' judgement and contextual understanding, platforms like Facebook and YouTube can strive to create safer and more inclusive online environments. Striking this balance remains a dynamic challenge in the ever-evolving landscape of content moderation.
Elkin-Koren, N. (2020). Contesting algorithms: Restoring the public interest in content filtering by artificial intelligence. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720932296
Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720943234
Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945
Kniazieva, Y. (2023, March 30). AI content moderation. Label Your Data. https://labelyourdata.com/articles/ai-content-moderation#case_3_youtube
Tech, H. (2020, September 22). YouTube brings back human moderators after AI algorithms fail to deliver. HT Tech. https://tech.hindustantimes.com/tech/news/youtube-brings-back-human-moderators-after-ai-algorithms-fail-to-deliver-71600749070541.html