"A person using artificial intelligence to make a decision" by 紅色死神 is licensed under CC BY-NC-SA 2.0.
In the 20th century, people relied on media such as radio and newspapers to learn about events in their communities and the wider world. A century later, the media landscape has been transformed: "media" is no longer the preserve of particular groups or organizations, since practically anyone can become a "self-media" contributor. Content moderation has therefore emerged as a crucial facet of social media platforms and an indispensable component of any successful one, directly affecting the interests of every user. A robust moderation mechanism and algorithm can effectively mitigate hate speech, bias, verbal abuse, gender-based attacks, racial discrimination, and other offensive and harmful content. At the same time, it can filter out content that incites violence, sexual assault, or self-harm, or that promotes illegal or regulated goods and services, ensuring a positive experience for everyone. For the platform itself, a well-functioning moderation system can significantly improve traffic, conversion rates, and brand visibility, factors of paramount importance to platform founders. As artificial intelligence (AI) has found widespread application across industries, major social media platforms have incorporated AI into their content moderation efforts. This revolutionary technology has brought numerous benefits, but it has also exposed several issues and vulnerabilities.
The benefits and impact of AI
Why are most industries attempting to integrate AI into their fields? Mainly because AI saves time and money while greatly improving the efficiency of content review. According to estimates from the World Economic Forum, the daily volume of data generated by humans is expected to reach approximately 463 exabytes by 2025 (Desjardins, 2020). The traditional content moderation process, by contrast, is laborious; it is extremely challenging for individuals or organizations to review such a vast amount of data efficiently.
In this context, AI leverages its high-speed automated data processing, continuous updating, and real-time monitoring, demonstrating clear advantages over manual moderation. Platform administrators need only introduce AI technology, establish a learning model, and incorporate the platform's management policies into the database. AI can then efficiently and comprehensively automate the review of large volumes of text, audio, video, and other data based on predefined criteria and guidelines. Administrators can also adjust the thresholds at which sensitive or inappropriate content is automatically flagged. This accelerates the filtering and removal of inappropriate content, producing significant time savings, potentially amounting to thousands of work hours. One study notes that 94% of violating videos on YouTube were detected by its AI moderation tool before they reached 10 views (Wang, 2023), which amply demonstrates the efficiency and speed of AI in content moderation.
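The adjustable thresholds mentioned above can be illustrated with a toy sketch. Everything here is hypothetical: the score function stands in for a trained classifier, and the threshold values and routing labels are invented for illustration, not drawn from any real platform's system.

```python
# Toy sketch of threshold-based content moderation.
# score_fn stands in for a trained model returning a toxicity score in [0, 1];
# the thresholds and routing labels are hypothetical.

def moderate(post: str, score_fn, flag_threshold: float = 0.8,
             review_threshold: float = 0.5) -> str:
    """Route a post based on a model's toxicity score."""
    score = score_fn(post)
    if score >= flag_threshold:
        return "remove"        # high confidence: remove automatically
    if score >= review_threshold:
        return "human_review"  # uncertain: escalate to a human moderator
    return "allow"

# Stand-in for a trained model: a trivial keyword heuristic.
def fake_score(post: str) -> float:
    return 0.9 if "badword" in post.lower() else 0.1

print(moderate("hello world", fake_score))         # allow
print(moderate("this has a BADWORD", fake_score))  # remove
```

Lowering `review_threshold` sends more borderline posts to human moderators, trading cost for accuracy; raising `flag_threshold` reduces wrongful removals at the risk of letting more harmful content through.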
In recent years, industries such as e-commerce, entertainment, and news have grown rapidly through live streaming. AI's continuously improving machine learning models have made real-time monitoring of live-streaming platforms more efficient. Twitch, an American live-streaming service, has introduced AutoMod, an AI tool designed to filter offensive and inappropriate content from Twitch chat (Kamen, 2016).
Content moderation often involves content in multiple languages, and AI's handling of multilingual content makes moderation easier for global platforms and websites. Traditional approaches typically involve hiring or outsourcing moderation teams with the necessary language expertise, which can be both challenging and costly. As mentioned earlier, adopting AI for content moderation can significantly reduce platform expenses, which often constitute a substantial portion of annual expenditures. Facebook, for instance, has stated that it has spent $13 billion on safety and security measures since 2016 (Wagner & Bloomberg, 2021). The costs saved by integrating AI can be redirected to other areas of platform improvement or to business promotion and development.
It is undoubtedly a revolutionary change. However, it is not perfect.
Why is AI not perfect?
Despite the seemingly countless advantages of AI in content moderation, it still has several shortcomings and limitations. AI often fails to detect emerging trends and cultures because it lacks sufficient data about them. The internet evolves rapidly, with online content and culture in constant flux. Imagine a blogger from 2010 attempting to read a page of an online forum from 2020: they would struggle to fully understand it, given the many internet subcultures and terms that emerged over the decade. Internet culture is like ocean waves: none lasts indefinitely, and each is destined to be overtaken and submerged by the next. Some abbreviations and phrases appearing online today are likely to carry connotations of gender, nationality, or racial discrimination, and these are precisely what AI cannot detect from its existing databases.
Moreover, due to the limitations of AI databases, certain forms of expression, such as local dialects or non-English languages, may be misinterpreted across cultural backgrounds. A recent article indicates that tweets written in African-American English are twice as likely as other tweets to be labeled offensive (Oliva et al., 2021). Similar issues appear in reports on built-in bias in AI (Sap et al., 2019). This has become an issue of undeniable significance, as the majority of platforms employing AI for content moderation routinely encounter multilingual and multicultural content. To improve the performance of AI moderation, platform administrators should take responsibility for constantly updating the database, enabling AI to learn objectively. Otherwise, AI could become a biased and hateful weapon controlled by platform administrators.
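One way to audit for the kind of dialect bias described above is to compare false-positive rates across groups, i.e. how often benign posts from each group are wrongly flagged. The sketch below is purely illustrative: the group labels, predictions, and numbers are made up, not taken from any cited study.

```python
# Illustrative bias audit: false-positive rate (FPR) per group.
# All data below is hypothetical, invented for demonstration.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, truly_offensive, predicted_offensive).
    FPR = benign posts wrongly flagged / all benign posts, per group."""
    flagged = defaultdict(int)
    benign = defaultdict(int)
    for group, truly_offensive, predicted in records:
        if not truly_offensive:          # only benign posts can be false positives
            benign[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / benign[g] for g in benign}

# Hypothetical audit sample: the model flags benign AAE posts more often.
data = [
    ("AAE", False, True), ("AAE", False, False),
    ("AAE", False, True), ("AAE", False, False),
    ("other", False, True), ("other", False, False),
    ("other", False, False), ("other", False, False),
]
print(false_positive_rate_by_group(data))  # {'AAE': 0.5, 'other': 0.25}
```

A large gap between groups' false-positive rates, as in this toy sample, is the kind of signal that should prompt administrators to retrain or rebalance the model's data.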
AI has, to some extent, achieved efficient automated content moderation and improved user experiences. However, issues of data security and privacy remain highly contentious. AI content moderation typically involves collecting and analyzing user-generated data such as text, images, and videos, which may contain personal information like names and geographical locations. Yet most platforms offer users little transparency about the algorithms and data-handling methods used in moderation. Platforms need to ensure that such sensitive content is handled in a way that respects user privacy. Apple, a globally renowned smartphone brand, has introduced a sensitive content warning feature in its latest systems, which will also extend to third-party applications like WhatsApp and Snapchat (Shahid, 2023). While Apple claims that this feature processes all images and videos locally and does not infringe on user privacy, whether privacy is truly safeguarded remains subject to observation and scrutiny.
Will AI take over content moderation?
With the widespread use of AI in content moderation, people have begun to wonder whether it will eventually replace humans in this role. Based on the issues outlined above, that seems unlikely for now: AI cannot handle emerging cultures or complex contexts, its learning models carry built-in biases, and its moderation processes raise significant privacy concerns. In the current context, effective content moderation seems achievable only through collaboration between AI and human moderators, combining the strengths of both to give users the best possible experience.
AI’s performance in content moderation may not be perfect, but it does not negate the fact that it has improved efficiency and reduced costs in this regard. While it may be challenging to conclusively determine whether AI will replace humans in content moderation work in the future, its undeniable impact on human life and culture is evident. Over the next 50 years, people will continue to witness AI’s ongoing influence in the field of content moderation.
Oliva, T. D., Antonialli, D. M., & Gomes, A. (2021). Fighting hate speech, silencing drag queens? Artificial intelligence in content moderation and risks to LGBTQ voices online. Sexuality & Culture, 25(2), 700–732. https://doi.org/10.1007/s12119-020-09790-w
Desjardins, J. (2020, April 17). How much data is generated each day? World Economic Forum. https://www.weforum.org/agenda/2019/04/how-much-data-is-generated-each-day-cf4bddf29f/
Kamen, M. (2016, December 13). Twitch now blocks trolls and hatespeech in real-time: Streamers will be able to vary how much comments are filtered in new anti-abuse system. WIRED. https://www.wired.co.uk/article/twitch-introduces-anti-troll-automod-for-game-streams
Sap, M., Card, D., Gabriel, S., Choi, Y., & Smith, N. A. (2019). The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. https://www.aclweb.org/anthology/P19-1163.pdf
Shahid, F. (2023, June 6). The iOS 17 will bring a sensitive content warning, protecting you against unsolicited pictures and other media. Wccftech. https://wccftech.com/the-ios-17-will-bring-a-sensitive-content-warning-protecting-you-against-unsolicited-pictures-and-other-media/
Translated by Content Engine, L.L.C. (2023, May 19). Why artificial intelligence would worsen online content moderation. CE Noticias Financieras. http://ezproxy.library.usyd.edu.au/login?url=https://www.proquest.com/wire-feeds/why-artificial-intelligence-would-worsen-online/docview/2816294877/se-2
Wagner, K., & Bloomberg. (2021, September 22). Facebook says it has spent $13 billion on safety and security efforts since 2016. Fortune. https://fortune.com/2021/09/21/facebook-says-it-has-spent-13-billion-on-safety-and-security-efforts-since-2016/
Wang, S. (2023). Factors related to user perceptions of artificial intelligence (AI)-based content moderation on social media. Computers in Human Behavior, 149, 107971. https://doi.org/10.1016/j.chb.2023.107971