From manual to intelligent: the development of automated content moderation


With the development of internet technology, more and more internet media platforms are available for people to use and to build online communities. In the beginning, internet users were considered passive consumers of content; now they can produce, copy and distribute it, and every internet user has the ability to add content and contribute to participatory journalism (Veglis, 2014). As the amount of content grows, its formats keep changing: alongside traditional text and image posts, audio, long video, short video and live broadcast account for a growing share. The quality and authenticity of this content therefore need to be checked in order to avoid legal issues. As the influence and variety of user-generated content increase, many different types of content moderation have emerged to protect brands and end-users at all times (New Media Services, 2020). This web essay examines the genesis of automated content moderation, how it affects the lives of online platform users, its social benefits, and some of the political problems it raises.

Fig. 1 Live and automated moderation, including AI content moderation (image: New Media Services).

 

The genesis of automated content moderation

https://youtu.be/q_YRsRHYkFc

 

Before tracing the genesis of content moderation, it is worth asking what automated content moderation actually is. It refers to the automatic acceptance or rejection of user-generated content submitted to an online platform, applied in accordance with the rules and guidelines of the major online platforms before any manual review takes place (Besedo, 2019).

In early online communities, most content moderation was carried out by volunteers, whose work was typically guided by local community norms and expectations about user behaviour. On the early text-based internet, content moderation was visible to users: moderators required users to edit their contributions to remove offensive or insulting material, deleted posts, and applied text filters that prohibited the posting of specific types of words or content, among other public moderation operations (Roberts, 2017). With the development of technology, content moderation no longer relies on human operators alone; instead, AI automatically conducts an initial review and then hands the content over to a human moderator. Like Artificial Intelligence (AI) itself, content moderation is a tricky area for regulators to reach consensus on (Besedo, 2020). Depending on the requirements of each platform, content moderation is used to monitor and judge the appropriateness of posts and on-site speech, including their legality and whether they contain pornographic, violent or harassing content. Historically, five different types of content moderation have emerged: pre-moderation, post-moderation, reactive moderation, distributed moderation and automated moderation (Besedo, 2016).

Each content moderation method has its own way of working. Systems for automatic content review started with mostly manually coded rules and functions and later shifted towards more generalized, data-driven approaches; early work focused on identifying abusive and malicious messages with handcrafted features and decision trees (Binns, Veale, Van Kleek and Shadbolt, 2017). In its simplest form, automated content moderation automatically hides content whenever a published post matches words that the platform has banned or flagged for replacement.
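As a rough illustration of that rule-based stage, the sketch below implements a minimal word filter of the kind described above. The banned-word list and the sample posts are invented for illustration; real platforms maintain far larger, frequently updated lists in many languages.

```python
import re

# Hypothetical banned-word list, invented for this example.
BANNED_WORDS = {"spamlink", "scamoffer", "slur"}

def moderate(post: str) -> str:
    """Return 'hide' if the post contains a banned word, else 'publish'."""
    # Lowercase and split on non-word characters so punctuation
    # cannot disguise a banned word (e.g. "spamlink!!!").
    tokens = set(re.split(r"\W+", post.lower()))
    return "hide" if tokens & BANNED_WORDS else "publish"

print(moderate("Check out this spamlink now"))  # -> hide
print(moderate("What a lovely day"))            # -> publish
```

Filters like this are fast and transparent, but they miss misspellings and context, which is one reason platforms moved towards the learned models discussed below.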

Automated content moderation gradually became part of the information management trend

Fig. 2 Number of group members posting on the internet. “Group pool activity – Side by side comparison (28/01/2017) – Activity related to free pictures only” by Carlos ZGZ is marked with CC0 1.0.

The reason automated moderation has become part of the information management trend is that platforms now face a huge demand for content review, human capacity is limited, and relying on human control alone is not sustainable. Machine learning models are highly accurate: they can automatically refuse to approve reported content and can reliably catch unwanted user-generated content (Besedo, 2019). The volume, speed and variety of content requiring review exceed manual processing capabilities (Link, Hellingrath and Ling, 2016). At the same time, sites and platforms are required to enforce their own rules and applicable laws, because publishing inappropriate content is considered a major source of liability (Roberts, 2017).
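To make this concrete, here is a minimal sketch of the kind of machine-learning text classifier such systems rely on, using scikit-learn. The toy training posts, labels and confidence threshold are all invented for illustration; production models are trained on millions of human-labelled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data, invented for this sketch (1 = unwanted, 0 = acceptable).
posts = [
    "buy cheap followers now", "you are an idiot",
    "great article, thanks for sharing", "see you at the meetup",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Auto-reject only when the model is confident; borderline cases
# would be routed to a human moderator instead.
prob_unwanted = model.predict_proba(["buy cheap likes today"])[0][1]
decision = "auto-reject" if prob_unwanted > 0.8 else "send to human review"
print(round(prob_unwanted, 2), decision)
```

The confidence threshold is the crucial design choice: setting it high keeps automatic rejections accurate, at the cost of routing more content to human reviewers.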

Manual review also carries subjective bias, which reduces the accuracy of content review. If a human moderator has not undergone professional training, prejudice may influence their decisions and lead to conflict with platform users, which damages the brand (New Media Services, 2020). For these reasons, automated content moderation has become part of the information management trend.

Controllers and owners of the automatic review mechanism

Online communities set out the norms for acceptable speech, and who defines these norms is a controversial issue (Binns, Veale, Van Kleek and Shadbolt, 2017). How an automatic review mechanism operates depends on the content and target audience of the website. The webmaster decides which types of user content are appropriate, and then delegates the responsibility for filtering content to moderators (Veglis, 2014). Policymakers usually require social media companies to identify and remove hate speech, terrorist propaganda, harassment, fake news and false information (Duarte, Llanso and Loup, 2018). Each platform has its own biases, rules and audiences; in a review system such as automated content moderation, platform owners can configure algorithms to filter speech and images that violate policies and laws (Besedo, 2020). Automatic content review is usually configured at the site or platform level, follows the rules of each region in which the platform operates, and reflects that platform's brand and reputation. With the rise of Web 2.0 sites, media firms began to employ a variety of such techniques (Roberts, 2017).

The positive and negative effects of content moderation policy changes

In terms of policy and economics, an automatic content review mechanism gives brands the opportunity to program and formulate a series of guidelines adapted to the brand's theme, preventing disruptive and unacceptable content from appearing on the company's online brand channels (New Media Services, 2020). Human moderators can easily overlook or be deceived by the content of online crooks; brands can use automated content moderation to recognize the deception patterns of online crooks and so judge user-generated content (UGC) more reliably.

From a social point of view, the automatic review mechanism protects moderators' mental health. Human moderators inevitably have to read and view hateful words, videos, pictures and the like over and over again, which can cause great harm to their mental health. Automatic review technology helps them avoid this risk and reduces the constant pressure on manual reviewers (New Media Services, 2020). Human moderators therefore benefit from this transformation.

https://youtu.be/ApxGXifkV7I

In terms of culture, the laws governing online review, and how strictly they are enforced, differ greatly between continents, countries and regions. Automatic review works well within a platform's home country and local area, but on a global scale certain review deviations appear: some content must be hidden under particular national policies, yet on a global platform such as Facebook that hidden content can still be viewed from other countries. Changes in automatic content review therefore have both positive and negative impacts on users (Besedo, 2020).

Conclusion: hybrid moderation

Modern information and communication technology has greatly changed journalism, and participatory journalism is one of the most profound of these changes. Every user now has the ability to become a content producer, and many tools support participatory journalism. Of course, this new type of journalism has negative effects that raise serious concerns (defamation, hate speech, intellectual property), so content review is very necessary. Different platforms now tend towards a hybrid review system, with automatic review in the early stage and manual review in the later stage, each platform choosing the moderation method best suited to its nature; a simple version of this pipeline is sketched below. The automatic review system has gradually become part of the historical trend of information management because of its accuracy and efficiency. Under the management of platform owners it has had positive political, economic, social and cultural effects, though it also brings negative impacts. The best practice today is for manual review and intelligent review to complement each other, operating side by side to ensure a harmonious environment on websites and social platforms.
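As a closing illustration, the sketch below shows one plausible shape for such a hybrid pipeline: an automatic first pass decides the clear-cut cases and queues everything ambiguous for manual review. The class name, banned-word list and link heuristic are all hypothetical, not taken from any real platform.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class HybridModerator:
    """Hypothetical two-stage pipeline: automatic review first,
    manual review of the ambiguous remainder afterwards."""
    banned_words: Set[str]
    human_queue: List[str] = field(default_factory=list)

    def submit(self, post: str) -> str:
        tokens = set(post.lower().split())
        if tokens & self.banned_words:
            return "rejected automatically"
        if "http" in post:  # invented heuristic: posts with links are ambiguous
            self.human_queue.append(post)
            return "queued for human review"
        return "published automatically"

mod = HybridModerator(banned_words={"slur", "spamlink"})
print(mod.submit("what a lovely day"))                 # published automatically
print(mod.submit("great deal at http://example.com"))  # queued for human review
print(mod.submit("classic spamlink bot"))              # rejected automatically
```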

Resources:

1. Besedo. (2020). AI and content moderation: How the regulatory landscape is shaping up. Retrieved from https://besedo.com/resources/blog/ai-and-content-moderation-how-the-regulatory-landscape-is-shaping-up/

2. Besedo. (2016). 5 moderation methods you should understand. Retrieved from https://besedo.com/resources/blog/5-moderation-methods-you-should-understand/

3. Besedo. (2019). What is content moderation? Retrieved from https://besedo.com/resources/blog/what-is-content-moderation/

4. Binns R., Veale M., Van Kleek M., Shadbolt N. (2017) Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation. In: Ciampaglia G., Mashhadi A., Yasseri T. (eds) Social Informatics. SocInfo 2017. Lecture Notes in Computer Science, vol 10540. Springer, Cham. https://doi.org/10.1007/978-3-319-67256-4_32

5. Duarte, N., Llanso, E., & Loup, A. C. (2018). Mixed Messages? The Limits of Automated Social Media Content Analysis. In FAT (p. 106).

6. Link, D., Hellingrath, B., & Ling, J. (2016). A Human-is-the-Loop Approach for Semi-Automated Content Moderation. In ISCRAM.

7. New Media Services. (2020). Difference between automated & live moderation. Retrieved from https://newmediaservices.com.au/automated-and-live-moderation/

8. Roberts, S. T. (2017). Content moderation.

9. Veglis, A. (2014, June). Moderation techniques for social media content. In International Conference on Social Computing and Social Media (pp. 137-148). Springer, Cham.

Dingyi Ma