Automated Content Moderation Is Still Improvable: Is Everything Prohibited by Automation Justified?

An innovation in the internet revolution

Figure: Live and automated moderation (New Media Services, 2020)

Introduction:

As the internet market has developed, social media has rapidly integrated into people’s lives. Compared with real-life communities, people often seem to find online communities more enjoyable. At the same time, however, online safety has become a problem we cannot ignore. In the early days of social media, platforms hired dedicated staff to monitor and screen content manually, ensuring the safety of content posted in online communities and reducing the risk of illegal material.

Figure 1: Number of monthly active Facebook users worldwide as of the 2nd quarter of 2020 (in millions)

However, as more and more people join online communities, with Facebook alone reaching 2.6 billion monthly active users in 2020 (Oberlo, 2020), human content moderation can no longer keep pace with the demand for platform safety. People have therefore developed automated content moderation systems, which police social media platforms by banning certain keywords or topics, stopping the emergence and spread of illegal, violent, discriminatory, and other harmful speech more effectively. There is no denying that automated content moderation brings convenience to human moderators, dramatically reduces the risk that harmful speech poses to online communities, and has good prospects for development; but this does not mean that everything it bans and screens out is entirely justified.
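
To make the keyword-banning idea concrete, here is a minimal sketch of such a filter in Python. It is only an illustration, not any platform’s actual system: the BANNED_TERMS list and the moderate function are invented for this example.

```python
import re

# Hypothetical banned-term list; real platforms maintain far larger,
# frequently updated lists covering many languages.
BANNED_TERMS = {"terrorist propaganda", "hate speech"}

def moderate(post: str) -> bool:
    """Return True if the post should be blocked by the keyword filter."""
    text = post.lower()
    # Block the post if any banned term appears as a whole phrase.
    return any(re.search(r"\b" + re.escape(term) + r"\b", text)
               for term in BANNED_TERMS)

print(moderate("This account is spreading terrorist propaganda."))  # True
print(moderate("A harmless holiday photo from Namibia."))           # False
```

Real systems layer many such signals together and combine them with machine-learned classifiers, but even this toy version shows how cheaply a machine can scan every post, something no team of human moderators can match.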


What is content moderation?

As more and more people use social media and online communities, some content spreads ever more widely and quickly. Most people want the community to police itself, or would prefer that users never post objectionable content in the first place (Gillespie, 2018, p. 5), but things tend to go in the opposite direction. The security of network content has therefore become a matter people cannot ignore, and out of this the idea of content moderation was born.

Grimmelmann (2015) defines moderation as ‘the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse.’ Moderation includes both administrators with the authority to remove content or exclude users and design decisions about how members of a community can participate with each other (Gorwa, Binns & Katzenbach, 2020). We can divide moderation into human moderation and automated moderation: human content moderation is the earlier method, while automated moderation incorporates more modern technologies, making content moderation more convenient and practical.


The emergence and development of automated content moderation:

With the continuous growth in the number of network users, human moderators cannot keep up with the mass of review work generated every day. Thus, automated content moderation was created.

Compared with human moderation, automated moderation is free of human emotion. It does not develop negative feelings from the content of text or pictures, nor does it make false judgments based on those feelings, so it completes tasks to a more consistent standard. Automated content moderation is also best suited to repetitive and routine tasks, because it can perform review tasks in batches, including detecting and removing duplicate content and reading image titles and data (New Media Services, 2020). In terms of both efficiency and quality, then, automated moderation seems superior.
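
As a sketch of the duplicate-detection task mentioned above, the snippet below deduplicates posts by hashing their normalized text. The fingerprint and is_duplicate names are invented for this illustration; production systems typically use perceptual or locality-sensitive hashes so that near-duplicates are caught as well.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Hash of the normalized text; identical posts share a fingerprint."""
    normalized = " ".join(text.lower().split())  # collapse case and whitespace
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

seen_fingerprints = set()

def is_duplicate(post: str) -> bool:
    """Return True if an identical post has already been processed."""
    fp = fingerprint(post)
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False

print(is_duplicate("Buy followers now!!!"))     # False: first occurrence
print(is_duplicate("  buy FOLLOWERS now!!! "))  # True: same text once normalized
```

Because hashing each post costs almost nothing, a machine can compare every new post against millions of earlier ones, exactly the kind of batch work the paragraph above describes.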

Nevertheless, this does not mean that automated content moderation is perfect. Most social media users gain a great deal of convenience from automated content moderation, but some users suffer unprecedented discrimination or “violence” because of its emergence and development.


Who can benefit from it:

  • Internet companies: Automated content moderators can replace a portion of human content moderators, saving a significant amount of money on hiring staff. At the same time, every company must consider its employees’ mental health, and an automated moderator surpasses a human moderator in psychological endurance (New Media Services, 2020) because machines have no emotional fluctuations. Both factors bring better benefits and profits to internet companies.


  • Government: Through the internet, people can quickly learn about current events in their own country, whether good or bad. Automated content moderation can surface and promote the information that the government wants to convey to the public, so that everyone can see the government’s efforts and decisions; it also encourages citizens to take a more active part in government-sponsored activities such as voting or elections. In addition, automated content moderation can quickly prevent the widespread dissemination of negative news, such as wars or violent acts, that might cause social panic.
Figure 2: The guiding video for voting from the New York City Government


  • Internet community users: Automated content moderation can create a safe and comfortable social environment for all internet users. Most users no longer need to worry about being subjected to inexplicable internet violence or suddenly encountering gory, inhumane images or comments.


Who can’t benefit from it:

  • Content moderation workers: Automated content moderation brings convenience to the internet, but it also means the market will no longer need as many human moderators. Most content moderation workers therefore face the risk of unemployment and gain nothing from the technology.


  • Gender marginalization: Some members of the gay community may have intimate same-sex photos forcibly deleted by automated content moderation, because machines make no emotional or contextual judgment; as a result, LGBT people are marginalized in the internet world.


  • Racial marginalization: On today’s internet platforms, certain ethnic cultural practices, such as traditional nudity, are prohibited by automated moderation. When Himba women in Namibia publish their photos on the internet, automated moderation forces the immediate removal of these photos, even though they depict a culture that belongs to the women; the platform’s machine settings prohibit them regardless.
Figure 3: Wikipedia, History of nudity- Himba women in Namibia


  • Physical impairments: A video of Quaden Bayles, a boy with dwarfism, crying to his mother after being bullied went viral on the internet. His mother’s sincere plea for help turned into constant phone calls, and her son became a “laughing stock” for many netizens (ABC News, 2020). Automated content moderation failed to protect him: because netizens were not making overtly discriminatory comments but merely calling him cute, the system let the video spread widely. Yet being described as “cute” is itself a huge blow to a physically challenged boy.

Figure 4: Quaden Bayles, a bullied boy with dwarfism, and mum Yarraka share the reality of going ‘viral’


The impact of innovation:

As an innovative internet technology, automated content moderation has a profound impact on people’s work, study, and life. For example, students’ network environment becomes relatively more secure: they no longer need to worry about accidentally seeing violent or highly derogatory text and pictures while querying information on the internet in the course of their studies. This not only makes the learning environment safer but also protects students’ mental health.

This innovation in moderation also has a protective effect on society. For example, when ISIS invaded northern Iraq, they tweeted their victory along with horrific images of what happened to those who fought back, making everyone think that the north of Iraq had already been overrun by ISIS (Brooking & Singer, 2016). Such behavior inevitably causes panic and agitation across a whole society. Here, automated content moderation can prevent the rapid spread of such terrorist material by identifying keywords and other signals, minimizing the impact of terrorist activities on society.

However, this innovative technology also has its inadequacies. Compared with humans, automated content moderation has poor context recognition: it does not combine contextual analysis with an understanding of the content under review very well, and so it silently bans a great deal of reasonable internet content. This innovative technology therefore still needs continuous development and improvement, as the sketch below illustrates.
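
The context problem is easy to reproduce with the kind of naive keyword filter sketched earlier. In the hypothetical example below, a filter that blocks the word “nude” also silences a legitimate post about the Himba cultural photographs discussed above; the word list and both posts are invented for illustration.

```python
BANNED_WORDS = {"nude"}  # hypothetical single-word ban list

def naive_filter(post: str) -> bool:
    """Block any post containing a banned word, with no sense of context."""
    return any(word in post.lower() for word in BANNED_WORDS)

posts = [
    "click here for nude pics",                        # the intended target
    "our exhibition documents traditional nude dress "
    "and adornment in Himba culture",                  # legitimate content
]
for post in posts:
    print(naive_filter(post), "->", post)
# Both lines print True: the filter cannot tell abuse from culture.
```

Telling the two posts apart requires weighing the surrounding words, which is precisely the contextual understanding that current automated systems handle poorly.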


Conclusion:

As an innovation of the internet revolution, automated content moderation changes the network environment, and more and more people benefit from it as it develops: internet companies obtain better efficiency and more profits; governments keep the public better informed and win more social support; and internet users get a relatively secure social environment online.

Nonetheless, ‘automated content moderation is not a panacea for the ills of social media’ (Gillespie, 2020): it cannot judge context well when filtering content, and its static procedures marginalize particular social groups of users. Therefore, as a part of internet innovation, automated content moderation still needs to be continuously developed and improved.

Reference List:

ABC News (2020). ‘I’m not 18, trust me’: Underneath the savage social media hate is a nine-year-old boy. Retrieved 27 October 2020, from https://www.abc.net.au/news/2020-10-26/meet-the-real-quaden-bayles-who-was-bullied-for-dwarfism/12670260

Brooking, E., & Singer, P. (2016). War goes viral. The Atlantic Monthly, 318(4). http://search.proquest.com/docview/1858228044/

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720943234

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Grimmelmann, J. (2015). The virtues of moderation. Yale Journal of Law & Technology, 17(1), 42–75.

New Media Services (2020). Difference between automated & live moderation. Retrieved 23 October 2020, from https://newmediaservices.com.au/automated-and-live-moderation/

Oberlo (2020). Top 10 Facebook statistics you need to know in 2020. Retrieved 20 October 2020, from https://au.oberlo.com/blog/facebook-statistics#:~:text=1.-,How%20Many%20People%20Use%20Facebook%3F,site%20on%20a%20daily%20basis.