Is Automatic Content Moderation a Mature Technology?

Facebook logo. (Jaap Arriens / NurPhoto via Getty Images) GETTY

Overview


As a product of globalization, the internet has connected people across the world in recent years. Reviewing its development over the last 60 years, four periods can be identified: the open internet stage (1960–2000), the closed access stage (2000–2005), the limited access stage (2005–2010), and the contested access stage (from 2010) (Palfrey 2010, pp. 981–984). In other words, before 2000 the network was largely free and an individual user could speak openly to everyone, but from 2000 onward online speech has been subject to restriction.

Automated content moderation is now widely used as an important tool for regulating online speech. This article explores its development from multiple dimensions (political, economic, cultural and social), beginning with its origin and rationale, then its stakeholders, and finally its impact on people's daily life and work.

To summarise the main argument: first, current content moderation methods are technically limited; second, analysing them from multiple dimensions provides a significant view of internet supervision; third, these methods widely affect people's daily lives.

Soni, D. (2019). Machine Learning for Content Moderation — Introduction [Image]. Retrieved from https://towardsdatascience.com/machine-learning-for-content-moderation-introduction-4e9353c47ae5

Origin and reason


Automated content moderation refers to methods that automatically recognise illegal or negative content on the internet and apply a related action to it. Before 2000, negative content was dealt with only after it had been published in an online community and caused problems (Dibbell 1993, p. 475). Since then, automated content moderation has become an integral part of the internet, and demand for its technical underpinnings has grown significantly. An early observation on content moderation noted that as more and more speech moved online, ultimate power became concentrated in the service providers (Jeffrey 2008).
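The recognise-and-act loop described above can be sketched as a minimal rule-based filter. The blocklist terms, the `moderate` helper and the actions below are hypothetical illustrations under simple assumptions, not any real platform's implementation.

```python
# Minimal sketch of rule-based automated moderation: recognise
# disallowed terms in a post, then apply a related action.
# Blocklist and action names are invented for illustration.

BLOCKLIST = {"violence", "gore"}  # hypothetical disallowed terms

def moderate(post: str) -> str:
    """Return an action for a post: 'remove' if it matches the
    blocklist, otherwise 'allow'."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    if words & BLOCKLIST:
        return "remove"
    return "allow"

print(moderate("Graphic violence in this clip"))  # remove
print(moderate("A tutorial on baking bread"))     # allow
```

Real systems replace the blocklist with machine-learned classifiers, but the overall shape (detect, then act) is the same.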

To discuss the development of automated content moderation, it is important to understand the power content moderation carries from the perspectives of morality and law. The internet in China was long treated as a moral panic in public perception, since from 2005 many online articles reported youth suicides attributed to internet addiction (Szablewicz 2010, pp. 453–455). This case shows the impact internet content can have, and the responsibility governments are expected to take for online content. Since 2014, the development of AI (artificial intelligence) techniques has become a pillar of automated content moderation (Hartmann 2020, pp. 1–2); AI-based content moderation appears able to resolve most content issues, but it also has limitations.

A sign at a memorial on Deans Avenue in Christchurch reads: "This is your home and you should have been safe here."

The death toll from the Christchurch mosque shooting has now risen to 51. (ABC News: Brendan Esposito)

In 2019, 51 people died in the Christchurch mosque attack (ABC 2019), and video of the event, including all the violent shooting footage, remained on Facebook for more than an hour. To analyse why, one must look at the AI technique itself. Current AI methods are limited by their nature: they work based on the data they were trained on. A model may reliably recognise almost all pornographic content, yet because it was also trained on footage from shooting games, it may judge a real violent shooting video to be a kind of shooting game (2020, p. 4). These cases show that automated content moderation still has inherent limitations.
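The training-data limitation described above can be illustrated with a toy classifier. The labels, vocabularies and example texts below are invented for illustration and are vastly simpler than a real video classifier; the point is only that a model scores new content against what it has seen before.

```python
from collections import Counter

# Toy bag-of-words classifier illustrating the training-data limitation:
# if violent-shooting vocabulary appears only in "game" training examples,
# real violence sharing that vocabulary is scored as "game" too.
# Labels and texts are hypothetical examples.

TRAINING = {
    "game":  "first person shooter gun shooting level score player",
    "adult": "explicit nudity adult content",
}

def classify(text: str) -> str:
    """Pick the label whose training vocabulary overlaps the text most."""
    words = text.lower().split()
    scores = {
        label: sum(Counter(vocab.split())[w] for w in words)
        for label, vocab in TRAINING.items()
    }
    return max(scores, key=scores.get)

# A livestream of real violence shares vocabulary with game footage,
# so this model mislabels it as a game.
print(classify("gun shooting first person view"))  # game
```

Out-of-distribution inputs like this are exactly where moderation classifiers fail, which is why platforms still pair them with human review.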

 

Stakeholder


To evaluate automated content moderation from a multi-dimensional view, its stakeholders should first be made clear. From a political perspective, even though current automated content moderation still has limitations, it is a significant means by which a government can protect its people from violence and other negative information in their country. In addition, content moderation may help defend against cultural invasion: because China limits access to the global internet, there is less conflict arising from differences in democratic structure, which means content moderation can also contribute to national stability (Cooper 2000, p. 102).

Economically, content moderation injects new vitality into the internet industry, for example through dedicated content moderation companies, and supplies jobs, since the method still requires manual inspection (Roberts 2014, pp. 153–156). There are also cultural benefits, as Chinese culture illustrates: because a firewall blocks the internet outside China, related information is first filtered by content moderation and then shared in a planned way, allowing China to protect its own culture from the spread of the internet (Hockx 2015, pp. 24–30, 142–145). A well-known example from 2020 is the difference between the apps 'Douyin' and 'TikTok': 'Douyin' carries more historical elements than 'TikTok', such as clothing styles drawn from Chinese history, because content moderation works as a filter against multicultural elements. At the societal level, filtering content can help reduce the potential risks that negative online information brings, especially among youth (American Academy of Pediatrics 2001).

The American Academy of Pediatrics recognizes exposure to violence in media, including television, movies, music, and video games, as a significant risk to the health of children and adolescents(American Academy of Pediatrics 2001).

To summarise, automated content moderation can bring benefits across multiple dimensions, including politics, economics, culture and society, and all stakeholders involved (individuals, internet organisations, governments and countries) can benefit. In more detail: at the political level, the government is a stakeholder that benefits from content moderation, and its people benefit from a stable country. At the economic level, it brings new opportunities, including new industries, so the whole country benefits as a stakeholder; individuals benefit from an increased employment rate, and internet organisations benefit from an online society that is under control, although they also bear higher costs in implementing moderation. At the cultural level, the whole country is the main beneficiary. At the social level, the main stakeholders (individuals, groups and the country) benefit where criminal motivation is limited.

 

Impact to daily work and life


Due to its "inhumane" working practices for human moderators, Facebook still faces legal risks. As The New York Times reported:

“You’d go into work at 9 a.m. every morning, turn on your computer and watch someone have their head cut off,” a man who chose to remain anonymous but was quoted in the lawsuit told The Guardian last year. “Every day, every minute, that’s what you see. Heads being cut off.”

Automated content moderation is also an important tool that helps filter out unnecessary, fake and negative information when doing research, since it helps regularise the whole internet environment. For an online media worker, it is a great advantage in avoiding strong bias and negative information in written articles and related resources; however, it also adds cost when publishing an article.

Source: The Verge

For example, when writing a post one has to consider its political leanings and any content involving negative information such as violence. Another example: content moderation may work differently on websites in different countries, so time must be spent learning the rules of each website and country. All these impacts bring benefits and drawbacks to the related stakeholders. From another perspective, positive information fits public perception better, meaning a positive message gains more public agreement than a negative one (Wen et al. 2014, pp. 641–645). To summarise the impact on daily work and life as a media worker: it has both benefits and drawbacks, but it is a significant part of protecting the internet environment for everyone.
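The point about rules differing across websites and countries can be sketched as a simple policy lookup. The site names and rule sets below are hypothetical examples, not real platform policy; they only show how the same post can pass on one site and be flagged on another.

```python
# Sketch of site-dependent moderation rules. Site names and
# banned-topic sets are invented for illustration.

POLICIES = {
    "site_a": {"violence", "hate"},
    "site_b": {"violence", "hate", "politics"},
}

def check(post_topics: set, site: str) -> str:
    """Flag the post if any of its topics is banned on this site."""
    banned = POLICIES[site] & post_topics
    return f"flagged: {sorted(banned)}" if banned else "allowed"

topics = {"politics", "cooking"}
print(check(topics, "site_a"))  # allowed
print(check(topics, "site_b"))  # flagged: ['politics']
```

This is why a media worker posting across jurisdictions must learn each site's rules separately: the filter is the same shape, but the policy set behind it differs.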

Conclusion


This article has introduced the development of automated content moderation, together with its history, and explored its significance from a multi-dimensional view. From this exploration it is clear that each country has its own way of supervising its internet environment. Analysing the stakeholders also shows how content moderation is publicly perceived across dimensions: at the political level, governments and individuals benefit from it; at the economic level, it likewise benefits governments and individuals, while organisations see both benefits and drawbacks; at the cultural level, the whole country, including everyone in it, can benefit; and at the social level, individuals and organisations benefit. As for how the method affects people's daily lives, the most basic case is that running content moderation helps users block negative and fake information. Therefore, although automated content moderation still has limitations, it greatly helps improve the quality of the internet environment and protect people from fake and negative information. In future, better solutions should further increase the performance of automated content moderation.

 


References:

  • Adam, N., Awerbuch, B., Slonim, J., Wegner, P., & Yesha, Y. (1997). Globalizing business, education, culture through the Internet. Communications of the ACM, 40(2), 115-121.
  • American Academy of Pediatrics. (2001). Media violence. Pediatrics, 108(5), 1222-1226. 
  • ABC News. (2019). Christchurch mosque attack death toll rises to 51 after man dies in hospital. [online] Available at: <https://www.abc.net.au/news/2019-05-03/christchurch-attack-death-toll-climbs-to-51/11075972>
  • Block, D. (2004). Globalization, transnational communication and the Internet. International journal on multicultural societies, 6(1), 13-28.
  • Chandel, S., Jingji, Z., Yunnan, Y., Jingyao, S., & Zhipeng, Z. (2019, October). The Golden Shield Project of China: A Decade Later—An in-Depth Study of the Great Firewall. In 2019 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC) (pp. 111-119). IEEE.
  • Cooper, S. D. (2000). The Dot. Com (munist) Revolution: Will the Internet Bring Democracy to China. UCLA Pac. Basin LJ, 18, 98.
  • de Borchgrave, A., Cilluffo, F. J., Cardash, S. L., & Ledgerwood, M. M. (2000, December). Cyber threats and information security. In Meeting the (Vol. 2, pp. 1-1).
  • Demers, E. A., & Lev, B. (2000). A rude awakening: Internet shakeout in 2000.
  • DeNardis, L. (2009). Protocol politics: The globalization of Internet governance. Mit Press.
  • Dibbell J (1993) A rape in cyberspace, or how an evil clown, a Haitian trickster spirit, two wizards, and a cast of dozens turned a database into a society. The Village Voice, 23 December
  • Endeshaw, A. (2004). Internet regulation in China: The never‐ending cat and mouse game. Information & Communications Technology Law, 13(1), 41-57.
  • Field of Vision – The Moderators. (2017). Retrieved 19 October 2020, from https://www.youtube.com/watch?v=k9m0axUDpro&feature=emb_title
  • Gerstenfeld, P. B., Grant, D. R., & Chiang, C. P. (2003). Hate online: A content analysis of extremist Internet sites. Analyses of social issues and public policy, 3(1), 29-44.
  • Grange, J. (2004). John Dewey, Confucius, and global philosophy. SUNY Press.
  • Hartmann, I. A. (2020). A new framework for online content moderation. Computer Law & Security Review, 36, 105376.
  • Hockx, M. (2015). Internet literature in China. Columbia University Press.
  • Jain, A. K., Ross, A., & Prabhakar, S. (2004). An introduction to biometric recognition. IEEE Transactions on circuits and systems for video technology, 14(1), 4-20.
  • Llansó, E. J. (2020). No amount of “AI” in content moderation will solve filtering’s prior-restraint problem. Big Data & Society, 7(1), 2053951720920686.
  • Murray, J. T., & Murray, M. J. (1996). The year 2000 computing crisis: A millennium date conversion plan (p. 87). McGraw-Hill.
  • Palfrey, J. (2010). Four phases of internet regulation. Social Research: An International Quarterly, 77(3), 981-996.
  • Povinelli, E. A. (2002). The cunning of recognition: Indigenous alterities and the making of Australian multiculturalism. Duke University Press.
  • Roberts, M. E. (2018). Censored: distraction and diversion inside China’s Great Firewall. Princeton University Press.
  • Roberts, S. T. (2014). Behind the screen: The hidden digital labor of commercial content moderation (Doctoral dissertation, University of Illinois at Urbana-Champaign).
  • Senator Ian Campbell, Parliamentary Secretary to the Minister for Communications, Information Technology and the Arts, Second Reading Speech, Broadcasting Services Amendment (Online Services) Bill 1999 (Cth), Australia, Senate 1999, Debates, vol S 195, p 3958.
  • Szablewicz, M. (2010). The ill effects of “opium for the spirit”: a critical cultural analysis of China’s Internet addiction moral panic. Chinese Journal of Communication, 3(4), 453-470.
  • Taylor, H. (2000). Does internet research work? International journal of market research, 42(1), 1-11.
  • Wang, S. I. (2007). Political use of the Internet, political attitudes and political participation. Asian Journal of Communication, 17(4), 381-395.
  • Wen, S., Haghighi, M. S., Chen, C., Xiang, Y., Zhou, W., & Jia, W. (2014). A sword with two edges: Propagation studies on both positive and negative information in online social networks. IEEE Transactions on Computers, 64(3), 640-653.
Ahri