Why Automated Content Moderation Is Favoured by Media Companies but Resented by Users

"Twitter splash screen"  by Joshua Hoehne is licensed under Unsplash

Content moderation plays a vital part in the construction of media platforms. With the growth of social media, mass public participation now generates a significant share of internet content, and moderating user-generated content (UGC) is necessary both for improving platforms and for shaping users’ behaviour. Media companies have therefore developed a range of automated content moderation measures and policies. However, because companies moderate primarily to retain as many users as possible and so satisfy advertisers for higher profit (Gillespie, 2018), there is a profit disparity between the company and its users, and this disparity underlies the conflict between moderation and users’ intentions: grounded in company values and prone to technical mistakes, automated moderation may protect users’ experience while also infringing their freedom of speech, prompting users to devise ways of evading it. Users from different backgrounds and with different identities experience social media differently (Gillespie, 2018), and platforms fail to provide an equal participatory environment across social and political statuses. By analysing the business logic and impact of automated content moderation through recent cases and my own experience, this essay demonstrates that behind the universalised automated moderation of social media, the conflicting intentions of media companies and users produce a biased environment for public participation, one that reflects unequal social power relations.


“Apple iPhone 7 on blue wooden table with icons of social media facebook, instagram, twitter, snapchat application on screen. Tablet computer life style. Starting social media app. – Credit to https://www.lyncconf.com/” by nodstrum is licensed under CC BY 2.0

Automated moderation has become universal and essential in today’s cyberspace because of companies’ aims and pressure from governments and users. Controlled by the social media companies themselves, automated content moderation is a way of managing platforms that relies on artificial intelligence (AI) to monitor and audit user-generated content, with the aim of constructing a well-organised participatory platform and protecting users from abuse (Myers West, 2018). The development of social media in the digital era has made automated content moderation vital to managing participation on media platforms. With the rise of platforms such as Facebook and Twitter, user-generated content constitutes a large part of public participation and occupies an increasingly significant place on the internet. To handle the heavy work of mediating that content, automated content moderation became all but inevitable.

“Artificial Intelligence & AI & Machine Learning” by mikemacmarketing is licensed under CC BY 2.0

Firstly, automated moderation is vital to achieving media companies’ aims. Beyond the sincere hope of shaping a friendly and vibrant atmosphere, the companies’ aim is profit, both by reducing operating costs and by retaining as many users as possible for the data trade with advertising partners. On the cost side, conducting human moderation at every step across a huge volume of varied content is expensive. With AI technology and digital techniques such as machine learning and filters, automating moderation can be cheaper and may also reduce the workload and mental stress placed on a company’s employees (Gillespie, 2020). In terms of retaining users for profit, media platforms steer users to stay active and draw on their data for targeted advertising. Moderating users’ posts helps sustain a positive environment for public participation and protects users from abuse; media companies therefore benefit from automated content moderation.

Pressure from authorities and from users is another reason platforms must moderate. Governmental regulation pushes global media companies to take on more specific responsibility, since moderation is regional and the companies’ moderation practices are not transparent (Myers West, 2018). The Australian Parliament passed legislation to penalise platform providers and hosts who fail to moderate violent material on social media (Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019). The legislation puts pressure on social media platforms to handle violent material such as terrorism, sexual assault and hate speech. Users also have their own assessments of and preferences among platforms: a platform that is either too lax or too harsh will drive users to give up participating (Gillespie, 2018). Hence, the expectations of governments and users push media platforms to moderate.

The different intentions that media companies and users bring to automated content moderation reflect the profit disparity between them, which in turn produces an unequal atmosphere for public participation. While media companies reap huge profits by collecting users’ data for algorithmic advertising and trading data with business partners, users who treat social media as a communication tool to connect with others are unpaid labour generating content and data for the company. Users are economically exploited by the company that enjoys the benefits of moderation. This profit disparity points to the conflict between user-generated content and moderation policies. Beyond users’ economically disadvantaged position, the moderation of user-generated content also constructs an unequal participation environment. To begin with, the technical limitations of automated content moderation lead to unequal participation. For instance, to keep online eroticism away from underage users, platforms such as Instagram apply automated photographic recognition of nudity, preventing users from posting erotic photos containing nudes. However, artificial intelligence may lack human judgment and be unable to detect human intentions (Gillespie, 2020), so artists and art lovers find it hard to share artworks that include nudity on social media. The case also points to the moderation system’s over-sexualisation of women’s nude bodies (Gerrard & Thornham, 2020).
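To make this limitation concrete, below is a minimal, hypothetical sketch of threshold-based image moderation, written in Python; the function name, score, and threshold are my own illustrative assumptions, not Instagram’s actual system. Because such a classifier returns only a visual nudity score, an artwork and an explicit photograph that score similarly receive the same removal decision.

```python
# Hypothetical sketch of threshold-based nudity filtering (not Instagram's real system).
# `nudity_score` stands in for the output of an image classifier; the 0.8 threshold
# is an illustrative assumption.

def should_remove(nudity_score: float, threshold: float = 0.8) -> bool:
    """Flag an image for removal once its nudity score crosses a fixed threshold.

    The decision rests on visual features alone, so artistic nudity and
    explicit content with similar scores are treated identically.
    """
    return nudity_score >= threshold


# Both images cross the threshold and are removed, regardless of intent.
print(should_remove(0.93))  # explicit photograph                 -> True
print(should_remove(0.88))  # photo of a classical nude painting  -> True
```

The point of the sketch is that no tuning of the threshold restores the contextual judgment described above; the model simply has no input for artistic intent.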

Moreover, because automated content moderation is built on companies’ values and operational settings, it treats content unevenly: users holding different opinions and carrying different identities have different experiences of moderation online (Gillespie, 2018). This biased environment of engagement points to unequal social power relations. Moderation operators and the providers of automated moderation are overwhelmingly mainstream, which can lead to marginalised communities’ perspectives being ignored (Gillespie, 2018). Facebook’s content moderation, for example, has shown gender bias. In June 2020, a series of hate speech posts containing violent content and death threats towards the LGBTQ community breached Facebook’s hate speech policy yet still passed the automated moderation audit and avoided removal. The platform’s protection of the LGBTQ community was unequal, and the automated system failed to recognise the violence. Communities in disadvantaged positions of social power may be excluded from the protection moderation offers (Gerrard & Thornham, 2020), and their discourse may be overlooked. Marginalised groups and people with less social power thus benefit less from automated content moderation.

Under state media censorship, automated content moderation also reflects how platforms encroach on users’ power of speech, shaping and even establishing mass discourse and ideology to a great extent. Through my own participation on Weibo, a Chinese microblogging platform under severe censorship, I have witnessed and experienced moderation’s involvement in public opinion and the conflict between users and the moderation system. Since digital media platforms are not only service providers but also participants in users’ discourse (Gillespie, 2018), Weibo responds to social discussion with authoritative opinions and claims to legitimacy. A recent case concerns the Chinese government’s suggestion that the COVID-19 narrative frame should correspond to the generation of a “correct public memory”. Weibo plays an important role in this “correction” of public memory, shaping users’ participation through propaganda and through restricting users’ freedom of speech. On the restriction side, Weibo’s automated moderation deletes users’ perspectives and narratives that run against mainstream values by detecting keywords in sensitive content, such as names and places. To evade this moderation, Chinese netizens have devised a series of tactics for sharing content, such as dodging language detection by using homonyms and reversing the order of characters. This automated content moderation affects my own freedom of speech when I try to join political discussions or share alternative narratives.
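To illustrate why these evasion tactics work, here is a minimal sketch assuming a naive blocklist filter that removes any post containing a listed keyword; the keywords and example posts are invented placeholders, not Weibo’s actual terms or pipeline.

```python
# Minimal sketch of naive keyword-based moderation (illustrative placeholders only).

BLOCKLIST = {"sensitive-name", "sensitive-place"}

def is_blocked(post: str) -> bool:
    """Return True when the post contains any blocklisted keyword verbatim."""
    return any(keyword in post for keyword in BLOCKLIST)


print(is_blocked("a report mentioning sensitive-place"))  # True: literal match, post removed
print(is_blocked("a report mentioning ecalp-evitisnes"))  # False: reversed characters evade the match
print(is_blocked("a report mentioning sensitive-p1ace"))  # False: homonym-style substitution evades it
```

Real filters on Chinese platforms match Chinese characters and are far more elaborate than this, but the same literal-matching principle explains why homonyms and reordered characters can slip past automated detection.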

“Sina Weibo’s logo displayed on phone” by Rafapress, licensed via Shutterstock.com

In conclusion, media companies benefit from automated content moderation because their goal is profit from users who keep generating data, and moderation sustains the platform for mass participation. Automated content moderation, however, carries both technical limitations and biased operation grounded in the company’s values and intentions. The conflict between users and moderation systems reveals a media environment that is unequal for non-mainstream communities, and it exposes unbalanced social and political power relations expressed through who has access to platform protection and media exposure.

 

Reference list

Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media’s sexist assemblages. New Media & Society, 22(7), 1266–1286. doi: 10.1177/1461444820912540

Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). New Haven: Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 205395172094323. doi: 10.1177/2053951720943234

Insta gratification [@instagratification.official]. (2020, June 15). My latest column is online now, and I’m talking art censorship on Instagram. In a very meta move, this account [Instagram photo]. Retrieved from https://www.instagram.com/p/CBdNPJElacG/

Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. doi: 10.1177/1461444818773059

The Parliament of the Commonwealth of Australia. (2019). Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019. Retrieved from https://parlinfo.aph.gov.au/parlInfo/download/legislation/bills/s1201_first-senate/toc_pdf/1908121.pdf;fileType=application%2Fpdf

TwitterSafety. (2019, November 8). We use technology to detect Tweets that may break our rules, helping us remove harmful Tweets faster so you don’t have to report them [Twitter post]. Retrieved from https://twitter.com/TwitterSafety/status/1192501871472074752

Runhe Yu
USYD Art history and digital culture student