The contradiction arising from government involvement in content moderation

Where is the contradiction manifested?

The current content moderation system is riddled with contradictions. On the one hand, governments emphasise strengthening content moderation to make the Internet safer for users. On the other hand, users complain that governments use content moderation to control them. The contradiction arises because content moderation is necessary, yet also powerfully destructive: it undermines users’ trust in governments and platforms, and it undermines the diversity of social media content. A high-profile example is the series of lawsuits by former President Trump, who accused Facebook, Twitter and YouTube of violating his First Amendment rights by banning him from their platforms (Online Content Moderation and Government Coercion, 2022). While lawsuits making similar allegations were mostly dismissed, they nonetheless suggest that the government itself is believed to be involved in restricting certain kinds of content, which raises constitutional concerns about free speech (Davis and Taras, 2020).

“free speech sue to gag gag SLAPP” by Prachatai is licensed under CC BY 2.0

What is content moderation?

On social platforms, content moderation exists to “monitor and regulate user-generated posts by implementing a set of pre-arranged rules and guidelines” (Walker, 2020). What gets moderated varies, but in most cases it covers spam, disturbing images, violence, nudity, illegal activity, and the like. How content is moderated also varies from platform to platform, e.g. banning accounts, deleting posts, or temporarily suspending users. The reasons for content moderation differ by platform, but in essence it ensures that users have a positive experience there; a minimal sketch of this rule-based pattern follows below.
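To make the “pre-arranged rules” idea concrete, here is a minimal illustrative sketch in Python. The rule names, patterns, and actions are invented for illustration only; real platforms rely on far richer policies, trained classifiers, and human reviewers.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical rule set: (rule name, pattern, action).
# Invented for illustration; not any platform's real policy.
RULES = [
    ("spam", re.compile(r"(?i)\b(buy now|free money|click here)\b"), "delete_post"),
    ("illegal_activity", re.compile(r"(?i)\b(stolen card numbers)\b"), "ban_account"),
]

@dataclass
class Decision:
    rule: Optional[str]  # which rule matched, if any
    action: str          # "allow", "delete_post", or "ban_account"

def moderate(post_text: str) -> Decision:
    """Apply the pre-arranged rules in order; the first match decides the action."""
    for rule_name, pattern, action in RULES:
        if pattern.search(post_text):
            return Decision(rule=rule_name, action=action)
    return Decision(rule=None, action="allow")

print(moderate("Click here for free money!"))    # Decision(rule='spam', action='delete_post')
print(moderate("Holiday photos from the lake"))  # Decision(rule=None, action='allow')
```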

“Fake News – Computer Screen Reading Fake News” by mikemacmarketing is licensed under CC BY 2.0

Why is content moderation necessary?

Why might content moderation be the most pressing issue? Shouldn’t the Internet be free and democratic? As the Internet has grown in popularity, content chaos has become increasingly prominent. Many online videos and pictures contain violent, pornographic or otherwise harmful material, and predators and scammers can at any time target children and elderly people who are unfamiliar with the Internet. Content moderation is therefore now the most mainstream defensive measure on social media. As Langvardt (2018) puts it, “the job of the content moderator is essential”. Without content moderation, the Internet would be a far more hostile place.

 

  1. Protecting the brand: the Cambridge Analytica data breach cost Facebook $35 billion in lost valuation and dragged Mark Zuckerberg to Senate testimony, where he struggled to explain how his platform works and how it secures customer data.

With content moderation, digital players in e-commerce, entertainment and social media can protect their brands from the kind of abuse and reputational damage Facebook suffered.

 

  2. Identifying and suppressing radical movements: INTERPOL has reported the misuse of online platforms to spread radical motives and recruit for terrorist groups. Similar incidents have been reported in countries such as India, Singapore, the US and Bangladesh, where online platforms are used to radicalise users and recruit them into terrorist groups.

When social platforms monitor content, they can identify such material and remove it before it reaches and persuades impressionable users.

 

  3. Stopping platform abuse: whether it is recruiting people for banned organisations or spreading rumours, such rampant abuse of online platforms harms both companies and ordinary people. Twitter’s CEO has repeatedly been called to testify in the US and India over the growing use of Twitter to spread fake news.

A content moderation system can therefore assess the authenticity of content as soon as it is posted and effectively curb rumours; content it cannot judge with confidence can be handed over to manual analysis, as sketched below.
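As a rough illustration of this triage flow, here is a hedged Python sketch. The scorer, thresholds, and queue are hypothetical stand-ins; real systems use trained models, empirically tuned thresholds, and dedicated review tooling.

```python
from typing import Callable, List

# Hypothetical confidence thresholds; real systems tune these empirically.
AUTO_REMOVE = 0.9
AUTO_ALLOW = 0.1

def triage(post_text: str,
           score_fn: Callable[[str], float],
           human_queue: List[str]) -> str:
    """Route a post based on an automated 'likely rumour' score in [0, 1].

    High-confidence cases are handled automatically; ambiguous ones are
    escalated to human reviewers, as described above.
    """
    score = score_fn(post_text)
    if score >= AUTO_REMOVE:
        return "removed"             # confident enough to act automatically
    if score <= AUTO_ALLOW:
        return "allowed"
    human_queue.append(post_text)    # uncertain: hand over to manual analysis
    return "escalated"

# Toy scorer standing in for a trained classifier.
def toy_score(text: str) -> float:
    return 0.95 if "miracle cure" in text.lower() else 0.5

queue: List[str] = []
print(triage("Miracle cure found, share now!", toy_score, queue))  # removed
print(triage("Interesting news today", toy_score, queue))          # escalated
print(queue)  # ['Interesting news today']
```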

 

The disruptive nature of government involvement in content moderation:

However, its drawbacks are also obvious. Because human judgment is subjective and review standards are not uniform, different reviewers will reach different decisions about the same content, chiefly because their cultural and political beliefs differ: what is offensive in some countries is acceptable in others (Gillespie, 2018).

“Kim Phuc – The Napalm Girl In Vietnam” by e-strategyblog.com is licensed under CC BY 2.0

Journalist Tom Egeland included the famous “Napalm Girl” photograph in a September 2016 article on images that changed the history of warfare. The photo led Facebook to remove his article for nudity and violent imagery, and he was even suspended twice. The episode sparked widespread discussion and criticism of Facebook, with some calling it “the most powerful editor in the world” (Gillespie, 2018). Under pressure from the media and the public, Facebook reinstated the article after initially claiming that the photo violated the platform’s policies (“Facebook backs down from ‘napalm girl’ censorship and reinstates photo”, 2016).

The photo was taken in 1972, during the war the United States was waging in Vietnam, so many people believe the real reason for deleting it was that the image stirs anger around the world at the United States for that war and damages the image of the US government, and that the photo was ultimately removed at the government’s request. It is this perceived government involvement that makes users feel that anything they post on a platform is being watched and controlled, sparking intense fear among them.

“TikTok” by May Gauthier is licensed under CC BY-SA 2.0

More seriously, government involvement in content moderation has done devastating damage to the diversity of social media content. As early as 2010, friction between China and the United States over the Internet was already visible: the Chinese government demanded that Google censor content for Chinese users under the pretext of cyber security, and after negotiations failed, Google withdrew from the Chinese market. The Chinese government is equally cautious about foreign films: imported films are strictly censored, with large numbers of scenes cut or the films rejected outright. Marvel and Netflix productions, for example, have many loyal fans in China, yet audiences cannot enjoy the works because of government censorship. The U.S. government has hit back in kind: a group of officials led by Trump claimed that ByteDance would steal users’ information and transmit it to China, posing a security threat to the United States, and demanded that app stores remove ByteDance’s app TikTok (Spotlight: Trump administration to ban TikTok and WeChat downloads starting Sunday, 2020).

The two governments have used multimedia smartphones, messaging apps and social media platforms to create a global stage of engagement and wage digital warfare, transforming the state into an emerging “military-social media complex” that empowers a “citizen militia” of keyboard warriors to take on other governments and their media propaganda units (Merrin and Hoskins, 2020). We therefore have to wonder whether another purpose of government intervention in content moderation is to create more contradictions: trapping users inside particular social media and showing them only what the government wants them to see, the better to control their minds (Zeng and Kaye, 2022).

Summary:

Government involvement in content moderation is very much a double-edged sword, and whether governments should be more involved in enforcing content moderation on social media is hotly debated. Over the past two years, governments around the world have passed some 40 new laws regulating content on digital platforms (Moderating online content: fighting harm or silencing dissent, 2021). These laws stress content removal, offer limited judicial oversight, and rely heavily on AI moderation, thereby limiting users’ human rights, especially freedom of speech. When governments intervene in content censorship, it becomes far harder for people around the world to share social media together. What a suitable Internet environment looks like is still being explored, and it requires mutual checks and balances among audiences, digital platforms and government agencies (Roberts, 2019).

 

References:

Davis, R., & Taras, D. (Eds.). (2020). Power shift?: Political leadership and social media. Routledge.

 

Facebook backs down from ‘napalm girl’ censorship and reinstates photo. (2016, September 10). The Guardian.

 

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

 

Langvardt, K. (2018). Regulating online content moderation. Georgetown Law Journal, 106(5), 1353–1388.

 

Merrin, W., & Hoskins, A. (2020). Tweet fast and kill things: digital war. Digital War, 1(1-3), 184–193. https://doi.org/10.1057/s42984-020-00002-1

 

Moderating online content: Fighting harm or silencing dissent? (2021). OHCHR. Retrieved 5 October 2022, from https://www.ohchr.org/EN/NewsEvents/Pages/Online-content-regulation.aspx

 

Online content moderation and government coercion (LSB10742). (2022). Congressional Research Service.

 

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.

 

Spotlight: Trump administration to ban TikTok and WeChat downloads starting Sunday. (2020). Reuters. Retrieved 5 October 2022, from https://www.reuters.com/article/us-trump-tiktok-wechat-ban-0918-idCNKBS26A034

 

Walker, S. (2020). What is content moderation: Our complete guide. New Media Services. Retrieved from https://newmediaservices.com.au/fundamental-basics-of-content-moderation/

 

Zeng, J., & Kaye, D. B. V. (2022). From content moderation to visibility moderation: A case study of platform governance on TikTok. Policy & Internet, 14(1), 79–95. https://doi.org/10.1002/poi3.287