Content moderation – who should be responsible?

CC04 Thurs 11am Jenni

 “Social Media Icons” by Brantley Davidson is licensed under CC BY 2.0.

With the arrival of the Web 2.0 era, content of all kinds is increasingly accessed on the Internet through social media sites and digital technologies. As the industry has grown and gained power in the digital economy, tech firms have come to be of global significance. A shadow side has arisen as well: social media has enabled new forms of social behaviour and new means of collective action, including the coordination of violent action. Content moderation, “the organised practice of screening user-generated content posted to internet sites and social media” (Roberts, 2019, p. 33), therefore steps in to protect individuals and nations from potentially harmful or inappropriate material, and to enforce respect for privacy and national interests.

 

Social media platforms were born out of the magnificent anarchy of the web (Langvardt, 2018), and moderation is indispensable (Gillespie, 2018). Platforms are themselves major stakeholders who should be responsible for stopping the spread of problematic content. They make careful decisions about what counts as ‘problematic’ and how to remove it, for instance by blocking search results, suspending accounts, and taking down posts. Since the outbreak of the pandemic, social networks have come under growing pressure to battle false claims about the coronavirus. In response, Twitter has built bot-based collective blocklists to tackle online abuse and harassment and to sanction malicious behaviour; its anti-troll tool, for example, automatically and temporarily blocks accounts once they are detected sending harmful messages to other users (Milmo, 2021). A toy sketch of this kind of rule-based blocking is given below. Controversies over gender-based violence are also putting increasing pressure on platforms (Dragiewicz et al., 2018): Google announced in 2015 that reported revenge porn links would be removed from its search results, and YouTube clarified its harassment and bullying policy in June 2016. In May 2020, Facebook and its fellow platform YouTube removed the “Plandemic” video featuring a conspiracy theory about the spread of the virus, and Facebook and Instagram were reported to have taken down over 20 million pieces of misinformation about COVID-19 vaccines and placed warning labels on a further 98 million, including one of former President Trump’s posts (Spring, 2020).

In late February 2018, the American conspiracy theorist Alex Jones ran misinformation campaigns about national tragedies and terrorist attacks on his #Infowars channel. A wave of suspensions and removals followed across the digital platforms until Jones’ content was no longer publicly available: YouTube suspended him after an initial 90-day ban; Facebook unpublished his page for ‘glorifying violence’ and breaching its hate speech and bullying policies; and Twitter first gave him a week’s suspension, then permanently deleted his account after he harassed a CNN journalist in violation of its ‘abusive behaviour’ policy.
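To make the idea of automated, rule-based blocking more concrete, here is a minimal illustrative sketch in Python. It is not Twitter’s actual system: the keyword list, the strike threshold, and the function and variable names are all invented for illustration, and a real moderation pipeline would rely on far more sophisticated signals.

# Illustrative sketch only: a toy version of automated, rule-based account
# blocking of the kind described above. The keywords, threshold, and data
# structures are hypothetical placeholders, not any platform's real rules.

ABUSIVE_KEYWORDS = {"example insult", "another abusive phrase"}
STRIKE_THRESHOLD = 3          # flagged messages before a temporary block
BLOCK_DURATION_HOURS = 24     # length of the temporary block

strikes = {}                  # account id -> number of flagged messages
blocked_until = {}            # account id -> hour at which the block expires

def review_message(account_id, message, current_hour):
    """Flag abusive messages and temporarily block repeat offenders."""
    if blocked_until.get(account_id, -1) > current_hour:
        return "rejected: account is temporarily blocked"

    text = message.lower()
    if any(keyword in text for keyword in ABUSIVE_KEYWORDS):
        strikes[account_id] = strikes.get(account_id, 0) + 1
        if strikes[account_id] >= STRIKE_THRESHOLD:
            blocked_until[account_id] = current_hour + BLOCK_DURATION_HOURS
            return "removed: account temporarily blocked for repeated abuse"
        return "removed: abusive content"
    return "published"

print(review_message("troll42", "you are an example insult", current_hour=0))

The point of the sketch is simply that such rules are mechanical: they catch whatever matches the list and nothing that does not, which is one reason enforcement can feel uneven.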

Platforms nonetheless complicate the boundaries between personal interactions and public communications, and they have long been “criticised for failing to eliminate online hate speech, abuse, and harassment or to stem the promotion of terrorism” (Flew et al., 2019, p. 34). While platforms ban abuse and harassment, they admit that these rules are enforced unevenly and unequally. Some have argued that platforms’ failures of governance around harassment can be attributed to their economic strategies: online hate fuels traffic and engagement, which translate into platform revenue, and this explains their lack of response. Similarly, decisions on content removal and distribution in certain circumstances reflect cultural prejudice.

National governments are increasingly challenging the power and influence of these global media companies and seeking ways to regulate their content and operations. Here we introduce the notion of Internet censorship, a major Internet regulation mechanism in which policymakers are heavily implicated. The Chinese government is a vivid example: it constructs media policies and legislative frameworks, and allegedly hires a massive force of paid online commentators who work under direction to sway public opinion during sensitive national periods. The ‘Great Firewall’ (Roberts, 2018; Economy, 2018) is typical: international sites such as Google, Facebook, and Instagram are blocked, preventing Chinese citizens from accessing content the government deems undesirable. Weibo in particular, the Chinese counterpart of Twitter, is observed to implement automated filtering and surveillance: users cannot post material containing certain keywords relating to national interests or to sensitive political and religious discourse, and even posts that slip past the filter can still be manually deleted and removed. China is a unique case in Internet censorship regulation: it is credited with the largest number of Internet users globally yet ranks among the lowest for Internet freedom, and its scale does not guarantee citizens the freedom to browse.
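As a rough illustration of how pre-publication keyword filtering of the kind attributed to Weibo might work in principle, the short Python sketch below withholds any post containing a banned term. The banned-term list and the function are hypothetical and say nothing about Weibo’s real blocklists, which are far larger and change over time.

# Illustrative sketch of pre-publication keyword filtering, as described above.
# The banned-term list is a hypothetical placeholder, not a real blocklist.

BANNED_TERMS = {"example sensitive phrase", "another banned keyword"}

def may_publish(post):
    """Return True if the post may be published, False if it is withheld."""
    lowered = post.lower()
    return not any(term in lowered for term in BANNED_TERMS)

posts = [
    "Lovely weather in Shanghai today",
    "This post mentions an example sensitive phrase",
]
for post in posts:
    status = "published" if may_publish(post) else "withheld by keyword filter"
    print(f"{status}: {post}")

Even in this toy form, the essay’s point about manual review survives: a post that passes the keyword check can still be taken down later by a human censor.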

 

These policies are consequently met with massive protests among youths, activists, industries, and foreign companies who feel that their power and autonomy have been violated, while the contested content is either immediately lost or replaced with other trending issues under the government’s control. European regulators have likewise taken an increasingly activist stance towards digital platform companies: Germany has enacted strict speech laws that impose severe fines on platforms that fail to promptly remove content deemed unlawful or hate speech, while France, Brazil, and the United Kingdom have all considered legislation requiring digital platforms to remove ‘fake news’.

 

The Internet is a double-edged sword: the social networking it has spawned not only allows people to get to know one another but can also be misleading and cruel. Openness is both beneficial and dangerous. Users today can express themselves everywhere, yet rather than simple ideas and informational posts, people increasingly contribute gossip of every kind. The number of people active on platforms is staggering, and most platforms require real-name authentication, which creates a profound threat to privacy and reputation, both of which play a pivotal role in society. Preserving the private details of one’s life is fundamental, and we as users must live with our digital baggage (Rodrigues, 2011), particularly when posting photos and videos. It remains crucial for users to maintain control over their own information, even though pervasive sharing is already the norm, because such control largely prevents harm and the abuse of personal information. We must also recognise our responsibility to participate safely, ethically, and respectfully online, and to think about how we, and the communities we belong to, could be affected by our own behaviour.

 

Regulating the Internet poses distinct challenges, and no regulatory regime will be absolutely effective. The rise of digital platforms creates social opportunities by gathering people from around the globe, but as complexity and diversity increase, so does the likelihood of problematic content. Content regulation has long been a feature of media policy, and greater transparency is required to move beyond ongoing debates about the responsibilities of platforms. Government regulators and civil society also need to be able to account for policies and procedures, which helps to develop industry and community standards and shared knowledge about effective responses, to strengthen practice, and to reduce platforms’ vulnerability to escalating public criticism and controversy.

 

 

References

Dragiewicz, M., Burgess, J., Matamoros-Fernández, A., Salter, M., Suzor, N., Woodlock, D., & Harris, B. (2018). Technology facilitated coercive control: domestic violence and the competing roles of digital media platforms. Feminist Media Studies, 18(4), 609-625. https://doi.org/10.1080/14680777.2018.1447341

Economy, E. (2018). The great firewall of China: Xi Jinping’s internet shutdown. The Guardian. Retrieved 12 October 2022, from https://www.theguardian.com/news/2018/jun/29/the-great-firewall-of-china-xi-jinpings-internet-shutdown.

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2018). Custodians of the Internet. Yale University Press.

Langvardt, K. (2018). Regulating Online Content Moderation. SSRN Electronic Journal, 105(5). https://doi.org/10.2139/ssrn.3024739

Milmo, D. (2021). Twitter trials anti-troll tool that automatically blocks abusive users. The Guardian. Retrieved 12 October 2022, from https://www.theguardian.com/technology/2021/sep/01/twitter-trials-anti-troll-tool-automatically-blocks-abusive-users.

Ortiz-Ospina, E. (2019). The rise of social media. Our World in Data. Retrieved 12 October 2022, from https://ourworldindata.org/rise-of-social-media.

Roberts, M. (2018). Censorship in China. In M. Roberts, Censored (pp. 93-112). Princeton University Press.

Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.

Rodrigues, R. (2011). Privacy on Social Networks: Norms, Markets, and Natural Monopoly. In M. Nussbaum, The Offensive Internet (1st ed., pp. 237-258). Harvard University Press.

Spring, M. (2020). Trump Covid post deleted by Facebook and hidden by Twitter. BBC News. Retrieved 12 October 2022, from https://www.bbc.com/news/technology-54440662.

Stephen, K., Krishnan, A., Cohen, K., Hackley, C., Porto, F., Coglianese, C., & Lai, A. (2021). The social responsibility of social media platforms. The Regulatory Review. Retrieved 12 October 2022, from https://www.theregreview.org/2021/12/21/stephen-social-responsibility-social-media-platforms/.