What did the utopian promise of a free web bring?
“Free Speech *Conditions Apply by Fukt” by wiredforlego is licensed under CC BY-NC 2.0.
The internet was initially conceived as a free speech platform, and the promise of the internet as a technology of freedom continues to shape the design of websites and platforms. Social media platforms arose from “the exquisite chaos of the web” (Gillespie, 2019, p. 5). They give users new opportunities to share creativity and speech, to interact with a broader range of information and people, and to restructure those interactions into “networked publics” (boyd, 2011). Around 4.7 billion people worldwide use social media platforms today – equivalent to 59 percent of the world’s total population (Statista, 2022). Social media platforms have clearly become an integral part of social life and the new arena of free speech. While the benefits have been widely praised, the dark side has become increasingly evident. Hate speech, fake news, pornography, cyberbullying, harassment, racism, and other problematic content are increasingly disseminated across all social media platforms. Bad actors are exploiting the free speech of cyberspace. What occurs in offline publics also occurs on social media platforms, and voices become even more dissonant in networked environments. It is time to challenge the idea of internet freedom as the overriding priority and to consider content moderation on social media platforms as a way to stop the spread of harmful content across the infinite expanse of cyberspace. Social media companies have indeed implemented content moderation, but new tensions have emerged.
Content moderation is snowballing.
“Social Media Mixed Icons – Banner” by Blogtrepreneur is licensed under CC BY 2.0.
- Facebook banned 1.3 billion accounts over three months to combat “fake” and “harmful” content (Holzberg, 2021).
- TikTok removed over 113 million videos for guideline violations in the second quarter of 2022 – about 1% of all videos uploaded (TikTok, 2022).
- Weibo deleted 510,000 “politically harmful” posts and 98 million illegal posts in September 2022.
- Twitter suspended 45,191 accounts of Indian users in July for violating its guidelines, and says it suspends 1 million spam accounts a day and locks millions of accounts each week (Milmo, 2022).
Almost all platforms moderate, but the growth in scale and scope is exceeding expectations. Content moderation is the “screening, evaluation, categorization, approval or removal/hiding according to relevant communications and publishing policies” (Flew et al., 2019, p. 47). For now, the companies that operate social media platforms act as both policymakers and enforcers, though many appear reluctant to do so. Section 230 of the Communications Decency Act (CDA) provides a safeguard for platform operators by declaring that they are not to be treated as the publisher or speaker of information posted by their users – thus, they bear no legal liability for what users say and do (Legal Information Institute, n.d.). Yet even without a legal obligation to moderate the content they carry, platform operators still find that they must serve as “setters of norms, interpreters of laws, arbiters of taste, adjudicators of disputes, and enforcers of whatever rules they choose to establish” (Gillespie, 2019, p. 5). Whether actively or passively, social media companies are now the “masters” of internet space, and more and more people seem to be growing dissatisfied with that.
Increasing government intervention – or coercion?
As sources of news and current affairs and shapers of social and political opinion and norms, social media platforms are growing in reach, influence, and responsibility (de Zwart, 2018, p. 283). Their growing power has aroused the dissatisfaction and concern of governments. At the same time, governments have seen the possibility of social media platforms becoming new means of control – they attempt to “coerce or co-opt private owners of digital infrastructure to regulate the speech of private actors” (Balkin, 2018, p. 2016). Platform companies around the world face mounting pressure and censorship from governments demanding that they restrain the spread of hate speech and misinformation. The German government criticized social media giants, including Facebook and Twitter, for not doing enough to curb hate speech on their platforms and warned that they would face fines of up to $53 million if they did not step up efforts to remove illegal and harmful posts (Eddy & Scott, 2017). In authoritarian countries, platform companies must capitulate to dictatorial control and coercion over speech and information. The Great Firewall of China, for example, cuts people in China off from foreign information and blocks most global social media giants, including Facebook, Twitter, and Instagram. On the remaining, limited set of platforms, the Chinese government requires operators to pre-review content – and even comments – before publication, and redirects blacklisted sites and “harmful” search requests to error pages (Yang, 2022). The content moderation system has thereby become a new means for authoritarian regimes to control and repress public opinion in the digital era, in the name of gatekeeping “harmful” content (Schlesinger, 2020). Of even greater concern, this Chinese model of techno-authoritarianism is increasingly spreading to the rest of the world and has alarmed democratic countries such as the U.S. because it violates basic values and human rights (Wang, 2021).
“What Does China Censor Online” by david is licensed under CC BY-NC 2.0.
The darkness is transferred, not erased.
Many people believe that content moderation depends entirely on sophisticated algorithms – this is a misunderstanding. Algorithmic moderation by machine will never be a complete solution to the endless game of whack-a-mole because of the huge gap between “the ‘data scale’ of faceless, consistency oriented, automated regulation and the ‘human scale’ of localized, culturally bound interactions” (Flew et al., 2019, p. 43). This is why platform companies need moderators – people who manually delete harmful content and block accounts according to extremely strict guidelines. Yet while most people discuss the crisis of pervasive harmful content and complain about the content moderation system, few pay attention to the work of the moderators themselves. Facebook employs around 15,000 content moderators. They spend eight hours a day viewing the material people consider “harmful” – violence, gore, crime, sexual abuse, and other horrific content. The stricter the content moderation system, the more moderators a platform company needs. The following video features a former Facebook moderator describing his work (it contains content that may cause discomfort).
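As a purely illustrative aside, the division of labour described above – automated screening at “data scale”, with borderline, context-dependent cases routed to human reviewers – might be sketched roughly as follows. This is a minimal sketch under assumed conditions: the Post class, the harm_score field, and the thresholds are all hypothetical and do not describe any platform’s actual system.

```python
# Hypothetical sketch of a hybrid moderation pipeline: an automated classifier
# handles the "data scale", while anything it is unsure about is routed to a
# human review queue (the "human scale"). All names, scores, and thresholds
# are invented for illustration only.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Post:
    post_id: int
    text: str
    harm_score: float  # assumed output of some upstream classifier, 0.0-1.0


# Hypothetical policy thresholds: high-confidence harm is removed
# automatically; borderline cases go to human moderators.
AUTO_REMOVE_THRESHOLD = 0.9
HUMAN_REVIEW_THRESHOLD = 0.5


def triage(posts: List[Post]) -> Dict[str, List[Post]]:
    """Split incoming posts into auto-removed, human-review, and allowed."""
    decisions: Dict[str, List[Post]] = {
        "auto_removed": [],
        "human_review": [],
        "allowed": [],
    }
    for post in posts:
        if post.harm_score >= AUTO_REMOVE_THRESHOLD:
            decisions["auto_removed"].append(post)
        elif post.harm_score >= HUMAN_REVIEW_THRESHOLD:
            # Culturally bound, context-dependent judgement calls are left
            # to human moderators working to the platform's guidelines.
            decisions["human_review"].append(post)
        else:
            decisions["allowed"].append(post)
    return decisions


if __name__ == "__main__":
    sample = [
        Post(1, "harmless holiday photo caption", 0.05),
        Post(2, "borderline insult that needs context", 0.62),
        Post(3, "explicit threat of violence", 0.97),
    ]
    for bucket, items in triage(sample).items():
        print(bucket, [p.post_id for p in items])
```

The middle band is the point of the sketch: the contextual judgement calls an automated classifier cannot settle are exactly the ones that land in front of human moderators, and the stricter the thresholds become, the larger that review queue grows.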
More is not always better.
More content moderation requires stricter rules and more robust enforcement, but it also produces more mistakes. The number of moderators keeps growing, yet it is nowhere near enough. Facebook CEO Mark Zuckerberg admitted in a white paper that moderators make the wrong call in more than one in ten cases – at roughly three million moderation decisions a day, that amounts to some 300,000 content moderation mistakes on a single platform every day (Koetsier, 2020). The number is striking. Algorithms introduce further errors and remain limited in their ability to detect harmful video and audio. Meanwhile, free speech risks becoming “worthless”: more and more content is deleted or blocked through platforms’ excessive blocking and filtering, and most users are unable to compel platforms to restore it on the grounds of free or artistic expression (Mostert, 2019). Platforms, the modern public forums, are privately owned and controlled – and so, in effect, are our free speech rights on them. The vision of internet freedom is sliding toward its dystopian opposite.
Obviously, content moderation cannot be put back in the box – and platforms have not yet found a way to ease the tension between those who want harmful content removed to keep communities safe and those who believe content moderation itself puts users and internet freedom in crisis through censorship. But what can be improved is clear. Greater accountability and transparency around both algorithmic and human intervention in content moderation systems is needed (Gillespie, 2018; de Zwart, 2018). Re-humanizing the work of human moderators through a logic of care should be among the first problems platform companies address next (Ruckenstein & Turunen, 2020). And although the reach of digital platforms is often global, media policy remains bound to nations and regions; developing uniform digital tools, in conjunction with administrative measures, to establish effective global guidance should therefore be part of the blueprint for governments and social media companies alike (Mostert, 2019).
Reference list
Balkin, J. M. (2018). Free speech is a triangle. Columbia Law Review, 118(7), 2011–2056.
boyd, danah. (2011). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self: Identity, community, and culture on social network sites (pp. 39–58). New York: Routledge.
de Zwart, M. (2018). Keeping the neighbourhood safe: How does social media moderation control what we see (and think)? Alternative Law Journal, 43(4), 283–288. https://doi.org/10.1177/1037969X18802895
Eddy, M., & Scott, M. (2017, March 14). Facebook and Twitter could face fines in Germany over hate speech posts. The New York Times. https://www.nytimes.com/2017/03/14/technology/germany-hate-speech-facebook-tech.html
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2018). Regulation of and by platforms. In A. E. Marwick, T. Poell, & J. Burgess (Eds.), The SAGE handbook of social media (pp. 254–278). SAGE Publications.
Gillespie, T. (2019). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001
Holzberg, M. (2021, March 22). Facebook banned 1.3 billion accounts over three months to combat ‘fake’ and ‘harmful’ content. Forbes. https://www.forbes.com/sites/melissaholzberg/2021/03/22/facebook-banned-13-billion-accounts-over-three-months-to-combat-fake-and-harmful-content/?sh=42e4c1b95215
Koetsier, J. (2020, June 9). Report: Facebook makes 300,000 content moderation mistakes every day. Forbes. https://www.forbes.com/sites/johnkoetsier/2020/06/09/300000-facebook-content-moderation-mistakes-daily-report-says/?sh=68638bee54d0
Legal Information Institute. (n.d.). 47 U.S. Code § 230 – Protection for private blocking and screening of offensive material. Retrieved October 6, 2022, from https://www.law.cornell.edu/uscode/text/47/230
Milmo, D. (2022, July 7). Twitter says it suspends 1m spam users a day as Elon Musk row deepens. The Guardian. https://www.theguardian.com/technology/2022/jul/07/twitter-says-it-suspends-1m-spam-users-a-day-as-elon-musk-dispute-deepens
Mostert, F. (2019). Free speech and internet regulation. Journal of Intellectual Property Law & Practice, 14(8), 607–612. https://doi.org/10.1093/jiplp/jpz074
Ruckenstein, M., & Turunen, L. L. M. (2020). Re-humanizing the platform: Content moderators and the logic of care. New Media & Society, 22(6), 1026–1042. https://doi.org/10.1177/1461444819875990
Statista. (2022, September 20). Number of internet and social media users worldwide as of July 2022. https://www.statista.com/statistics/617136/digital-population-worldwide/
TikTok. (2022, September 28). Community guidelines enforcement report (April 1, 2022-June 30, 2022). https://www.tiktok.com/transparency/en/community-guidelines-enforcement-2022-2/
Wang, M. (2021, April 8). China’s techno-authoritarianism has gone global. Foreign Affairs. https://www.foreignaffairs.com/articles/china/2021-04-08/chinas-techno-authoritarianism-has-gone-global
Yang, Z. (2022, June 18). Now China wants to censor online comments. MIT Technology Review. https://www.technologyreview.com/2022/06/18/1054452/china-censors-social-media-comments/