By Kate Gallagher, 2023
The Western model of the internet weaves together moral, ethical, and economic values in ways that are layered and frequently contested.
Lines blur between censorship and free speech. To understand the complexities that form the building blocks of the Open Internet, a deeper dive into the core values of this internet model and an analysis of content moderation is necessary.
Because the Open Internet resists content moderation, the consequences in the political sphere range from the shifting of personal views to the alteration of election outcomes. Yet content moderation directly challenges the foundational morals of the Open Internet, and many remain resistant to the much-needed moderation of political content.
Within the Open Internet model, resistance to content moderation is layered across tensions between freedom of speech, protection of online communities, economic growth, and accountability. The reluctance to moderate content in the Western model has contributed to detrimental political consequences ranging from individual to national outcomes, but through collaboration, accountability, and effort, a balance can be achieved that both protects users and upholds the freedom of the Open Internet.
What is the Open Internet and Content Moderation?
The Open Internet is constructed on Silicon Valley ideals of neutrality and innovation, which directly contrast with the perceived invasion of privacy that accompanies content moderation. The term refers to the Western model of the internet, founded heavily on freedom of speech and open access, concepts of freedom inherited from the Western Enlightenment.
Referencing the ideals of the Open Internet, O’Hara and Hall write that “free speech is a highly prominent shared and constitutionally enshrined value” (O’Hara et al., 2018, p. 6). The Open Internet preaches accessibility, collaboration, freedom of expression, neutrality, and innovation.
Free speech is the backbone of the Open Internet; anything that threatens it, such as moderation, inevitably fosters resistance. According to Roberts,
“Moderation and screening are crucial steps that protect their corporate or platform brand (by enforcing user adherence to site guidelines or rules), ensure compliance with the laws and statutes governing their operations, and contribute positively to maintaining an audience of users willing to upload and view content on their sites” (Roberts, 2019, p. 34).
While content moderation is sound in theory, applying it is far more difficult because of scale, labor, and the ambiguity of rules. Moderation must grapple with the subjective complexities of content, and it also requires substantial economic resources. Accurate moderation must be done by humans, and companies are less willing to implement impactful moderation practices because paid labor cuts into profits. A lack of moderation also reduces a company’s accountability and ultimately garners more revenue through the free circulation of “controversial” content among users.
Both the ideals of the Open Internet and content moderation share the same goal: protect the user and enhance their experience. However, the application of moderation directly conflicts with the privacy and freedom that the Open Internet boasts. Values are not the only consideration, either, as a robust moderation system requires economic investment. A company may choose to implement little to no content moderation, but that absence carries consequences of its own.
The Parasite of “Fake News” in the Media
The lack of regulation of content on many American sites has resulted in negative phenomena such as a lack of trust in government and media, a plethora of misinformation, an effect on the outcomes of elections, and an increase in hate speech.
Today, the rapid spread of fake news is damaging, amplifying “the presence of political bias in content curation platforms, the extent to which such platforms facilitate the widespread dissemination of false news stories, and inflammatory political advertisements placed by foreign governments.” In addition, “fake news is a type of speech that is most directly and irrefutably damaging to the integrity of the democratic process,” yet it resides “within the large and undifferentiated protective bubble of political speech” (Napoli, 2018).
Since free speech houses such a wide breadth of expression, it can be difficult to differentiate false information from counterspeech. While the First Amendment is the backbone of the Open Internet, it makes no distinction between facts and “alternate facts.” Alternate facts slip through the cracks and spread misinformation that is regarded as true.
Freedom of speech can no longer be used as an excuse for a lack of content moderation, as it directly harms the public.
A Look at Reddit’s Anti-Feminist Militia
A staggering example of the damage that results from unfiltered communication in online spheres is the anti-feminism that proliferates on Reddit. Various groups frequently discussed the perceived lower role of women in society and the shame women should receive for their sexual experiences, and even pooled large sums of money to donate to their cause. Reddit has historically welcomed those with alternative views and rarely intervenes in offensive speech.
“And although individual administrators may express distaste at the ways in which Reddit is used, they are loathe to make any concrete changes, effectively suggest a lack of awareness when it comes to how the platform has been, and will continue to be, used to mobilize anti-feminist publics” (Massanari, 2017).
The consequences of anti-feminist mobilization show themselves in the creation of extremist groups with the power to influence individuals and elections. Although there is ample evidence of the dangers of unfiltered speech in the media, the priorities of large tech companies have stalled the implementation of greater content moderation.
Content moderation is necessary to maintain the legal and ethical standards that protect both user and company, yet many companies resist it out of an unwillingness to assume responsibility and out of loyalty to the moral standards associated with the Open Internet. In monitoring the internet, companies appear to prioritize the protection of free speech over the reduction of harmful content.
“At the heart of policy approaches to online content moderation are trade-offs between fundamental values such as freedom of expression and the protection of public health… The principle of proportionality requires that harsh measures should be applied only when strictly necessary and that a variety of less intrusive mitigating tools should be implemented as a first line of defense” (Kozyreva et al., 2023).
Brainstorming Potential Solutions
While most companies prefer to remain hands-off, harm is occurring to both users and the workers of the company. To create a baseline for fair content moderation, a conversation must begin between the public and the company. Though the protection of free speech is important, the realities of speech in the modern day have evolved to require adaptation in order to protect the public. Only once administrators open their minds to the benefits of content moderation and a discussion is established can a fair level of moderation be implemented. To improve the environment of online platforms and protect the public, companies need to act and take accountability. Improving the online environment is the work of both the company and the user.
Gillespie stated that “platforms could use data to make more visible the lines of contestation in public discourse and offer spaces in which they can be debated, informed by the everyday traces of billions of users and the value systems they imply” (Gillespie, 2018, p. 213).
If the internet is really for the people and by the people, shouldn’t the governing laws of moderation be a collective effort?
In order for the public to work with the platforms, the platforms must be willing to assume responsibility and judgment for their shortcomings rather than sweeping them under the rug. The Open Internet has always held values of collaboration and innovation.
Though content moderation demands a large amount of labor and resources, it is essential to upholding the true vision of the Open Internet.
With effort and accountability, a balance of content moderation that suits both platforms and users can be accomplished.
From its genesis, the Open Internet has resisted content moderation for fear of restricting free speech, but the harmful political consequences of that absence must be addressed.
The Open Internet is rooted in principles of freedom of speech and champions values like accessibility, collaboration, and innovation, but the implementation of content moderation is debated due to resources required and the tension between free speech and user protection.
The absence of content regulation on many American websites has resulted in adverse effects such as eroding trust in government and media, disseminating misinformation, influencing election outcomes, and promoting hate speech. The harm caused by unregulated information sharing demands a reevaluation of content moderation priorities by large tech companies.
Content moderation is essential for maintaining legal and ethical standards that protect both users and companies, highlighting the need for a balanced dialogue between the public and companies to adapt moderation practices and uphold the true core principles of the Open Internet.
To all the platforms out there: it is time to step up, take accountability, and collaborate with users to create an extraordinary and balanced internet for generations to come.
References
Gillespie, T. (2018). What Platforms Are, and What They Should Be. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 197–214). Yale University Press. https://doi.org/10.12987/9780300235029
Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (2023). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences, 120(7). https://doi.org/10.1073/pnas.2210666120
Leetaru, K. (2018, September 8). Why we still need human moderators in an AI-Powered World. Forbes. https://www.forbes.com/sites/kalevleetaru/2018/09/08/why-we-still-need-human-moderators-in-an-ai-powered-world/?sh=6dd737411412
Massanari, A. (2017). #Gamergate and the Fappening: How Reddit’s algorithm, Governance, and Culture Support Toxic Technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Napoli, P. M. (2018). What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. Federal Communications Law Journal, 70(1), 55–77. https://link.gale.com/apps/doc/A539774158/AONE?u=usyd&sid=bookmark-AONE&xid=1f0a3a5c
Newton, C. (2019, February 25). The trauma floor. The Verge. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona
O’Hara, K., Hall, W., & Cerf, V. (2018). Four Internets: The Geopolitics of Digital Governance (No. 206). Centre for International Governance Innovation. https://doi.org/10.1093/oso/9780197523681.001.0001
Roberts, S. T. (2019). Understanding Commercial Content Moderation. In Behind the screen: Content moderation in the shadows of social media (pp. 33–72). Yale University Press. https://doi.org/10.2307/j.ctvhrcz0v
Rosen, J. (2023, October 2). The Delete Squad. The New Republic. https://newrepublic.com/article/113045/free-speech-internet-silicon-valley-making-rules
Stokel-Walker, C. (2021, March 10). Anti-feminist YouTube and Reddit content is a gateway to the alt-right. New Scientist. https://www.newscientist.com/article/2270342-anti-feminist-youtube-and-reddit-content-is-a-gateway-to-the-alt-right/
Zakrzewski, C., De Vynck, G., Masih, N., & Mahtani, S. (2021, October 25). How Facebook neglected the rest of the world, fueling hate speech and violence in India. The Washington Post. https://www.washingtonpost.com/technology/2021/10/24/india-facebook-misinformation-hate-speech/