The internet’s designers intended it to be open: data and technology should be movable, expandable, and interoperable, and its protocols should be transparent. This Silicon Valley viewpoint blended ideology with engineering principles that would allow the internet to evolve as it expanded. Yet even if the advantages are evident and occasionally seem utopian, the risks are brutally obvious and grow more numerous each day:
- online harassment
- violent content
This brings about the need for content regulation. Yet who is best suited to carry out the task? The government appears to be out of the question. On the other hand, the idea of a fully “open” platform is compelling and resonates with profound, idealized notions of democracy and community, but it is only an ideal. No platform exists that does not, in some way, enforce rules. Platforms should exercise some form of moderation, deleting anything unpleasant, disgusting, or unlawful, and present their best selves to new users, advertisers, business partners, and the general public, both because they are best placed to do so and because their existence depends on it.
Social Media Platforms and Content Regulation
Social media platforms are increasingly in charge of selecting material and monitoring the behavior of their users. This is done not only to comply with legal requirements or prevent the imposition of new laws but also to appease advertisers keen to link their brands to a thriving online community, maintain their corporate reputation, and uphold their own ethical standards both personally and institutionally (Gillespie, 2018). Indeed, social media companies take some of the most aggressive steps of any entities to moderate content.
So, What Do Social Media Platforms Do?
Platforms do, and are required to, regulate user material and behavior through detection, evaluation, and enforcement procedures. Moderation is an integral part of what platforms do: it is fundamental, defining, and constitutional. Platforms cannot function without moderation; without it, they cease to be platforms at all (O’Hara & Hall, 2018). Although moderation has always been present, it has been substantially disavowed and disguised, partly to preserve the appearance of an open platform and partly to avoid social and legal obligations. Platforms thus confront what may be an unsolvable paradox: they are positioned as mere conduits while being built on the idea of selecting what users view and comment on.
Social media platforms are in the best position to regulate social media content.
This is why:
This perspective on moderation should change how we see what media platforms actually do: not merely conveying what we post, but creating what we see. There is no unbiased stance; platform moderators make their selections in various ways (Gruyter, 2019). One way platforms shape the content they present to the public is by excluding obscenity, insults, violence, and terrorism. A classic case: following the attempted insurrection of January 6, 2021, Twitter and Facebook banned Trump from their services because encouraging violent and illegal behavior violates their terms of service.
How Can Social Media Platforms Regulate Content?
Platforms usually do not create the material; however, they can and do make significant decisions about it. Early platforms merely made user contributions accessible and searchable. Over time, however, they have progressively decided what users may disseminate and to whom, how to link users and facilitate interactions, and what they will reject (Gillespie, 2018). This means a platform must resolve any conflicts between the needs of independent content creators, who want their work seen by a broad audience, and the platform’s own economic and political need for survival and growth. It must also accomplish this without having created or commissioned the material. As a result, platform managers often cannot supervise content using the media industry’s more conventional instruments, such as compensation, contracts, or industry standards. In traditional media, employment agreements and shared professional values were significant barriers against illegal material. Platforms must discover other methods.
The way platforms are created and run makes social interaction feasible: it calls that interaction into existence, gives it structure, and upholds its fundamental validity as a gift to society. Platforms do not merely moderate the public conversation; they create it. Moving sociality online does not simply “make sociality technical.” Instead, coding patterns fundamentally change how we connect, create, and engage with one another (Gruyter, 2019). Platforms are designed to entice and mold our involvement toward specific goals.
This includes the design of user profiles and interactions, the preservation of social exchanges, the pricing or payment methods for access, and the opaque ways information is sorted algorithmically to give some material higher priority than others. Together, these design choices produce the range of speech and behavior known as “social media logic.” What users would or could do with Twitter would change if it were fundamentally structured and operated differently. This covers both what is forbidden and how prohibitions are put into effect.
The cultural backgrounds and political leanings of information technology corporations significantly impact how we share content on social media platforms. Most social media firms were established in North America, in the information and technology hub of Silicon Valley. As a result, their founding narratives and mission statements are influenced by the liberal corporate North American traditions of free speech and the free market.
For instance, Barbrook and Cameron call this mentality the Californian Ideology in their critique of the US new media industries, describing it as “a discordant blend of technical determinism and libertarian capitalism” (Martin, 2019). This implies that the only rules governing what we decide to share are those the firms set in the form of their “community standards” or content policies. These rules often forbid transmitting highly sexualized, violent, unlawful, or otherwise prohibited content. However, they also leave businesses free to amplify divisive, prevalent political extremism or to censor the honest dissemination of information concerning violent political persecution.
Platforms are often not constrained by national or regional boundaries, although intermediary liability regimes are. In terms of the legal and physical location of the business, its material facilities, and its customers, ISPs are almost entirely situated in the country where regulation is established and enforced (Gruyter, 2019). This is not the case for sites like Twitter, Instagram, or Wikipedia. Despite having their corporate and legal headquarters in the U.S., where they enjoy the broadest safe harbor, most of the huge social media platforms serve millions of people who live in countries with much stricter liability laws or with particular requirements for complying with state or court orders to remove content. Major social media sites have been forced to create their own procedures for handling requests from other countries to delete information. Google, for example, famously left China rather than filter its search results to comply with Chinese law (Waddell, 2016).
Social media regulation is a tricky topic because no approach to it comes without compromise. Nevertheless, social media platforms themselves are best placed to regulate content: their very existence depends on moderation, and government entities are the least favorable alternative. Platforms are also best suited to mediate the overlapping social ideals of the many jurisdictions in which they operate.
References
Baggs, M. (2021, November 15). Online hate speech rose 20% during pandemic: ‘We’ve normalised it’. BBC News. https://www.bbc.com/news/newsbeat-59292509
Chin, J. (2010, January 13). Google_Gone_0001.jpg. Flickr. https://www.flickr.com/photos/21953266@N00/4271537762/in/photolist-7vsKmN-MMNDpJ-mFaQ1-ddc5BW-mFbA4-7w6QJm-2b5rAuU-mF92F-9Qcnb-Fqqsi-e83tTi-7vsPzW-7KBTcZ-cEAo5-F965R-Gu8eDp-fSg89D-mFdmY-CQTY35-2iEzWDG-2ighvi5-F963M-yZ6tq-yYZVA-mFdAw-mDNXn-mFdeL-MYsRJ1-4qfjw3-mFb4n-8eKqoZ-2QHhk6-8h4KuC-mF9yD-mFaX5-mFdHy-mDM1v-mDMaA-mFcEF-mDSiA-DnkyS-mDM8P-mDMA9-7Lywqc-FqpKo-mDLRt-GB9x5T-2hUGGQX-Fqrt6-v9zCy
Cusumano, M., Gawer, A., & Yoffie, D. (2021, January 15). Social media companies should self-regulate. Now. Harvard Business Review. https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
Gillespie, T. (2018). Regulation of and by Platforms. The SAGE Handbook of Social Media, 254-278. https://doi.org/10.4135/9781473984066.n15
Gruyter, D. (2019). All platforms moderate. Custodians of the Internet, 1-23. https://doi.org/10.12987/9780300235029-001
Licht, M. (2021, September 9). Trump on Twitter, 2021: Suspended for life. Flickr. https://www.flickr.com/photos/notionscapital/50818598351/in/photolist-2kqEANc-2ktcdWc-RNcFQ6-2krYBFt-2nKF57c-2nKyUve-2nKyUwb-2nKyUv9-2nKyUvz-2nKyUvp-2ktcdYw-2krrirv-2j1YghR-QQDmZx-2newZ1P-2m9YMut-S8Dgbb-23byVDG-23cemhy-23Pwy3k-2nKFZeS-2kpU4jp-2j3wiC3-2kujdwG-2n3YWTK-H1FBm5-2gbW8Qo-28zwaDB-WcPWGf-2kf6mh6-2kf6kGi-2ngUVWN-2juGKRV-2mYyEfa-2neQRZG-2nsXGze-2nQMa7Q-Zuo1qN-CsymjY-ZyFiWB-p1aLHR-2n1D3Un-oXZ4JQ-r7mkJF-JGazcb-p177SX-nuHW24-ZuD1KJ-SjK2un-TyBCPM
Martin, F. (2019). The business of news sharing. Sharing News Online, 91-127. https://doi.org/10.1007/978-3-030-17906-9_4
O’Hara, K., & Hall, W. (2018). Four internets. Centre for International Governance Innovation, 1-17. https://doi.org/10.1093/oso/9780197523681.001.0001
Schiller, A. L. (2011, April 17). Re:publica XI – Policing content in the quasi-public sphere. Flickr. https://www.flickr.com/photos/40726922@N07/5628343581
Waddell, K. (2016, January 19). Why Google quit China—and why it’s heading back. The Atlantic. https://www.theatlantic.com/technology/archive/2016/01/why-google-quit-china-and-why-its-heading-back/424482/