The complexity of power in the digital era and the breadth of discourse power – the pluralistic subjects of regulation
Digital platforms were born out of the exuberant chaos of the Web 2.0 era and formed a broad online public inspired by the freedoms promised by network operators (Gillespie, 2018). Yet the illusion of utopian democratic communities and open platforms clearly did not equip these media with powerful tools to block harassing content: hurtful, malicious, violent, and pornographic material. This is despite the fact that media platforms, because they provide hosting, blocking, and distribution services and profit from user data and advertisers, are seen as obliged to accept angry public criticism and to develop more sophisticated means of regulating their sites, curtailing hostile content, and protecting healthy communities.
Yet the scale of the challenges faced by online technologies and the complex intersection of regulatory frameworks (Burgess & Poell, 2018) mean that moderating platform content is no longer the sole responsibility of tech companies and internet platforms. Neo-liberal regulatory regimes are being proposed internationally, with governments issuing different guidelines for regulating and moderating mass media and the public political discourse it carries; and given the platforms' massive user bases, the reporting and flagging mechanisms opened up to users have become powerful moderation tools.
The unmediated public domain – platforms become custodians of cyber security
Media platforms, as integrated service tools that connect global audiences, disseminate and share personalized content through sharp network effects, and collect and store data on user preferences for economic gain, are increasingly becoming cultural landscapes that express and shape public discourse. Platforms act as 'online intermediaries' (Burgess & Poell, 2018) and moderators between users and rules, bearing a definite social responsibility for the safety of the content circulating in their media. At the same time, operators and technology companies set up internal policies and vetting teams and adjust platform strategies in an attempt to combine the image of a just and free community with their own brands and convey a positive moral core to society. As Twitter tells the public in its community guidelines, the platform is known as a widely inclusive, open domain with highly accessible channels for sharing information, but abusive and lawless behaviour will be restricted to ensure a safe user experience (Gillespie, 2018). Despite the 'fallacy of displaced control' that often emerges when blame is shifted amid public outrage, and although inadequate staffing, moderator trauma, and temporary shortages of classification technology and other resources pose significant challenges, operators, platforms, and the technology companies behind the services are still widely seen as the rule makers, adjudicators, and enforcers responsible for stopping offensive speech and malicious attacks and protecting the values of their users.
Innovation and Responsibility in an Age of Crises – The Role of Digital Platforms

Video: Why Content Moderation Costs Social Media Companies Billions, by CNBC. Retrieved from https://www.youtube.com/watch?v=OBZoVpmbwPk
Internet companies often maintain specific internal bodies that deploy rules to set out the value standards of their platforms and provide uniform guidelines for judgment, and some of these proactive attempts appear to be effective interventions:
- Marginal platforms borrow language policies from mainstream platforms and develop detailed online guidelines and rules that users must follow.
- Collecting large traces of media activity, building a corpus of undesirable language in the backend, and using automatic text search for banned words, or skin filters, to flag and block posted content.
- Providing SafeSearch in search engines and updating intelligent algorithms to moderate and remove illegal information and trolls.
- Using real-identity authentication to classify and label explicit and sexual content, filtering pornographic and horror material, and limiting the audiences to which it is distributed.
- Delegating regulation to community administrators and displaying reporting tools prominently in the layout.
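The backend word-ban approach in the list above can be sketched minimally. The blocklist and function names below are hypothetical illustrations, not any platform's actual implementation; real systems maintain large, human-reviewed corpora and far more sophisticated matching:

```python
import re

# Hypothetical blocklist; real platforms curate large, reviewed corpora.
BANNED_TERMS = {"badword1", "badword2"}

def is_blocked(post: str) -> bool:
    """Return True if the post contains any banned term.

    Tokenizes on word boundaries so 'class' does not match inside
    'classic' - a common weakness of naive substring filters.
    """
    tokens = re.findall(r"[a-z0-9']+", post.lower())
    return any(token in BANNED_TERMS for token in tokens)

print(is_blocked("this contains badword1 here"))  # True
print(is_blocked("a perfectly clean post"))       # False
```

Even this toy version shows why such filters lag behind users: any misspelling or substitution evades an exact-match blocklist, which is one reason platforms layer automated search with human review.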
The collective interest in democratic politics – international and governmental responsibility for media regulation
Although the permeable and fluid connectedness of the internet has allowed digital media to gain a degree of monopoly power and regulatory authority, a large amount of malicious content still exists and circulates on social media beyond expectation, given toxic technocultures (Massanari, 2017), the lag of removal and filtering tools, the tolerance of policymakers and administrators, the techlash, and the burden of massive data processing. With the public attention and social expectations now facing social media platforms, it is no longer possible to regulate the enormous influence of public discourse on society and politics by relying solely on the platforms and companies themselves. For example, Trump once faced angry attacks from a section of Chinese users over his inappropriately biased tweets about the safety of a global epidemic, which unintentionally elevated his personal social discourse into a statist issue transcending national dimensions (Schlesinger, 2000). As a result, the global focus of public sphere surveillance has shifted to a 'regulatory turn' (Schlesinger, 2020) in which international organizations and national governments, as authoritative regulators, intervene through the legal sector in public sphere domains such as the media to defend 'information sovereignty'.
Although countries, shaped by different ideologies and specific socio-cultural, ethical, and economic conditions, have developed laws imposing different degrees of liability on ISPs, there are still macro-level monitoring initiatives that are effective and worth referencing:
- Establishing a social credit assessment and a complete censorship system, linking citizens' financial and criminal status to the performance of their personal accounts, and maintaining a blacklist of users barred from posting illegal information.
- Tailoring new legal codes to new internet media platforms, for example additional regulations on online violence, illegal transactions, the dissemination of obscene pornography, and the publication of anti-social statements, to clarify the value system and ethical standards to be observed.
- Urging new media companies to improve and fulfil their obligations beyond providing an open and free communication space for society (Schlesinger, 2020), and to establish a good communication order.
A new generation of large base users – decentralized self-regulation
Large platforms often start by attracting a homogeneous user base, yet when the number of individual users increases dramatically the volume of data facing digital engines becomes unmanageable; and because online identities are anonymous and invisible, most illegal content crosses regions through a 'publish-then-filter' process (Shirky, 2008), remaining on the platform for days and accumulating benefits. At the same time, with the dramatic changes in self-publishing platforms and the deepening of new media technologies, the panopticon mechanisms of surveillance and regulation in digital culture are gradually shifting towards the synopticon (Mathiesen, 1997), with the public's identity shifting from the 'watched' to the 'watcher'. Reddit relies on volunteer administrators in the community to regulate activity in the subreddits; such community administrators, known as 'superflaggers', tend to hold higher blocking and organizational authority, being screened and appointed on the basis of authority status or loyal user history, and they have been used extensively on Facebook, Twitter, and even TikTok live streams. In the new internet era, however, greater reliance on autonomous user ratings and the precise delegation of flagging rights, so that the power to monitor and manage, and the awareness needed to publish safe speech and maintain the health of the community, passes to internet users themselves, is the effective and efficient way to stop bad content and trolling on digital platforms.
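The delegated flagging model described above can be sketched as a simple queue in which posts accumulate flags from distinct users and are hidden pending review once a threshold is crossed. The class, method names, and threshold below are hypothetical illustrations of the 'publish-then-filter' pattern, not any platform's real system:

```python
from collections import defaultdict

FLAG_THRESHOLD = 3  # hypothetical: distinct flags needed before auto-hiding


class FlagQueue:
    """Minimal sketch of user-delegated flagging.

    Posts stay visible when published; flags from distinct users
    accumulate, and a post is hidden pending moderator review once
    the threshold is reached (publish-then-filter).
    """

    def __init__(self):
        self.flags = defaultdict(set)  # post_id -> set of flagging user_ids
        self.hidden = set()

    def flag(self, post_id: str, user_id: str) -> bool:
        """Record a flag; return True if the post is now hidden."""
        self.flags[post_id].add(user_id)  # a set, so repeat flags don't stack
        if len(self.flags[post_id]) >= FLAG_THRESHOLD:
            self.hidden.add(post_id)
        return post_id in self.hidden


q = FlagQueue()
q.flag("post-1", "alice")
q.flag("post-1", "alice")              # duplicate flag: still counts once
q.flag("post-1", "bob")
print(q.flag("post-1", "carol"))       # True: third distinct flag hides it
```

A 'superflagger' role could be modelled by weighting certain users' flags more heavily, which is precisely the elevated authority the paragraph above attributes to trusted community administrators.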
Gillespie, Tarleton (2017). ‘Governance by and through Platforms’, in J. Burgess, A. Marwick & T. Poell (eds.), The SAGE Handbook of Social Media, London: SAGE, pp. 254-278.
Gillespie, Tarleton (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press. pp. 1-23.
Martin, Fiona (2019). The Business of News Sharing. In F. Martin & T. Dwyer, Sharing News Online: Commendary Cultures and Social Media News Ecologies. Cham, Switzerland: Palgrave Macmillan, pp. 91-127.
Massanari, Adrienne (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3): 329–346.
Mueller, Milton (2017). Will The Internet Fragment? Sovereignty, Globalization and Cyberspace. Oxford: Polity Press, Ch. 5.
Roberts, Sarah T (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press, pp. 33-72.