‘Social Media Platforms image’ by Today Testing is licensed under CC BY-SA 4.0
1: What’s techlash?
On the bright side, the rise of digital platforms has envisaged a promising cyber-society in which social interactions and economic activities have largely migrated to the digital domain (Van Dijck, Poell & de Waal, 2018). These algorithm-based digital platforms are committed to providing high-speed, convenient social services that allow individuals to pursue social contact with friends, achieve self-expression and form online communities (Van Dijck, Poell & de Waal, 2018). Nevertheless, the most prominent digital platforms, known as ‘FAANG’ (Facebook, Apple, Amazon, Netflix and Google), are facing a heated techlash from the public. This technology backlash refers to growing hostility towards the hegemonic power digital platforms possess, and towards their attendant negative social influences and public scandals (Flew, Martin & Suzor, 2019).
2: Why is the public so concerned about ‘FAANG’?
Scholars have argued that the growing social issues and public indignation surrounding contemporary digital platforms highlight the core of the techlash: the lack of regulation of digital platforms. Unlike traditional media firms, which are subject to government regulation and content scrutiny, digital platforms present themselves as mere content circulators that should not be governed by traditional policy (Carah, 2021). The social issues that concern the public, discussed below, therefore all revolve around this central problem: the absence of regulation of digital platforms.
2.1: Social issues derived from digital platforms:
2.1.1: The accelerated circulation of ‘fake news’
‘Fake news’ by Jorge Franganillo, is licensed under CC BY 2.0
The circulation of fake news and misinformation is one of the social problems stirring up public discontent. Tandoc, Lim and Ling (2018) contended that the distribution of misinformation can ultimately be attributed to the fact that digital platforms have blurred the conceptualization of information sources. Whereas media professionals in traditional media firms are required to have credentials and particular expertise in order to produce media content, which is then distributed under the supervision of editors, digital platforms have created space for unprofessional journalists, that is, untrained citizen journalists, to engage in journalistic activities and reach mass audiences (Tandoc, Lim & Ling, 2018). An unprofessional journalist who is not committed to the journalistic code of ethics is predisposed to fabricate information or produce sensationalized fake news in order to unethically increase click rates and gain profit (Tandoc, Lim & Ling, 2018). This has been especially true since the rise of the ubiquitous social media platform Facebook, which has greatly facilitated the distribution of fake news because it is reluctant to scrutinize content and its sources for fear of being considered a ‘content publisher’ and thereby becoming subject to traditional media regulation (Flew, 2018, p. 26).
This was demonstrated by an astonishing incident in 2016, in which a man attacked a pizza restaurant with an assault rifle, asserting that he was conducting a ‘self-investigation’ because he believed the restaurant was running an underground child sex trafficking ring linked to presidential candidate Hillary Clinton (Tandoc, Lim & Ling, 2018). The culprit was later captured, and a police spokesman stated that he had been misled by a piece of viral news from a right-wing blog on Facebook; that blog had been established purposely to attack certain political parties and to earn commissions from its backers (Tandoc, Lim & Ling, 2018).
Nonetheless, this incident did not simply demonstrate that news circulated on digital platforms can be twisted to serve profit-making purposes; it also revealed that digital platforms can be used to deliberately attack certain races, genders and minorities without anyone taking responsibility. Thus, another issue of digital platforms that concerns the public is their facilitation of cyberbullying and hate speech.
2.1.2: Subtly designed online abuse and prejudice
While digital platforms have offered marginalized groups and socially disadvantaged individuals an opportunity to make appeals and receive support from activists and charitable organizations (Berners-Lee, 2019), the shareability of platforms also allows negative content produced by racists, extremists and chauvinists to become pervasive and more targeted (Matamoros-Fernández & Farkas, 2021). Since digital platforms became dominant in the socio-political landscape across the globe, they have not only transferred real-life bullying into the digital domain, such as the rising toxic subcultures on Reddit that stigmatize women and racial minorities and incite racial hatred, and the pervasive sexual harassment on Twitter (Matamoros-Fernández & Farkas, 2021), but have also transformed blatant racism and prejudice into subtler, less overtly aggressive forms (Flew, Martin & Suzor, 2019). Matamoros-Fernández and Farkas (2021) asserted that digital platforms have reshaped racist dynamics through their algorithms and policies: although overt racism may be detected and removed, platforms still tacitly permit users to promote subtle racism and prejudice against minorities and particular genders through weaponized memes, emoji and content that connote racism.
This concern was echoed in the criticism voiced by the widely acclaimed actor Sacha Baron Cohen (2019), who described digital platforms as ‘the greatest propaganda machine in history’ and indignantly argued that platform algorithms are purposely designed to provoke user engagement, and users’ negative attitudes, by circulating biased content and thereby instigating hatred and conspiracy theories.
If interested, please read more on: https://www.theguardian.com/technology/2019/nov/22/sacha-baron-cohen-facebook-propaganda
2.1.3: The breach of privacy and dataveillance:
The business model of digital platforms relies primarily on the user information stored in their databases and on users’ active engagement with the platforms, which allows platform owners to analyze preferences via algorithms and then package and sell those insights to advertisers (Carah, 2021). Hence, data collection and dataveillance have come under scrutiny, as the advanced data-analytical techniques digital platforms possess are not limited to selling commercialized data: by monitoring users’ engagement within their networked communities and drawing inferences, digital platforms can directly reveal individuals’ interests, undisclosed political orientations, religious beliefs and social networks (Goggin et al., 2017).
Public shock at the dataveillance conducted by digital platforms reached a peak when the whistleblower Christopher Wylie revealed an alarming scandal, accusing Facebook of allowing personal data to be illegally collected and analyzed, and later sold to third parties, including the political consultancy Cambridge Analytica and a US presidential campaign group.
In light of this scandal, it is reasonable to infer that digital platforms should not be regarded as simple information circulators operating outside regulation; they are just as readily seen as powerful, manipulative firms that invade individual privacy and disrupt political processes.
3: How can these concerns be addressed?
3.1.1: Government regulation:
As the aforementioned social issues highlight the absence of regulation of digital platforms, the government’s role is pivotal in establishing national policy to regulate platforms’ commercial activities and to scrutinize content distribution. While the early ‘safe harbour’ protection, Section 230 of the US Communications Decency Act, which gave digital platforms legal immunity, has been shown to be outdated, many regions and countries have introduced policies suited to their respective socio-political landscapes (Flew, Martin & Suzor, 2019).
To address the misuse of personal data, the European Union has introduced the General Data Protection Regulation (GDPR), which gives individuals legal rights to protect their personal data from intrusion, whereas in the Middle East and parts of Asia, ‘strict liability’ rules require digital platforms to bear legal responsibility for any prohibited content circulated on their platforms (Flew, Martin & Suzor, 2019, p. 44). To address hate speech and the circulation of fake news, the European Commission has proposed the Code of Conduct on Countering Illegal Hate Speech to scrutinize toxic content distributed by digital platforms (Flew, Martin & Suzor, 2019, p. 39). Nevertheless, national regulatory policies can be relatively limited, as digital platforms operate globally, which creates potential policy conflicts across regions.
3.1.2: Regulation via civil organizations:
Although many proposals argue that a shift from national policy to global regulation is desirable, Mansell and Raboy (as cited in Flew, Martin & Suzor, 2019) indicated an alternative approach: establishing non-government civil organizations to resolve regulatory conflicts between nations while performing more precise, effective regulation of targeted digital platforms. One example is the Christchurch Call, which aims to address extremist content and hate speech by establishing a voluntary consensus on eliminating toxic content and ensuring the transparency of content-sharing platforms (France Diplomacy, 2019). In addition, the International Telecommunication Union (ITU) has been proposed by different nations as a global-scale communication channel for negotiating regulatory policy and standardizing media practices (Flew, Martin & Suzor, 2019).
Read more on the Christchurch Call at: https://www.diplomatie.gouv.fr/en/french-foreign-policy/digital-diplomacy/news/article/christchurch-call-to-eliminate-terrorist-and-violent-extremist-content-online
3.1.3: Self-regulation by digital platforms:
While national regulatory policies have shown deficits in addressing particular social issues, Cunningham (2014) pointed to the significance of self-regulation by digital platforms. Indeed, contemporary platforms such as Facebook and YouTube have partially addressed the mass circulation of fake news and online abuse by deploying large-scale deep-learning algorithms responsible for content moderation; however, the comprehensiveness and accuracy of these algorithms have been questioned by scholars, who argue that algorithms cannot detect and prevent subtly designed yet pervasive toxic content from disseminating (Flew, Martin & Suzor, 2019). This criticism reflects the dilemma of self-regulation: the incompetence of computerized content moderators and the economic impossibility of hiring human moderators at sufficient scale.
To sum up, the backlash against tech giants can be attributed to the rising social issues of fake news, online abuse and dataveillance, all of which trace back to the central issue of the absence of regulation of digital platforms. While regulation established by governments, civil organizations and the platforms themselves has come into effect, their respective deficits also reveal gaps between regulatory approaches and suggest that further collaboration across institutions could achieve more balanced and comprehensive policymaking.
References:
Berners-Lee, T. (2019). ‘30 years on, what’s next #fortheweb’. Available online at https://webfoundation.org/2019/03/web-birthday-30/
Carah, N. (2021). Media and Society: Power, Platforms and Participation (2nd ed.). London, UK: Sage Publications.
Cohen, S. (2019). ‘Read Sacha Baron Cohen’s scathing attack on Facebook in full: “greatest propaganda machine in history”’. Available online at https://www.theguardian.com/technology/2019/nov/22/sacha-baron-cohen-facebook-propaganda
Cunningham, S. (2014). Policy and regulation. In S. Cunningham & S. Turnbull (Eds.), The Media and Communications in Australia, 4th edition (pp. 73-91). Crows Nest: Allen & Unwin.
France Diplomacy. (2019). Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online. Paris, France: Ministry for Europe and Foreign Affairs.
Flew, T. (2018). ‘Platforms on trial’. Intermedia, 46(1), pp. 16-21. https://www.iicom.org/wp-content/uploads/im-july2018-platformsontrial-min.pdf
Flew, T., Martin, F., & Suzor, N. (2019). ‘Internet regulation as media policy: Rethinking the question of digital communication platform governance’. Journal of Digital Media & Policy, 10(1), pp. 33–50. doi: 10.1386/jdmp.10.1.33_1
Goggin, G., Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., & Bailo, F. (2017). Privacy, profiling, data analytics. In G. Goggin (Ed.), Digital Rights in Australia (pp. 13-20). Sydney: University of Sydney.
Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230
Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143
van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society. New York: Oxford University Press. doi: https://doi.org/10.1093/oso/9780190889760.001.0001