‘Techlash’: Overreaction or Warranted Concern?

Group 1

“The techlash against Amazon, Facebook and Google—and what they can do” by Ryan Olbrysh is licensed under a Creative Commons Attribution 4.0 International License. Based on a work at https://www.economist.com/briefing/2018/01/20/the-techlash-against-amazon-facebook-and-google-and-what-they-can-do.

A global ‘techlash’ has emerged as a response to the proliferating power and influence of digital and social media platforms such as Facebook (Flew & Suzor, 2019). This has arisen from merited public concerns pertaining to privacy, ‘fake news’, and other online harms. Given that these issues have largely flourished due to inadequate regulatory regimes, the public response has been a push for greater platform regulation and a restructuring of internet governance.

What is the ‘techlash’?

The techlash refers to the “scale and scope of critique” and public animosity directed towards large online platform companies (Flew & Suzor, 2019, p. 34; Schlesinger, 2020). A series of ‘public shocks’, revelations about the infrastructure, implications, and unregulated nature of these companies, has driven this trend towards increasing distrust (Flew & Suzor, 2019). These concerns are heightened by a ‘power monopoly’ in which cultural, economic, and political influence is concentrated in the five ‘FAANG’ companies (Facebook, Apple, Amazon, Netflix, Google) (ACCC, 2019). The market power and consequent ubiquity of these companies have led to them “effectively becoming a necessity in contemporary life” (Pesce, 2017). This status has allowed them to practically force users into “acceptance of non-negotiable terms of use” (ACCC, 2019, p. 22). The ‘techlash’ also encompasses demands for greater accountability and regulation of these companies and their operations, so that platforms take responsibility for the problems they have contributed to (Flew & Suzor, 2019). This contrasts with the historical treatment of these companies as mere ‘intermediaries’ with ‘safe harbour’ rights to self-regulate and escape liability for the content they distribute (Flew & Suzor, 2019).

What public concerns lie behind the techlash?

1. Privacy and Dataveillance

“Data Security Breach” by Visual Content is licensed under CC BY 2.0

The primary concern driving the techlash is the breach of user privacy, particularly the unprecedented surveillance of personal data by platform businesses. There is an intentional lack of transparency regarding the type, amount, and ultimate uses of user data (Pesce, 2017). Many users are unaware that companies collect data that goes well beyond their profile information and activity on a specific platform (ACCC, 2019). Across the internet, browsing behaviour, IP addresses, location, and device specifications can be tracked and collected (ACCC, 2019). For example, 43% of Google Play Store apps send user data back to Facebook (ACCC, 2019). Most concerning is the broad discretion that digital platforms have over the use of this data. Facebook, for instance, has been criticised for using surveillance to build remarkably accurate user profiles that gauge one’s emotional state, which are then sold to advertisers so that targeted ads can reach teenagers at moments when they appear particularly insecure (Pesce, 2017). Many are calling for regulation that gives users (ACCC, 2019):

  • enough information to understand how their data is processed,
  • terms of service and privacy policies that are not long, complex or ambiguous,
  • the opportunity to exercise real choice and meaningful control over their data,
  • and the ability to bring direct action for breaches of their privacy.

The ACCC (2019) suggests that, in Australia, this be achieved through amendments to privacy law.

2. Fake news 

“Puppet Master of Fake News” by pasa47 is licensed under CC BY 2.0


Furthermore, platforms have greatly contributed to the spread of ‘fake news’, undermining the ‘free speech’ arguments that have long justified American platforms’ reliance on the ‘counter-speech doctrine’: the claim that an unregulated, open speech environment is the best means of overcoming false speech and reaching the truth (Napoli, 2019). User data sets produced by platforms are exploited by fake news publishers to “target those most likely to be receptive to [it]” (Napoli, 2019, p. 59). The Cambridge Analytica scandal is an example of the abuse of data for political advertising, in which microtargeted Facebook ads worked to boost Trump’s election campaign (Schlesinger, 2020). This sparked particular concern given evidence that being ill-informed led many to vote against their best interests in the 2016 election (Napoli, 2019).

These effects culminate in platforms’ utilisation of algorithmic machine-learning to ensure that the content consumed by users appeals to their individual preferences and beliefs in order to “keep people glued to their feeds” (Pesce, 2017). This leads to the creation of ideologically homogeneous ‘filter bubbles’ which “deflect news stories that do not correspond to the user’s established…political orientation” (Napoli, 2019, p.77). Subsequently, someone targeted by fake news appealing to their pre-existing beliefs is unlikely to see ‘counter-speech’ appear on their feeds. Therefore, many are asking for greater transparency regarding the algorithmic nature of content feeds, greater editorial responsibility toward truth, and external monitors who can signify to users the reliability and trustworthiness of their news sources (Picard & Pickard, 2017; Napoli, 2019; ACCC, 2019).

3. Online Harms

Platform companies have also been widely criticised for their failure to adequately address online hate speech, harassment, and inappropriate content (Flew & Suzor, 2019). Platforms “have tended towards light touch, generic policy making which can…be simply codified for machine learning” (Flew & Suzor, 2019, p. 40). Combined with often vague, inconsistent, and reactive user guidelines, this gives users only basic indications of expected content standards (Roberts, 2019). For example, Facebook says only that one should not maliciously contact someone in a way that is “unwanted”.

A prime example of the inadequacies of platform moderation was ‘the fappening’ on Reddit: the illegal acquisition and spread of celebrity nudes (Massanari, 2017). This was enabled by Reddit’s anonymity, its algorithmic promotion of ‘upvoted’ content, and the ease of profile and subreddit creation, combined with a relaxed governance structure reliant upon volunteer moderators (Massanari, 2017). That administrators did not step in until underage nudes were shared reflects how the platform’s free speech model profits from and “implicitly legitimised” anti-feminist activism whilst marginalising those who wished to stop the harm (Massanari, 2017, p. 343).

It is particularly difficult to evaluate the methods and effectiveness of content moderation, as the process is shrouded in secrecy so that platforms can escape scrutiny and continue to appear as open places for unrestricted free expression (Roberts, 2019). Given that moderation is both necessary and deeply values-laden, public concern demands “more responsive, public interest-centred moderation processes” that are open, consistent, and easily understood (Flew & Suzor, 2019, p. 41; Gillespie, 2018).

Who will best address these concerns?

It is unlikely that governments, civil society organisations, or technology companies alone will be able to adequately address the concerns driving the techlash. In particular, it is widely recognised that continued self-regulation alone is not the answer (Flew & Suzor, 2019). It is therefore necessary to dispel the fantasy that platforms are impartial or ‘open’ and instead accept that they should be subject to media policy, regulation, and governance (Gillespie, 2018).

“It is time to reconsider the responsibilities of platforms…crafting a new principle of liability tailored for [them]…articulating normative expectations for what platforms are – legally, culturally, and ethically” (Gillespie, 2017, p. 273).

This is due to their market dominance, global scale, and increasingly ‘editorial’ function in the selection, curation, and ranking of content (ACCC, 2019). Furthermore, whilst platforms have made some effort to address criticism, such as Facebook’s introduction of an Oversight Board, policy makers are still encouraged to consider the ramifications of leaving self-regulation in the hands of companies with an “inherent profit motive” (ACCC, 2019, p. 7). Tech companies, for example, are known to lobby against regulation in a way that “subsumes the public interest under corporate priorities” (Popiel, 2018, p. 576).

"Social Media Logos" by BrickinNick is licensed under CC BY-NC 2.0

Similarly, governments regulating their own internet jurisdictions is equally ineffective as a standalone approach. Positive moves have certainly been made in Europe, such as the GDPR and German laws imposing fines for the ineffective removal of hate speech or fake news (Flew & Suzor, 2019; Napoli, 2019). Even so, domestic policies are too specific to each national socio-economic context, making global policy better positioned to address these global platforms (Picard & Pickard, 2017). Ultimately, multistakeholder governance is the approach most likely to succeed, as it recognises that the public value of a platform “should be the shared responsibility of all social actors…companies, citizens and governments alike” (van Dijck et al., 2018, p. 20). Each actor will have different means of intervention, but all should be actively engaged in consultation that seeks to explore their “policy needs…develop support for action and learn the implications of proposals” (Picard & Pickard, 2017, p. 34).

The challenge of crafting innovative platform-focused regulation should not discourage governments, tech companies, and other organisations from responding to a growing portion of society whose fears about data use, algorithmically curated content, and exposure to online harms are justified. Whilst change will likely require a multistakeholder approach, the ‘techlash’ has revealed that it is certainly necessary.


Reference list

Australian Competition and Consumer Commission. (2019). Digital Platforms Inquiry: Final Report. Canberra, Australia: ACCC. Retrieved from https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf

Augusta University Cybersecurity. (2018, April 1). Dataveillance [Video file]. Retrieved from https://youtu.be/7dTqSWEv084?t=36

Chang, A. (2018, May 2). The Facebook and Cambridge Analytica scandal, explained with a simple diagram. Vox. Retrieved from https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram

van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society as a Contested Concept. In The Platform Society (pp. 5-32). New York, NY: Oxford University Press.

Facebook. (2021). Bullying and Harassment. Retrieved from https://transparency.fb.com/en-gb/policies/community-standards/bullying-harassment/

Facebook. (2021). Oversight Board. Retrieved from https://transparency.fb.com/en-gb/oversight/

Financial Times. (2018, March 13). Why world’s richest companies are facing a ‘tech-lash’ [Video file]. Retrieved from https://www.youtube.com/watch?v=3yYWEO9gUas

Flew, T., & Suzor, N. (2019). Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance. Journal of Digital Media & Policy, 10(1), 33-50. doi: 10.1386/jdmp.10.1.33_1

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1-23). New Haven, CT: Yale University Press. doi: 10.12987/9780300235029

Gillespie, T. (2017). Governance by and through Platforms. In The SAGE Handbook of Social Media (pp. 254-278). London, UK: SAGE.

Guardian News. (2019, Nov 4). ‘They would have let Hitler buy ads’: Sacha Baron Cohen’s scathing attack on Facebook [Video file]. Retrieved from https://www.youtube.com/watch?v=tDTOQUvpw7I

Internet Society. (2021). Internet Governance – Why the Multistakeholder Approach Works. Retrieved from https://www.internetsociety.org/resources/doc/2016/internet-governance-why-the-multistakeholder-approach-works/

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. doi: 10.1177/1461444815608807

Napoli, P. (2019). What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. Federal Communications Law Journal, 70(1), 57–104.

Pesce, M. (2017, Summer). The Last Days of Reality. Meanjin. Retrieved from https://meanjin.com.au/essays/the-last-days-of-reality/

Picard, R., & Pickard, V. (2017, April). Reuters Institute for the Study of Journalism report. Essential Principles for Contemporary Media and Communications Policymaking. Retrieved from https://reutersinstitute.politics.ox.ac.uk/our-research/essential-principles-contemporary-media-and-communications-policymaking

Popiel, P. (2018). The Tech Lobby: Tracing the Contours of New Media Elite Lobbying Power. Communication, Culture & Critique, 11(4), 566-585. doi: 10.1093/ccc/tcy027

Roberts, S. (2019). Understanding Commercial Content Moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33-72). New Haven, CT: Yale University Press. doi: 10.12987/9780300245318

Schlesinger, P. (2020). After the Post-Public Sphere. Media, Culture & Society, 42(7–8), 1545–1563. doi: 10.1177/0163443720948003

Wolford, B. (2021). What is GDPR, the EU’s new data protection law? Retrieved from https://gdpr.eu/what-is-gdpr/

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.