Techlash and the Public Concerns Behind It
In his book Tools and Weapons (2019), Brad Smith, the American attorney and current president of Microsoft, asks whether technology is indeed a tool or a weapon. The merits of technology, especially information technology, have long been celebrated for making life easier and for turning the world into a global village with seamless communication. In the last decade, however, tech consumers have raised doubts about the capabilities and broader effects of technology. A controversy has therefore emerged over whether technology makes life simpler and easier, or whether it adds complexity through threats to privacy, the destruction of jobs, and social isolation, among other things. Such questions are directed at the big technology companies that have managed, over decades, to provide technological solutions to social problems. These companies, which include the likes of Google, Apple, and Microsoft, and which are commonly referred to as the ‘big tech’ companies, are under consumer scrutiny for their power in technological innovation. This scrutiny of big tech companies and their influence over tech innovation is what is referred to as a techlash. According to the Oxford dictionaries, as referenced by Weiss-Blatt (2021), a techlash is a strong and widespread negative reaction to the growing power and influence of large technology companies, particularly those based in Silicon Valley. The Macmillan dictionaries attempt to specify the reasons for the techlash, adding the elements of privacy and the possibility of political manipulation to the definition. This article seeks to clarify the concept of a techlash by discussing the public concerns that lie behind it, as well as the extent to which these concerns can be addressed by governments, by civil society organizations, and by the technology companies themselves.
The possibility of a widespread techlash stems from reduced trust in tech companies and the IT products they offer to consumers. In an age in which technology has seen its most disruptive innovations, such as AI, facial recognition, and the Internet of Things, there is little doubt that these innovations have bred more distrust among users than the trust tech companies anticipated. According to West (2021), citing the Edelman Trust Barometer poll, trust in the technology industry and its products in the United States dropped from 78% in 2012 to 57% in 2021. On a global scale, trust in the tech sector dropped from 77% to 68% over the same period. These statistics indicate how far the techlash has already spread. Moreover, the big tech companies have repeatedly found themselves under negative public scrutiny. Examples highlighted by Atkinson et al. (2019) include the revelations that Russia used social media platforms to influence the 2016 U.S. elections, the Cambridge Analytica scandal, in which Facebook data was misused for political purposes, and the investigation of Google for antitrust violations.
The major public concerns that drove this drop in trust center on privacy, misinformation, social isolation, and the addictive nature of the internet. The issue of privacy carries the most weight, as it covers aspects closely tied to the use of AI by the big tech companies in their advertising algorithms and facial recognition software. With the help of AI algorithms, tech companies such as Google, Amazon, and Microsoft can tailor advertisements and direct them to a user’s device based on the user’s search history (Li, 2019). Recommendations are likewise made based on the user’s current or frequently visited locations. This access to information about users’ locations and search histories has raised public concerns about the violation of privacy by these companies: it shows that the big tech companies hold so much information about their users that it could leave those same users vulnerable.
Misinformation has also been cited as a public concern fueling the techlash. Technology has expanded freedom of speech and expression, improving the autonomy with which people share and hold information. During the Covid-19 pandemic, thanks to the internet and platforms built by tech companies such as Facebook and the Facebook-owned WhatsApp, there was an overload of information about the pandemic. The freedom of anyone with internet access and a messaging app to offer recommendations on the virus’s treatment and prevention brought a great deal of confusion to desperate internet users (Siddiqui et al., 2020). Worse still, social platforms such as Twitter allow people to air their political stances publicly, which often provokes social misunderstanding and cyberbullying. Further effects, such as social isolation and internet addiction, are good examples of how technological risks can outweigh the benefits and thereby lower consumer trust. Internet addiction, in particular, is a double-edged sword for the spread of a techlash: like an addictive drug, it undermines users’ efforts to react against the growing power and influence of the large tech companies. With addictive platforms such as Facebook, TikTok, WhatsApp, Twitter, and Instagram, tech innovators such as Mark Zuckerberg have greater potential to manipulate consumers because of those consumers’ dependency on the technologies offered.
Another public concern behind the techlash is the loss of jobs resulting from the use of automated systems in the workplace. In what is termed the robot economy, the goal is to replace human labor with machine, or robot, labor. This disruptive innovation aims to improve workplace efficiency and to reduce the cost of managing human resources (Arduengo & Sentis, 2020). Computers and associated software have already replaced a large share of the human workforce in the corporate sphere. This disruption, often described as the age of automation, has played a major role in increasing individuals’ distrust of the capabilities of technology and of the associated tech companies. For this reason, those at the sharp edge of technology’s demerits have pushed back aggressively against such innovations, even at the corporate level.
Most of these public concerns are regulatory problems that require government action and involvement, as well as the engagement of the technology companies themselves, whether acting independently or collectively, with an obligation to engage objectively in safe innovation. In his article on responsible innovation and the self-regulatory organization, Hemphill (2019) echoes the views of Shelly Palmer, CEO of The Palmer Group, on safe and responsible innovation. Palmer proposes three ways in which safe and regulated innovation can be achieved: government regulation, self-regulation, and the self-regulatory organization. Governments can regulate technological innovation and moderate the power of technology companies by formulating and implementing laws and policies that address concerns of consumer privacy, public safety, and national security (Hemphill, 2019). This ensures that only technology products and services that are safe for consumers make it to the market. Self-regulation is a second way in which technology companies can play a part in safe and responsible innovation; as Hemphill (2019) relays Palmer’s ideas, this strategy requires individual companies, such as Google, Facebook, and Apple, to weigh the societal effects of their technologies before those technologies reach the consumer market. Finally, there is the establishment of self-regulatory organizations (SROs), in which the big tech companies team up to outline responsible-innovation principles that all members of the SRO agree to abide by (Hemphill, 2019). Other roles of such SROs could include ensuring compliance with the agreed principles, issuing fines for violations, and referring violations to federal regulatory agencies.
Civil society organizations can also play a part by advocating for consumers’ privacy rights and for regulation of the tech companies’ power over the control and dissemination of information.
Arduengo, M., & Sentis, L. (2020). The Robot Economy: Here It Comes. International Journal of Social Robotics, 1-11.
Atkinson, R. D., Brake, D., Castro, D., Cunliff, C., Kennedy, J., McLaughlin, M., & New, J. (2019). A Policymaker’s Guide to the “Techlash”—What it is and why it’s a Threat to Growth and Progress. Information Technology and Innovation Foundation.
Hemphill, T. A. (2019). ‘Techlash’, Responsible Innovation, and the Self-regulatory Organization. Journal of Responsible Innovation, 6(2), 240-247.
Li, H. (2019). Special Section Introduction: Artificial Intelligence and Advertising. Journal of Advertising, 48(4), 333-337.
Siddiqui, M. Y. A., Mushtaq, K., Mohamed, M. F., Al Soub, H., Mohamedali, M. G. H., & Yousaf, Z. (2020). “Social Media Misinformation”—An Epidemic Within the COVID-19 Pandemic. The American Journal of Tropical Medicine and Hygiene, 103(2), 920.
Weiss-Blatt, N. (2021). The Techlash and Tech Crisis Communication. Emerald Group Publishing.
West, D. M. (2021, April 2). Techlash Continues to Batter Technology Sector. Brookings. Retrieved October 12, 2021, from https://www.brookings.edu/blog/techtank/2021/04/02/techlash-continues-to-batter-technology-sector/.