The widespread digitalisation of everyday life has brought great convenience to citizens. At the same time, the public concerns about privacy breaches and the spread of hate speech that underlie the “Techlash” have deepened distrust of media and platforms. This article examines both concerns and then briefly discusses the extent to which they are being addressed by governments, civil society organisations, and technology companies working together through co-regulation.
“Techlash” refers to the growing adverse public reaction against the expanding power and influence of large technology companies as technology develops at an astonishing speed (Flew et al., 2019).
Consumers’ trust in technology has declined sharply. The continuous collection and resale of user data have left more than 90% of Americans feeling they have lost control over their privacy (Anderson, 2018). Nor is this anxiety confined to the United States: Chinese consumers are equally worried about information being stolen by hackers and the loss of private data (“Major Study in U.S. and China Reveals Rising ‘Techlash’”, 2019). This is one of the effects of Techlash. The rapid development of the Internet is a double-edged sword: citizens enjoy its convenience without corresponding protection. As studies of online social networks show, the safety of users’ information on social media platforms is fragile, making it easy for hackers to harvest personal information illegally (Irani et al., 2011).
Moreover, beyond individual users, the reputation of organisations and brands can also be damaged by data breaches. For example, after hackers broke into Volkswagen’s systems and exposed its actual pollutant-emission data, the company’s reputation was severely damaged and it faced a massive public relations crisis (Mitroff & Storesund, 2020). Nor do users’ privacy concerns end with such passive hacking.
Additionally, platforms may even actively betray their users, treating other people’s private data as a bargaining chip for unethical gain. Founded in 2007, Airbnb quickly became the industry leader in short-term accommodation by building an online platform driven by data algorithms (van Dijck et al., 2018). The development of the Internet has indeed created more opportunities, making it more efficient and easier to book accommodation online.
However, the accompanying concerns about privacy are not groundless. As Airbnb’s operations shifted from a non-profit orientation to profit-seeking, selling household data and cooperating with advertisers became an inevitable means of generating revenue (van Dijck et al., 2018). As real users complain (Goghstarr, 2021), disclosed personal information leaves ordinary people struggling with a succession of nuisance calls. Nor is Airbnb unusual in collecting user information through its digital platform and selling it to third parties. Facebook, a giant global social media company, has been publicly attacked dozens of times for leaking personal data (Flew, 2018). In one survey (HarrisX, as cited in Hemphill, 2019, p. 243), 83% of respondents, caught up in the Techlash panic, believed that violations of user privacy should be punished more strictly by law. For the long-term, sustainable development of online platforms, therefore, the public concern about personal privacy that underlies Techlash has become an urgent issue to address.
The proliferation of hate speech
Secondly, the global Techlash has prompted reflection on online speech. Technology is developing faster than regulation can keep pace, leaving Internet platforms effectively uncontrollable and allowing hate speech to flourish. One of the most pressing issues for platforms is the sheer scale of moderation: more than 400 hours of video are uploaded to YouTube every minute, all of which require review (Flew et al., 2019). YouTube therefore relies first on moderation algorithms to screen illegal video content; only when user complaints reach a specified threshold does a video enter the manual moderation channel (Flew et al., 2019). This conceivably creates a breeding ground for malicious racists and sexists, who disguise sensitive content in jargon that evades algorithmic detection in order to spread hate speech.
Furthermore, there are fundamental problems in the mechanisms of online platforms that give those who spread hate a voice (Berners-Lee, 2019). As Baron Cohen (2019) puts it, Facebook is “the greatest propaganda machine in history”. Behind the platform’s algorithmic mechanism lies an ugly willingness to promote violence and spread hatred by any means necessary for profit. Facebook and other large media platforms rely on algorithms that mine browsing traces to work out what users like and then constantly push similar content to keep them engaged, including content that appeals to users’ baser instincts and provokes anger (Baron Cohen, 2019). That is why fake news spreads faster than the truth: the system keeps pushing users “the truth they want to see”. Users gain a sense of identity by watching ever more similar videos, remaining immersed in personal prejudice and hatred. As Mitroff and Storesund (2020) point out, every platform contains deliberate mechanisms through which hatred and lies can ferment worldwide with a single click, and several major media platforms, led by Facebook, are accomplices to these nasty comments. While the Internet brings convenience to life, then, it also gives opportunities to malicious actors.
In addition, governments, social organisations, and technology companies, under social pressure, have tried to solve these issues in various ways. The problem that must be faced is that private interests blind platforms to some of their own wrongdoing. These mega-companies employ some of the best engineers in the world and could, if they wished, build their own privacy-protection systems or anti-hate algorithms. They decline to do so for the sake of profit, because it is too expensive either to protect private data or to root out conspiracies (Baron Cohen, 2019). Selling user data, or spreading hate and anger, generates more revenue and more clicks. It is therefore urgent to change how the Internet is regulated in order to curb this kind of malicious profit-making. This article mainly discusses the feasibility of co-regulation.
Before co-regulation was implemented, calls for self-regulation by Internet institutions were louder, but its disadvantages were difficult to ignore. Self-regulation is passive: it improves matters only after mistakes have already occurred, which damages an institution’s reputation (Hemphill, 2019). Hemphill (2019) also points out that companies applying self-regulation suffer from business conflicts inherent in this type of regulation.
By contrast, co-regulation can solve some problems that self-regulation cannot (Flew, 2018). Co-regulation is a hybrid of state regulation and self-regulation, in which the regulatory system combines general legislation with self-regulatory agencies. This multi-stakeholder model makes regulation inclusive and able to satisfy more legitimate demands (Marsden, 2011). For example, while advertisers under this model commit to active self-regulation, they are also supervised by independent agencies, striking a good balance between different stakeholders (Marsden, 2011).
Co-regulation has drawbacks, however. Its legitimacy and representativeness are questionable, especially when the industry’s code of conduct is drafted under the umbrella of government agencies (Marsden, 2011). But that should not obscure the benefits of this regulatory approach. Co-regulation becomes particularly important when regulation must both serve the public interest and keep the government at arm’s length from the regulatory process (Flew, 2018). Because government and industry bodies are responsible only for oversight, regulated organisations can write their own general rules within the law (Flew, 2018). The advantage of co-regulation is that it encourages innovation without violating the law, and its flexibility allows solutions better suited to individual cases (Marsden, 2011). Despite its shortcomings, therefore, this approach is largely effective in preventing institutions from harming the public interest for private reasons, and can reduce both privacy leakage and the proliferation of hate speech.
In conclusion, behind Techlash lie the public’s concerns about privacy disclosure and the rampant spread of hate speech. Weak protection systems leave users’ information vulnerable to hackers, and still more troubling are the media platforms that actively collect users’ private data and sell it to third parties. Moreover, platforms lack sufficiently sensitive and humane algorithms for reviewing malicious content, and they need to modify the engagement-driven mechanisms that keep users immersed if the spread of hate is to be prevented. Finally, governments, civil society organisations, and tech companies are continually adjusting their approaches through negotiation, seeking through co-regulation to prevent institutions from fostering privacy leaks and hate speech for private gain. Although this method has its defects, it can solve these issues to some extent.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Anderson, S. (2018). Privacy, Artificial Intelligence and Techlash. Chips (Norfolk, Va.).
Baron Cohen, S. (2019, November 22). Read Sacha Baron Cohen’s scathing attack on Facebook in full: ‘greatest propaganda machine in history’. The Guardian. https://www.theguardian.com/international
Berners-Lee, T. (2019, March 12). 30 years on, what’s next #ForTheWeb? World Wide Web Foundation. https://webfoundation.org/2019/03/web-birthday-30/
Flew, T. (2018). Platforms on trial. Intermedia, 46(2), 24-29. http://www.iicom.org/intermedia/intermedia-july-2018/platforms-on-trial.
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Goghstarr [@GoghStarr]. (2021, October 14). Third crank call of today, crazy? [Tweet]. Twitter. https://twitter.com/goghstarr/status/1448648568521199626?s=21
Hemphill, T. A. (2019). “Techlash”, responsible innovation, and the self-regulatory organization. Journal of Responsible Innovation, 6(2), 240–247. https://doi.org/10.1080/23299460.2019.1602817
Irani, D., Webb, S., Pu, C., & Kang Li. (2011). Modeling Unintended Personal-Information Leakage from Multiple Online Social Networks. IEEE Internet Computing, 15(3), 13–19. https://doi.org/10.1109/MIC.2011.25
Major Study in U.S. and China Reveals Rising “Techlash”: Consumer worries about privacy & potential surveillance should prompt marketers to step up. (2019). NASDAQ OMX’s News Release Distribution Channel.
Marsden, C. T. (2011). Internet co-regulation: European law, regulatory governance and legitimacy in cyberspace. Cambridge University Press.
Mitroff, I. I., & Storesund, R. (2020). Techlash: The Future of the Socially Responsible Tech Organization. Springer International Publishing AG.
van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001