The Hidden Leverage of Governments and Big Capitals
In the age of digitization, power dynamics have undergone a seismic shift, with big data algorithms emerging as the new puppet masters of public sentiment. Governments and corporations, leveraging their vast resources, are no longer mere observers of public emotion; they are its sculptors, molding feelings to fit distinct narratives. With the explosion of data, a subtle, almost imperceptible undercurrent of manipulation runs through our daily digital interactions, with profound implications for everything from war sentiment to consumption habits. Futami et al. (2022) delved into how artificial intelligence can be deliberately biased in emotion estimation, exploiting cognitive biases to subtly alter impressions of others either positively or negatively. This lends credence to the assertion that not only are our perceptions being steered, but the very fabric of our interpersonal judgments is susceptible to algorithmic influence.
Further illuminating this realm, Agudo, Matute, and Jankowski (2021) illustrated how algorithms can sway pivotal decisions in our lives, notably in political spheres and even personal relationships. It is no longer just about targeted advertisements; it is about influencing core decisions and beliefs. The extent of this manipulation is explored in depth in Woolley and Howard's (2018) work on computational propaganda, which highlights how political entities exploit the synergy between politics and algorithmic manipulation on social media platforms.
Our central argument is that both governments and large corporations possess the capability to use big data algorithms to manipulate public emotions across various contexts, including warfare, international relations, fashion trends, and consumer behavior, and that this manipulation serves their own vested interests. However, it is essential to recognize that algorithms are themselves neutral tools whose outcomes are molded by human decisions. Instead of condemning these technological innovations, we should champion a clear, principled, and inclusive framework for their use.
The rise of Surveillance Capitalism, as pioneered by tech giants such as Facebook and Google, offers a disturbing glimpse into how big data algorithms are meticulously crafted to manipulate people’s emotions, a strategy that goes beyond mere consumerism and ventures into the realm of social and political control.
Prominent tech corporations have not merely devised mechanisms to accumulate vast amounts of data; they have perfected the art of converting this data into predictive behavioral models (Lauer & Lipartito, 2021). At face value, the aim appears benign: to enhance user experience through tailored content. Yet beneath the surface lies a more insidious goal. These mechanisms not only raise concerns over privacy but also underscore a broader ethos in which users are no longer passive audiences but are actively pursued as profitable ventures.
Lauer and Lipartito (2021) provide insights into the emergence of what they term the “Employer-Surveillance State.” In this framework, the objective is twofold: not only to monitor users but to leverage their data strategically. By decoding and predicting behavioral patterns, preferences, and vulnerabilities, corporations, and by extension, vested governmental interests, have the potential to channel human emotions and behavior toward specific, often profit-driven or politically advantageous outcomes.
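The kind of predictive profiling described above can be caricatured in a few lines of code. The sketch below is a deliberately toy model, not any platform's actual method: it tallies a user's past engagement by emotion tag and estimates how strongly content framed around a given emotion would "resonate" with them. All names and data are invented for illustration.

```python
from collections import Counter

def build_profile(interactions):
    """Build a toy behavioral profile from past interactions.

    interactions: list of (emotion_tag, weight) pairs, e.g. ("fear", 2)
    meaning the user engaged with fear-framed content with weight 2.
    """
    profile = Counter()
    for emotion, weight in interactions:
        profile[emotion] += weight
    return profile

def predicted_resonance(profile, emotion):
    """Estimate engagement with `emotion`-framed content as that
    emotion's share of the user's total past engagement."""
    total = sum(profile.values())
    if total == 0:
        return 0.0
    return profile[emotion] / total

# Hypothetical interaction history for one user.
history = [("fear", 3), ("pride", 1), ("fear", 2), ("hope", 1)]
profile = build_profile(history)
print(predicted_resonance(profile, "fear"))  # 5 of 7 weighted interactions were fear-tagged
```

Even this crude share-of-engagement heuristic illustrates the asymmetry at stake: the user volunteers nothing, yet the profile predicts which emotional framing is most likely to move them.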
It’s not merely about the vast data pools but the sophisticated analytical prowess that’s employed. As Lauer and Lipartito (2021) suggest, the shift from traditional surveillance capitalism marks a transition where individuals are not just observed but intricately dissected and influenced. Each user becomes a nexus of emotional, social, and political leverage, which, when wielded effectively, can serve the dual masters of profit and power. Such powerful interplay between big data and human emotion emphasizes the pressing need for stringent ethical considerations, transparency, and regulatory oversight. The very fabric of democracy and individual agency might hinge on how societies navigate this intricate web of surveillance, emotional manipulation, and power dynamics.
Political Influence Campaigns
In the modern digital age, the nexus between big data algorithms and political influence campaigns reveals a strategic recalibration of power dynamics, with emotional manipulation a central tool in the arsenal of governments and major capital entities. In their seminal work, “Affective Politics of Digital Media: Propaganda by Other Means,” Boler and Davis (2020) shed light on the intricate machinery behind these campaigns. No longer confined to overt propaganda, the political landscape has evolved to harness the potency of digital media, deploying subtler, yet more profound, tactics of emotional manipulation.
Central to this discourse is the understanding that emotions are not merely by-products of political messaging but are, in fact, the primary targets. Big data algorithms, with their unparalleled ability to dissect and predict human behavior, offer political entities a precision tool. By analyzing user data, these algorithms can craft messages tailored to resonate with specific emotions, be it fear, pride, anger, or hope, effectively creating an emotionally charged political atmosphere (Boler & Davis, 2020). The intent isn’t merely to inform but to evoke, to stir, and to direct emotional currents in ways that serve specific political and capital interests.
Furthermore, as Boler and Davis (2020) postulate, the digital realm’s “affective politics” marks a shift from traditional propaganda models. Instead of broad-brush narratives, there is a targeted approach capitalizing on the granular insights provided by big data. Each digital interaction, whether a like, a share, or a comment, becomes a data point, feeding into a larger system that refines and retunes political messages for maximum emotional impact.
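This refine-and-retune loop resembles, in miniature, a multi-armed bandit: each emotional framing of a message is a candidate, each like or share is a reward, and the system gradually shifts traffic toward whichever framing provokes the most reaction. The sketch below is a simplified illustration using epsilon-greedy selection, not a description of any platform's actual system; the framings and engagement rates are entirely hypothetical.

```python
import random

class FramingOptimizer:
    """Epsilon-greedy selection among emotional framings of one message."""

    def __init__(self, framings, epsilon=0.1, rng=None):
        self.estimates = {f: 0.0 for f in framings}  # running engagement rates
        self.counts = {f: 0 for f in framings}       # times each framing shown
        self.epsilon = epsilon
        self.rng = rng or random.Random(0)

    def choose(self):
        # Mostly exploit the framing with the best engagement estimate,
        # occasionally explore an alternative.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)

    def record(self, framing, engaged):
        # Incremental running average of engagement for this framing.
        self.counts[framing] += 1
        reward = 1.0 if engaged else 0.0
        self.estimates[framing] += (reward - self.estimates[framing]) / self.counts[framing]

# Hypothetical simulation: suppose the "anger" framing truly engages 60%
# of viewers and the others 20% (rates invented for illustration).
opt = FramingOptimizer(["fear", "pride", "anger"])
true_rates = {"fear": 0.2, "pride": 0.2, "anger": 0.6}
rng = random.Random(1)
for _ in range(2000):
    f = opt.choose()
    opt.record(f, rng.random() < true_rates[f])

print(max(opt.estimates, key=opt.estimates.get))  # typically converges on "anger"
```

The unsettling property of such a loop is that no one has to decide in advance that anger works best; the audience's own reactions steer the system there.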
Ultimately, the convergence of big data algorithms and political campaigns underscores a pivotal transformation in the art of governance and influence. In this era, understanding and manipulating emotions through digital media isn’t just an advantage—it’s a necessity. The onus rests on regulatory bodies, tech platforms, and civil society to recognize and navigate these murky waters, ensuring that democratic ideals aren’t drowned in a sea of algorithmically curated emotional waves.
Are big data algorithms inherently negative in politics and businesses?
Artificial Intelligence Transforming Healthcare Landscape by Mukul is licensed under CC BY-SA 2.0.
While the discourse on big data algorithms often underscores their potential for manipulation, particularly in political and business arenas, it is essential to consider a more nuanced perspective. Not all applications of these algorithms serve divisive or profit-driven ends; they also play pivotal roles in optimizing services, streamlining processes, and fostering innovation.
Twitter offers a telling example: the majority of political campaigns across the spectrum strategically harness social media platforms to maximize visibility and amplify their messages. The algorithmic dynamics that boost the online presence of political entities are not intrinsically problematic. Instead, the crux of the issue lies in the hands that wield them and the methodologies they employ. The technology serves as a neutral tool; its ethical implications stem from its application by various actors.
Haggart, Tusikov, and Scholte (2021) delve into the intricate interplay of power and authority in internet governance. While the state’s influence is undeniable, algorithms, as instruments, can be wielded for various purposes. A centralized government might use these algorithms to enhance public service delivery or to foster more informed citizen engagement. Their inherently neutral character becomes contextually positive or negative based on the intent of the wielder.
Further, Flew, Martin, and Suzor (2019) argue for a re-evaluation of digital communication platform governance. The underlying assumption here is that algorithms and the platforms they operate on can be shaped, regulated, and made transparent. In the business world, algorithms drive innovations in market research, product development, and customer service. By parsing vast datasets, businesses can better understand consumer needs, refine products, and even address societal challenges, from healthcare to environmental sustainability.
Roberts (2019), in her exploration of commercial content moderation, highlights another often overlooked dimension: the human effort behind algorithmic operations. Big data algorithms do not function in isolation; they are consistently refined and directed by human expertise. This interplay suggests the potential for ethical and intentional use of algorithms to curate content that fosters positive community building and counters disinformation, rather than amplifying echo chambers.
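Roberts's point about the human labor behind algorithmic operations can be illustrated with a toy triage pipeline: an automated classifier acts only on clear-cut cases, and everything in the uncertain middle band is routed to a human review queue. The word list, scoring function, and thresholds below are all invented for illustration; real moderation systems use trained classifiers, not keyword counts.

```python
# Hypothetical markers of disinformation, for illustration only.
FLAG_WORDS = {"scam", "hoax"}

def risk_score(text):
    """Crude stand-in for a trained classifier: the fraction of
    words in the text that match a flagged-word list."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAG_WORDS for w in words) / len(words)

def triage(text, auto_remove=0.5, auto_allow=0.1):
    """Automate only the extremes; defer uncertain cases to humans."""
    score = risk_score(text)
    if score >= auto_remove:
        return "remove"        # confident enough to act automatically
    if score <= auto_allow:
        return "allow"
    return "human_review"      # uncertain: a human moderator decides

posts = ["total scam hoax", "lovely weather today",
         "this scam story spreads fast and wide"]
queue = [t for t in posts if triage(t) == "human_review"]
print(queue)  # only the ambiguous middle case reaches human reviewers
```

Even in this caricature, the design choice Roberts documents is visible: the thresholds that decide what humans never see are themselves human decisions.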
Undoubtedly, while algorithms carry risks of exploitation, their role as essential instruments in optimizing services, spurring innovation, and promoting civic participation cannot be overlooked. The core issue is not the inherent nature of the algorithms but their deployment and oversight. Haggart, Tusikov, and Scholte (2021) underscore that even as state control seems to intensify, the inherent structure of the internet defies absolute centralization, suggesting a complex interplay of power. The apprehensions linked to surveillance capitalism and the steering of political narratives, although legitimate, demand a more discerning dialogue on big data algorithms. It is crucial to understand algorithms as neutral mechanisms whose ramifications are shaped by human directives. Hence, the imperative is not to vilify these technological advancements but to advocate for a transparent, ethical, and participatory framework guiding their use.
References
Agudo, U., Matute, H., & Jankowski, J. (2021). The influence of algorithms on political and dating decisions. PloS One, 16(4), e0249454. https://doi.org/10.1371/journal.pone.0249454
Boler, M., & Davis, E. (2020). Affective Politics of Digital Media: Propaganda by Other Means (1st ed.). Routledge. https://doi.org/10.4324/9781003052272
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50. https://doi.org/10.1386/jdmp.10.1.33_1
Futami, K., Yanase, S., Murao, K., & Terada, T. (2022). Unconscious other’s impression changer: A method to manipulate cognitive biases that subtly change others’ impressions positively/negatively by making AI bias in emotion estimation AI. Sensors (Basel, Switzerland), 22(24), 9961. https://doi.org/10.3390/s22249961
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029
Lauer, J., & Lipartito, K. (2021). Surveillance capitalism in America. University of Pennsylvania Press. https://doi.org/10.9783/9780812299946
Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the screen: Content moderation in the shadows of social media (pp. 33-72). Yale University Press. https://doi.org/10.12987/9780300245318
Woolley, S. C., & Howard, P. N. (2018). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford University Press. https://doi.org/10.1093/oso/9780190931407.001.0001