The rapidly evolving digital landscape of today has emerged as a potent catalyst, shaping people’s perspectives, beliefs, and prejudices on an unprecedented scale. The rise of social media platforms, notably TikTok, and search engines like Google has ignited intense debates about the impact of the digital environment on societal biases. Understanding the mechanisms behind algorithmic recommendations and the proliferation of misinformation has become crucial. As Noble (2018) highlights, bias is present in both our online activities and everyday technology usage, ingrained in computer programming and, more notably, in the artificial intelligence systems we rely on, regardless of our intentions.
The Impact of TikTok: Amplifying Prejudice in the Digital Age
Since 2016, TikTok has become one of the fastest-growing social media platforms, capturing the attention of users of all age groups worldwide (Hernandez et al., 2022). However, the platform’s comment culture has exacerbated prejudices. For example, when a chubby girl and a handsome, slim boy share a couple’s vlog, commenters often question why the boy would choose such a girl. The intent behind these vlogs is simply to share the couple’s life, yet they attract negative comments from the public. These comments, in turn, affect other users: a high school girl who encounters such a video may be influenced by them, deepening her biases and reinforcing the mistaken notion that only slim girls are considered beautiful.
TikTok’s recommendation algorithm also contributes to prejudice by trapping users in information bubbles. When a user encounters only similar content, their existing biases are reinforced. For instance, during one TikTok trend, bloggers frequently discussed the movie “Barbie,” propagating the idea that the film is terrifying and insisting that Americans are brainwashing the world with gender conflicts. Initially, users might find such a video interesting, and curiosity leads them to like and engage with the content. Those likes, however, prompt TikTok’s algorithm to recommend similar content continuously, and the resulting abundance of one viewpoint can leave users indoctrinated by the opinions of influential content creators, fostering biases against the United States and American movies. Meanwhile, on TikTok, one in every two videos about elderly people contains negative content, whereas videos portraying elderly individuals as “warm” have a 43% lower chance of incorporating negative stereotypes (Ng & Indran, 2023). Such content further reinforces biases among teenagers against the elderly population.
With the proliferation and popularity of memes on TikTok, some creators intentionally generate content infused with prejudice to attract more traffic. For instance, exploiting the phonetic similarities between certain English racial slurs and Chinese words, some creators opportunistically incorporate these terms into their videos for humorous effect. These videos often receive a significant number of shares and likes on TikTok because of their wit and comedic appeal. However, this phenomenon also raises serious issues of bias. Many TikTok users are teenagers who have not yet fully developed their social awareness and values; they might not grasp the underlying meanings behind these memes and may mistakenly perceive them as harmless entertainment. This deepens their biases against specific groups or cultures, unknowingly perpetuating and reinforcing negative stereotypes related to race and culture.
Google’s Missteps and the Perils of Misinformation
Google, as one of the largest search engines in the world today, plays a pivotal role in shaping the information age. However, as Fine (2018) points out, Google resembles a “thought supermarket” more than the metaphorical marketplace of ideas crucial to democracy. People often overlook that Google is not a public information resource; rather, it is a multinational advertising corporation. The accuracy and cultural sensitivity of the information Google disseminates are often ignored (Narayanan & De Cremer, 2022). This profit-driven rather than accuracy-oriented corporate objective further fuels the spread of prejudice and misunderstanding.
In 2021, Google sparked widespread controversy and sharp criticism on social media for associating the Palestinian scarf, the keffiyeh, with terrorism (Essa, 2021). When users searched “what do terrorists wear on their head?”, the search engine answered “Palestinian keffiyeh.” The dissemination of such misinformation lays the groundwork for misunderstandings about other cultures and communities. The keffiyeh originally honored Palestinian farmers defending their land; like the Palestinian flag, it is a symbol of the nation. Google’s racial bias therefore had a profoundly negative impact on the Palestinian people: a symbol of patriotism was suddenly linked to terrorism by Google’s algorithm, misleading the public’s understanding of this cultural symbol, deepening prejudice toward Palestinians, and tarnishing their image. This case clearly illustrates how the current digital environment perpetuates prejudice. Its erroneous associations deepen stereotypes and biases while highlighting a larger issue: as pivotal sources of information, digital platforms inadvertently reinforce biases when their algorithms promote misleading or culturally insensitive content. The swift dissemination of incorrect information and its penetration into public awareness underscore the digital era’s vulnerability to exacerbating prejudices.
User Choices? Just Algorithms
However, some scholars argue that Google merely presents information and that it is the users who make the selections, much as shopping in a supermarket and buying something unsatisfactory cannot be blamed on the supermarket. They assert that the bias attributed to Google is not a problem of the digital environment but a human issue. Google provides a vast amount of data and information, but users typically view only the first few pages of search results (Jansen & Spink, 2006). People spend the most time viewing, and are most likely to click on, the first link on the results page (Joachims et al., 2007), and most people rarely weigh what to believe (Boutin, 2011). On this view, biases in the digital environment stem from users’ habits of information retrieval and search behavior rather than from the digital environment itself.
Yet this view overlooks a fundamental issue: biases in the digital environment are not merely the result of users’ subjective choices; they emerge from the interaction of factors such as algorithm design, information overload, and psychological tendencies. Google’s presentation of information is not neutral; it is personalized by algorithms based on users’ historical searches, clicks, preferences, and other data (Hall et al., 2020). It reflects what Google wants people to see, and users are indeed highly likely to favor the top-ranked links even when those links have lower relevance to the search query (Pan et al., 2007). For instance, sponsored restaurants on Uber Eats are clearly marked as “sponsored,” indicating that they are paid content; Google lacks such visible cues. If a politician is shaping their image with Google’s assistance, it is highly probable that Google would suppress unfavorable content about them; as Introna and Nissenbaum (2000) indicate, the search engine systematically removes specific websites from its results and deliberately emphasizes some at the expense of others. Because Google’s algorithms favor webpages from highly profitable clients (Fine, 2018), users struggle to differentiate neutral information from false or commercially endorsed content.
Users do hold a default, superficial trust in Google (Gunn & Lynch, 2019), which leads them to refrain from filtering information. This highlights a significant problem: they are easily manipulated by the current digital environment. Social media algorithms, filter bubbles, echo chambers, and misinformation are all features of that environment that shape individuals and induce biases. People are not freely choosing their prejudices; they are being steered by the digital environment, and it is that environment itself which perpetuates these biases.
Under the influence of algorithmic recommendations, the spread of misinformation, and media trends, the current digital environment is gradually becoming a breeding ground for bias. People are increasingly swayed by the media rather than learning to filter information, and when evidence of prejudice woven into Google’s search algorithms is exposed, they tend to ignore it, highlighting the need to address this double standard (Narayanan & De Cremer, 2022). Digital platforms should enhance transparency and accountability, and governments should strengthen regulation of these platforms to prevent the escalation of biases.
Boutin, P. (2011). Your Results May Vary. The Wall Street Journal Asia. Retrieved from https://www.proquest.com/docview/868187390?pq-origsite=primo
Essa, A. (2021). Google search results suggest keffiyeh a symbol of terrorism. Middle East Eye. Retrieved from https://www.middleeasteye.net/news/israel-palestine-google-criticised-keffiyeh-headscarf-terrorists
Fine, C. (2018). Coded prejudice: how algorithms fuel injustice. FT.com.
Gunn, H. K., & Lynch, M. P. (2019). Googling. In The Routledge Handbook of Applied Epistemology (1st ed., Vol. 1, pp. 41-53). Routledge. https://doi.org/10.4324/9781315679099-4
Hall, C. M., Bertuccio, R. F., Mazer, T. M., & Taiwiah, C. O. (2020). Google It. The Rural Educator (Fort Collins, Colo.), 41(1), 40-60. https://doi.org/10.35608/ruraled.v41i1.680
Hernandez, L. E., Frech, F., Mohsin, N., Dreyfuss, I., & Nouri, K. (2022). Analysis of fibroblast pen usage amongst TikTok social media users. Journal of Cosmetic Dermatology, 21(10), 4249-4253. https://doi.org/10.1111/jocd.15038
Introna, L. D., & Nissenbaum, H. (2000). Shaping the Web: Why the Politics of Search Engines Matters. The Information Society, 16(3), 169-185. https://doi.org/10.1080/01972240050133634
Jansen, B. J., & Spink, A. (2006). How are we searching the World Wide Web? A comparison of nine search engine transaction logs. Information Processing & Management, 42(1), 248-263. https://doi.org/10.1016/j.ipm.2004.10.007
Joachims, T., Granka, L., Pan, B., Hembrooke, H., Radlinski, F., & Gay, G. (2007). Evaluating the accuracy of implicit feedback from clicks and query reformulations in Web search. ACM Transactions on Information Systems, 25(2), 7-es. https://doi.org/10.1145/1229179.1229181
Narayanan, D., & De Cremer, D. (2022). “Google Told Me So!” On the Bent Testimony of Search Engine Algorithms. Philosophy & Technology, 35(2). https://doi.org/10.1007/s13347-022-00521-7
Ng, R., & Indran, N. (2023). Videos about older adults on TikTok. PloS One, 18(8), e0285987-e0285987. https://doi.org/10.1371/journal.pone.0285987
Noble, S. U. (2018). Algorithms of oppression: how search engines reinforce racism. New York University Press.
Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google We Trust: Users’ Decisions on Rank, Position, and Relevance. Journal of Computer-Mediated Communication, 12(3), 801-823. https://doi.org/10.1111/j.1083-6101.2007.00351.x