The consequences of Silicon Valley’s diversity problem

As in many other corporate industries, it is the straight, neurotypical, Caucasian male from a middle- to upper-class background who dominates positions within Silicon Valley. The lack of diversity in the tech sector has long been an issue, and marginalised employees have repeatedly reported feeling “uncomfortable” in their roles because of their gender, ethnicity, socioeconomic background or neurodevelopmental condition (mthree, 2021). Allowing this lack of diversity to continue among the people who create the programs everyone interacts with harms already marginalised groups. This essay explores the extent of this diversity problem’s influence, and how particular racial groups are harmed, both individually and as communities, by this ongoing problem.

Far-Reaching Impact

Because the biases entering algorithms are not diversified, the development of the internet has been deeply shaped by them, and the ramifications can be felt by anyone who does not fit the demographic Silicon Valley has long favoured: the Caucasian, middle-class male. In content recommendation, such as personalised content feeds, and in information ranking, as seen in search engines like Google, algorithms are “at the core of processes” that personalise and curate the information visible to the user (Poulain & Tarissan, 2020). These algorithms are programmed by humans, and as such, “algorithms are encoded with human biases” (Cook, 2020, p.55). Lee (2018) argues that even if developers attempt to mitigate or completely remove any discriminatory or prejudicial intent from their work, their implicit and unconscious biases will still inevitably be embedded in the algorithmic design. The reach of such bias is not limited to familiar social media platforms: an algorithm used by US court systems to predict a defendant’s likelihood of recidivism was found to falsely flag Black offenders as future recidivists at nearly twice the rate of White offenders (Angwin et al., 2016). The impact of this racial bias is also seen in health, where algorithms used by physicians to guide patient care have resulted in Black people receiving inferior care (Vyas et al., 2020).

If the implicit and unconscious biases individuals hold about race, age and appearance inevitably enter the algorithms they create, and one demographic predominantly produces these algorithms for global use, then technological algorithms as a whole are dominated by that demographic’s biases. Cook (2020) states that the result of having this one demographic behind these global systems is “the assumption of white, male dominance and the proliferation of racial and gendered stereotypes”. Ultimately, anyone who does not fit this demographic is harmed in some way, physically in the healthcare system or emotionally through stereotypes, because they were not involved in the algorithmic processes. Racial biases are particularly prominent, but biases based on sexual orientation or identity, disability, income or gender also exist (Cook, 2020). The lack of diversity thus affects the development of algorithms, and subsequently the internet, as only one set of biases is inserted into these algorithms, ultimately causing much of the online world to be dominated by them.
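One route by which such bias enters a system can be made concrete with a short simulation: label bias in training data. The sketch below is a toy model with invented rates, not a reconstruction of any deployed system such as the court algorithm above; it simply shows that two groups with identical behaviour can receive different learned “risk” scores when one group’s outcomes are recorded more often.

```python
# A minimal sketch of label bias: two groups reoffend at the same true rate,
# but group B's reoffences are recorded more often (e.g. heavier policing).
# A model fit to the recorded labels then scores group B as higher risk.
# All rates here are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
true_reoffend = rng.random(n) < 0.30     # identical 30% true rate for both groups

# Assumed enforcement gap: reoffences are recorded 50% of the time for
# group A, but 90% of the time for group B.
record_prob = np.where(group == 1, 0.9, 0.5)
recorded = true_reoffend & (rng.random(n) < record_prob)

# The simplest possible "model": each group's recorded rate becomes its risk score.
for g, name in [(0, "A"), (1, "B")]:
    m = group == g
    print(f"group {name}: true rate {true_reoffend[m].mean():.1%}, "
          f"learned risk score {recorded[m].mean():.1%}")
# Prints roughly 15% for group A and 27% for group B: equal behaviour,
# unequal scores, with no explicit prejudice anywhere in the code.
```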

Health

The racial bias embedded in algorithms across technologies, a product of the tech sector’s lack of diversity, can cause physical harm to specific racial groups. Because of the accessibility of search engines like Google, it is increasingly common for people to seek medical advice online to identify health conditions and determine whether they require medical treatment (Cross et al., 2021). However, because online content about health conditions lacks ethnic diversity, a reflection of the racial bias within these search engine algorithms, the health of minority individuals seeking this information may be at risk. Loeb et al. (2022) found that content about prostate cancer on Google and YouTube had minimal Black and Latinx representation, and that no content was available for these groups that was “high quality, understandable… and at the recommended reading level”. If there is a discrepancy between the medical information available to Caucasian individuals and to minority ethnic groups, this not only harms individuals’ health, since they cannot access the same health information, but also widens the disparities that already exist between these communities. On the treatment side, US hospitals rely on algorithms to manage the care of about 200 million people annually (Ledford, 2019).

“Inauguración del Hospital Municipal de Chiconcuac” by Presidencia de la República Mexicana is licensed under CC BY 2.0.

Obermeyer et al. (2019) found racial bias in a widely used medical algorithm: Black patients assigned the same level of risk as White patients were in fact sicker and required additional care. Because the algorithm uses healthcare costs as a proxy for health needs, and less money is spent on Black patients, it falsely concludes that Black patients are healthier than equally sick White patients (Obermeyer et al., 2019). These algorithms are therefore systematically discriminating against an entire demographic through the racial bias built into the algorithm itself. If these algorithms reflect subconscious racial biases, with Black patients assigned lower risk scores than their actual health warrants, then the healthcare system itself is implicated. Therefore, if Black people seeking medical advice on search engines cannot access appropriate health information, and medical professionals rely on racially biased algorithms to treat their patients, there is systematic discrimination harming the health of an entire demographic, from the individual to the population level.
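The mechanism Obermeyer et al. (2019) identify, cost standing in as a proxy for need, can be illustrated in a few lines. The sketch below is not the audited algorithm; the population, the 20% spending gap and the enrolment threshold are invented assumptions chosen only to show how a cost-calibrated score under-serves a group that generates less cost at the same level of illness.

```python
# A toy illustration of cost-as-proxy bias (Obermeyer et al., 2019).
# Two groups have identical illness burdens, but one generates less cost
# at the same need. A score calibrated to cost then under-enrols that group.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)             # latent illness burden, same for both groups

# Assumed access gap: group B generates ~20% less cost at the same need.
cost = need * np.where(group == 1, 0.8, 1.0) + rng.normal(0, 5, n)

# A well-fit cost model approximates cost itself, so use cost as the risk score.
risk_score = cost

# Enrol the top 10% of scores into a care-management programme.
enrolled = risk_score >= np.quantile(risk_score, 0.90)

for g, name in [(0, "A"), (1, "B")]:
    m = group == g
    print(f"group {name}: mean need {need[m].mean():.1f}, "
          f"enrolled {enrolled[m].mean():.1%}, "
          f"mean need of enrolled {need[m & enrolled].mean():.1f}")
# Group B is enrolled less often, and its enrolled patients are sicker than
# group A's: the same pattern Obermeyer et al. observed at equal risk scores.
```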

Systemic Racism

“Twitter” by chriscorneschi is licensed under CC BY-SA 2.0.

Noble (2018) suggests that the continued racial bias within technological algorithms produces what she terms ‘technological redlining’: the practice whereby algorithms and decisions that “reinforce oppressive social relationships” enable new modes of racial profiling of already marginalised groups (Noble, 2018, p.1). This redlining reinforces racist stereotypes that harm populations which have historically endured these forms of discrimination. It was seen in 2015, when Google’s image-recognition algorithm auto-tagged pictures of Black people as “gorillas” (Kasperkevic, 2015). More recently, Yee et al. (2021) found that Twitter’s automated image-cropping algorithm, which crops an image around the region its model deems most salient, favoured light-skinned over dark-skinned individuals. Nor is this bias limited to images: Caliskan et al. (2017) found that common machine-learning programs trained on ordinary human language associated African American names with “unpleasantness” more often than European American names. If marginalised groups continue to be the victims of algorithmic racial bias, and minority groups are, as Noble (2018, p.35) states, “problematically represented in stereotypical… ways” because the missing social context allows the racial bias to persist, then the lack of diversity in the tech sector will continue to harm these populations as these discriminatory ideals extend beyond the Internet. These algorithmic biases explicitly and implicitly harm racial groups and lead to further discrimination, because the prejudiced beliefs people already hold are reinforced by their interactions with these algorithms (Lee, 2018). Ultimately, the lack of diversity within the tech sector creates racial biases within algorithms that harm non-White individuals, as these biases proliferate when people engage with the biased algorithms and perpetuate them beyond online platforms.
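The statistic behind the Caliskan et al. (2017) finding, the Word Embedding Association Test (WEAT), is compact enough to show directly. In the sketch below the per-word WEAT statistic is real, but the vectors are random placeholders with a deliberate built-in offset so the script runs stand-alone; the actual study measured pretrained embeddings such as GloVe, learned from ordinary web text.

```python
# The per-word WEAT statistic from Caliskan et al. (2017): a word's mean
# cosine similarity to "pleasant" attribute words minus its mean similarity
# to "unpleasant" ones. The placeholder vectors below bake the association
# in by construction; real tests use pretrained embeddings (e.g. GloVe).
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, pleasant, unpleasant):
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))

rng = np.random.default_rng(1)
dim = 50
pleasant = [rng.normal(0, 1, dim) + 0.5 for _ in range(5)]    # placeholder attribute vectors
unpleasant = [rng.normal(0, 1, dim) - 0.5 for _ in range(5)]

# Placeholder "name" vectors nudged toward each attribute pole, mimicking the
# associations Caliskan et al. measured for European American vs African
# American names in embeddings trained on web text.
name_a = rng.normal(0, 1, dim) + 0.3
name_b = rng.normal(0, 1, dim) - 0.3

print(f"name_a association: {association(name_a, pleasant, unpleasant):+.3f}")  # positive
print(f"name_b association: {association(name_b, pleasant, unpleasant):+.3f}")  # negative
```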

Conclusion

The impact of Silicon Valley’s lack of diversity is far-reaching, with its racial bias affecting health systems, the criminal justice system, and local communities. Anyone who is not a Caucasian male stands to be harmed, as the implicit and subconscious biases of a homogeneous workforce seep into the algorithms that make up the Internet. These biases both reflect and contribute to the systemic racism that continues to harm marginalised communities, particularly African-American populations, by reinforcing pre-existing prejudices through the algorithms that workforce creates.


Reference List

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Caliskan, A., Bryson, J.J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183-186. https://doi.org/10.1126/science.aal4230

Cook, K. (2020). The Psychology of Silicon Valley. Springer Nature.

Cross, S., Mourad, A., Zuccon, G., & Koopman, B. (2021). Search Engines vs. Symptom Checkers: A Comparison of their Effectiveness for Online Health Advice. WWW ’21: Proceedings of the Web Conference 2021, 4(1), 206-216. https://doi.org/10.1145/3442381.3450140

Kasperkevic, J. (2015). Google says sorry for racist auto-tag in photo app. The Guardian. https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app

Ledford, H. (2019). Millions of black people affected by racial bias in health-care algorithms. Nature. https://www.nature.com/articles/d41586-019-03228-6

Lee, N. T. (2018). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society, 16(3), 252-260. https://doi.org/10.1108/JICES-06-2018-0056

mthree. (2021). Diversity in Tech: 2021 US report. Wiley. https://www.wiley.com/edge/site/assets/files/2689/diversity_in_tech_2021_us_report_by_mthree.pdf

Noble, S.U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342

Poulain, R., & Tarissan, F. (2020). Investigating the lack of diversity in user behavior: The case of musical content on online platforms. Information Processing & Management, 57(2), 45-59. https://doi.org/10.1016/j.ipm.2019.102169

Vyas, D.A., Eisenstein, L.G., & Jones, D.S. (2020). Hidden in Plain Sight – Reconsidering the Use of Race Correction in Clinical Algorithms. The New England Journal of Medicine, 383(1), 874-882. https://doi.org/10.1056/NEJMms2004740

Yee, K., Tantipongpipat, U., & Mishra, S. (2021). Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design and Agency. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-24. https://doi.org/10.1145/3479594

Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. (2017). Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints [Paper presentation]. 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark. https://doi.org/10.48550/arXiv.1707.09457