
The development of the internet
Our daily lives revolve around the internet and technology; whether we are students, office workers, or educators, digital technologies have become increasingly pervasive in our routines. With that come concerns about privacy, surveillance, fake news, cyberbullying, and the online violence experienced by users, especially in marginalized communities. The lack of diversity in the internet’s early development ushered biases and prejudices into systems that can potentially harm societies. Lusoli and Turner (2020) drew attention to the emergence of “bro culture” in Silicon Valley, describing how workplaces in tech firms are predominantly occupied by middle- to upper-class white males. Margaret Mitchell, a researcher at Microsoft, described the tech field as a “sea of dudes” in which most of the computer scientists designing AI systems are affluent white males (Clark, 2016). A survey by Fortune revealed that women account for, on average, only about a third of the workforce at the top nine tech companies in Silicon Valley, and hold only 29% of leadership roles in these organizations (Marcus, 2015). This lack of diversity and representation among employees in the sector results in problematic and biased systems that many people access and use daily.

Tim Berners-Lee, the creator of the World Wide Web, voiced his concern that the web we have today has failed to serve humanity and instead “ended up producing a large-scale emergent phenomenon which is anti-human” (Brooker, 2018, para. 6). Utopian visions held that the internet and technology would enable free speech, overcome barriers, and organize people around common interests despite individual differences (Novak et al., 1998). In reality, however, the lack of inclusion in their development induces skewed perspectives of the world around us: the data collected are not representative of the general population, yet they are fed into algorithms, artificial intelligence, and machine learning. The internet of the 21st century contradicts the notion of an open space for freedom of speech and knowledge, since the networked sphere creates “filter bubbles” that restrict users’ exposure to diverse views for collective sharing and understanding (Abbate, 2017). The development of these digital technologies is rather binary, defined by inclusion and exclusion, and often reflects the interests of those in power (Abbate, 2017). Hence, the information infrastructure and the decision-making built on it fail to be inclusive.
The danger of biased data to societies and individuals
According to Schatsky et al. (n.d.), machine learning is an AI technology that can automatically “identify patterns and detect anomalies in the data” without human intervention. Algorithms help filter content and connect users to content, services, and advertisements (van Dijck et al., 2018). In that sense, machine learning uses algorithms to imitate human behavior and form decisions without human interference; however, the accuracy of such models needs reconsidering, given the lack of diversity among the people training these algorithms. Gillespie (2014) stated that algorithms determine how people participate in social and political discourse on the internet and govern the flow of information. Algorithms can be expected to carry bias because people actively decide what data to collect from users, and these machine-learning experts are primarily male (Gillespie, 2014). The lack of good-quality, inclusive data therefore makes machine learning flawed and unreliable, as the sketch below illustrates.
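To make the mechanism concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. It is not code from any system discussed in this essay; the groups, proportions, and labeling rule are invented purely to show how a model trained on an unrepresentative sample fails the group the data underrepresents.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Synthetic group: the label follows the sign of the first feature.
    flip=True reverses the rule, standing in for a subpopulation whose
    pattern the skewed training data barely covers."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y if flip else y)

# Skewed training sample: 95% group A, 5% group B.
Xa, ya = make_group(950, flip=False)
Xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Balanced held-out sets: the model serves A well and fails B badly.
for name, flip in [("A (majority)", False), ("B (minority)", True)]:
    X, y = make_group(1000, flip)
    print(f"group {name}: accuracy = {model.score(X, y):.2f}")
```

The model reports high accuracy overall because the majority group dominates the data, yet it is close to useless for the minority group; a single pooled accuracy number would hide this entirely.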
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
When models are trained on poor data, they create a dangerous and malicious space for users, in which minorities in particular are most affected. In 2016, Microsoft launched its self-learning Twitter AI chatbot TayTweets (@TayandYou), which gained considerable attention and backlash (Vincent, 2016). The female AI bot was designed to mimic and learn the conversational style of other Twitter users (Kraft, 2016). However, a creation that was supposed to be revolutionary quickly descended into a disturbing stream of racist and misogynistic conversations between the chatbot and Twitter users (Vincent, 2016). The incident demonstrates how machine learning and artificial intelligence systems that turn input data into decisions can fail. The issue lies in the training data: Tay learned from input that reflected existing human prejudices, with no safeguards against absorbing that bias, as the toy sketch below illustrates.
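The following toy sketch, with an invented class and no relation to Microsoft’s actual code, shows the failure mode in miniature: a bot that adds every user utterance to its response pool, without any filtering, will eventually reproduce whatever it is fed.

```python
import random

class EchoLearner:
    """Hypothetical toy bot: 'learns' by storing every message it receives."""

    def __init__(self):
        self.pool = ["humans are super cool"]  # friendly seed phrase

    def chat(self, user_message: str) -> str:
        self.pool.append(user_message)   # learn with no content filtering
        return random.choice(self.pool)  # reply with anything ever seen

bot = EchoLearner()
for message in ["hello!", "<hostile message>", "<hostile message>"]:
    bot.chat(message)

# Hostile input now outnumbers the friendly seed 2:1, so most replies
# will repeat it: the pool simply mirrors whatever users supplied.
print(bot.pool)
```

Real chatbots are vastly more sophisticated, but the lesson holds: a system that learns from users inherits users’ biases unless its training pipeline filters them.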
As the reach of the internet and machine learning grows, will these technologies amplify human lives or destroy humanity? The promise that they will free people from oppression and improve productivity is somewhat fictional. The business model of the big tech companies aims to monopolize the industry and extract maximum profit from users by generating ever more engagement (van Dijck et al., 2018). When AI and machine learning are applied to real-world domains such as criminal justice and healthcare, these systems neglect nondominant groups. The National Digital Inclusion Alliance (n.d.) defines digital redlining as discrimination by internet service providers, in how they deliver and maintain services, against people of particular regions, incomes, races, or ethnicities. It is evident that certain individuals are excluded from the production of the internet and AI even as these technologies integrate into the real world to make predictions and decisions about actual people.
Risk assessment instruments predict a defendant’s future misconduct, and the resulting score is used in the courtroom as a factor during criminal sentencing (Chohlas-Wood, 2020). However, machine bias against people of color skews the scores of black defendants, who are predicted to be far more likely to commit a future crime than white defendants (Angwin et al., 2016). Moreover, using risk assessment to flag future criminals is unreliable: only 20 percent of the defendants predicted to reoffend actually committed a crime afterward (Angwin et al., 2016). A system biased against black individuals is thus used in criminal sentencing, falsely flagging black defendants as high risk, while white defendants with lower scores are more likely to receive shorter sentences or probation programs (Chohlas-Wood, 2020). Furthermore, the word “arrest” appeared alongside 80 percent of black-identifying names in Google search results, compared with only 30 percent of white-identifying names (Gillespie, 2014). Algorithms shape search engines but are in turn tuned by tech workers so that results “look right,” by a standard “not of relevance exactly, but of satisfaction” (Gillespie, 2014, p. 175). Machine learning predictions do not always reflect reality; they are filtered by humans, possibly without the best intentions, and yet are ultimately used to make “fair decisions” in real-world situations.
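The disparity Angwin et al. describe can be checked with a simple audit: compare false positive rates, meaning defendants flagged high risk who did not reoffend, across groups. The sketch below uses entirely fabricated scores, outcomes, and cutoff; it is not the COMPAS data or ProPublica’s code, only the shape of such an audit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["black", "white"], size=n)
reoffended = rng.random(n) < 0.35  # same base rate in both groups

# Fabricated instrument that systematically scores one group higher.
score = rng.normal(loc=np.where(group == "black", 6.0, 4.0), scale=2.0)
high_risk = score >= 7  # decile-style cutoff (invented)

for g in ("black", "white"):
    did_not_reoffend = (group == g) & ~reoffended
    fpr = high_risk[did_not_reoffend].mean()  # flagged high risk anyway
    print(f"{g} defendants: false positive rate = {fpr:.0%}")
```

Even though both synthetic groups reoffend at an identical rate, the skewed scores alone produce sharply different false positive rates, which is the pattern ProPublica reported.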

In healthcare, artificial intelligence facilitates diagnosis and treatment for patients (Gallon, 2022). Algorithmic bias in AI used in medicine and public health leads to misdiagnoses (Babic et al., 2021). Much medical research was conducted on white patients, and this lack of diversity in the data makes some diagnostic predictions flawed and unreliable (Gallon, 2022). For instance, AI technology for detecting skin cancer is often inaccurate for black patients because it fails to account for differences in skin colour (Babic et al., 2021). Similarly, wearable health devices such as smartwatches, which monitor heart rate and track physical activity, have proven unreliable at monitoring people with dark skin tones (Brewer et al., 2020). New technologies trained predominantly by white males tend to overlook marginalized individuals, constructing prejudice toward those groups.
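One common safeguard, assumed here rather than drawn from any system the sources describe, is stratified evaluation: reporting a diagnostic model’s sensitivity separately for each skin-tone subgroup instead of one pooled number that the majority group dominates. Every value below is fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
skin_tone = rng.choice(["light", "dark"], size=n, p=[0.9, 0.1])  # skewed sample
has_condition = rng.random(n) < 0.10

# Fabricated detector that misses more cases on underrepresented dark skin.
detection_rate = np.where(skin_tone == "light", 0.90, 0.60)
flagged = has_condition & (rng.random(n) < detection_rate)

# Pooled sensitivity looks healthy; per-group reporting exposes the gap.
print(f"pooled sensitivity: {flagged[has_condition].mean():.0%}")
for tone in ("light", "dark"):
    actual = (skin_tone == tone) & has_condition
    print(f"{tone} skin sensitivity: {flagged[actual].mean():.0%}")
```

Because light-skinned patients make up 90% of the sample, the pooled figure lands near their 90% rate while the dark-skinned subgroup sits around 60%, exactly the kind of gap a single aggregate metric conceals.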
The future of the internet
With artificial intelligence playing an important role in the future of business and public health, the lack of diversity in AI development could bring harmful consequences, such as discrimination, racism, and violent encounters, to societies and individuals. If only men teach machines to make decisions, the resulting systems will likely encode limited and homogeneous views. Those working in tech firms and in the development of AI and machine learning should re-examine the scope of diversity and confront discrimination against people of colour and women. The underrepresentation of marginalized communities in the creation of technology means it depicts only the view of the dominant groups in power. Machine learning models must be transparent and fair so that their outcomes are just and free from stereotypes. Additionally, making the “best decisions” in real-life situations sometimes requires subjective judgment; when the underlying data lack inclusion, automated systems will deliver unethical judgments and conclusions.
References
Abbate, J. (2017). What and where is the Internet? (Re)defining Internet histories. Internet Histories, 1(1–2), 8–14. https://doi.org/10.1080/24701475.2017.1305836
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. Retrieved October 10, 2022, from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Babic, B., Cohen, G. I., Evgeniou, T., & Gerke, S. (2021, February). When Machine Learning Goes Off the Rails. Harvard Business Review. Retrieved October 10, 2022, from https://hbr.org/2021/01/when-machine-learning-goes-off-the-rails
Brewer, L. C., Fortuna, K. L., Jones, C., Walker, R., Hayes, S. N., Patten, C. A., & Cooper, L. A. (2020). Back to the Future: Achieving Health Equity Through Health Informatics and Digital Health. JMIR mHealth and uHealth, 8(1), e14512. https://doi.org/10.2196/14512
Brooker, K. (2018). “I Was Devastated”: Tim Berners-Lee, the Man Who Created the World Wide Web, Has Some Regrets. Vanity Fair. Retrieved October 10, 2022, from https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets
Chohlas-Wood, A. (2020, June 19). Understanding risk assessment instruments in criminal justice. Brookings. Retrieved October 11, 2022, from https://www.brookings.edu/research/understanding-risk-assessment-instruments-in-criminal-justice/
Clark, J. (2016, June 23). Artificial Intelligence Has a ‘Sea of Dudes’ Problem. Bloomberg. Retrieved October 10, 2022, from https://www.bloomberg.com/news/articles/2016-06-23/artificial-intelligence-has-a-sea-of-dudes-problem
van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society as a Contested Concept. In The Platform Society: Public Values in a Connective World (pp. 5–32). Oxford University Press. https://doi.org/10.1093/oso/9780190889760.003.0002
Gallon, K. (2022, July 8). Digital back doors can lead down the path to health inequity. STAT. Retrieved October 10, 2022, from https://www.statnews.com/2022/06/24/digital-back-doors-can-lead-down-the-path-to-health-inequity/
Gerry [@geraldmellor]. (2016, March 24). “Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI [Tweet]. Twitter. https://twitter.com/geraldmellor/status/712880710328139776
Gillespie, T. (2014). The Relevance of Algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. The MIT Press.
Kraft, A. (2016, March 26). Microsoft shuts down AI chatbot, Tay, after it turned into a Nazi. CBS News. Retrieved October 11, 2022, from https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
Lusoli, A., & Turner, F. (2020). “It’s an Ongoing Bromance”: Counterculture and Cyberculture in Silicon Valley—An Interview with Fred Turner. Journal of Management Inquiry, 30(2), 235–242. https://doi.org/10.1177/1056492620941075
Marcus, B. (2015, August 12). The Lack Of Diversity In Tech Is A Cultural Issue. Forbes. Retrieved October 10, 2022, from https://www.forbes.com/sites/bonniemarcus/2015/08/12/the-lack-of-diversity-in-tech-is-a-cultural-issue/?sh=56dd1d4c79a2
National Digital Inclusion Alliance. (n.d.). Definitions. Retrieved October 11, 2022, from https://www.digitalinclusion.org/definitions/
Novak, T. P., Hoffman, D. L., & Venkatesh, A. (1998). Diversity on the internet: The relationship of race to access and usage. In A. Garmer (Ed.), Investing in diversity: Advancing opportunities for minorities and the media. The Aspen Institute.
Schatsky, D., Kumar, N., & Bumb, S. (n.d.). Bringing the power of AI to the Internet of Things. WIRED. Retrieved October 10, 2022, from https://www.wired.com/brandlab/2018/05/bringing-power-ai-internet-things/
Vincent, J. (2016, March 24). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. The Verge. Retrieved October 10, 2022, from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist