
Since the development of the internet, scholars and users have optimistically declared it a "democratizing force where information on all topics would readily be available" (Dworak et al., 2014, p. 1). While this is theoretically true, it is a utopian view of the internet's potential, one that takes into account only certain social and business aspects of the world. The idealistic views of the scholarly men who developed the internet are still appreciated, and to an extent these practices remain evident in the internet today. However, it is important to understand that this lack of diversity has also dismissed the geographic, political and financial circumstances of many individuals, centralising internet content within algorithms, social media and advertisements to serve the monetary profits the internet has now become a platform for.
Internet Culture
Many internet imaginaries incorporate an idealistic view of the internet as the great "interconnectedness for information" with unlimited potential in space, accessibility and production (Dworak et al., 2014, p. 1). However, this imaginary is only available to those who have the resources, access and knowledge to take advantage of it.
Castells (2002) outlines the "four cultures of the internet" that arose from the internet's development and remain fairly prevalent today. The 'techno-meritocratic' culture, as Castells (2002) labels it, consists of "primarily white, educated, men living in urban areas" (Vickery, 2018, p. 35). The development and culture of the internet is "rooted in the scholarly tradition of the shared pursuit of science and academic excellence" of these "techno-elites" (Castells, 2002, p. 39). Another key characteristic of this culture, according to Castells (2002), is a system of peer review, meaning that internet and technology development, for both software and hardware, is discussed within a tight-knit group of elite scholars that is not representative of the world's diverse population.
The "hacker culture" Castells (2002) also identifies consists essentially of extremely digitally literate people with the ability to design, hack and develop the internet. Although they are not defined by academic credentials, their exclusivity is expressed through technological skills and access that are not representative of the wider population. While many people are now digitally literate enough to become 'hackers', this culture also consisted mainly of men who were literate and had the finances to resource their abilities.
From the early stages of development until now, internet and technology developers have remained mostly male, and the industry's culture has been labelled "predominantly white" (Marcus, 2015). Old gender and racial biases have remained "ubiquitous" in the workforce, pushing female workers and minorities out of companies (Marcus, 2015). With so little of modern society represented among the developers of the internet, this homogeneity unknowingly but critically translates into the internet's programming and algorithms.
Algorithms and Machine Learning
Data-driven technologies, especially those that rely on algorithms and machine learning, require "samples of historical observations or curated examples considered relevant to performing a particular task" (Richardson, 2022, p. 120). Recorded actions that are systematically biased are 'learnt' by internet programs, resulting in more bias (Richardson, 2022, p. 120). An unrepresentative workplace risks overlooking this discrimination, creating a continuous cycle of bias that widens the social distance between individuals. There are also few data sets describing individuals from low socio-economic backgrounds, producing little to no advancement in learning their online behaviours or diversifying the internet.
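To make the mechanism concrete, the sketch below is a minimal, hypothetical illustration, not drawn from any of the cited studies: a simple classifier is trained on synthetic 'historical' hiring decisions that were biased against one group, and the fitted model reproduces that disparity. The feature names, numbers and the use of scikit-learn are all assumptions made for illustration.

```python
# Minimal sketch: a model trained on biased historical decisions learns the bias.
# All data here is synthetic and hypothetical; it only illustrates the mechanism
# described by Richardson (2022), not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)      # 0 = majority group, 1 = minority group
skill = rng.normal(0, 1, n)        # skill is distributed identically in both groups

# "Historical" decisions: same skill threshold, but the minority group was
# systematically approved less often (the recorded bias in the training data).
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * group) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The fitted model now predicts lower hiring rates for the minority group even
# at identical skill levels: the bias in the records has become a 'rule'.
probe_skill = np.zeros(1000)
for g in (0, 1):
    probe = np.column_stack([probe_skill, np.full(1000, g)])
    rate = model.predict(probe).mean()
    print(f"predicted hire rate for group {g} at equal skill: {rate:.2f}")
```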
Health Services
An example of this is online, algorithmic health-care services. Services differ in their "design, objectives and the diversity of groups" they serve, including "profiles, lifestyles, preferences and genetic endowments" (Panch et al., 2019, p. 2). It is essential for a health service to learn from diverse sets of data to produce the best outcomes; however, individuals from low socio-economic backgrounds who have no access to these online services, together with the physical barriers faced by some people with disabilities, mean the data these systems learn from is incomplete and their outcomes remain doubtful.
Business and Economics
The internet has also become a budding platform for business and economics, with users continuously subjected to advertisements that affect their consumer decisions (Van Dijck et al., 2018). Technology companies such as Facebook and Google have introduced "personalisation features: algorithms that tailor information based on what the user needs, wants and who he knows" to tackle the masses of data created by users (Bozdag, 2013, p. 209). Systematic bias in our culture feeds into the machine learning of these platforms and influences which online content, including advertisements, is projected onto users.
One example of this is an advertisement that "promotes job opportunities and training in STEM" (Lambrecht & Tucker, 2019, p. 2966). The advertisement was 'gender-neutral', and the experiment was launched in 91 countries to measure the fairness of online ad delivery (Lambrecht & Tucker, 2019). Results show that the advertisement was shown to "20% more men than women", with the difference more "pronounced for individuals in their prime career years" (Lambrecht & Tucker, 2019, p. 2966). Advertisements such as these limit reach to women and other minority groups, potentially skewing the hiring process by producing an unbalanced pool of applicants. This inevitably ties into the workplace culture mentioned above and, through a domino effect, widens the gaps between users through gender, racial and minority segregation systematically performed by algorithms and machine learning.
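Lambrecht and Tucker attribute much of this skew to the economics of ad auctions: because advertisers compete harder for female audiences, female impressions cost more, so a delivery algorithm optimising its spend ends up serving more of the 'neutral' ad to men. The sketch below is a simplified, hypothetical illustration of that mechanism only; the prices, inventory sizes and budget are invented.

```python
# Hypothetical sketch of how cost-optimised ad delivery can skew reach even
# with gender-neutral targeting. All numbers are invented for illustration.

def deliver(budget, inventory):
    """Buy impressions cheapest-first until the budget runs out.

    inventory: {group: (price_per_impression, available_impressions)}
    returns:   {group: impressions_bought}
    """
    bought = {group: 0 for group in inventory}
    # Fill the cheaper audience first, as a spend-minimising optimiser would.
    for group, (price, available) in sorted(inventory.items(), key=lambda kv: kv[1][0]):
        n = min(int(budget // price), available)
        if n <= 0:
            continue
        bought[group] = n
        budget -= n * price
    return bought

# Invented prices: the 'women' audience costs more per impression because
# other advertisers bid more for it (the explanation Lambrecht & Tucker give).
inventory = {"men": (0.010, 60_000), "women": (0.014, 60_000)}
print(deliver(budget=1_000, inventory=inventory))
# -> roughly {'men': 60000, 'women': 28571}: neutral intent, skewed reach.
```

The targeting itself contains no gender rule; the disparity emerges purely from the optimiser chasing cheaper impressions.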
Bias and Shadow Banning on Social Media
In a more social setting, algorithmic bias and 'shadow banning' are evident on social media platforms. Shadow banning "blocks or hides a user's social media content", such as "views that don't align with LGBTQIA" (Brown, 2021). Because the internet and social media platforms are built to make a profit, influencer content that generates engagement is favoured over other voices. As a result, diverse small businesses that depend on these platforms have a hard time generating audience engagement, and users are "unaware" of their existence (Brown, 2021). With online business models depending on the advertising funds available, and users mainly seeing larger businesses, physical and online goods and services become centralised, and society loses the diverse businesses that give it cultural richness.
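The feedback loop described above can be sketched in a few lines. The scoring rule and the example posts below are hypothetical and are not taken from any real platform; they only illustrate how ranking by raw engagement favours accounts that already have large audiences.

```python
# Hypothetical sketch of engagement-optimised feed ranking. The scoring rule
# and the example posts are invented; no real platform's algorithm is shown.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    followers: int
    likes: int
    shares: int

def engagement_score(post: Post) -> float:
    # Rewarding raw engagement inherently favours accounts that already
    # have a large audience able to generate that engagement.
    return post.likes + 2 * post.shares

feed = [
    Post("large_influencer", followers=2_000_000, likes=15_000, shares=3_000),
    Post("small_local_shop", followers=800, likes=40, shares=5),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.author, engagement_score(post))
# The small account lands far down the feed, reaches even fewer users and earns
# even less engagement next time: the cycle the paragraph above describes.
```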
Likewise, political movements such as Black Lives Matter have generated controversy, with Facebook flagging activists' accounts but not the "hate speech against Black people" (Lim & Alrasheed, 2021). Similarly, posts dedicated to 'Missing and Murdered Indigenous Women and Girls' (MMIWG) on Red Dress Day (5 May) "disappeared from their Instagram accounts" (Lim & Alrasheed, 2021). Systematic bias in algorithms and shadow banning "decide whose voices will be heard" (Lau & Akkaraju, 2019), limiting public opinion and discriminating against racial, gender and minority groups.
Anyone know why @instagram removed/censored all #MMIWG stories yesterday? Families, loved ones, advocates are deeply upset. Why would this be happening? pic.twitter.com/44pmSdZvfh
— Brandi Morin (@Songstress28) May 6, 2021
In conclusion, online diversity is an issue that affects the social, economic and political aspects of modern society. From the early stages of the internet's development, exclusivity in gender, class and education created an internet culture that segregates gender, racial and other minority groups. The algorithms and machine learning that society now depends on to manage a data-driven network reflect this overall lack of diversity, as systematic bias is evident in the training data available. This creates an endless cycle of bias and segregation in hiring and the workplace, advertising and business, and free speech and activism. Although there are plans in place to challenge this and create change, from Pinterest announcing initiatives for workplace diversity (Marcus, 2015) to the White House releasing a new "AI Bill of Rights" aimed at algorithm reform (Paul, 2022), companies and individuals need to act quickly to lessen the bias circulating online.

References
Bozdag, E. (2013). Bias in algorithmic filtering and personalisation. Ethics and Information Technology, 15, pp. 209-227. https://doi.org/10.1007/s10676-013-9321-6.
Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7), pp. 2966-2981. https://doi.org/10.1287/mnsc.2018.3093.
Cowgill, B., & Tucker, C. (2019). Economics, fairness and algorithmic bias. In preparation for: Journal of Economic Perspectives.
Richardson, R. (2022). Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities. Berkeley Technology Law Journal 36(3), pp. 101-139. https://doi.org/10.15779/Z38PN8XG3V.
Schlozman, K. L., Verba, S., & Brady, H. E. (2010). Weapon of the strong? Participatory inequality and the internet. Perspectives on Politics, 8(2), pp. 407-509. https://doi.org/10.1017/S1537592710001210.
DiMaggio, P., Hargittai, E., Neuman, W. R., et al. (2001). Social implications of the internet. Annual Review of Sociology, 27, pp. 307-336. https://www.jstor.org/stable/2678624.
DiMaggio, P., Hargittai, E., Celeste, C., et al. (2004). Digital inequality: From unequal access to differentiated use. In Social Inequality, pp. 355-400.
DiMaggio, P., & Hargittai, E. From the 'Digital Divide' to 'Digital Inequality': Studying internet use as penetration increases. Working Paper 15, pp. 1-23.
Vickery, J.R. (2018). This Isn’t New: Gender, Publics, and the Internet. Chapter 2 in Vickery, J., Everbach, T. (eds) Mediating Misogyny. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-72917-6_2.
West, D. M. (2015). Digital divide: Improving internet access in the developing world through affordable services and diverse content. Center for Technology Innovation at Brookings, pp. 1-30.
Dworak, B. J., Lovett, J., & Baumgartner, F. R. (2014). The diversity of internet media: Utopia or dystopia? Midwest Political Science Association, 1(6), pp. 42-65.
Paul, A. (2022). The White House’s new ‘AI Bill of Rights’ plans to tackle racist and biased algorithms. Popular Science. https://www.popsci.com/technology/ai-bill-of-rights-biden/.
Lau, T., Akkaraju, U. (2019). When algorithms decide whose voice will be heard. Harvard Business Review. https://hbr.org/2019/11/when-algorithms-decide-whose-voice-will-be-heard.
Lim, M., Alrasheed, G. (2021). Beyond a technical bug: Biased algorithms and moderation are censoring activists on social media. The Conversation. https://theconversation.com/beyond-a-technical-bug-biased-algorithms-and-moderation-are-censoring-activists-on-social-media-160669.
Marcus, B. (2015). The Lack of Diversity In Tech Is A Cultural Issue. Forbes. https://www.forbes.com/sites/bonniemarcus/2015/08/12/the-lack-of-diversity-in-tech-is-a-cultural-issue/?sh=545453c279a2.
Brown, A. (2021). Understanding The Technical And Societal Relationship Between Shadowbanning And Algorithmic Bias. Forbes. https://www.forbes.com/sites/anniebrown/2021/10/27/understanding-the-technical-and-societal-relationship-between-shadowbanning-and-algorithmic-bias/?sh=26aff396296e.
Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: implications for health systems. Journal of Global Health, 9(2), pp. 1-5. https://doi.org/10.7189/jogh.09.020318.
Van Dijck, J., Poell, T. & de Waal, M. (2018). The Platform Society as a Contested Concept. In The Platform Society. Oxford: Oxford University Press, pp. 5-32.
[PBS NewsHour]. (2019, Nov 25). Racial bias in widely used hospital algorithm, study finds [Video]. YouTube. https://www.youtube.com/watch?v=589O8esdVb4&ab_channel=PBSNewsHour.