The deep-rooted ‘Bromance’ vs the ‘Bringing women home’ manifesto
How Did Tech Become So Male-Dominated? by The Atlantic. All rights reserved. Retrieved from https://www.youtube.com/watch?v=OZ7zX6LalLI
The early days of the internet were characterised by cultural homogeneity, which limited the development of diverse representation on the web. Well-educated white males dominated numerous technology companies in Silicon Valley, and the origins of Web 2.0 were organised around this shared ‘Bromance’ (Lusoli & Turner, 2021), while groups outside this culture were excluded from the community. As a result, white males belonging to this dominant culture gained an early advantage in internet technology.
Moreover, influenced by the anti-feminist movement in the United States in the 1980s, the internet industry embraced this ‘Bringing women home’ campaign, and women disappeared from the tide of digital innovation (Mundy, 2017). The lack of workforce diversity took root at that point, leading to the creation of flawed systems and perpetuating sexism and racism on the internet (Paul, 2019).
‘Diversity disaster’: faulty systems perpetuate structural bias
Flawed data samples lead to “patriarchy” in artificial intelligence
In 2014, Amazon developed an intelligent “algorithmic screening system” to help select the most “suitable” candidates in its recruitment process. The system scored CVs by identifying keywords and analysing how well they matched the job. However, the algorithm clearly preferred male candidates: when it identified words related to “female”, it scored those CVs relatively low, and it even filtered out the CVs of candidates who had attended “girls’ schools”. This is because the engineers had trained the algorithm on the resumes of employees Amazon hired over the previous decade, the vast majority of whom were men (Mann & O’Neil, 2016).
Imbalanced representation in the labour force produces gender bias in the input sample, and this already biased data is reproduced as sexism in the output technology. When such algorithmically inferred derivative data is used to label users, it can drive discriminatory decision-making: users classified as female may be unduly restricted or excluded from favourable information and potentially profitable opportunities (Lambrecht & Tucker, 2019).
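To make this mechanism concrete, here is a minimal, hypothetical sketch (in Python with scikit-learn; the toy resumes and outcomes are invented, and this is not Amazon’s actual system) of how a classifier trained on historically male-dominated hiring outcomes learns to penalise women-associated words:

```python
# Minimal sketch, NOT Amazon's actual system: the resumes, labels and
# model below are invented to show how biased history becomes a biased model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: past hires (1) skew male, and past rejections (0)
# disproportionately contain women-associated terms.
resumes = [
    "software engineer chess club captain",           # hired
    "backend developer hackathon winner",             # hired
    "systems programmer open source contributor",     # hired
    "software engineer women's chess club captain",   # rejected
    "developer women's coding society member",        # rejected
    "programmer, graduate of a women's college",      # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" is negative: the model has
# absorbed the historical hiring bias, not any signal about competence.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women': {weights['women']:+.3f}")
```

Scaled up to hundreds of thousands of CVs, the same dynamic produces the behaviour reported at Amazon: the proxy “looks like past hires” quietly encodes “is male”.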

We have to admit that equal employment rights are rendered moot in cyberspace.
Convergent thinking in product design: satisfying the ‘male gaze’
Not only that, but the same dynamic is still being repeated (Lusoli & Turner, 2021). According to the Global Gender Gap Report published by the World Economic Forum (2021), the gender ratio within the major tech companies remains highly disparate: only 15% of AI researchers at Facebook are women, and only 10% at Google. Men still have the dominant voice in AI research and development. As a result, there is a ‘male gaze’ in the design, operation and application phases of algorithmic models.
Gendered technology perpetuates the role of service and companionship for women, solidifying pre-existing labour structures. For example, various mainstream voice assistants play the role of secretaries in our lives, and the voices of these tools, such as Siri on Apple products, are by default, or by careful construction, feminine (Morley, 2020). This gendered division of labour in technological products shapes the social construction and valorisation of women, and creates a vicious circle of bias propagation between society, AI and users. The effect is to ‘structurally lock’ women into a disadvantaged position in society (Noble, 2018).

“Gender or personality are not things that occur naturally in AI systems,” says Morley (2020), “they are deliberately created based on market demand and structural biases.”
Stop assuming data, algorithms and AI are objective | Mata Haggis-Burridge | TEDxDelft, by TEDx Talks. Retrieved from https://www.youtube.com/watch?v=Hft8xiycH2Y
‘Neutral’ data? A carefully constructed pseudo-proposition
The public’s belief in ‘algorithmic neutrality’ has long allowed algorithmic bias to shape the presentation of public information unnoticed. Influenced by the neoliberal prioritisation of economic interests, platforms use their ranking systems to convince users that the top-ranked content on a site is the most ‘voted for’ by users, or the most credible after standardised scrutiny. In reality, these are distorted search results that profit from favouring advertisers willing to pay for placement (Noble, 2018).
In Baidu’s bid-ranking system, companies pay to have their websites or product information listed at the top of the search engine results, while companies that do not pay face ‘malicious blocking’. This has two effects (a toy sketch of the underlying logic follows the list):
- From the perspective of the platform ecology, this prevents similar products from reaching their audience, resulting in a lack of diversity in the dissemination of information.
- From the user’s point of view, it degrades the search experience and traps users in the advertiser’s filter bubble. More importantly, it buries unpaid links, harming audiences’ right to information in an economic marketplace.
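Here is that toy sketch (the scoring formula, weights and sites are invented for illustration and are not Baidu’s actual algorithm): once the ranking score mixes relevance with a paid boost, a paying but less relevant site can outrank a more relevant unpaid one, while users read the ordering as credibility.

```python
# Hypothetical pay-to-rank scoring, for illustration only; the weight,
# sites and numbers are invented and this is not Baidu's real formula.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float  # 0..1: how well the page matches the query
    bid: float        # payment per click; 0.0 for unpaid results

PAID_BOOST = 0.5      # assumed weight the platform gives to bids

def score(r: Result) -> float:
    # Relevance plus a paid boost: payment buys ranking position.
    return r.relevance + PAID_BOOST * r.bid

results = [
    Result("public-hospital.example", relevance=0.9, bid=0.0),
    Result("paying-clinic.example", relevance=0.4, bid=2.0),
]
for r in sorted(results, key=score, reverse=True):
    print(f"{score(r):.2f}  {r.url}")
# Output: the paying clinic (1.40) ranks above the more relevant
# unpaid hospital (0.90), yet users read rank as trustworthiness.
```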
The best-known case of this logic touching human lives came in 2016, when a cancer patient died after obtaining treatment from a hospital promoted by a Baidu search. Driven by financial gain, Baidu had served the user distorted advertising information and flawed search rankings, and the hospital was later proven to be ‘unqualified’ (Ramzy, 2016). The commercial logic of the algorithm misled a user who believed the search engine to be a public resource, causing fatal harm to the individual.

Search engines create advertising algorithms, not information algorithms.
In addition, by charging high promotion fees, search engines undermine the incentive to innovate for businesses and third-party collaborators at a capital disadvantage. Biased ranking rules reinforce this, holding some search results in a strong position while continuing to marginalise other links, and thereby dealing a blow to dynamic competition in the digital information market (Mansell & Steinmueller, 2020).
Limited public participation: government censorship and content moderation

Business advertising and organisational propaganda are familiar examples of biased communication, with the former aiming to stimulate the purchase of specific products and services and the latter aiming to persuade people to accept a political position.
For example, Weibo’s approach to the dissemination and orientation of political news has kept official government news media (such as the People’s Daily) at the top of the information pile. This demonstrates how, despite Weibo’s status as a public domain, the platform still directs enormous amounts of web traffic to mainstream news organisations, which undoubtedly strengthens their control over the direction of political issues (Cook, 2022).
Online discourse is dominated by elite users with discursive influence, while members of civil society may be at risk of being ‘deleted’.

On the other hand, netizens have also begun to use approaches such as coded language to circumvent censorship algorithms and keep critical content online. For example, the video ‘April Voices’, produced by a grassroots collective during this year’s Shanghai lockdown, shows the effects of the blockade from local citizens’ perspective. The video escaped automatic censorship through its cryptic wording and cover image and was widely reposted within a short period. However, this lasted only an hour or two: the platform’s content moderators then proceeded to manually remove this content in its entirety and to censor the accounts of users who posted “overly angry comments” (Cook, 2022).
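A minimal sketch of why coded language buys time (the blocklist and posts below are invented; real moderation stacks also use machine-learning classifiers, image matching and human review queues): an automatic keyword filter only matches the surface form of banned terms, so oblique wording slips through until moderators intervene manually.

```python
# Hypothetical keyword-based auto-moderation, for illustration only;
# the blocklist and posts are invented.
BLOCKLIST = {"lockdown", "protest"}  # assumed banned terms

def auto_removed(post: str) -> bool:
    """Return True if the automatic filter removes the post."""
    words = set(post.lower().split())
    return bool(words & BLOCKLIST)

posts = [
    "the lockdown has cut off our food supply",     # literal term: caught
    "the quiet month has cut off our food supply",  # coded wording: passes
]
for p in posts:
    print("removed" if auto_removed(p) else "kept   ", "->", p)
# Coded posts survive the automatic pass; platforms then fall back on
# slower manual clean-up once such content starts to circulate widely.
```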
This case demonstrates how partisanship and propaganda homogenise political discourse and opinion on Weibo, which runs counter to the notion of an open and neutral internet. This unipolar online public sphere consolidates the hegemony of elite voices in political discourse, hinders the formation of public opinion, and makes the prospects for democratic participation worrying.
Conclusion
The historical and persistent gender imbalance in technology has produced today’s gender divide in internet information technology, placing women’s digital existence at continued risk of widespread and systematic algorithmic sexism. Influenced by neoliberalism, biased algorithms create monopolised echo chambers of distribution within platform verticals. This is not easily detected, as it hides behind the black box of ‘algorithmic neutrality’. Likewise, heavy-handed government censorship and content moderation prevent citizens from participating in political discourse in cyberspace, which is also at odds with the idea of an open internet.
Therefore, internet development should incorporate a more diverse sample of data, including marginalised groups. The enforcement of antitrust law should also be front-loaded to limit the control of information by platform oligopolies. However, the balance between content moderation and freedom of speech remains a difficult issue to resolve.
References:
Cook, S. (2022, May 19). China’s Censors Aim to Contain Dissent During Harsh COVID-19 Lockdowns | Opinion. Freedom House. https://freedomhouse.org/article/chinas-censors-aim-contain-dissent-during-harsh-covid-19-lockdowns-opinion-0
World Economic Forum. (2021). Global Gender Gap Report 2021. https://www.weforum.org/reports/global-gender-gap-report-2021
Lambrecht, A., & Tucker, C. (2019). Algorithmic Bias? An Empirical Study of Apparent Gender-Based Discrimination in the Display of STEM Career Ads. Management Science, 65(7), 2966–2981. https://doi.org/10.1287/mnsc.2018.3093
Lusoli, A., & Turner, F. (2021). “It’s an Ongoing Bromance”: Counterculture and Cyberculture in Silicon Valley. An Interview with Fred Turner. Journal of Management Inquiry, 30(2), 235–242.
Mann, G., & O’Neil, C. (2016, December 9). Hiring Algorithms Are Not Neutral. Harvard Business Review. https://hbr.org/2016/12/hiring-algorithms-are-not-neutral
Mansell, R., & Steinmueller, W. E. (2020). Economic Analysis of Platforms. In Advanced Introduction to Platform Economics (pp. 35–54). Edward Elgar Publishing Limited.
Morley, M. (2020, March 28). What Would a Feminist Alexa Look, or Rather Sound, Like? AIGA Eye on Design. https://medium.com/aiga-eye-on-design/what-would-a-feminist-alexa-look-or-rather-sound-like-e6711e553e67
Mundy, L. (2017, April). Why Is Silicon Valley So Awful to Women? The Atlantic. https://www.theatlantic.com/magazine/archive/2017/04/why-is-silicon-valley-so-awful-to-women/517788/
Noble, S. U. (2018). A society, searching. In Algorithms of Oppression: How Search Engines Reinforce Racism (pp. 15–63). New York University Press. https://doi.org/10.18574/9781479833641-003
Paul, K. (2019, April 17). ‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds. The Guardian. https://www.theguardian.com/technology/2019/apr/16/artificial-intelligence-lack-diversity-new-york-university-study
Ramzy, A. (2016, May 3). China Investigates Baidu After Student’s Death From Cancer. The New York Times. https://www.nytimes.com/2016/05/04/world/asia/china-baidu-investigation-student-cancer.html