AI needs to be stopped!

Tech giants need to halt AI development and solve key problems before resuming their present operations.

Photograph of Netherlands’ Ministry of Foreign Affairs discussing the use of AI in military applications.
“Academic Forum Session – Responsible AI framework for military domain; Development and critical assessment” by Ministerie van Buitenlandse Zaken is licensed under CC BY-SA 2.0.

Artificial intelligence (AI) has recently become a hot topic in discourse around internet transformations. AI also reaches beyond academia, as tech giants can tap into its popularity by monetising public use of AI systems while also applying the technology privately to improve the efficiency of their own back-end operations. The possibility of massive financial gain incentivises tech giants to sweep safety and moral judgement under the rug, since being at the forefront of AI development is what is best for profit (Holzmeyer, 2021). AI systems are also being considered for military applications (as pictured above), with political power as another potential outcome, further lowering the priority that ethics holds in the AI conversation.

This blog post argues that the development of AI systems needs to be paused before it causes more harm than good. Once the issues below have been appropriately addressed, we can resume integrating AI so that society continues to evolve technologically.

Why we SHOULD pause AI development


Discriminatory bias

The AI systems we currently use are already ethically flawed. Artificial intelligence is trained through machine learning, in which a model is fed massive amounts of human-made data in order to detect patterns that it can then replicate. Consequently, if consistent negative attitudes exist in the source material, the artificially generated content will follow the same pattern (Varona & Suarez, 2023). This is a dangerous prospect, as learned discriminatory behaviours are unintentionally reinforced. Applying these powerful tools in corporate, public, military or any other context therefore has major negative implications, which is why these mechanisms need to be revised (Varona & Suarez, 2023).
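To make the pattern-replication problem concrete, here is a toy sketch (not any real system's training pipeline; the groups, outcomes and data are entirely invented). A trivial "model" that simply learns the most common historical outcome per group will faithfully reproduce whatever bias its training data contains:

```python
from collections import Counter

# Invented, deliberately biased "historical" training data:
# (group, outcome) pairs in which group "B" was historically
# denied far more often than group "A".
training_data = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "approve"), ("B", "deny"),
]

def train(data):
    """Learn, per group, the most common historical outcome."""
    counts = {}
    for group, outcome in data:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(training_data)
print(model)  # {'A': 'approve', 'B': 'deny'}
```

The "model" now predicts denial for every member of group B, replicating the prejudice in its source data rather than any property of the individuals themselves. Real machine-learning systems are vastly more sophisticated, but the underlying failure mode is the same.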

For example, the US court system uses an algorithm called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to predict whether convicted criminals are likely to reoffend. This model produced roughly twice the rate of false positives for Black offenders as for white offenders (Shin, 2020). Both the data used and the model chosen were blamed for this discriminatory outcome. It is clear that the current versions of these systems are not randomly ineffective but carry harmful biases, which should be read as a call to action to stop and address this prejudice before it is perpetuated any further.
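The disparity Shin describes is measured with the false positive rate: the share of people who did not reoffend but were still labelled high-risk. A minimal sketch with invented numbers (not the actual COMPAS figures) shows how the per-group metric is computed:

```python
def false_positive_rate(records):
    """FPR = share of non-reoffenders who were labelled high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["predicted_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Invented example records, one list per demographic group.
group_a = [
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
]
group_b = [
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
]

print(false_positive_rate(group_a))  # 0.25
print(false_positive_rate(group_b))  # 0.5, twice group A's rate
```

A gap like this means members of one group are wrongly flagged as future criminals far more often, even when their actual reoffending behaviour is identical.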

Difficulties with moderation

Elon Musk (of Tesla, SpaceX, etc.) and Steve Wozniak (of Apple), among other industry experts and leaders, agree that AI development should be paused (Emilia, 2023). The open letter issued by the Future of Life Institute asks for this pause so that we can comprehend just what AI is capable of before we run into problems we are not prepared to solve. A news story summarising this letter, and specifically Elon Musk’s involvement, is shown below. As a co-founder of OpenAI, Musk’s decision to effectively argue against his own business shows the stakes of this discussion and its potential to damage our future. Although OpenAI describes itself as a non-profit, it has a for-profit subsidiary, suggesting that Musk would prefer a temporary financial loss over a permanent, society-wide detriment to quality of life.

Video of an ABC News story summarising Elon Musk and other experts’ concerns surrounding AI.
“Elon Musk calls for artificial intelligence pause” by ABC News. All rights reserved.

We can look to issues of content moderation to model how we might react to these unmanageable risks if they appear as predicted. On social media platforms, where everyday people have access to massive amounts of data and can generate their own new material (a dynamic similar to AI, only driven by human rather than artificial intelligence), we still haven’t solved the problem of content moderation (Gillespie, 2018). Private companies are expected to set boundaries, while users are expected to adhere to them. The fine line between regulation and restriction is constantly argued over, and all of this is in the context of real people moderating other real people. We first need to solve the problem of moderating human intelligence before we empower artificial intelligence without the means to moderate it.

The World Economic Forum, pictured below, acknowledges the dangers of AI and gathers experts to discuss these types of issues. Although they are able to identify the problems already discussed, along with others such as the disproportionately large emissions of the ever-growing AI sector, they are unable to keep up with the unprecedented pace of AI development, as evidenced by the blank section of their webpage titled “OUR CONTRIBUTION” (World Economic Forum, 2023).

Photograph of the future of AI being discussed during the World Economic Forum.
“Global Tech, Local Solutions: Artificial Intelligence” by World Economic Forum is licensed under CC BY-NC-SA 2.0.

Why we SHOULD NOT pause AI development

This section presents major arguments for continuing AI development, each of which is then rebutted.

AI4SG (Ethics washing)

The Artificial Intelligence for Social Good (AI4SG) initiative disagrees with this blog, promoting the continuous development and application of AI systems to address both social and environmental issues (Holzmeyer, 2021). On the surface this seems like a positive incentive for AI to be guided in the right direction, but the ambiguity of what ‘social good’ means, and the inability to regulate it, allows exploitative behaviour and ethics washing to take place. Corporate entities involved in any sort of technological advancement can label themselves as ‘developing AI’ in order to benefit from these AI4SG initiatives (Holzmeyer, 2021). They feign ethical consideration to gain public support, including funding and customer loyalty. Their definition of ‘social good’ is then left open to interpretation: a company can argue that growing its market share (‘keeping customers connected and helping to train our AI systems’) is a justified use of funding, even though this amounts to commercial success and selfish profit rather than more morally driven areas such as health-related applications.

Unlimited scale

Bill Gates (of Microsoft) can be seen in the social media post below disagreeing with the aforementioned Future of Life Institute open letter asking for a pause on AI development. He argues that it is unreasonable to assume we could somehow enforce a pause on AI development for everyone globally (Faguy, 2023). His proposed solution is to focus on the positive developments and “identify tricky areas”. As a philanthropist, he is using the Gates Foundation to promote genuinely social-good uses for AI, such as access to health and education, but his influence is limited since his ideas are not shared by less ethical competitors. The argument that it is unreasonable to stop AI development is quite fair, although having this discussion and arguing the contrary is still valuable, as it allows us to be better prepared in the likely event that development continues.

Instagram post of Bill Gates disagreeing with Elon Musk and others’ plea to pause AI development.
“Bill Gates Doesn’t Agree With The Movement to Pause AI Development – Here’s Why” by Entrepreneur. All rights reserved.

As Gates alluded, the scale at which AI already operates is massive, and hence halting development is becoming increasingly unmanageable. Gillespie echoes this sentiment of unlimited scale. AI is often justified as the solution to large-scale problems: content moderation, for example, is occurring on social media platforms at such a scale that human intelligence cannot keep up, so artificial intelligence is needed to pick up the slack (Katzenbach, 2021). This “growth at all costs” mentality never really solves the original problem, but merely creates a different one, putting a bigger band-aid over a smaller one. Operating at this unlimited scale may address the underlying problems, but it also has the potential to create new ones that we won’t be able to account for. This has already been seen on a relatively small scale with the issue of AI discrimination discussed earlier, and these effects are not necessarily limited by scale, given the nature and increasing deployment of AI.

The AI for Good Global Summit, pictured below, is the United Nations’ answer to these concerns. It acts in a similar vein to Bill Gates’ solution, focusing on the positives, but the “tricky areas” of AI development remain undisrupted.

Photograph of a speech given during the AI for Good Global Summit in 2018.
“AI for Good Global Summit 2018” by ITU Pictures is licensed under CC BY 2.0.


Conclusion

Ideally, AI development would be temporarily paused to solve the issues of discrimination, moderation (at unlimited scale) and corporate ethics washing before it continues to take the world by storm. As this is unrealistic, we can only promote active discussion, awareness and, more importantly, action to ensure the harmful results of AI systems are outweighed, and hopefully solved, by the beneficial outcomes that organisations such as the Gates Foundation aim to provide.

This work is marked with CC0 1.0 Universal 

Reference List

Emilia, D. (2023, April 3). Leaders like Elon Musk want to pause AI development, but the power of the free market means it’s impossible to stop. Business Insider.

Faguy, A. (2023, April 4). Bill Gates Rejects ‘Pause’ On AI, Suggesting It’s Impractical. Forbes.

Gillespie, T. (2018). What platforms are, and what they should be. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 197–214). Yale University Press.

Holzmeyer, C. (2021). Beyond “AI for Social Good” (AI4SG): social transformations-not tech-fixes-for health equity. Interdisciplinary Science Reviews, 46(1-2), 94–125.

Katzenbach, C. (2021). “AI will fix this” – The Technical, Discursive, and Political Turn to AI in Governing Communication. Big Data & Society, 8(2).

Shin, T. (2020, June 5). Real-life Examples of Discriminating Artificial Intelligence. Towards Data Science.

Varona, D., & Suarez, J. L. (2023). Social context of the issue of discriminatory algorithmic decision-making systems. AI & Society.

World Economic Forum. (2023). Global Future Council on the Future of Artificial Intelligence.