Artificial intelligence (AI) has transformed the technology industry in recent years, significantly affecting the everyday lives of individuals across the globe. As the technology advances, so do its potential risks. To harness AI's potential while avoiding consequences that could seriously disrupt society, a balance between regulation and innovation must be struck. This essay analyses the disruptions of generative AI, examining both its threats and the regulations currently in place, to argue that additional effective AI regulation is needed to ensure a safer digital future.
Generative AI – Its transformative nature
Generative AI is one of the most disruptive forms of AI, as it has the capability to redefine how individuals perform tasks and communicate with one another. The technology generates content across media forms, including text, images, and conversation, with a high degree of human-like fluency and creativity (Euchner, 2023). It enables genuine interaction between humans and technology, going beyond mere automation. Like other forms of AI, generative AI relies on machine learning and is trained on vast amounts of data. For instance, language models like ChatGPT are trained on internet text and dialogue scripts to produce human-like conversations, while Stable Diffusion draws on images and associated captions from the web to generate images from users' text prompts (Helberger & Diakopoulos, 2023). Ultimately, generative AI aims to replicate human behaviour and generate new content based on existing content.
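The core idea, learning statistical patterns from existing text and then sampling new text from those patterns, can be illustrated with a deliberately minimal sketch. This is not how ChatGPT or Stable Diffusion actually work (they use large neural networks trained on billions of examples), but a character-level Markov chain shows the same "new content from existing content" principle in miniature:

```python
import random

def build_model(text, order=2):
    """Record which character follows each n-character sequence in the corpus."""
    model = {}
    for i in range(len(text) - order):
        gram = text[i:i + order]
        model.setdefault(gram, []).append(text[i + order])
    return model

def generate(model, seed, length=40):
    """Extend the seed by repeatedly sampling a statistically plausible next character."""
    out = seed
    order = len(seed)
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break  # the model has never seen this sequence
        out += random.choice(followers)
    return out

# A tiny "training corpus"; real models learn from billions of documents.
corpus = "the cat sat on the mat and the cat ran to the man"
chain = build_model(corpus)
print(generate(chain, "th"))  # prints new text stitched together from patterns in the corpus
```

Even this toy generator does not copy its corpus verbatim but recombines its patterns, which, at a vastly larger scale, is the behaviour underlying the copyright questions raised later in this essay.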
The Dark Side of AI
Although this technology holds great potential, it also poses significant risks. One difficulty arises when open-source generative AI is used to produce harmful content, including misinformation, hate speech, and inappropriate material. An example is Microsoft's Tay chatbot, which users manipulated into producing offensive and incendiary political statements (Bunz & Meikle, 2018). The incident illustrates how toxic technocultures can be generated and disseminated online through the amplifying power of AI (Massanari, 2016), and it highlights the difficulty of ensuring ethical AI conduct in an unregulated and unpredictable digital setting.
Is AI biased?
Stereotypes and biases are significant concerns in AI systems. Models trained on data that has not been carefully vetted risk inadvertently perpetuating the biases embedded in that data, so the outputs of AI systems often reflect the less favourable prejudices of society. In short, if the dataset used to train an AI model is biased, the AI itself is highly likely to be biased. For instance, Stable Diffusion was found to reproduce gender disparities, overrepresenting women in low-paying positions and underrepresenting them in higher-paid professions such as "judge" (Nicoletti & Bass, 2023). The model also displayed racial bias, misrepresenting the racial demography of occupations and disproportionately depicting individuals with darker skin tones in low-income jobs.
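The mechanism behind such skewed outputs can be made concrete with a toy sketch. All numbers below are invented for illustration, and real generative models are far more complex, but the principle is the same: a "model" that simply memorises the most frequent occupation per group will reproduce exactly the imbalance present in its training data.

```python
from collections import Counter

# Hypothetical, deliberately skewed training set: the imbalance below is
# invented, but mirrors the kind of skew found in web-scraped data.
training_data = (
    [("woman", "nurse")] * 9 + [("woman", "judge")] * 1
    + [("man", "judge")] * 8 + [("man", "nurse")] * 2
)

def fit_majority_model(rows):
    """'Train' by memorising the most frequent occupation per group."""
    counts = {}
    for group, occupation in rows:
        counts.setdefault(group, Counter())[occupation] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = fit_majority_model(training_data)
print(model)  # {'woman': 'nurse', 'man': 'judge'}: the data's skew becomes the model's "belief"
```

No step in this pipeline is malicious; the bias enters entirely through the data, which is why curating training sets matters as much as auditing the models themselves.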
The question arises:
Why is data so inherently biased?
A crucial factor contributing to AI bias is the composition of the teams that develop these systems. When white men are overrepresented in development teams, and women and people of colour (POC) correspondingly underrepresented, the resulting systems are more likely to reflect those imbalances. Fewer than 2% of workers in technical positions at big tech corporations such as Google and Facebook are Black, and only around one-fifth of the technical workforce are women (Buolamwini, 2019). To address bias and ensure the ethical use of AI, organisations and governments should advocate for and implement diverse perspectives in AI development teams and datasets.
Overall, AI bias degrades performance across applications by reinforcing societal preconceptions, and biased data collection, algorithmic biases, and prejudiced feedback can amplify the problem further. Developers, data providers, legislators, and society as a whole all have a role in addressing it. Ethical AI development practices, transparency, and regulatory monitoring are crucial for reducing bias and ensuring the fair and responsible use of AI.
AI and copyright
Copyright is one of the most prevalent issues in the legal landscape surrounding AI. Many creators object to their works being used as training data for AI systems. Copyright disputes involving Stability AI, Getty Images, and other entities are shaping the development of generative AI and prompting a reevaluation of creativity, innovation, and ownership in the digital age (Samuelson, 2023). For example, OpenAI has been sued for allegedly violating the copyright of more than 120,000 authors; the lawsuit asserts that the ChatGPT language model was trained on copyrighted materials without permission (Field, 2023).
Moreover, creators argue that AI companies profit from this infringement by charging for services like ChatGPT and by using those services to produce text and code for their own offerings. The doctrine of fair use is central to these concerns: courts must determine whether using copyrighted materials as AI training data is permitted, weighing factors including the purpose of the use, the nature of the work, the amount used, and the economic impact. Transformative use and public benefit weigh in favour of fair use, according to precedents such as Google v. Oracle (Samuelson, 2023). Generative AI thus both disrupts the creative sector and fosters unprecedented creativity, and its dynamic nature makes it difficult to regulate. The issue of derivative works raises further questions about how the outputs of generative AI systems should be evaluated: creators contend that these outputs compete with, and derive from, their original works, making substantial similarity a key test. Transparency is also lacking in this area, as AI companies often do not disclose the specific datasets they use, blurring the lines of accountability and raising concerns about data sources and potential biases in AI models.
The rise of deepfakes
Deepfake AI serves numerous purposes, both positive and negative. On the positive side, deepfakes are widely used for entertainment on platforms such as TikTok, where the faces of celebrities like Tom Cruise appear in comedy-inspired deepfake videos (Jones, 2023).
However, deepfakes can also be misused to produce non-consensual pornography, a disturbing invasion of privacy and dignity that underscores the need for ethical guidelines and proper legal protections in the digital sphere. Deepfakes also enable fraud, with scammers manipulating video or audio recordings to deceive victims and profit from their ordeals. Political processes and public perception are likewise under threat: a deepfake of the Ukrainian president urging citizens to surrender to Russia, for example, compounded the harm of the war, with such disinformation designed to sow discord. The rapid advancement of deepfake technology has made it easy to create realistic videos that are hard to distinguish from genuine ones, and the lack of regulation in Australia highlights the urgent need for legislative frameworks to combat deepfake misuse (Elias, 2023). Regulations and policies must be developed to protect citizens from deepfake manipulation; measures such as monitoring and strict penalties for offenders can reduce the risks.
Evaluation of current regulations on AI
Some regulations regarding AI already exist around the world, yet they remain limited in the face of the significant challenges posed by generative AI. For instance, the AI Act proposed by the European Commission in 2021 adopts a risk-based approach: AI systems judged to pose higher risks are subject to stricter rules (Helberger & Diakopoulos, 2023). This approach fits generative AI poorly, however, since the same system can be put to many uses, good or bad, making its risk level difficult to assess in advance. The AI Act could instead concentrate on regulating how generative AI is developed and used, rather than classifying the technology itself by its degree of risk.
The rapid advancement of AI has introduced new challenges for our digital futures, much as the decentralised nature of the Internet and the significant role played by online intermediaries did before it (Flew et al., 2019). The prospective threats of biased algorithms, discriminatory outcomes, and malicious abuse of AI technology make AI regulation essential. Just as debates over internet regulation have centred on striking a balance between promoting innovation and safeguarding moral and ethical use, AI faces the same obstacle. The task is to establish comprehensive, internationally coordinated rules that can minimise dangers, promote equitable treatment, and safeguard individuals in an AI-driven society.
Overall, the ethical issues surrounding generative AI require careful moderation and regulation. Given the ongoing exploration of these systems' potential and the uncertainties regarding transparency and ethics within AI companies, it is crucial to establish proper rules for them. The responsible development and use of generative AI depends on regulations that are open, just, and accountable.
Appel, G., Neelbauer, J., & Schweidel, D. A. (2023, April 7). Generative AI Has an Intellectual Property Problem. Harvard Business Review. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
Bunz, M., & Meikle, G. (2018). Speaking things. In The internet of things. Polity Press.
Buolamwini, J. (2019, February 7). Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It. Time. https://time.com/5520558/artificial-intelligence-racial-gender-bias/
Elias, M. (2023, February 9). A deepfake porn scandal has rocked the streaming community. Is Australian law on top of the issue? SBS News. https://www.sbs.com.au/news/the-feed/article/a-streamer-was-caught-looking-at-ai-generated-porn-of-female-streamers-the-story-just-scratches-the-surface/vfb2936ml
Euchner, J. (2023). Generative AI. Research-Technology Management, 66(3), 71–74. https://doi.org/10.1080/08956308.2023.2188861
Field, H. (2023, September 20). John Grisham, George R.R. Martin and other prominent authors sue OpenAI. CNBC. https://www.cnbc.com/2023/09/20/john-grisham-george-rr-martin-other-prominent-authors-sue-openai.html
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Helberger, N., & Diakopoulos, N. (2023). ChatGPT and the AI Act. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1682
Jones, N. (2023). How to stop AI deepfakes from sinking society — and science. Nature, 621(7980), 676–679. https://doi.org/10.1038/d41586-023-02990-y
Massanari, A. (2016). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Nicoletti, L., & Bass, D. (2023). Humans Are Biased. Generative AI Is Even Worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/
Samuelson, P. (2023). Generative AI meets copyright. Science, 381(6654), 158–161. https://doi.org/10.1126/science.adi0656
Ethicality of Generative AI in Digital Futures – The Call for Regulation © 2023 by Belita Haagsma is licensed under CC BY-NC-ND 4.0