AI Influencers and the Dehumanisation of Relatability and Representation.

POSTED BY: Ariel Roche, 7th October, 2023

"Virtual Reality" by Vision Invincible is marked with CC BY-NC-ND 2.0 DEED.

In 2023, artificial influencers have moved from concept to reality, with more of them finding success across social media worldwide. The development of this novelty technology is grim, as a lack of societal education has allowed the dehumanisation of representation and relatability to proceed without consumer backlash. The phenomenon of virtual influencers often receives praise for its technological achievement and dystopian stylised content. Consumers show growing positivity towards artificial influencers and react positively to the brands they promote (Sands et al., 2021, as cited in Sands, Ferraro, Demsar, & Chandler, 2022). Christopher Travers attests to this positive consumer perspective, labelling the use of artificial influencers an "ongoing merger of humanity and the internet" (Klein, 2020). Fortunately, this naive positivity is not shared by all: cultural experts call for educating the next generation before AI becomes indistinguishable from humans, and Meta warns of the potential for harm, listing "representation and cultural appropriation issues" as core transgressions the industry faces (Sands et al., 2022).

The Rise of Artificial Influencers

Artificial influencers, meta-influencers, virtual influencers: these labels all describe the replacement of the human's role in the social media influencer industry with controlled digital avatars. Thomas and Fowler (2021) formally define an artificial influencer as "a digitally created artificial human who is associated with Internet fame and uses software and algorithms to perform tasks like humans". This definition encompasses popular artificial influencers such as Noonoouri and Lil Miquela in the West and AYAYI in the East. The industry is wildly successful: Bradley (2020) states that artificial influencers bring in three times the engagement of a human influencer. The drastic success of these meta-personalities can be observed in AYAYI, an artificial influencer from China whose introduction saw over 40k followers overnight and a direct line to working with mainstream brands such as Guerlain, M.A.C Cosmetics, Make Up For Ever, and L'Oréal Paris.

This is far from an anomaly in the industry. Lil Miquela, from the United States, is the most recognizable in the West, amassing 2.7 million followers on Instagram. This large following came alongside partnerships with luxury brands such as Balenciaga, Prada, and Kenzo, an estimated revenue of over $10 million annually, and a place on TIME magazine's list of the most influential people on the Internet (Klein, 2020).

These non-human influencers have notable advantages over the human alternative, though a stark issue is that all of these advantages revolve around monetary benefit. One is that they do not age: artificial influencers can maintain a personality, look, and brand identity indefinitely. This ability to never change is not merely aesthetic; the capacity to maintain a brand and keep churning out the same style of content with no fear of human burnout is perhaps the most important long-term benefit, especially for 'relatable' influencers. This contrasts with real influencers, who often grow out of what made them popular and must carry their fan-base through that change to avoid burnout.

A core example of this human fault that artificial influencers lack can be seen in the rise of Emma Chamberlain, whom Rebecca Jennings (2020) labels the inventor of popular 'authentic and relatable' influencer content. This archetype of relatable content creation is one that many, including artificial influencers such as Lil Miquela, have attempted to mimic. Fortunately or unfortunately for Emma and the many influencers who followed suit, success came with a better quality of life, management teams to reduce stress, celebrity friends, and detachment from her once generally relatable self. The natural path from bed vlogs to red carpet interviews is a fate that only artificial influencers can avoid.

"Emma Chamberlain for Allure" by Vogue Taiwan is marked with CC BY 3.0 DEED.

Another monetarily focused advantage of artificial influencers is their entirely controlled nature, which brings less risk to brands wishing to associate with them, as they can more confidently avoid controversy. This is reinforced by evidence that artificial and human influencers have similarly positive branding effects (Thomas & Fowler, 2021).

So Why The Concern?

All of these commercial positives come with camouflaged caveats. A clear example is the supposed ability to avoid brand risk: the risk is not removed, but rather passed from the influencer onto the company. This is evident in Lil Miquela's strive for relatability and vulnerability, which crossed an ethical boundary with a vlog about an entirely fabricated sexual assault.

This saw a deserved reaction of outrage and criticism.

These criticisms highlight the key general problems with AI influencers: false representation and false relatability. This issue is inevitable when, as one critical post put it, 'some white guy' can create an artificial influencer of a marginalised group and have that artificial influencer pretend to be a victim of prejudice (CORNYASSBITCH, 2019). This undermines, and profits off of, real issues of abuse and prejudice. Klein (2020) echoes this concern about overlooked transgressions on diversity, stating that "CGI-Diversity is mistaken with real diversity". This is worsened by the control that creators of AI influencers have: why represent marginalised groups when you can make your own controllable representative?

The issues with AI influencers don't end there. Thomas and Fowler (2021) attribute the furthering of body dysmorphia on social media to AI influencers, who most commonly present an unattainable physicality with pixel-perfect features. A counterpoint comes from Sands et al. (2022), who use Noonoouri as an example of how artificial influencers can avoid unrealistic beauty standards in a way that would not succeed for human influencers. This example falls apart when observing other artificial influencers, or even scrolling through Noonoouri's own social pages: although the clearly animated figure does not attempt to mimic a human face, the unrealistic body standard remains.

Are Human Influencers Just As Bad?

Klein (2020) argues that AI is just a re-representation of digital culture and resembles an inevitable digital future. The merit of this argument is that false authenticity and relatability are just as prevalent in human influencers in the modern era. Yet while it is clear our human online identities don't represent us entirely, this does not morally justify removing the human altogether and replacing influencer opportunities. Another counterargument is that these profits aren't going to 'the-man', as Klein (2020) puts it, referring to Lil Miquela's progressive messaging of #blacklivesmatter in her bio. While the companies that create and manage artificial influencers may have different political perspectives, this does not remove the core issue of who controls that messaging and for what purpose. It is another example of hijacking a real-world movement about real-world issues for a fake personality whose end goal is maximising profit.

What Now?

Going forward, both the growth and the regulation of artificial influencers are inevitable as artificial intelligence technology improves, and Meta has begun looking to regulate the space and establish ethical boundaries (Sands et al., 2022). This matters because current artificial influencers still require a team, but as the technology improves, fewer and fewer people will need to operate within these companies. An eventual future sees indistinguishable artificial influencers controlled by corporations for the manipulation of the public. The consequences will only increase in severity: for example, laws about influencer ads currently require disclosure when a post is paid for by another company, but what if your influencer is an artificial sub-company of a conglomerate?

The ability to perceive artificial influencers as artificial is more of a future-sighted issue, as for now they often remain within an uncanny-valley aesthetic. This novelty period does not excuse the influencers', or rather the companies', appropriative nature and exploitation of real societal issues. The ethical boundary around these representation and appropriation issues should be held as firmly against creators of artificial influencers by consumers as it is against human influencers. An alarming lack of concern has been shown by consumers, who maintain a positive perspective on the replacement of humans by AI in influencer roles (Sands et al., 2021, as cited in Sands et al., 2022). Across social media, users hold the power to keep content creators to an acceptable moral standard. Ethical concern against the dehumanisation of representation and relatability must not be lost in the face of a marketing gimmick masked as technological advancement.

AI Influencers and the Dehumanisation of Relatability and Representation by Ariel Roche is marked with CC0 1.0 

Reference List

Blouin, J. (2022). Tiffany & Co. Partners with Virtual Influencer Ayayi. Retail in Asia.

Bradley, S. (2020). Even better than the real thing: Meet the virtual influencers taking over your feeds. The Drum.

CORNYASSBITCH. (2019, December 12). Lil Miquela sexual assault vlog [Tweet]. Twitter.

Dao Insights. (2021). China Launches First Meta-Human Virtual Influencer: Ayayi.

Jennings, R. (2020). How Emma Chamberlain’s coffee brand The Chamberlain Coffee is capitalizing on her internet fame. Vox.

Klein, M. (2020). The Problematic Fakery of Lil Miquela Explained: An Exploration of Virtual Influencers and Realness. Forbes.

Sands, S., Ferraro, C., Demsar, V., & Chandler, G. (2022). False idols: Unpacking the opportunities and challenges of falsity in the context of virtual influencers. Business Horizons, 65(6), 777-788.

Thomas, V. L., & Fowler, K. (2021). Close Encounters of the AI Kind: Use of AI Influencers As Brand Endorsers. Journal of Advertising, 50(1), 11–25.
