- Kai-Ti Kao
- July 10, 2023
- Series 1
At the AWGSA conference in November last year, I gave a presentation about Artificial Intelligence (AI) imaginaries of race and gender within the context of news media. Within what seemed like mere hours of that presentation, the public conversation around AI had dramatically changed. ChatGPT had been released and was rapidly being applied to everything from developing recipes to writing academic essays and, of course, generating news media content. If anything, though, the release of ChatGPT and the accelerated conversation around AI have made it even more imperative that we ask questions about how we think about AI in society, what role news media play in shaping our perceptions of AI, and what these technologies and perceptions may mean for marginalised identities.
The racial and gender discrimination and bias embedded within AI have been well-documented, as has their potential to perpetuate the harms already experienced by marginalised communities. One way that we can better understand and start to alleviate such harm is by first recognising how our perceptions and imaginaries of AI can influence the way these technologies are designed, used, and reported on. I use sociotechnical imaginaries as a critical lens to explore how our social and cultural contexts help shape these technologies and their interactions with society.

“Sociotechnical imaginaries” refers to the ways that technologies are imagined in society as both desirable and undesirable (Jasanoff, 2015). These imaginaries are influenced by broader social and cultural contexts, which, in turn, influence the design, development, deployment, regulation and adoption of technologies. Sociotechnical imaginaries are revealing for what they show us about what kinds of futures are deemed desirable or undesirable and, importantly, who is making such judgements.
A common imaginary that circulates about AI is that these technologies are neutral, objective, and rational. Their ability to make decisions and act based on patterns detected in vast amounts of data is certainly impressive and has led to the implementation of AI across various sectors and industries. News media frame AI as intelligent and powerful because of their supposedly inherent objectivity and neutrality, but problematically ignore the ways that prevailing biases sit deep within the social structures that create and make these technologies available.

For example, Cave and Dihal (2020) describe how assumptions of rationality and intelligence underpin the “Whiteness of AI” that manifests through the personification and aesthetic of AI technologies. Humanoid robots are either white in colour or presented as White people; virtual assistants are coded White in their speech and intonation; and AI in visual media is mostly represented as, or alongside, White people. Taken together, these examples form a “White AI” imaginary which perpetuates colonial frames that establish Whiteness as default and simultaneously elevate associated attributes of scientific rationality, objectivity, and neutrality as superior.
When news media reflect this imaginary by also framing AI as superior, they leave little room to critically question the function, capability, and appropriateness of these technologies. The White AI imaginary also erases the relationships that people of colour have with AI, whether as the victims of embedded discrimination or as the exploited labour that such technologies are built upon. We can further start to recognise how the circulation of popular dystopian imaginaries of humans subjugated to sentient AI reflects a White colonial experience that conveniently ignores the ways that people of colour have similarly suffered under colonialism.

We also see AI being talked about in the news media in magical and enchanted terms. This “Enchanted AI” imaginary frames AI as technologies capable of fantastic outcomes, but unable to be explained (Campolo & Crawford, 2020). We saw this AI imaginary at play in the many discussions about Loab (a confronting and often grotesque AI-generated image of a woman) circulating last year. Media coverage of Loab as a demon haunting the internet offered a dramatic, but no less problematic, counterpoint to the sanitised and subservient femininity we see in digital AI assistants and humanoid robots, but also reflected a lack of critical discussion around AI. An ABC article even went so far as to include an “interview” with Loab that was conducted with GPT-3, a natural language generator that is a precursor to ChatGPT and an entirely different type of AI system to the image generator that created Loab. Such media coverage highlights how our fascination with the “magical” capabilities of AI has detracted from critical questions about how these technologies work, what types of data they are trained on, and what kind of biases and gaps are present within them.
References
Campolo, A., & Crawford, K. (2020). Enchanted Determinism: Power without Responsibility in Artificial Intelligence. Engaging Science, Technology, and Society, 6, 1–19. https://doi.org/10.17351/ests2020.277
Cave, S., & Dihal, K. (2020). The Whiteness of AI. Philosophy & Technology, 33(4), 685–703. https://doi.org/10.1007/s13347-020-00415-6
Jasanoff, S. (2015). Future Imperfect: Science, Technology, and the Imaginations of Modernity. In S. Jasanoff & S.-H. Kim (Eds.), Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power (pp. 1–33). University of Chicago Press.