What did you research with your tutorial group?
‘We researched how ChatGPT portrays itself from the present day up until 100 years from now. We wanted to see which assumptions would emerge in these self-representations, and in particular which elements would be preserved and reinforced. In addition, we got ChatGPT itself to explain how it arrived at these answers. We were curious whether these answers would change as the conversation within a chat progressed. We also hoped to gain insight into ChatGPT’s decision-making process, but ultimately all the answers it provides are performative.’
How would you describe critical AI studies and why is that important?
‘Briefly stated, in critical AI studies you look beyond the technical aspects of AI-driven systems and also examine the cultural, social, political, economic and ecological implications of those systems. Critical AI studies often focuses on the normativity and stereotyping that are visible in AI-generated results, but the energy consumption of AI and the leaking of sensitive data are also examined. After all, AI is not neutral, even if it is often presented and viewed as such.’
Why did you want to research this?
‘Much of the current research into AI bias deals with representations of people and social inequality. We were curious whether AI also exhibits bias about technology itself, precisely because technology is often presented as neutral.’
What was your approach?
‘We prompted ChatGPT to imagine itself at 25-year intervals, both textually and visually. In addition, we asked via the chat function how ChatGPT arrived at those answers. We copied the results to a spreadsheet and then processed them with various digital tools. In this way, we were able to see patterns in our data more easily.’
What was the most surprising thing about that?
‘Our research shows that ChatGPT paints a rather utopian picture of itself, in which humans and AI live together in harmony and nature plays a major role. In doing so, it ignores the impact that AI has on the environment, for example. What I find most bizarre is that ChatGPT pictures itself in the future as a kind of omnipresent being, without physical form. This picture does, however, resemble the current Zero UI trend within the tech industry: technology is becoming increasingly integrated into our daily lives, partly because it is becoming more and more invisible.’
What is the relevance of this?
‘Demonstrating bias in AI chatbots, such as ChatGPT, is particularly important in order to make people aware that they can’t blindly trust AI. Given that training data largely consists of internet content, AI chatbots may adopt, or even reinforce, existing normativity and biases. The field of critical AI studies in general is of major importance because AI-driven systems are becoming increasingly prominent, and exerting more and more influence on our society. For example, facial recognition and risk assessment software are used internationally by police and judicial authorities. Bias or stereotyping in these systems may have major consequences at both individual and collective levels.’
Which skills have you learned?
‘Using a data toolkit was new to me. I found it extremely enjoyable, since I’m personally more interested in quantitative or mixed-methods research than purely qualitative research. I also practised using a spreadsheet to process my datasets. That took some getting used to, but once I got the hang of it, it turned out to be incredibly efficient!’
Does this course prepare you for your future career?
‘We're not there yet, but this course has confirmed for me that I want to focus more on quantitative research within media studies. I think these first experiences with using digital tools to process larger datasets form a good basis for that ambition.’