Biases in data can be both explicit and implicit. A simple two-word phrase can carry strong connotations, and entire research fields, such as post-colonial studies, are devoted to them. However, these sometimes subtle (and sometimes not so subtle) differences in voice are, as yet, rarely captured in the results of automatic analyses or in datasets created using automated methods.
Date: 28 April 2021
Current AI technologies and data representations often reflect the popular or majority vote. This is an inherent artefact of the frequentist bias of many statistical analysis methods, resulting in simplified representations of the world in which diverse perspectives are underrepresented.
In this lecture, the sixth webinar in the series ‘Humane Conversations’ organised by the UvA’s Research Priority Area Human(e) AI, Marieke van Erp will discuss how the Cultural AI Lab is working to mitigate this.
The event will be held online. You'll receive a Zoom link after registering.
The webinar series Humane Conversations connects researchers from various disciplines to discuss AI research topics, with a particular focus on human-centred AI. The first speaker was Prof. Max Welling, from the UvA Faculty of Science, who addressed the topic What’s happening in AI research and what does it mean for Humane AI?
The University of Amsterdam has designated Human(e) AI as one of its research priority areas. The aim is to bring synergy to ongoing work on AI and stimulate new research at the UvA on the societal consequences of the rapid development of artificial intelligence and automated decision-making.