
The AI community has been focusing on developing fixes for harmful bias and discrimination through so-called ‘debiasing algorithms’ that either try to fix data for known or expected biases, or constrain the outcomes of a given predictive model to produce ‘fair’ outcomes. We argue that creating more AI solutions to fix harmful biases in data is not the only solution we should be pursuing. A fundamental question we face as researchers and practitioners is not how to use new algorithms to fix harmful bias in AI, but rather whether we should be designing and deploying such potentially biased systems in the first place.

Event details of Sennay Ghebreab and Hinda Haned: Understanding and mitigating bias in automated AI systems
Date 26 February 2021
Time 14:00 - 15:00

This event will be held online. You will receive a Zoom link after registering.

This is the fourth webinar in the series ‘Humane Conversations’, organised by the UvA’s Research Priority Area Human(e) AI.

About the webinar series ‘Humane Conversations’

The webinar series Humane Conversations connects researchers from various disciplines to discuss AI research topics, with a particular focus on human-centred AI. The first speaker was Prof. Max Welling, from the UvA Faculty of Science, who addressed the topic What’s happening in AI research and what does it mean for Humane AI?

About the Research Priority Area Human(e) AI

The University of Amsterdam has designated Human(e) AI as one of its research priority areas. The aim is to bring synergy to ongoing work on AI and stimulate new research at the UvA on the societal consequences of the rapid development of artificial intelligence and automated decision-making.