Event details of PEPTalk #11: Connecting the ethics and epistemology of AI
10 February 2022

A typical line of argument in ethics of AI is that the need for fair and just AI is related to the possibility of understanding the AI system itself. A fair and just AI, then, requires turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected.

In this talk Federica Russo will focus on how to remedy this problem and introduce an epistemology for glass box AI that can explicitly incorporate values and other normative considerations. The proposed framework draws on existing work in argumentation theory on how to model the handling, eliciting, and interrogation of the authority and trustworthiness of expert opinion, as well as on work on inductive risk in the philosophy of science, to think through how social consequences that harm intersectionally vulnerable populations can be modelled in the context of AI design and implementation.

This talk is based on joint work with Eric Schliesser and Jean Wagemans as part of their RPA-Human(e) AI project “Towards an epistemological and ethical ‘explainable AI’”.


Federica Russo is a philosopher of science, technology, and information at the University of Amsterdam and at the Institute for Logic, Language, and Computation, and a member of the Management Team of the Institute for Advanced Study at the UvA. Her current research addresses epistemological, methodological, and normative aspects as they arise in the biomedical and social sciences, and in highly technologised scientific contexts. She has (co-)authored several monographs, edited volumes, and special issues, as well as articles in international journals, spanning themes such as causation and causal modelling, explanation, evidence, and technology. She is currently completing a monograph titled Techno-scientific practices. An informational approach, under contract with Rowman & Littlefield International. For more information, follow her @federicarusso.

Aybüke Özgün is an Assistant Professor of Responsible and Ethical AI at the University of Amsterdam and the Institute for Logic, Language, and Computation. She completed a joint PhD degree in Logic and Computer Science at the University of Amsterdam and the University of Lorraine.

[Save the date! March 3, 13:00: Beyond debiasing of AI, featuring Seda Gürses and Agathe Balayn. April 7, 12:00: Digital societies and the ideal of transparency, featuring Lea Watzinger.]