For the Zoom link, please send an email to email@example.com.
Philosophical research in AI has hitherto focused largely on the ethics of AI. In a recent paper, “Toward an Ethics of AI Belief”, co-authors Vincent Valton, a machine learning scientist, and Winnie Ma, an ethicist of belief, suggest that we need to pursue a novel area of philosophical research in AI – the epistemology of AI, and in particular an ethics of belief for AI. They identify topics in extant work on the ethics of (human) belief that can be applied to an ethics of AI belief, including doxastic wronging by AI, morally owed beliefs, pragmatic and moral encroachment on AI beliefs, and moral responsibility for AI beliefs. They also discuss two important, relatively nascent areas of philosophical research that have not yet tended to be recognized as research in the ethics of AI belief, but that fall within this field in virtue of investigating various moral and practical dimensions of belief: the epistemic and ethical decolonization of AI, and epistemic injustice in AI. (A brief research summary of their paper is featured by the Montreal AI Ethics Institute.)
In this PEPTalk, Winnie will discuss the possibility that individuals profiled by algorithms such as the COMPAS algorithm may be doxastically wronged in virtue of these profiling beliefs. Such doxastic wrongs could constitute a kind of moral wrong, and possibly a novel kind of belief-based discrimination, that has hitherto gone largely unacknowledged but that, the authors argue, should be further investigated and regulated.
Winnie Ma (she/they) is Visiting Assistant Professor in Philosophy at King’s College London (KCL) and Research Associate at the Sowerby Philosophy and Medicine Project, also at KCL. She specializes in epistemology, ethics, their intersection, and the philosophy of AI. Winnie’s research currently concerns the pragmatist ethics of belief in the profiling and stereotyping of persons, including patients in medical contexts, by both artificial and human agents. She is also interested in areas of intersection between the philosophy of AI and the legal regulation of AI, including algorithmic discrimination.
Annemijn Kwikkers will be moderating the PEPTalk. Annemijn is a PhD student in philosophy at the University of Amsterdam. Her project, part of the AlgoSoc consortium, examines how democratic public values are changing in an algorithmic society. She focuses specifically on the deployment of algorithmic decision-making systems in both the creation and dissemination of news and information, and on how this may influence the way people are informed about democratically relevant matters.