Artificial Intelligence (AI) is advancing at a rapid pace. Its countless applications are already an integral part of our daily experiences, (maybe not so) silently structuring our lives and society in crucial domains such as education, healthcare, surveillance, transportation, journalism, and law. As we place more and more safety-critical and consequential social decisions in the hands of artificial systems, we must ensure that these systems behave ethically and that their decisions and actions are taken responsibly. It is therefore increasingly urgent to understand the conceptual underpinnings of the notion of responsibility in AI, and to develop rigorous tools for designing artificial systems that behave responsibly and ethically. But what does it even mean for an artificial system to behave responsibly? Can and should we design machines with moral reasoning capacities? In ensuring responsible and ethical AI, should scalable and efficient, yet opaque, sub-symbolic approaches be replaced by transparent and explainable, yet brittle, symbolic, logic-based techniques?
On 6 May 2021, we will discuss these questions in our PEPT panel with Jan Broersen and Marija Slavkovik, emphasizing the role that the social sciences, humanities, and AI research can play in an interdisciplinary approach to the issues outlined above. The conversation will be moderated by Aybüke Özgün.
Jan Broersen is a Professor of Logical Methods in Artificial Intelligence at the Department of Philosophy, Utrecht University. He is the principal investigator of the NWO-funded research project Empowering Human Intentions through Artificial Intelligence. His main interests are responsible AI, knowledge representation and reasoning, and logical theories of agency.
Marija Slavkovik is an Associate Professor in Artificial Intelligence at the Department of Information Science and Media Studies at the University of Bergen. She is a vice-chair of the Norwegian AI Association and co-leader of the project User Modeling, Personalization and Engagement at the Research Centre for Responsible Media Technology and Innovation. Her research interests include machine ethics, logical reasoning in social networks, computational social choice, and judgement aggregation.
Aybüke Özgün is an Assistant Professor of Responsible and Ethical AI at the University of Amsterdam and the Institute for Logic, Language, and Computation. She completed a joint PhD degree in Logic and Computer Science at the University of Amsterdam and the University of Lorraine.
To participate, please register via email@example.com; you will then receive the Zoom link by email.