8 April 2021
When should an artificial agent intervene to resolve a dilemma? And when should it alert its user or a relevant authority instead?
Given the growing number of safety-critical applications of autonomous systems in areas such as medicine, engineering, surveillance, transportation and media, it is increasingly urgent to develop rigorous tools for determining when it is responsible for an agent to act.
The aim of this project is to develop logics for reasoning about the conditions under which an autonomous agent should take the responsibility to act.
Dr Aybüke Özgün is an assistant professor at the UvA’s Institute for Logic, Language and Computation. The core of her research lies in formal epistemology, in particular in dynamic epistemic logic with a special focus on evidence-based knowledge and belief modelled on spatial/topological structures. Some of her other interests include logic and topology, epistemic learning theory, and belief revision.
Dr Ilaria Canavotto is a postdoctoral researcher at the UvA’s Institute for Logic, Language and Computation. Her current research mainly focuses on temporal logics of agency and deontic logics, in connection with the notions of causality, responsibility, and normative systems.
Dr Alexandru Baltag is an associate professor at the UvA’s Institute for Logic, Language and Computation. He is known mostly for his work on logics for multi-agent information flow (in particular dynamic epistemic logic) and their applications to communication, game theory, epistemology, social networks, belief dynamics, etc. His other interests include non-well-founded set theory, coalgebraic logic, formal learning theory, topological modal logic, and the logical foundations of quantum mechanics and quantum computation.
This project addresses transparency concerns that arise from AI-driven communication in the field of trademarks and brands.
Brand messages are often generated with minimal or no human intervention and distributed on the basis of consumer behavioural data via online platforms. With regard to this practice, proposed new EU legislation seeks to empower consumers by ensuring access to information on the selection criteria (parameter transparency: ‘Why me?’) and the source of the communication (source transparency: ‘Who sent this?’). Before adopting and potentially extending these legal rules to a broader spectrum of digital, virtual and augmented reality media environments, it is pivotal to understand whether these transparency disclosures would indeed be effective.
Communication science research shows that transparency disclosures may have limited or even conflicting effects. Examining consumer responses to transparency disclosures in multiple media environments, the project seeks to clarify whether transparency information reaches consumers, leads to desirable effects on trust, and encourages consumers to seek additional information on alternative offers. In answering these questions, the project aims to impact the policy debate surrounding the proposed new transparency legislation at EU level. It will also provide a compass for the establishment of appropriate responsible AI legal standards in the field of brand-based communication.
Prof. Martin Senftleben is a professor of Intellectual Property Law at the UvA’s Institute for Information Law. His research focuses on platform and AI regulation in the EU.
Prof. Guda van Noort is a professor of Persuasion & New Media Technologies at the UvA’s Amsterdam School of Communication Research. Her research focuses on consumer responses to emergent media technologies and their content.
Prof. Edith Smit is a professor of Persuasive Communication at the UvA’s Amsterdam School of Communication Research. Her research focuses on persuasion and empowerment in the domain of marketing and media.