The ambitions of the theme-based collaboration programme have been translated into concrete research projects. In summer 2022, budgets were allocated for start-up projects, which have since been completed successfully. In April 2023, budgets were allocated for so-called midsize and seed grant projects.
Seed grant projects bring together UvA scholars from different faculties to work on small-scale, innovative, interfaculty research projects or grant proposal preparations.
Midsize projects build on existing research collaborations between UvA scholars from different faculties. They also involve partnering with one or more non-academic parties.
Below is an overview of projects for the theme Responsible Digital Transformations.
Exploring legal and technical mechanisms enabling low-risk business-to-government data sharing while complying with stakeholder rights and interests.
Companies collecting large amounts of data appear reluctant to share relevant data, and, as a result, public institutions may lack information necessary to fulfil their public tasks as mandated by law. In response, the EU has placed digital sovereignty high on its policy agenda with the Data Act and the Data Governance Act.
In this interdisciplinary research project, the Institute for Information Law (IViR) and the Informatics Institute (IvI) will jointly investigate the legal and technical dimensions of business-to-government data sharing (‘B2G’) through a data intermediary. A data intermediary, a trusted third party with a bespoke data governance regime, can potentially help overcome disincentives and strengthen confidence in B2G data sharing.
Together with societal partners, the researchers aim to:
The municipality of Amsterdam invests in this project, both in cash and in kind, by helping to identify relevant policy domains and practical use cases for testing and applying B2G data sharing through data intermediation. Together with IvI, the AMdEX fieldlab supports the technical design components of the project, while other civil society partners help to engage the citizen perspective on data intermediation in a meaningful way.
In 2021, Sign Language of the Netherlands (NGT) was recognized by law as one of the country’s official languages. To increase the societal participation of deaf citizens, the law prescribes that the use of NGT in Dutch society must be expanded. The present project aims to contribute to this end. It involves a collaboration between the UvA and Prowise, a company that provides a digital language learning tool called Taalzee (“Language Sea”), which is used by more than 2,000 primary schools in the Netherlands.
We will create a learning environment for NGT, Gebarenstrand (“Sign Beach”), which will be integrated into Taalzee and will thereby become available to more than 300,000 children in the Netherlands. This will not only increase familiarity with NGT in Dutch society, but also provide a unique window into how children learn a visual language like NGT.
So far, language acquisition research has focused primarily on spoken languages. Our digital learning environment will yield extensive acquisition data for NGT. This will allow us to address fundamental questions in sign language linguistics and digital learning, which in turn will enable refinements of both online and offline sign language learning curricula. Thus, the project will have substantial scientific and societal impact.
This project aims to understand the effects of the digitalisation of illicit urban economies, and to mitigate certain risks associated with these processes.
The everyday activities associated with illicit urban economies, and illicit drug markets in particular, have long had a strong territorial basis, with supply and consumption concentrated in specific, often marginalized areas. As illicit transactions become digitalised – with buyers purchasing drugs via phone apps rather than from street dealers – drug sales increasingly resemble other forms of ultrafast delivery services, with profound consequences for the risks associated with these transactions.
This project seeks to identify mechanisms through which the digitalisation of drug transactions exacerbates or mitigates risks for different populations, including addiction and exposure to criminal or state violence. It does so through a comparative approach, studying these mechanisms in Amsterdam and Rio de Janeiro, cities with established local drug markets but contrasting approaches to drug policy and policing.
In close collaboration with harm reduction NGO Jellinek, the project will experiment with interventions aimed at “responsible use” by drug consumers, developing a digital literacy campaign aimed at sensitising users to the risks of app-based purchases.
In this project, we will explore the use of privacy-enhanced data to create a decision-making framework that weighs energy consumption and efficiency against privacy and information loss.
Synthetic data is privacy-enhanced data that resembles the original dataset but contains none of its content verbatim. A key question is whether, from an energy-consumption and information-loss point of view, it is better to use synthetic data or k-anonymized data. This seed grant project will result in a recommender system for such decisions, comparable to how Google Maps suggests alternative green routes.
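The core idea of such a recommender can be illustrated with a toy sketch. This is not the project's actual system: the method names, metrics, and numbers below are hypothetical, and we simply assume that each anonymization option has been profiled for normalised energy consumption and information loss, which are then combined into a single weighted score.

```python
# Illustrative sketch (hypothetical, not the project's system): choose between
# anonymization methods by weighing estimated energy cost against information loss.

def recommend(options, energy_weight=0.5):
    """Return the option name with the lowest weighted score.

    options: dict mapping a method name to a dict with normalised
             'energy' and 'info_loss' estimates (both in 0..1).
    energy_weight: how strongly the user prioritises energy savings
                   over data utility (the rest goes to info_loss).
    """
    loss_weight = 1.0 - energy_weight

    def score(metrics):
        return energy_weight * metrics["energy"] + loss_weight * metrics["info_loss"]

    return min(options, key=lambda name: score(options[name]))

# Hypothetical profiling results for one dataset:
profiles = {
    "synthetic": {"energy": 0.8, "info_loss": 0.2},    # costly to generate, high utility
    "k-anonymized": {"energy": 0.3, "info_loss": 0.6},  # cheap, but coarser data
}

print(recommend(profiles, energy_weight=0.7))  # energy-conscious user -> k-anonymized
print(recommend(profiles, energy_weight=0.2))  # utility-conscious user -> synthetic
```

The sketch shows the trade-off structure only; the actual project would derive these metrics from measured energy profiles and formal privacy and utility analyses rather than fixed numbers.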
This project aligns with the Responsible Digital Transformations theme, and intersects with the Sustainable Prosperity theme: we explore the economic contextualization of our technical results, to understand how society perceives the trade-off between privacy and climate impact.
We innovate by combining design space exploration methods with user narratives. In the age of a data economy, regulations are needed that guarantee data privacy and security while also supporting a sustainable infrastructure for data, data use, and data storage. To this end, we foster the dialogue between business, governance, and software engineering that is needed to jointly find solutions.
The uptake of new medical treatments in clinical practice is sub-optimal, resource-heavy, and time-consuming. In-silico (computational) modelling can be valuable for refining, reducing, and even partially replacing preclinical animal testing and the subsequent clinical trials needed to develop new and personalized treatments.
In a large-scale EU-funded consortium, we have developed and validated an in-silico trial platform for evaluating treatments for patients with acute ischemic stroke, one of the most debilitating diseases in the world. This platform combines data- and knowledge-driven computational models of treatment with “synthetic patients” developed using a large stroke database.
We believe that there is great value in the generalisation of synthetic patient generation tools to become relevant for wider populations and different diseases. With our collaboration between Amsterdam UMC, Informatics Institute, and the Faculty of Humanities, we aim to establish a dialogue between technical and social research to discuss the opportunities of the use of synthetic data and solutions to expand its applicability using artificial intelligence.
When considering explanations for the behavior of an AI model, for example explanations of the kind “what did the model consider important when producing this output”, confirmation bias can lead us to believe a machine is trustworthy because a few explanations comply with our beliefs. To prevent these situations, the HUE project will attempt to mitigate confirmation bias by investigating how explanations connect to human-understandable concepts. If successful, our method would allow us to ‘x-ray’ AI and verify whether it complies with our requirements, or rather exhibits harmful behaviors. Building on an existing conceptual framework, this work will connect different disciplines by testing and extending the framework from Medical AI to Natural Language Processing and Computer Vision. Its application to medical use cases is already of interest to industrial partners.
This interdisciplinary project explores how human-in-the-loop (HITL) interventions can foster responsible designs of artificial intelligence (AI) systems. EU regulation requires private and public institutions to implement HITL frameworks in AI decision-making. Still, critics argue that HITL interventions are often set up to fail and used as a fig leaf to legitimize predefined decision outcomes. To address this issue, established researchers from the fields of humane AI and behavioral ethics will team up to conduct controlled experiments using the machine behavior approach. The core objective is to develop an AI sandbox model that can provide empirical insights into designing and implementing HITL for effective and responsible AI decision-making.
This theme started with the moonshot project 'Towards an AI4Society Sandbox'. Sandboxing is a way to test the desired and undesired effects of new software by running it in a safe simulation of the production or user environment. This method will be extended to develop future scenarios and to discursively probe technological and regulatory solutions for their legal, societal and ethical implications. The project investigates how such a test environment should be designed to ensure that AI technologies benefit society as widely as possible.
The researchers form an interdisciplinary team that will work on test cases in the fields of digital infrastructure, AI regulation and the impact of AI-driven applications. Together with the steering group, they are taking the first step in designing a collaborative, interfaculty AI4Society Sandbox platform. They are also setting up a network and community of stakeholders, consisting of researchers, students, citizens, civil servants and policymakers.
Debraj Roy (FNWI) investigates a series of mechanisms to understand the long-term impact of digital transformation on inequality, polarisation and exclusion in our society. He develops a computational framework that can provide guidelines for collectively beneficial algorithms.
Tanja Ahlin (FMG) investigates the design and use of social robots for older adults. Using ethnographic methods, the project explores how social robots gather information through interacting with their users and what happens with the acquired data. At the core of this case study is the question of AI regulation: should AI systems, especially those targeting people with various levels of cognitive (dis)ability such as dementia, be regulated, and if so, how?
Rocco Bellanova (FGw) focuses on 'The European Union's regulation of AI in the field of public security'. High-tech solutions in the domains of counterterrorism, surveillance and profiling have a huge impact on public security and our societies. Defining sound accountability principles for AI in the field of law enforcement is crucial, and the EU plays a key role in advancing a regulatory framework. This project therefore focuses on the new mandate of the European Agency for Police Cooperation (Europol), as well as its initiatives with regard to technology innovation and its governance.
Joanna Strycharz (FMG) focuses on the personalization algorithms used in online communication platforms. By studying the black box of algorithmic communication, she contributes to the AI4Society Sandbox, using sandboxing to assess the impact of personalization algorithms on individuals and society.
Visit the Responsible Digital Transformations website to learn more about this theme and its community.
The Steering Committee for the theme ‘Responsible digital transformations’ consists of the following members: