I am an Assistant Professor specializing in The Human Factor in New Technologies within the Department of Communication Science at the University of Amsterdam. I am affiliated with the interdisciplinary research priority area Humane AI and represent the Program Group Political Communication and Journalism on the ethical board of ASCoR (Amsterdam School of Communication Research).
After earning my PhD from the University of Münster in Germany, I worked as a postdoctoral researcher in the Department of Social Sciences at the University of Düsseldorf. My academic journey has been driven by a passion for exploring the intersection of artificial intelligence and societal well-being.
My research and teaching examine how AI can both strengthen and challenge democracy and social cohesion. Specifically, my work is organized around four key pillars:
Through my interdisciplinary approach, I aim to contribute to building AI systems that are ethically sound, socially beneficial, and aligned with democratic values.
Current research projects
Understanding the Opportunities and Risks of Synthetic Relationships
This project focuses on the emerging trend of synthetic relationships between humans and AI systems. We investigate the potential risks of these relationships, such as emotional dependence and the erosion of genuine human connection. We further propose policy measures to mitigate these risks, such as advocating for guardrails that protect users' well-being and promote the responsible development of AI agents.
Starke, C., Ventura, A., Bersch, C., Cha, M., de Vreese, C., Doebler, P., Dong, M., Krämer, N., Leib, M., Peter, J., Schäfer, L., Soraperra, I., Szczuka, J., Tuchtfeld, E., Wald, R., & Köbis, N. (2024). Risks and protective measures for synthetic relationships. Nature Human Behaviour, 8(10), 1834–1836. https://doi.org/10.1038/s41562-024-02005-4
The Impact of GenAI on Perceptions of Disinformation
This project examines the effects of a GenAI literacy intervention. We investigate whether providing information about AI-generated disinformation increases (1) people’s ability to discern true from false online news and (2) overall skepticism toward online news.
Cognitive Biases in Human Oversight of AI
This project examines the common policy approach of using human oversight to mitigate the risks of algorithmic bias in AI systems. We caution against assuming that human intervention is a simple solution to bias, as human judgement is also prone to systematic errors based on (1) limitations in human cognitive abilities, (2) the influence of personal preferences and biases, and (3) the potential for over- or under-reliance on AI.
Understanding Political Corruption in Digital Societies
The project investigates the potential of AI to combat corruption, examining both the opportunities and challenges associated with its implementation. We examine the effectiveness of AI-based anti-corruption tools (AI-ACT) implemented top-down (by governments) or bottom-up (by citizens and non-governmental organizations).
Forjan, J., Köbis, N., & Starke, C. (2024). Artificial Intelligence as a Weapon to Fight Corruption: Civil Society Actors on the Benefits and Risks of Existing Bottom-up Approaches. In A. Mattoni (Ed.), Digital Media and Anticorruption. Routledge.
Starke, C., Kieslich, K., Reichert, M., & Köbis, N. (2023). Algorithms against Corruption: A Conjoint Study on Designing Automated Twitter Posts to Encourage Collective Action. Preprint published on OSF.
Köbis, N., Starke, C., & Rahwan, I. (2022). The Promise and Perils of Using Artificial Intelligence to Fight Corruption. Nature Machine Intelligence, 4, 418–424. https://doi.org/10.1038/s42256-022-00489-1
Algorithmic Contestation on Social Media Platforms
Based on the EU’s Digital Services Act, this project investigates user contestation of personalised recommender systems on very large online platforms (VLOPs). We explore user preferences for non-personalised content curation, focusing on the choice to opt out of default personalised systems.
Starke, C., Metikoš, L., de Vreese, C. H., & Helberger, N. (2024). Contesting personalized recommender systems: a cross-country analysis of user preferences. Information, Communication & Society, 1–20. https://doi.org/10.1080/1369118X.2024.2363926
Research grants
05/2024 – 05/2029 Rescuing Democracy from Political Corruption in Digital Societies (RESPOND), Horizon Europe project funded by the European Commission
07/2023 – 07/2024 Understanding the Human in the Loop: Behavioral Insights to Develop Responsible Algorithms (HumAIne), Collaboration with behavioral economists, funded by the UvA IP theme “Responsible Digital Transformations”
03/2021 – 02/2024 Responsible Academic Performance Prediction: Factual and Perceived Fairness of Algorithmic Decision-Making (FAIR/HE), Collaboration with computer scientists, funded by the German Federal Ministry for Education & Research
08/2020 – 10/2022 Discourse Data 4 Policy: AI-based Understanding of Online Discourses for Evidence-based Policy-Making (DD4P), Collaboration with computer scientists, funded by Heinrich Heine University Düsseldorf
01/2020 – 01/2023 Corruption & Anti-Corruption in Empirical Research: Critical Reflections on Concepts, Data & Methods, Funded by the Constructive Advanced Thinking Initiative