Algorithms play an increasingly important role in our lives. How should we deal with this? According to Tobias Blanke (University Professor of Humanities and AI), humanities researchers are indispensable for exploring the ethical and social side of AI. ‘We need knowledge of cultural concepts in order to be able to correct errors in algorithms.’

‘These are exciting times we are living in: for the first time in human history, we are confronted with a different type of intelligence. It does not take the shapes you see in science fiction films – aliens, the Terminator, robots taking over – but it is already out there. Every day, we deal with algorithms that influence us and make decisions for us. They organise our timeline on social media and provide suggestions on Netflix, but are also used by the government to detect crime.

The interesting, and perhaps worrying, thing is that we often do not know exactly how algorithms work. Machine learning allows them to create their own rules: you feed them large amounts of data, and they teach themselves rules for how to look at that data. How they reason and why they do what they do is often unclear. How should we interact with this new type of intelligence in our midst? This is one of the main challenges of our time, and one in which humanities scholars will play an important role.’
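To make that point concrete with a minimal, hypothetical sketch (not taken from the interview itself): the Python snippet below trains a small decision tree with scikit-learn. No programmer writes the rules; the model derives them entirely from the data it is given, and for larger models even this kind of readable summary is unavailable.

```python
# A minimal sketch of "learning rules from data" with scikit-learn
# (hypothetical example; the dataset is a standard toy dataset).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# No rules are written by hand: the model derives them from the data.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# For a small tree the learned rules can still be printed and read.
# For large models (e.g. deep networks) no such summary exists,
# which is why "how they reason" is often unclear.
print(export_text(model, feature_names=load_iris().feature_names))
```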

Algorithms are often wrong

‘Algorithms make mistakes, and we humans must be able to correct them. Do you know the story of Stanislav Petrov? He was a Soviet lieutenant colonel who may have prevented a nuclear war in 1983 by overruling a decision made by a computer system. One day, the Soviet early-warning system reported that the US had fired five missiles at the Soviet Union. Petrov decided that it had to be a false alarm – rightly so, as it turned out – and ignored it. If the algorithm had been allowed to decide for itself, the Soviet Union might well have launched a nuclear counterattack, with devastating consequences.

This is an extreme example, but there are many others of algorithms making mistakes or acting in ways we consider unethical. Consider the algorithm that the Belastingdienst (the Dutch Tax and Customs Administration) used to detect childcare benefit fraud: it turned out to discriminate against people with dual nationality. Another example that recently made the news: a Twitter tool that cropped portrait photos turned out not to recognise Black faces. These kinds of errors do not so much stem from the algorithms themselves as from the data the algorithms are fed. If you give them a dataset with mainly white faces, they will not learn to recognise faces with a different skin colour properly.’
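As a hedged illustration of that last point, the sketch below uses purely synthetic data (it does not reproduce the Twitter or Belastingdienst systems) to show how a classifier trained on a skewed dataset performs much worse on the group it rarely saw during training.

```python
# Sketch: how a skewed training set degrades accuracy for an
# underrepresented group (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy two-class data; `shift` controls where class 1 sits."""
    X = np.vstack([rng.normal(0, 1, (n, 2)),
                   rng.normal(shift, 1, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A dominates the training set; group B, whose pattern
# differs, is barely present.
Xa, ya = make_group(1000, shift=2.0)
Xb, yb = make_group(20, shift=-2.0)

model = LogisticRegression()
model.fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group separately.
Xa_test, ya_test = make_group(500, shift=2.0)
Xb_test, yb_test = make_group(500, shift=-2.0)
print("accuracy, well-represented group:", model.score(Xa_test, ya_test))
print("accuracy, underrepresented group:", model.score(Xb_test, yb_test))
```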


Insight into culture is indispensable

‘Algorithms are trained on datasets drawn from society, from human culture, and that is one reason why the humanities are needed. To find misjudgments in an algorithm, you need to understand the kind of data we feed it, and where the problems in that data lie. Knowledge of cultural concepts is indispensable here. Say you want to make sure the algorithms used by the police to detect crime are free from colonial baggage. To do so, you need to know exactly what colonialism entails and how it is reflected in the datasets used to train these algorithms. That is why algorithms cannot be corrected by a programmer alone: historians and cultural scholars, for example, are very much needed.
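One simple, purely illustrative starting point for such an audit: before training anything, tabulate how the dataset represents the categories that historians or cultural scholars flag as sensitive. The column names and values below are invented for the example.

```python
# Hypothetical sketch of a pre-training data audit: tabulate how a
# training set represents categories that domain experts flag as
# sensitive (column names and values are invented for illustration).
import pandas as pd

records = pd.DataFrame({
    "neighbourhood": ["A", "A", "A", "B", "B", "C"],
    "flagged":       [1,    0,   1,   1,   1,   0 ],
})

# How often is each neighbourhood flagged in the historical data?
# A historian or cultural scholar can judge whether these skews
# reflect real differences or, say, a legacy of biased policing.
print(records.groupby("neighbourhood")["flagged"].agg(["count", "mean"]))
```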

Research into the ethical side of AI is vital. Generally speaking, a computer scientist is primarily interested in how algorithms can perform better. There are all kinds of criteria for this in computer science, but why an algorithm does what it does in a certain context is often less important. That is exactly what researchers in the humanities are trying to find out. We try to understand how algorithms work, and compare this with our own ideas. We want to understand the context of the data that we train algorithms with.’

Source criticism

‘Another reason why the humanities matter today has to do with another big problem: fake news. Humanities scholars are uniquely capable of critically assessing sources. We have a long tradition of this and have developed sophisticated methods for establishing the authenticity of sources. Why are journalists, historians and archival scientists not consulted more often in the discussion about fake news?

I think this is because in the internet industry there is a widespread belief that everything should be done by algorithms. If you ask Mark Zuckerberg how to solve the problem of fake news, he would say: let us develop a nice algorithm for that. But I believe we rely too much on algorithms to solve this problem for us. Instead, everyone should be much better trained in assessing sources and consuming information.’

The humanities benefit from AI

‘Of course, the humanities can also reap the benefits of AI in many ways. Just think of databases and search engines. Anyone who has been conducting humanities research for as long as I have – about 15 years – will remember the days when you actually had to look for physical sources and could not yet find articles digitally. Today we have great search engines that do all of this work for us.

But there are several challenges when it comes to using AI in humanities research. First of all, the data which humanities scholars are interested in is generally not available in nicely accessible digital formats. For example, I have been working on a project in which we are gathering sources on the Holocaust, such as letters and documents from governments, from all around the world. These documents are stored in traditional archives and were of course never intended for computer consumption. It takes a lot of time to make that kind of data into something that computers can handle – a challenge which is often overlooked.
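To give a flavour of what making such data ‘computable’ can involve, here is a small hypothetical sketch of one early step: normalising raw OCR output from scanned documents before any analysis can begin. The input string is an invented example of typical OCR artefacts.

```python
# Hypothetical sketch of one small step in making archival sources
# computable: cleaning raw OCR output before analysis.
import re
import unicodedata

def normalise_ocr(text: str) -> str:
    """Basic clean-up of OCR'd archival text (illustrative only)."""
    text = unicodedata.normalize("NFC", text)  # unify accented characters
    text = text.replace("-\n", "")             # rejoin hyphenated line breaks
    text = re.sub(r"[ \t]+", " ", text)        # collapse stray spacing
    return text.strip()

raw = "Ges-\nchichte  des   Lagers"   # typical OCR artefacts
print(normalise_ocr(raw))             # -> "Geschichte des Lagers"
```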

The second major challenge is that AI generally looks for big patterns, while humanities researchers tend to be more interested in the minor details. In my view, the holy grail for the humanities and AI is to develop a type of AI that is suited to that fine-grained work. I think we should not adapt the humanities to AI, as policymakers sometimes suggest, but adapt AI to the humanities.’


Everyone should learn to code

‘At the same time, I think that humanities scholars have a lot to learn in the field of AI. It would be a good idea for everyone to learn a bit of programming, and I think time should be set aside for this in humanities curricula, because in order to correct algorithms, you need to know something about how they work.

During our lives, we learn how to interact with other humans and whom we can and cannot trust. Now we have to learn the same things when it comes to interacting with that other type of intelligence, AI. I forget who said it, but I thought it was a great statement: “program or be programmed”. That is exactly how it works. If you do not understand how algorithms work, how can you relate to them?’

Tobias Blanke was appointed University Professor of the Humanities and AI in August 2019. Previously, he was Professor in Social and Cultural Informatics at the Department of Digital Humanities at King's College London and one of the directors of the European Digital Research Infrastructure for the Arts and Humanities (DARIAH). Blanke is one of four University Professors at the UvA in the field of AI. By establishing these four chairs, the UvA aims to give an extra boost to its AI research and teaching.