3 September 2020
Claes is a distinguished research professor of Data, Democracy and Society at the University of Amsterdam's Faculty of Social and Behavioural Sciences. Since 1 September, he has led the research priority area 'Human(e) AI'. We spoke to him about the social side of AI. 'The social and behavioural sciences play a crucial role in analysing the pros and cons of AI.'
'AI is a fantastically broad collective noun, and it is not really a new phenomenon. Originally, it referred to machines capable of independently learning new things and discovering patterns in data using smart algorithms. Nowadays, there is renewed and intensified interest in AI, partly stemming from the exponential growth of digital data in our society and from increasing automation. Although these two processes are not AI in themselves, we are witnessing the transition to smart systems, and this is greatly expanding the role of AI.'
'AI plays major and minor roles in almost all aspects of our everyday lives, although we are often unaware of this. Think, for example, of health apps that monitor your sleep, the targeted news updates or ads you receive while having breakfast, or real-time updates about your journey to work.'
'This may sound positive, but we've also all heard about scandals such as the one at the Tax and Customs Administration, where tens of thousands of people were profiled as fraudsters in recent years based on discriminatory algorithms. Or the one in the UK, where schools used dubious algorithms to determine students' grades, resulting in all kinds of students being excluded.'
'One major advantage of AI and algorithms is that you can make much more effective matches between all of the information available and specific information that somebody is looking for. Imagine the vast oceans of data you would have to navigate if these matches couldn't be made.
'However, one major disadvantage of AI is the lack of transparency regarding how these matches are made and the lack of ultimate responsibility for these matches. And what do we do when AI goes wrong and results in inequality and exclusion, such as with the Tax and Customs Administration in the Netherlands or the students' grades in the UK?'
'During the development of AI, certain basic values should be hard-wired into the systems. For example, it is important to us as a society that people have equal opportunities and equal access and that we fight against discrimination. How do you ensure that public values such as these are implemented in advance, during the development of AI, rather than having to solve all kinds of problems afterwards? To do this, you need knowledge of the specific fields in which AI is applied, and the social and behavioural sciences can play a substantial role in this regard.
'Educational sciences experts, for example, have clear insight into the processes that produce inequality in our education system. When AI is implemented in this system, we must therefore capitalise on their knowledge in order to prevent certain students from being excluded. Academics don't necessarily have to be AI experts to contribute valuable knowledge to the process, although of course it would always be useful for them to consider how digitalisation could influence their research areas.'
'The central focus of my research has always been the optimisation of democratic processes, and in the modern era these processes are undergoing substantial changes due to data and technology. For example, political parties now use data to run extremely targeted campaigns, and key institutions such as municipalities, the judiciary and the tax authorities have all seen their roles transformed by AI. We have also seen how AI is playing an ever-increasing role in determining what news is presented to you, and therefore what you read about certain societal issues. It's highly likely that my newsfeed looks completely different to yours. This interplay of data and AI is creating major challenges for the system of democracy, and my group and I are seeking solutions to these challenges.'
'Yes and no. All of the current debates concerning AI are about recent high-profile scandals in which something went wrong, such as at Facebook or the Tax and Customs Administration. These flashpoints are preventing fundamental discussion of how we can design self-learning systems to work fairly and effectively.'
'I have also noticed that a great deal of focus is often placed on our acceptance of innovation. I would prefer to see the emphasis shifted to understanding innovation: do we have sufficient insight into the pros and cons?'
'The biggest myth is that AI is either all good or all bad, with technological optimists on one side and people worried about the loss of jobs and/or privacy on the other. However, whether AI is good or bad depends entirely on us and on the extent to which we can safeguard and embed our public and social values.'
'"The Inevitable" by Kevin Kelly, a non-fiction book from 2016 that predicts the twelve technological forces that will shape the next 30 years. Somebody recommended it to me this summer, and I found it to be an accurate and well-written presentation of the fundamental processes underlying AI and technological development.'