Deepfakes are videos manipulated using machine-learning algorithms, typically to make people appear to say or do things they never did. The UvA team created a deepfake video of a Dutch politician, putting potentially controversial words that he never actually said into his mouth. They then showed it to a panel of 287 people (all of whom were eventually informed that the video was faked). The participants were then asked questions to assess whether they found the video credible and whether it had affected their attitudes towards the politician and his party.
Unquestioningly accepted as genuine
Although the deepfake video was quite short and its quality was not optimal, the team was surprised to learn that the majority of the participants found the video credible and did not suspect that it had been manipulated. Team member Nadia Metoui: ‘In a short period of time and with relatively limited technical resources, we were able to construct a deepfake video that was unquestioningly accepted as genuine by most of the participants in our experiment.’
The potential impact of deepfakes is so troubling that even Facebook, which has otherwise announced it will not fact-check political adverts, is developing a strategy to detect and remove deepfakes from its platforms. The internet giant announced it would not allow content that artificial intelligence or machine learning had ‘edited or synthesised […] in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.’
Microtargeting through social media channels - where people’s data is used to divide them into ever smaller groups for targeting with online content - is one area of particular concern where this sort of video is involved. Even if the false statements made in a deepfake are later rebutted or disproved, such counterarguments will often only be made in mainstream media, away from social channels, and are therefore less likely to be seen by those who were affected by the original video. As the old saying goes: ‘A lie can get halfway around the world before the truth can get its boots on’.
Online scams, cyberbullying and blackmail
Furthermore, there is potential for deepfakes to have effects outside the political arena, through online scams, cyberbullying and blackmail. All of this could undermine trust across society as a whole, making it ever easier to cast doubt on any and all online information sources.
Nevertheless, deepfakes remain under-researched. They may not have been used widely in political campaigns yet, but many observers expect them to be deployed sooner rather than later. These days there are even readily available apps that allow casual users to make convincing deepfakes, so understanding the effects such videos could have is more important than ever. Given the relative ease with which deepfakes can be made and how little research there has been into them so far, this is a subject that urgently needs further study, and the UvA is among the institutions leading the way in this area.
Team member Tom Dobber: ‘This was the first study that constructed a deepfake (voice and image) from scratch and measured its effect on people. The next step is to make a longer, even better deepfake and combine it with potential interventions to mitigate the deepfake’s negative effects. For instance, does informing people about the characteristics of deepfakes make them more able to recognise one?’
Ironically, although deepfakes are made using artificial intelligence, AI also offers the best chance of combating them, since it can spot fakes that a human would find impossible to distinguish from the real thing. And with its wide-ranging AI expertise, the UvA will continue to conduct its research at the forefront of this important issue.