
'Although a majority of organisations do not set out to discriminate, they may be unaware, however, that the systems they rely on can have discriminatory effects', writes Professor Frederik Zuiderveen Borgesius in his report to the Council of Europe on artificial intelligence (AI), algorithmic decision-making and the potential for discrimination.


Frederik Zuiderveen Borgesius has drafted a report for the Council of Europe's Anti Discrimination Department. It discusses the potential risk of discrimination caused by algorithmic decision-making and other types of artificial intelligence.

In both the public and private sectors, organisations make use of AI to take decisions with potentially far-reaching social implications. Government agencies might use AI for predictive policing, or to track down benefit scammers. In the private sector, companies use AI to select applicants, and banks use AI to make credit decisions. And many small decisions can have a significant cumulative effect. For example, AI-driven price discrimination might result in certain groups in society consistently paying more for the same goods and services.

Artificial intelligence has many advantages. It can, for example, promote efficiency, economic growth, and security. But AI can also jeopardise human rights and other fundamental values, such as the right to live a life free of discrimination. One particularly thorny issue is that AI can unintentionally result in unlawful and unfair discrimination. This occurs, for example, if an AI system has learned from human decisions that themselves tend to discriminate.

Inadequate legislation

Existing legislation appears unable to adequately shield people from AI-driven discrimination. The new General Data Protection Regulation (GDPR) contains rules governing AI decisions, but those rules are far from seamless. For example, if an organisation allows its employees to use AI predictions to assist them in making decisions, it is debatable whether the GDPR rules on automated decisions apply.

Anti-discrimination legislation also has its flaws. For example, most anti-discrimination laws apply only if people have suffered discrimination on the basis of a legally designated ground, such as skin colour or gender. Such laws do not apply if, for example, the discrimination is based on an individual's income. In short, current legislation seems to be inadequate for securing human rights and other important values in an AI context.

Additional regulations needed

We probably need additional regulations to shield people from unfair discrimination caused by AI, concludes Zuiderveen Borgesius. However, regulating AI in general terms may not be the best course of action, because the use of AI systems is too varied to fit one set of rules. Different values may be at stake in different sectors. Sector-specific rules should therefore be considered.


The report is available here, in English and in French.