Interview with Sennay Ghebreab
7 May 2026
Stop data-driven profiling by the government because of discrimination. That is the State Commission’s call. Why is this necessary?
'We see that profiling and automated risk selection by government organisations have increased in recent years, while we have witnessed several examples where this has led to discrimination against large groups of citizens, with far-reaching harm for individuals and society. Supervisory bodies such as the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) and the Netherlands Institute for Human Rights (het College voor de Rechten van de Mens) have repeatedly pointed to unlawful and discriminatory data processing and to growing complexity due to algorithms and AI. Nevertheless, the government continues its profiling practices.
As a State Commission we examined whether data-driven profiling is compatible with a government that aims to prevent discrimination, assessing it against the principles of the rule of law, democratic legitimacy and good governance. Our conclusion is that profiling is an expanding phenomenon that is difficult to contain, and in practice the government often acts pragmatically. We consider that untenable.'
How did it come to this?
'For government organisations, data-driven profiling has become attractive because it is seen as an efficient form of enforcement: catching many offenders with limited capacity. That idea is reinforced by political pressure to tackle fraud and by the need to keep large government systems and processes — such as the benefits system — manageable. There is a broader belief that technology and AI improve government processes. However, that techno‑optimism carries risks, especially when critical reflection is lacking.
The broader social and political context also matters. For some time now the climate around fraud and abuse of social benefits has been hardening, fuelled by incidents and media coverage, such as the 'Bulgarian fraud' affair in 2013. This has led to stricter control practices, with the risks of discrimination insufficiently recognised.'
Where do things go wrong in practice? Can you give examples?
'The consequences of data-driven profiling have proved very significant in practice. The childcare benefits scandal is the most striking example. Discriminatory profiling was used there: among other things, (dual) nationality was used to determine who was considered high risk and was subjected to intensive checks. In checks on the living‑away‑from‑home allowance by DUO, it also emerged that students with a migration background were selected for inspection more often. Such practices have a major impact on people’s lives — financially, socially and psychologically.
These examples are likely just the tip of the iceberg. Much profiling takes place out of citizens’ sight, in systems that are not transparent to them. Government transparency and registration of these systems lag behind, making it unclear how often and in what ways profiling is applied.'
What is the alternative? The Dutch government will want to continue protecting itself against abuse, fraud and crime.
'An important point here is that the effectiveness of data-driven profiling has not been convincingly demonstrated. Moreover, success is often defined too narrowly — as catching more fraudsters — whereas it should also be about broader societal goals such as justice, well‑being and trust in government.
The State Commission therefore calls for current applications to be halted, but also points to alternatives. Think of random sampling or, where possible, full checks, combined with better signalling and additional research. Indeed, scientific research underlines that random sampling is just as effective, if not more so. In addition, these methods are more transparent and fairer, and reduce the risk of discrimination.
At the same time we recognise that this is far‑reaching for implementing agencies. If the government were nevertheless to deem the use of profiling necessary in future, it must first demonstrate that it is effective and does not have discriminatory effects. Without principled measures, the risk of discrimination and harm remains too great.'