EU trade policy should carve out space for the regulation of ethical and responsible artificial intelligence (AI) in future trade talks. This is the finding of a new study by researchers from the University of Amsterdam’s (UvA) Institute for Information Law. The Dutch Ministry of Foreign Affairs commissioned the study to generate further knowledge about the interface between international trade law and European norms and values when it comes to the use of AI.

As AI seeps ever deeper into our daily lives – through our phones, our cars, even our doctors' offices – the need to ensure the responsible use of such technologies grows ever greater. Responsible use of AI is therefore a top priority for the Dutch government and for the EU as a whole. However, as the study – co-authored by Kristina Irion and Josephine Williams – notes, AI, like many other technologies in the modern digital landscape, is frequently not location-dependent. Indeed, AI deployment in Europe today is largely dominated by leading US internet companies. Yet, as the European Commission concluded in its 2018 report on the issue, 'the main ingredients are there for the EU to become a leader in the AI revolution, in its own way and based on its values.'

The study therefore recommends that EU trade policy take steps to anticipate these transnational challenges in AI deployment. ‘E-commerce trade talks now underway do not explicitly mention AI,’ Irion comments. ‘Yet, new trade rules will inevitably include AI even before the EU can adopt its own rules on ethical and responsible AI.’

Transparency, accountability and auditability of AI systems
The study calls for an open and inclusive deliberation on the interactions between the EU’s current e-commerce proposal and EU governance of artificial intelligence. For example, the EU’s trade proposal notably backs new commitments to protect software source code and restrict countries’ data and technology localisation measures. Yet, protecting software source code could be at odds with demanding a healthy measure of AI transparency. EU trade policy should instead be looking to safeguard the regulatory space for EU rules on the transparency, accountability and auditability of AI systems, the authors suggest.

Furthermore, current trade talks appear to lopsidedly emphasise free data flows without considering how knowledge and surplus value generated from European data should contribute to public value and societal interests. New commitments emphasising cross-border data flows risk closing out policy space for innovative data governance solutions, especially in the public sector. ‘EU trade policy should take these strategic considerations into account,’ Williams recommends, ‘when backing rules that affect European data.’

AI in the fight against poverty
The study recognises that AI holds great potential for developing countries in the alleviation of poverty. However, steps must be taken to avoid perpetuating past cycles of economic dependence, the authors say. Existing WTO law gives developing countries the flexibility to set a policy agenda that will narrow existing digital trade imbalances. The study highlights the need to leverage these WTO provisions to give developing countries leeway to protect their citizens’ data from harmful data mining practices. Given its human rights-based approach to AI, Europe is uniquely positioned to build consensus with developing countries around areas of common concern, such as accountable AI, technology transfer, and capacity building.