The draft EU AI Regulation is a far-reaching attempt to provide a regulatory foundation for the safe, fair, and innovative development of Artificial Intelligence in the European Union, and it is likely to have consequences across the globe. An important feature of the Regulation, which has so far provoked little academic debate, is its use of technical standards to help achieve its goals. However, standardisation is complicated, and the nexus between standards and the European Commission's goals is a challenging intersection of stakeholders, economic interests, and established standards development organisations. Building on extensive research and stakeholder consultation, the draft Regulation sets out a comprehensive framework for AI governance and standards. The large number of comments the draft Regulation has received from stakeholders reflects both the significance of such a proposed regulatory framework and its anticipated global influence. The Regulation borrows mechanisms from the GDPR, but it also recognises the unique role and challenges that AI presents. In addition, the EU itself is reconsidering its model for standardisation and is in the process of gathering input for a revised approach to European standardisation. This paper focuses on the role that the draft Regulation gives to standards for AI. Specifically, conformance with harmonised standards will create a presumption of conformity for high-risk AI applications and services, lending a level of confidence that they comply with the onerous and complex requirements of the proposed Regulation and creating strong incentives for industry to adopt European standards.
The study provides an analysis of the role the draft EU AI Regulation envisions for technical standards in the governance of AI. As well as exploring the draft Regulation's fitness for purpose as a template for engagement with standards, the study provides critiques of, and recommendations for improving, the draft Regulation.
The paper begins with an overview of the proposed EU AI Regulation as a whole, followed by a more detailed analysis of how the regulatory framework seeks to motivate conformance with certain technical standards. There follows a comparative analysis of the two instruments, harmonised standards and common specifications, on which producers of 'high-risk' AI can rely to minimise their compliance burden.
Next, there is an introduction to the world of technical standards and a brief overview of the standardisation work and roadmaps relating to AI across both European and international standards organisations. Given the EU AI Regulation’s reliance on standards, the section concludes with an overview of the Commission’s ongoing review of the European standardisation system, and what it highlights in relation to the strengths and weaknesses of the current environment.
The study goes on to analyse the benefits and risks of the proposed regulatory framework in relation to standards, including mismatches between the expectations of regulators, industry, and the standards bodies themselves, as well as the need for timeliness and the effective use of limited resources. It offers recommendations, addressed to both policymakers and standards organisations, intended to accentuate the benefits and minimise the risks created by the interplay of EU AI regulation and global standardisation.