On July 18, 2022, the UK government published high-level proposals for its approach to regulating uses of artificial intelligence (AI), as part of its National AI Strategy and, more broadly, its UK Digital Strategy. The government is seeking public views on the approach, which is contained in a policy paper; a more detailed White Paper will be published in late 2022.
While the proposals are at an early stage, they are clearly intended to lay down a marker for a light-touch, principles-based approach to regulating AI with the aim of encouraging growth of and investment in the UK AI ecosystem. Several key elements are expressly left for regulators to detail further, such as categorizations of “high risk” and considerations of “fairness” in the use of AI, and other areas are barely addressed at all, such as regulatory penalties, any prohibited uses of AI, and application of the approach to different categories of actors in the AI supply chain. The government also highlights several areas where the United Kingdom would allow greater flexibility than the European Commission’s recent AI-specific legislative proposal.
CURRENT UK REGULATORY LANDSCAPE
Uses of AI currently fall under a patchwork of UK legislation and regulatory requirements in areas such as data protection, equality and human rights, medical devices, and elements of financial services regulations. The UK government acknowledges that this patchwork has resulted in a lack of clarity about how rules apply to AI, overlapping regulatory remits, inconsistent application of rules across sectors, and legislative gaps around the transparency and explainability of AI decision-making.
In July 2021, the UK government published its Plan for Digital Regulation, which was based on the following principles:
- Actively promoting innovation through removing unnecessary regulations and burdens and considering nonregulatory measures like technical standards
- Achieving forward-looking coherent outcomes through a collaborative approach between regulators and businesses, as well as by making space for businesses to test and trial new business models, products, and approaches
- Exploiting opportunities and addressing challenges in the international arena, particularly through international regulatory cooperation, in order to facilitate international interoperability
A PRO-INNOVATION, PRINCIPLES-BASED APPROACH
The policy paper proposes an approach that emphasizes principles and context of use of AI, rather than a single framework of fixed definitions and lists of risks and mitigations, so that regulators can respond proportionately to applications of AI in different contexts.
In order to approach cross-cutting challenges in a coherent and streamlined way, the government sets out (1) core characteristics of AI, allowing regulators to evolve more detailed definitions for specific remits and sectors; and (2) cross-sectoral principles that existing regulators will interpret and implement.
“Core characteristics” of AI rather than a single definition. The government proposes not to set out a universally applicable definition of AI, but to leave it to individual regulators to evolve more detailed definitions. The government comments that it does not think the European Commission’s fixed definition of “AI systems” captures the full application of AI and its regulatory implications. Instead, it identifies the combination of just two core characteristics as defining an AI technology: the adaptiveness of a technology that is trained on data and executes according to complex patterns, and the autonomy of a technology that automates complex cognitive tasks.
Six cross-sectoral principles. The government sets out what it terms “early proposals” of six principles that UK regulators across sectors would be expected (though not required) to apply, building on the OECD Principles on Artificial Intelligence:
- Ensure that AI is used safely
- Ensure that AI is technically secure and functions as designed
- Ensure that AI is appropriately transparent and explainable
- Embed considerations of fairness into AI
- Define legal persons’ responsibility for AI governance
- Clarify routes to redress or contestability
Regulatory approach. The government does not propose to create any new or single AI regulator, or any coordinating body similar to the EU’s proposed European Artificial Intelligence Board. It proposes that existing regulators will lead the process of identifying, assessing, prioritizing, and contextualizing the specific risks addressed by the cross-sectoral principles. Regulators will be encouraged to consider lighter-touch options in the first instance, such as voluntary or guidance-based approaches, in order to avoid unnecessary burdens, and to focus on high-risk concerns rather than hypothetical or low risks associated with AI.
The government does not consider that equal powers or uniformity of approach across all regulators are necessary, so coordination will be important to avoid contradictory and divergent approaches. The United Kingdom’s Digital Regulation Cooperation Forum, which the Financial Conduct Authority joined last year as a full member, provides some infrastructure for this coordination at a domestic level, although the policy paper is light on detail as to how coordination will be achieved beyond that forum. The United Kingdom will also engage at the international level to support interoperability and the responsible development of AI within international standardization efforts.
The UK government invites comments on its proposals by September 26, 2022. The government will publish a White Paper and public consultation in late 2022, which will set out its position on the proposed framework, including putting its approach into practice and monitoring the framework. This will be in addition to the Financial Conduct Authority’s discussion paper on AI and the current financial regulatory framework, which is also expected later in 2022.