Editor’s note: This is the first post in a series that explores a range of topics about upcoming AI regulation, including an overview of the EU AI Act and Palantir solutions that foster and support regulatory compliance when using AI. This blog post provides an overview of the EU AI Act, with a special focus on High-Risk AI Systems. In-house legal and compliance readers in particular will find in it a useful digest of key considerations for their organizations.
The EU Artificial Intelligence Act (AIA) represents the formal culmination of a process that began on April 21, 2021, with an initial draft from the EU Commission[1]. Though the EU aims through this legal framework to provide a leading vision for AI regulation, further guidelines and enforcement measures will be required to ensure meaningful implementation. The AIA aims to promote human-centric and trustworthy AI adoption within the European Union while ensuring protection of health, safety, and fundamental rights. As part of the EU’s comprehensive initiative to establish a single market and standards via the New Legislative Framework (NLF), the AIA sets harmonized rules for AI systems, addresses prohibitions, outlines requirements for high-risk AI systems, and establishes transparency guidelines based primarily on a product-focused, risk-based approach. At the same time, the AIA emphasizes support for innovation, particularly for small and medium-sized enterprises (SMEs) and startups.
With most of the legislative process finalized, the AIA will enter into force 20 days after its publication in the EU Official Journal. The majority of its provisions will become applicable after a 24-month transition period. However, some provisions have shorter or longer transition deadlines:

- The prohibitions on certain AI practices apply after 6 months.
- The obligations for general-purpose AI models apply after 12 months.
- The obligations for high-risk AI systems covered by the EU laws listed in Annex I apply after 36 months.
The consequences of noncompliance with the AIA can be significant, as penalties include:

- fines of up to EUR 35 million or 7% of total worldwide annual turnover (whichever is higher) for violations of the prohibited AI practices;
- fines of up to EUR 15 million or 3% of total worldwide annual turnover for noncompliance with most other obligations, including those concerning high-risk AI systems;
- fines of up to EUR 7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete, or misleading information to authorities.

For SMEs and startups, the lower of the two amounts in each tier applies.
In practice, and as we have seen in similar General Data Protection Regulation (GDPR) cases, it is likely that regulators will penalize multiple violations at the same time, which could lead to even higher penalties. Moreover, it is worth noting that most public sector institutions (with the exception of intelligence and defense agencies of EU member states) are unambiguously subject to the provisions of the AIA, including possible fines for infringements, while such liabilities remain unclear in the context of the GDPR.
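To make the fine mechanics concrete, below is a minimal sketch in Python of how a penalty ceiling is determined under the tiers above: the applicable maximum is the higher of a fixed amount and a percentage of total worldwide annual turnover. The function name and the example turnover figure are our own illustrations, not terms from the Act.

```python
# Illustrative sketch of the AIA fine ceilings described above.
# The tier figures mirror the Act's penalty provisions; the function
# and the example scenario are hypothetical.

def max_fine(fixed_cap_eur: int, turnover_pct: float, annual_turnover_eur: int) -> float:
    """Return the maximum possible fine: the higher of a fixed cap
    and a percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
turnover = 2_000_000_000

# Prohibited-practice tier: up to EUR 35M or 7% of turnover, whichever is higher.
print(max_fine(35_000_000, 0.07, turnover))  # 140,000,000.0 -> the 7% ceiling governs
```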
The EU AI Act offers a comprehensive definition of AI systems, in line with the definitions set forth by the OECD and the Biden Administration’s Executive Order 14110 on Artificial Intelligence.
Under the AIA, an AI system is defined as:
a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
Some systems are excluded from the scope of the AI Act if they do not involve the required degree of autonomy or solely follow rules defined by human users. Specific types of AI systems that are not covered by the AI Act include, in particular, those used for:

- military, defense, or national security purposes (exclusively);
- scientific research and development only;
- research, testing, and development activities prior to being placed on the market or put into service;
- purely personal, non-professional activities.
Additionally, the AIA excludes free and open-source models from its regulatory scope, unless they constitute a part of a high-risk AI system.
The AI Act also provides definitions for general-purpose AI models and general-purpose AI systems. These came as later additions to the December 2023 draft of the AIA (at that time still referred to as “Foundation Models”) and as a result of intensive negotiations between the EU Council and Parliament. The late inclusion was a response to breakout developments in, and the fielding of, Generative AI technologies, including large language models such as ChatGPT.
A general-purpose AI model is defined as:

an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;
A “systemic risk,” in turn, is defined as:

a risk that is specific to the high-impact capabilities of general-purpose AI models, i.e. capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain;
Finally, a general-purpose AI system is defined as:

an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;
In the context of the AIA, there are many compliance roles, but two main actors: Providers and Deployers. Accordingly, the definitions of these actors are as follows:
A Provider is defined as:

a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
while a so-called Downstream Provider is defined as:
A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
A Deployer, meanwhile, is defined as:

a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Like the GDPR, the AIA has an extraterritorial scope, which means that it applies to activities beyond the EU’s borders under certain conditions. For example, the AIA applies if a provider established outside the EU places an AI system on the EU market, or if the output produced by an AI system is used within the EU.
To summarize the jurisdictional implications, the EU AI Act has an extensive geographic reach and extraterritorial applicability and applies not only to European companies, but also to those supplying AI products or services to the EU or having their AI systems’ output used within the EU. Thus, it is nearly impossible to avoid being in scope of the AIA by simply relocating. Non-European entities serving their EU customers must therefore address compliance with the full scope of applicable requirements.
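As a rough illustration of this jurisdictional logic, here is a minimal sketch in Python encoding the scoping questions discussed above. The function and its parameters are our own shorthand, not terms from the Act, and a real scoping analysis of course requires legal review.

```python
# Illustrative only: a simplified decision helper mirroring the
# extraterritoriality discussion above. The parameter names are our
# own shorthand, not AIA terminology.

def aia_applies(established_in_eu: bool,
                places_on_eu_market: bool,
                output_used_in_eu: bool) -> bool:
    """The AIA can reach EU entities, non-EU entities supplying AI
    systems to the EU market, and non-EU entities whose AI systems'
    output is used within the EU."""
    return established_in_eu or places_on_eu_market or output_used_in_eu

# A non-EU provider whose system's output is used in the EU is in scope:
print(aia_applies(established_in_eu=False,
                  places_on_eu_market=False,
                  output_used_in_eu=True))  # True
```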
The EU AI Act employs a risk-based framework, segmenting AI systems according to their potential for societal harm. This stratification dictates the level of regulation, with systems presenting greater potential for harm or other consequential impact on the well-being and livelihoods of impacted individuals requiring stricter rules and obligations. Companies must accurately classify the risk level of their AI systems, as this determines the corresponding regulatory responsibilities.
If in scope of the AIA, certain AI practices — i.e., AI systems that deploy or exploit techniques deemed incompatible with EU fundamental rights and values — are banned outright. This includes AI systems that are designed to:

- deploy subliminal or purposefully manipulative techniques that materially distort a person’s behavior;
- exploit vulnerabilities related to age, disability, or social or economic situation;
- conduct social scoring that leads to detrimental or disproportionate treatment;
- perform certain forms of predictive policing based solely on profiling;
- build facial recognition databases through untargeted scraping of facial images;
- infer emotions in workplaces or educational institutions (outside of medical or safety uses);
- perform biometric categorization to infer sensitive attributes;
- conduct real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
These practices are banned in light of their capacity to generate significant harm and to transgress the ethical standards upheld within the EU.
High-Risk AI systems pose a substantial threat to health, safety, or the fundamental rights of individuals. These systems are classified through two main criteria — existing EU laws and explicit AI Act designations.
Annex I — EU laws:
An AI system that is, or is a safety component of, a product covered by the EU harmonization laws listed in Annex I and that must undergo a third-party conformity assessment under those laws. Examples from Annex I, Sections A and B, include:
Section A: e.g., machinery, toys, lifts, radio equipment, medical devices, and in vitro diagnostic medical devices.
Section B: e.g., civil aviation, motor vehicles, agricultural and forestry vehicles, and marine equipment.
Annex III — AI Act:
AI systems — in so far as their use is permitted by law — designated for certain high-risk purposes as listed in Annex III. Such systems encompass those intended for use in areas including:

- biometrics (e.g., remote biometric identification);
- management and operation of critical infrastructure;
- education and vocational training;
- employment, workers’ management, and access to self-employment;
- access to essential private and public services and benefits;
- law enforcement;
- migration, asylum, and border control management;
- administration of justice and democratic processes.
Under the AIA, high-risk AI systems face rigorous regulation. Much of the remainder of this post, including section IV, will focus on providing an overview of the attendant compliance obligations for such systems.
Besides prohibited and high-risk systems, the AIA additionally addresses minor-risk and minimal-risk AI systems.
Minor-Risk AI Systems:
These systems are subject, in particular, to transparency obligations — for example, informing users that they are interacting with an AI system or that content is AI-generated.
Minimal-Risk AI Systems:

These systems face no new obligations under the AIA, though adherence to voluntary codes of conduct is encouraged.
As the above breakdown indicates, the AIA categorizes AI systems according to their potential for harm and places the responsibility for identifying the appropriate risk category on organizations using these systems.
Additionally, General Purpose AI Models (GPAIs) have a separate categorization and regulation under the Act: a) GPAI models with systemic risk; b) proprietary GPAI models without systemic risk; and c) open-source GPAI models without systemic risk, with the latter often receiving lighter regulations. However, GPAIs with systemic risk, whether proprietary or open-source, are equally regulated. Importantly, GPAI regulations are not separate from the broader AI Act, hence both regimes apply to a “high-risk” GPAI system.
It is important to note that the provisions governing GPAI models do not displace the remainder of the AI Act: most actors ultimately engage, whether as a provider or deployer, with AI systems as well as with the models that power them. Therefore, if a GPAI model is a component of a GPAI system, both regimes would be applicable. See the schematic for a useful visualization of these overlapping regimes.
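As a hypothetical sketch of how these regimes stack, the Python snippet below models a system built on a GPAI model that inherits both its system-level risk tier and the model-level GPAI obligations. The class and helper names are our own simplification, not language from the Act.

```python
# Illustrative sketch of the overlapping regimes described above.
# The RiskTier values mirror the Act's broad categories; the dataclass
# and helper function are our own simplification.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk (Annex I / Annex III)"
    MINOR = "transparency obligations"
    MINIMAL = "minimal risk"

@dataclass
class AISystem:
    tier: RiskTier
    built_on_gpai_model: bool
    gpai_systemic_risk: bool = False

def applicable_regimes(system: AISystem) -> list[str]:
    """Both regimes can apply at once: system-level risk obligations
    plus model-level GPAI obligations where a GPAI model is a component."""
    regimes = [f"system-level: {system.tier.value}"]
    if system.built_on_gpai_model:
        regimes.append("model-level: GPAI obligations")
        if system.gpai_systemic_risk:
            regimes.append("model-level: additional systemic-risk obligations")
    return regimes

# A high-risk system built on a GPAI model is subject to both regimes:
print(applicable_regimes(AISystem(RiskTier.HIGH_RISK, built_on_gpai_model=True)))
```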
Additionally, there are arguments supporting the interpretation that, for example, a SaaS platform can consist of multiple AI systems as defined by the AIA. The AI Office has already announced plans to issue further guidelines addressing this specific issue; a definitive conclusion must await their release. Should a SaaS platform comprise multiple AI systems, the implications could be multifaceted, potentially involving different risk profiles and associated compliance regimes for each component. Plainly, there is a need for additional regulatory guidance to help deployers (and providers) ensure appropriate governance and end-user transparency over platforms with complex interactions across varied AI systems.
AI systems classified as High-Risk AI Systems must comply with various obligations outlined in Chapter III (Article 6 ff.) of the AIA. The primary obligations can be divided into two categories: requirements applicable to the High-Risk AI systems themselves, and obligations placed on the relevant operators, most notably providers and deployers.
It is worth noting that there may be exceptions and derogations to these obligations, which this post will touch on in section 3 (“Exceptions”) below.
There are several compliance measures that High-Risk AI systems must adhere to. While not exhaustive, here we lay out some of the most important requirements, including:

- a risk management system maintained throughout the system’s lifecycle;
- data and data governance practices for training, validation, and testing datasets;
- technical documentation and record-keeping (logging);
- transparency and the provision of information to deployers;
- human oversight measures;
- appropriate levels of accuracy, robustness, and cybersecurity.
This section focuses on the provider and deployer requirements for High-Risk AI systems, divided into three key aspects:
There are exceptions concerning who bears responsibility as the provider of a High-Risk AI system, as well as the scope of certain obligations. Three key areas are highlighted below:
Having broken down many of the key elements of the AIA, we would like to conclude by sharing thoughts on one of the most common questions we hear from clients: What are the most important elements of the AIA that I should prioritize in preparing my organization for this regulation?
Stay tuned — In future posts, we will explore in more detail how partnering with Palantir and using Palantir’s dedicated AI governance suite of products can help your organization ensure compliance with the AIA and other AI regulation. For a full overview and links to material tracking the thought leadership of Palantir’s Privacy and Civil Liberties team on AI regulation and other important developments of the day, check out: https://www.palantir.com/pcl/thought-leadership/.
[1] Formal work on the drafting of the Act was preceded by various public consultations, including a solicitation for comments on a draft EU Commission Whitepaper on AI. You can find Palantir’s contributions to those early framing decisions here; see ‘May 2020 — Response to the European Commission’s Consultation on the White Paper on AI.’