A Palantir Primer on the EU AI Act

Editor’s note: This is the first post in a series exploring a range of topics about upcoming AI regulation, including an overview of the EU AI Act and Palantir solutions that foster and support regulatory compliance when using AI. This blog post provides an overview of the EU AI Act, with a special focus on High-Risk AI Systems. In-house legal and compliance readers in particular will find it a useful digest of key considerations for their organizations.

Introduction

The EU Artificial Intelligence Act (AIA) represents the formal culmination of a process that began on April 21, 2021 with an initial draft from the EU Commission[1]. Though the EU aims through this legal framework to provide a leading vision for AI regulation, further guidelines and enforcement measures will be required to ensure meaningful implementation. The AIA aims to promote human-centric and trustworthy AI adoption within the European Union while ensuring protection of health, safety, and fundamental rights. As part of the EU’s comprehensive initiative to establish a single market and standards via a New Legislative Framework (NLF), the AIA sets harmonized rules for AI systems, addresses prohibitions, outlines requirements for high-risk AI systems, and establishes transparency guidelines based primarily on a product-focused risk-based approach. At the same time, the AIA emphasizes support for innovation, particularly for small and medium-sized enterprises (SMEs) and startups.

With most of the legislative process finalized, the AIA will enter into force 20 days after its publication in the EU Official Journal. The majority of its provisions will become applicable after a 24-month transition period. However, some provisions have shorter or longer transition deadlines:

  • Bans on prohibited practices will become effective 6 months after the AIA’s entry into force.
  • Codes of practice will need to be implemented within 9 months after the AIA’s entry into force.
  • Most general-purpose AI rules will be applicable 12 months after the AIA’s entry into force.
  • AI systems that are components of large-scale IT systems established by the legal acts listed in Annex X (i.e., systems in the area of Freedom, Security, and Justice) will have a longer deadline of 36 months after the AIA’s entry into force.

The consequences of noncompliance with the AIA can be significant as penalties include:

  • Up to €35 million or 7% of global revenue (whichever is higher, though lower penalty limits apply for SMEs) for prohibited AI practices, such as misleading, nudging, or using dark patterns (see the illustrative calculation after this list).
  • Up to €7.5 million or 1% of global revenue (whichever is higher) for supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities.
  • In addition, the rebuttable presumption of causation under the 2022 AI Liability Directive applies, for example, to damages incurred in the event of non-compliance with the AIA, which simplifies the process of pursuing damages, especially in civil litigation. Courts may also require disclosure of certain documents under specific circumstances to facilitate access to this information for damaged parties.
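To make the fine structure above concrete, the following is a minimal sketch, in Python, of how the upper bound of a fine could be computed from the figures cited. The function and parameter names are purely illustrative assumptions; actual fines are determined by regulators case by case, and lower caps apply to SMEs.

```python
# Illustrative only: upper bound of an AIA fine as described above, i.e., the
# higher of a fixed amount and a share of global revenue. Names are hypothetical;
# this is not how regulators actually determine penalties.

def max_fine(global_revenue_eur: float, fixed_cap_eur: float, revenue_share: float) -> float:
    """Return the higher of the fixed cap and the revenue-based cap."""
    return max(fixed_cap_eur, revenue_share * global_revenue_eur)

# Prohibited-practice tier: up to EUR 35 million or 7% of global revenue.
print(max_fine(global_revenue_eur=2_000_000_000, fixed_cap_eur=35_000_000, revenue_share=0.07))
# -> 140000000.0 (the 7% cap exceeds EUR 35 million at this revenue level)
```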

In practice, and as we have seen in similar General Data Protection Regulation (GDPR) cases, it is likely that regulators will penalize multiple violations at the same time, which could lead to even higher penalties. Moreover, it is worth noting that most public sector institutions (with the exception of intelligence and defense agencies of EU member states) are unambiguously subject to the provisions of the AIA, including possible fines for infringements, while such liabilities remain unclear in the context of the GDPR.

Scope of the AIA: What, Who, and Where

What: Definition of AI

The EU AI Act offers a comprehensive definition for AI systems, in line with the definitions set forth by the OECD and the Biden Administration Executive Order 14110 on Artificial Intelligence.

Under the AIA, an AI system is defined as:

a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

Some systems are excluded from the scope of the AI Act if they do not involve the required degree of autonomy or solely follow rules defined by human users. Specific types of AI systems that are not covered by the AI Act include, in particular, those used for:

  • Scientific and commercial research, development, and prototyping (in commercial cases, prior to market introduction)
  • Non-professional, solely personal use by individuals
  • Exclusive use for military, defense, or national security objectives

Additionally, the AIA excludes open-source models from its regulatory scope unless they constitute a part of a high-risk AI system.

The AI Act also provides definitions for general-purpose AI models and general-purpose AI systems. These came as later additions in the December 2023 draft of the AIA (at that time still referred to as “Foundation Models”), the result of intensive negotiations between the EU Council and Parliament. The late inclusion was a response to breakout developments and the fielding of Generative AI technologies, including large language models such as ChatGPT.

  • General-purpose AI model is defined as:

An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;

  • Systemic risk of a general-purpose AI model is defined as:

A risk that is specific to the high-impact capabilities of general-purpose AI models, i.e. capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain;

  • General-purpose AI system is defined as:

An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;

Who: Stakeholders

In the context of the AIA, there are many compliance roles, but two main actors: Providers and Deployers. Accordingly, the definitions of these actors are as follows:

  • Provider is defined as:

A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;

A so-called Downstream Provider, in turn, is defined as:

A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.

  • Deployer is defined as:

A natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;

Where: Extraterritorial scope

Like the GDPR, the AIA has an extraterritorial scope, which means that it applies to activities beyond the EU’s borders under certain conditions. For example, the AIA applies if:

  • a provider places an AI system or general-purpose AI model on the EU market, irrespective of whether it is established within the EU or in a third country;
  • a deployer of an AI system has an establishment in the EU; or
  • providers and deployers of AI systems are established or located in a third country, but the output produced by the AI system is used within the EU (the “effects principle”).

To summarize the jurisdictional implications, the EU AI Act has an extensive geographic reach and extraterritorial applicability and applies not only to European companies, but also to those supplying AI products or services to the EU or having their AI systems’ output used within the EU. Thus, it is nearly impossible to avoid being in scope of the AIA by simply relocating. Non-European entities serving their EU customers must therefore address compliance with the full scope of applicable requirements.
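As a rough illustration of the territorial triggers just described, the sketch below encodes them as a simple boolean check. The flags and their names are hypothetical simplifications; determining applicability in practice is a legal assessment, not a boolean expression.

```python
# Hypothetical simplification of the AIA's territorial triggers described above.
# Real applicability determinations require legal analysis of the specific facts.

def aia_applies(
    places_on_eu_market: bool,         # provider places an AI system or GPAI model on the EU market
    deployer_established_in_eu: bool,  # deployer has an establishment in the EU
    output_used_in_eu: bool,           # output of the AI system is used within the EU ("effects principle")
) -> bool:
    return places_on_eu_market or deployer_established_in_eu or output_used_in_eu

# A non-EU provider and deployer remain in scope if the system's output is used in the EU:
assert aia_applies(places_on_eu_market=False, deployer_established_in_eu=False, output_used_in_eu=True)
```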

How it works: A risk-based approach for each AI system

The EU AI Act employs a risk-based framework, segmenting AI systems according to their potential for societal harm. This stratification dictates the level of regulation, with systems presenting greater potential for harm or other consequential impact on the well-being and livelihoods of impacted individuals requiring stricter rules and obligations. Companies must accurately classify the risk level of their AI systems, as this determines the corresponding regulatory responsibilities.

Prohibited AI Practices

If in scope of the AIA, certain AI practices, i.e., AI systems that deploy, exploit, or use techniques deemed incompatible with EU fundamental rights and values, are banned outright. This includes AI systems that are designed to:

  • Deploy purposefully manipulative or deceptive techniques.
  • Exploit individuals’ vulnerabilities linked to age, disability, or socio-economic status.
  • Create social scores for individuals or groups.
  • Predict an individual’s propensity to criminal activity, akin to ‘Minority Report’ style crime forecasting.
  • Compile facial recognition databases via indiscriminate scraping.
  • Infer emotions in the workplace or in educational institutions.
  • Categorize biometric data based on race, political views, gender, etc.
  • Perform real-time biometric identification in publicly accessible spaces for law enforcement purposes.

These practices are banned, reflecting their capacity to generate significant harm and transgress the standard of ethics upheld within the EU.

High-Risk AI Systems

High-Risk AI systems pose a substantial threat to health, safety, or the fundamental rights of individuals. These systems are classified through two main criteria — existing EU laws and explicit AI Act designations.

Annex I — EU laws:
AI systems that are, or are part of, a safety component of a product and necessitate a third-party conformity assessment under relevant EU laws listed in Annex I. Examples from Annex I, Sections A and B, include:

Section A

  • Machinery
  • Medical devices
  • Toys
  • Cableway installations
  • Appliances burning gaseous fuels
  • Radio equipment

Section B

  • Civil aviation security
  • Motor vehicles
  • Marine equipment

Annex III — AI Act:
AI systems — in so far as their use is permitted by law — designated for certain high-risk purposes as listed in Annex III. Such systems encompass those intended for:

  • Biometric applications, e.g. systems used for the purposes of remote biometric identification;
  • Education and training, e.g. systems intended to determine access to educational training, or to detect cheating;
  • Employment, workers management and access to self-employment, e.g. systems intended to be used for recruitment; and
  • Law enforcement, e.g. systems to be used by authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences.

Under the AIA, high-risk AI systems face rigorous regulation. Much of the remainder of this post, in particular the compliance obligations section below, will focus on providing an overview of the attendant obligations for such systems.

Minimal- and Minor-Risk AI Systems

Besides prohibited and high-risk systems, the AIA additionally addresses minor-risk and minimal-risk AI systems.

Minor-Risk AI Systems:
These systems are subject to, in particular, transparency obligations.

  • For instance, chatbots, deepfakes, and generative AI producing text for public information must clearly disclose their AI-generated nature.
  • Substantial generative AI output needs to be watermarked in machine-readable formats.

Minimal-Risk AI Systems:

  • AI systems that are neither prohibited nor high-risk and are not subject to transparency requirements face no specific regulation under the AI Act.
  • The Commission anticipates that the majority of AI systems will belong to this category, but further clarification and guidance from the new AI Office and/or the AI Board will be necessary.

Risk-Assessment Summary, including General Purpose AI Models

As the above breakdown indicates, the AIA categorizes AI systems risks according to their potential for harm and places the responsibility for identifying the appropriate risk category on organizations using these systems.
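To visualize the classification exercise this section describes, here is a minimal sketch of the decision order as a Python function. The enum values and boolean inputs are hypothetical simplifications of the Act's criteria and gloss over many nuances (for example, transparency duties can also attach to high-risk systems); it is not a substitute for a legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk AI system"
    TRANSPARENCY = "minor risk (transparency obligations)"
    MINIMAL = "minimal risk"

def classify_ai_system(
    uses_prohibited_practice: bool,   # e.g., social scoring, indiscriminate facial-recognition scraping
    annex_i_safety_component: bool,   # safety component requiring third-party conformity assessment
    annex_iii_purpose: bool,          # listed high-risk purpose (biometrics, education, employment, ...)
    art_6_3_derogation: bool,         # narrow procedural/preparatory task posing no significant risk
    transparency_use_case: bool,      # e.g., chatbot, deepfake, generative output
) -> RiskTier:
    """Hypothetical ordering of the checks described in this section."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_i_safety_component or (annex_iii_purpose and not art_6_3_derogation):
        return RiskTier.HIGH_RISK
    if transparency_use_case:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL
```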

Additionally, General Purpose AI Models (GPAIs) have a separate categorization and regulation under the Act: a) GPAI models with systemic risk; b) proprietary GPAI models without systemic risk; and c) open-source GPAI models without systemic risk, with the latter often receiving lighter regulations. However, GPAIs with systemic risk, whether proprietary or open-source, are equally regulated. Importantly, GPAI regulations are not separate from the broader AI Act, hence both regimes apply to a “high-risk” GPAI system.

It is important to note that the provisions governing GPAI models do not operate in isolation from the remainder of the AI Act, and that most users ultimately interact with AI systems, whether as a provider or deployer, rather than with the models that power them. Therefore, if a GPAI model is a component of a GPAI system, both regimes would be applicable. See the schematic for a useful visualization of these overlapping regimes.

Additionally, there are arguments supporting the interpretation that, for example, a SaaS platform can consist of multiple AI Systems as defined by the AIA. The AI Office has already announced plans to issue further guidelines addressing this specific issue; a definitive conclusion will have to await their release. Should a SaaS platform be comprised of multiple AI systems, the implications could be multifaceted, potentially involving different risk profiles and associated compliance regimes for each component. Plainly, there is a need for additional regulatory guidance to help deployers (and providers) ensure appropriate governance and end-user transparency over platforms with complex interactions across varied AI systems.

High Risk AI systems: Compliance obligations

AI systems classified as High-Risk AI Systems must comply with various obligations outlined in Chapter III (Article 6 ff.) of the AIA. The primary obligations can be divided into two categories:

  1. General Requirements: General regulations and provisions applicable to High-Risk AI Systems (Chapter III, Section 2).
  2. Provider/Deployer Requirements: Specific obligations tailored for providers and deployers of High-Risk AI Systems (Chapter III, Section 3).

It is worth noting that there may be exceptions and derogations to these obligations, which this post touches on in the Exceptions section below.

General Requirements

There are several compliance measures that High-Risk AI Systems must adhere to. While not exhaustive, here we lay out some of the most important requirements:

  • Risk Management System (Article 9): Establishment of a risk management program that focuses on continuous evaluation of risks and the development of mitigation strategies throughout the AI system’s lifecycle. This process necessitates systematic review and regular updates.
  • Data and Data Governance (Article 10): Ensuring that personal data used for model testing is protected and deleted when testing concludes or when biases have been corrected. AI providers are required to examine possible biases affecting the health and safety of individuals and implement suitable measures to detect, prevent, and mitigate these biases.
  • Technical Documentation (Article 11): Preparation of detailed technical documentation for the AI system before it is released to the market or put into service and the maintenance of up-to-date documentation. Specific guidelines can be found in Annex IV.

Provider/Deployer Requirements

This section focuses on the provider and deployer requirements for High-Risk AI Systems, divided into three key aspects:

  • Quality Management Systems (Article 17): Providers must implement robust quality management systems with conformity assessment, testing, and validation. Post-market monitoring is also required.
  • Documentation and Record-Keeping (Articles 18, 19): Providers need to maintain detailed documentation and logs of AI system development, monitoring, and maintenance, available to authorities upon request (see the illustrative log sketch after this list).
  • Fundamental Rights Impact Assessments (Article 27): Providers and deployers shall perform an assessment of the impact on fundamental rights that the use of the AI system may produce.
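As one hedged illustration of what machine-readable record keeping under Articles 18 and 19 could look like in practice, the sketch below defines a simple structured log entry. The schema and field names are assumptions made for illustration only; the AIA does not prescribe this format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AISystemLogEntry:
    """Hypothetical structured record of an event in a high-risk AI system's lifecycle."""
    system_id: str     # internal identifier of the AI system
    event_type: str    # e.g., "training run", "model update", "post-market incident"
    description: str   # human-readable summary for auditors and authorities
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example entry a provider might retain and produce for authorities upon request:
entry = AISystemLogEntry(
    system_id="recruitment-screening-v2",
    event_type="post-market monitoring",
    description="Quarterly bias review completed; mitigation plan updated.",
)
```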

Exceptions

There are exceptions concerning who bears responsibility as the provider of a High-Risk AI System, as well as the scope of obligations. Three key areas are highlighted below:

  • No High-Risk AI System: In cases where an AI System is technically designated as High-Risk based on Annex III categories (i.e., biometrics, education/training, employment, or law enforcement), it may nonetheless be exempted from the High-Risk AI System requirements if the domain application does not pose a significant risk. Art. 6 (3) clarifies that this is the case in a limited number of instances, e.g., when the AI system is only intended for narrow procedural or preparatory tasks.
  • Third party as (new) provider: In certain cases, a third party (typically designated as a “deployer”) may be considered the provider of a High-Risk AI System:
    • In cases of a High-Risk AI System based on Annex I Section A, the product manufacturer is deemed the provider (Article 25 (3)).
    • A third party shall be considered a provider if it markets the High-Risk AI System under its own name or trademark or makes substantial modifications to the system (Article 25 (1)).
  • Limited Scope of obligations: In cases where it is a High-Risk AI System based on Annex I Section B, only a limited scope of the AIA applies due to Art. 2 (2) and Art. 102 ff.

Next steps: What organizations can do to ensure AIA compliance

Having broken down many of the key elements of the AIA, we would like to conclude by sharing thoughts on one of the most common questions we hear from clients: What are the most important elements of the AIA that I should prioritize in preparing my organization for this regulation?

  • Foremost, organizations should classify their AI Systems, i.e., determine what AI and GPAI systems they use, identify the use cases, and evaluate the obligations under the AIA (see the illustrative register sketch after this list). Furthermore, organizations should consider whether any of the available exceptions or derogations may alter the status of an AI System.
  • Organizations should evaluate their existing compliance processes and consider ways of building their AI compliance on top of and in coordination with those established procedures. E.g., consider whether existing fundamental or human rights assessment frameworks might be modified to incorporate AI Systems considerations and thereby satisfy Article 27 requirements.
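To make the first bullet above more tangible, here is a minimal sketch of an AI-system register entry an organization might maintain as a starting point for the classification exercise. All field names are hypothetical; the AIA does not mandate a register in this form.

```python
from dataclasses import dataclass

@dataclass
class AISystemRegisterEntry:
    """Hypothetical inventory record used to triage AIA obligations per AI system."""
    name: str
    role: str                   # "provider", "deployer", or both
    use_case: str               # intended purpose, e.g., "CV screening for recruitment"
    gpai_component: bool        # does the system build on a general-purpose AI model?
    risk_tier: str              # outcome of the classification exercise (see the earlier sketch)
    exceptions_considered: str  # e.g., Article 6(3) derogation, Annex I Section B scope

register = [
    AISystemRegisterEntry(
        name="candidate-ranking",
        role="deployer",
        use_case="Ranking applicants during recruitment",
        gpai_component=False,
        risk_tier="high-risk (Annex III, employment)",
        exceptions_considered="Article 6(3) reviewed; derogation not applicable",
    ),
]
```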

Stay tuned — In future posts, we will explore in more detail how partnering with Palantir and using Palantir’s dedicated AI governance suite of products can help your organization ensure compliance with the AIA and other AI regulation. For a full overview and links to material tracking the thought leadership of Palantir’s Privacy and Civil Liberties team on AI regulation and other important developments of the day, check out: https://www.palantir.com/pcl/thought-leadership/.

[1] Formal work on the drafting of the Act was preceded by various public consultations, including a solicitation for comments on a draft EU Commission Whitepaper on AI. You can find Palantir’s contributions to those early framing decisions here; see ‘May 2020 — Response to the European Commission’s Consultation on the White Paper on AI.’

