The Artificial Intelligence Act of the European Union (AI Act)


26 minute read | October.22.2024

The European Union (EU) has reached a significant milestone by finalizing the Artificial Intelligence Act (AI Act), establishing the world's first comprehensive AI law. This article outlines the history, structure, requirements and practical consequences of the AI Act, including the considerable range of fines.

I. History of the AI Act

The AI Act adopted by the European Parliament is based on a 2021 draft by the European Commission, but it underwent several changes during the legislative process.

1. First draft of the European Commission from 2021

In April 2021, the European Commission published its draft AI Act[1] to create a uniform legal framework applicable in Member States and facilitate the free movement of goods and services in the field of AI.[2] The AI Act is an essential part of the European AI strategy,[3] which uses a risk-based approach to classify AI systems into different risk levels.[4] The classification determines the requirements and legal consequences, which can include a ban if there is an unacceptable risk.[5] Given the rapid developments in this field, the draft provisions were designed to be as future-proof as possible.[6]

2. Draft of the European Parliament of June 2023

In June 2023, the European Parliament agreed on a draft that included a number of amendments and additions to the Commission's draft.[7]

Notably, the Parliament added an AI system category called the "foundation model" (Art. 3(1) no. 1c Parliament draft AI Act), along with obligations for their providers, such as mitigating reasonably foreseeable risks and ensuring data governance for the datasets used (Art. 28b Parliament draft AI Act).

A "foundation model" is defined as an AI system model that is "trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks" (Art. 3(1) no. 1c Parliament draft AI Act). The Parliament argued that regulation is necessary due to the "considerable uncertainty as to how foundation models will develop." It also said the "complexity and unexpected effects [and] the lack of control of the downstream AI provider" necessitated a "fair distribution of responsibility along the AI value chain."[8]

3. EU Council draft of November 2022[9]

The Council aimed to narrow the definition of an AI system by focusing on "machine learning and/or logic- and knowledge-based concepts" (Section IV. 1.1 of the preliminary remarks and Art. 3(1) Council AI Act). The draft included special provisions for general-purpose AI systems, including those intended to perform functions such as image or speech recognition, audio and video generation, pattern recognition, answering questions and translation (Art. 3(1b) Council AI Act).

4. Negotiations in 2023 and publication in 2024

Negotiations between the EU institutions began in June 2023[10] under significant pressure to conclude successfully before the European elections in June 2024. The outcome remained uncertain until the end. In November 2023, Germany, France and Italy reportedly agreed on a position paper that focused on regulating the use of AI rather than the technology itself and proposed mandatory self-regulation of foundation models through codes of conduct, initially without sanctions.[11]

After three days of final negotiations, the institutions reached a political agreement on December 9, 2023.[12] The Council's Permanent Representatives Committee approved it on February 2, 2024, followed by the responsible parliamentary committees on February 13, 2024.[13] Parliament adopted the AI Act by a large majority on March 13, 2024.[14] The following explanations are based on the text of the AI Act as adopted by Parliament on March 13, 2024.[15] The final version of the AI Act was published in the Official Journal on July 12, 2024.[16]

II. Scope of AI Act

The AI Act primarily applies to AI systems. According to Art. 3(1) AI Act, an AI system is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (Art. 3(1) AI Act). The AI Act also contains rules for general-purpose AI models (see Section III. 2. c) below).

Art. 2 AI Act outlines its scope, which includes a degree of extraterritoriality, similar to the General Data Protection Regulation (GDPR).

The AI Act applies to:

  • Providers who place AI systems or general-purpose AI models on the market in the EU, regardless of whether these providers are established in the Union or in a third country (Art. 2(1)(a) AI Act).
  • Operators of AI systems established or located in the Union (Art. 2(1)(b) AI Act); the final English text of the AI Act uses the term "deployers."
  • Providers and operators of AI systems established in or located in a third country if the system’s results are used in the Union (Art. 2(1)(c) AI Act).

The AI Act defines a "provider" as "a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge" (Art. 3(3) AI Act).

A key focus is on placing on the market, operating and using AI systems in the EU. However, the regulation also aims to prevent circumvention of the protection it affords EU citizens. For example, Recital 22 AI Act mentions the transfer of EU data by an actor[17] in the EU to a third country, where a high-risk AI system processes the data non-compliantly and sends the results back to the actor in the EU. There are thus several scenarios in which companies based outside the EU may fall under the regulation's scope.

The regulation also provides exemptions in areas such as national security, defense, military and research and development, as advocated by the Council.[18]

III. Structure and Key Features

The AI Act is the first comprehensive regulation of AI worldwide, created without any prior blueprint. This makes it particularly interesting to examine the structural decisions that underpin it.

1. Future-proofing and preventing the inhibition of innovation

Regulating AI comprehensively is challenging due to the complexity and dynamic nature of its applications and developments. One major concern is the potential to hinder innovation. To address this, the AI Act adopts a technology-neutral approach, aiming to avoid frequent overhauls as technology rapidly evolves.[19]

The Commission will have the authority to specify and update many of the regulation's deliberately abstract provisions through delegated acts. For example, the list of high-risk AI systems in Annex III can be amended by delegated act (Art. 7 AI Act). The lack of precision in the regulation may also stem from disagreements among EU institutions on certain aspects. This grants the Commission significant influence in regulating a future key technology.

This approach allows the EU to respond quickly to future AI developments. However, from a company's perspective, it carries the risk of needing to regularly adapt compliance systems, potentially affecting investment security.

Another notable feature of the AI Act is the relief for small and medium-sized enterprises. These companies can expect lower fines (Art. 99(6) AI Act). By explicitly mentioning startups, the EU acknowledges their high innovation potential and aims to avoid disadvantaging them in international competition (see recital 8 AI Act).

2. Risk-based approach

As previously mentioned, the Commission's risk-based approach has been retained in principle.

a) Prohibited AI practices

AI systems that pose particularly high risks are prohibited. However, the scope of this ban under Art. 5 AI Act is very limited, meaning most companies' businesses are unlikely to be affected.

Prohibited practices include the use of subliminal techniques to influence a person with the aim or effect of appreciably impairing their ability to make an informed decision, in a way that causes or is reasonably likely to cause significant harm to that person or to another person or group (Art. 5(1)(a) AI Act). Also prohibited is "the untargeted extraction of facial images from the internet or from surveillance footage" to create or expand facial recognition databases (Art. 5(1)(e) AI Act).

b) High-risk AI systems

High-risk AI systems are heavily regulated but not prohibited. These include:

  • Systems that – as a product or its safety component – fall under the EU harmonization legislation listed in Annex I AI Act and that must undergo a third-party conformity assessment before being placed on the market or put into service (Art. 6(1) AI Act). Medical devices are one example.
  • Systems listed in Annex III AI Act, such as those used in the management and operation of critical digital infrastructure (Art. 6(2) AI Act).

However, this does not apply to AI systems that "do not present a significant risk of harm to the health, safety, or fundamental rights of natural persons" and "do not significantly influence the outcome of decision-making" (Art. 6(3) AI Act). The Parliament and Council advocated this restriction, whereas the Commission's draft had classified all AI systems listed in Annex III as high-risk (Art. 6(2) of the Commission draft).

Annex III AI Act also lists specific high-risk AI systems, such as:

  • Remote biometric identification systems.
  • AI systems intended for the recruitment or selection of natural persons.

Under certain conditions, the Commission is authorized to supplement and adapt the list in Annex III by delegated act (Art. 7 AI Act). Additionally, the Commission is required to issue "guidelines on the practical implementation" of the requirements no later than 18 months after the AI Act enters into force (Art. 6(5) AI Act).

For high-risk AI systems, the AI Act includes an extensive catalogue of regulations. These rules apply to providers and other stakeholders, including operators (see Section IV. 1. below). Since the legislature classifies these systems as particularly risky and sensitive from a fundamental-rights perspective, it has implemented multiple measures to ensure compliance.

c) Requirements for general-purpose AI models

While the Commission's draft primarily focused on high-risk AI systems, the AI Act as adopted by Parliament also includes obligations for general-purpose AI models.

According to Art. 3(63) AI Act, a general-purpose AI model displays significant generality and is "capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market." Additionally, such a model "can be integrated into a variety of downstream systems or applications."

Unlike high-risk AI systems, the essential requirements for general-purpose AI models are primarily aimed at providers. However, additional requirements apply if a systemic risk exists (Art. 55 AI Act). This risk is "specific to the high-impact capabilities of general-purpose AI models" (Art. 3(65) AI Act) and "has a significant impact on the Union market due to its reach or due to actual or reasonably foreseeable negative consequences for public health, safety, public security, fundamental rights or society as a whole, which may spread on a large scale across the entire value chain" (Art. 3(65) AI Act). Models with high-impact capabilities "match or exceed the capabilities recorded in the most advanced general-purpose AI models" (Art. 3(64) AI Act). This determination is based on suitable technical instruments and methods (Art. 51 AI Act).

d) Review and requirements from other laws

Given the various classifications of AI systems and models, and the differing legal consequences attached to them, the initial step of determining the classification of a particular AI system or model is crucial. As AI development and use expands across industries, more companies will need to manage differently classified AI systems and models. This will require comprehensive compliance systems to meet numerous legal obligations, not just those arising from the AI Act. For example, Recital 166 of the AI Act states that if the AI Act's requirements do not apply, companies may still need to meet the requirements of the General Product Safety Regulation (EU) 2023/988, which will apply from December 13, 2024, in accordance with its Art. 52.[20] These requirements may also be relevant for low-risk AI systems and should not be overlooked.
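To illustrate this initial triage step, the following is a minimal sketch in Python. The flags on the profile object are hypothetical placeholders – each stands in for a detailed legal assessment – and the logic deliberately simplifies the statutory tests (for instance, the transparency obligations of Art. 50 AI Act can also apply in parallel to high-risk systems).

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice (Art. 5 AI Act)"
    HIGH_RISK = "high-risk AI system (Art. 6 AI Act)"
    TRANSPARENCY = "transparency obligations only (Art. 50 AI Act)"
    MINIMAL = "no specific obligations under the AI Act"

@dataclass
class SystemProfile:
    # Hypothetical flags; each stands in for a detailed legal assessment.
    uses_prohibited_practice: bool = False  # any practice listed in Art. 5(1)
    annex_i_safety_component: bool = False  # product/safety component, Art. 6(1)
    annex_iii_use_case: bool = False        # use case listed in Annex III, Art. 6(2)
    significant_risk: bool = True           # False triggers the Art. 6(3) carve-out
    art_50_trigger: bool = False            # e.g. human interaction, deepfakes

def classify(p: SystemProfile) -> RiskCategory:
    """Simplified first-pass triage mirroring the AI Act's risk-based approach."""
    if p.uses_prohibited_practice:
        return RiskCategory.PROHIBITED
    if p.annex_i_safety_component:
        return RiskCategory.HIGH_RISK
    if p.annex_iii_use_case and p.significant_risk:
        return RiskCategory.HIGH_RISK
    if p.art_50_trigger:
        return RiskCategory.TRANSPARENCY
    return RiskCategory.MINIMAL

# Example: a recruitment screening tool (Annex III) without the Art. 6(3) carve-out:
print(classify(SystemProfile(annex_iii_use_case=True)))  # RiskCategory.HIGH_RISK
```

In practice, such a triage would only be the entry point into the compliance processes described below; the result should be documented and revisited as the system or the Annexes change.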

3. Scope of application and target group

The AI Act has a broad scope of application, extending even extraterritorially, and affects a wide range of stakeholders. While most obligations apply to providers of AI systems, the regulation also imposes obligations on operators, distributors and importers of AI systems. Consequently, the AI Act is not only aimed at leading technology companies developing well-known AI applications but also at companies across various industries that must adapt to the requirements of the AI Act.

4. Conformity assessment and certificates

Another special feature of the AI Act involves the regulations on notifying authorities and notified bodies (Art. 28 et seq. AI Act) as well as on conformity assessment and certificates (Art. 40 et seq. AI Act).

a) Notifying authorities and notified bodies

Art. 28(1) AI Act mandates that every Member State designate or establish at least one notifying authority. This authority is responsible for developing and executing the processes for evaluating, designating and notifying conformity assessment bodies, as well as for overseeing them. Once notified, these bodies verify, in the conformity assessment procedures under Art. 43 AI Act, that high-risk AI systems comply with the applicable requirements.

b) Conformity assessment

Art. 3(20) AI Act defines a conformity assessment as the process of demonstrating that the requirements for high-risk AI systems in Art. 8 et seq. AI Act (Chapter III Section 2) have been fulfilled. The regulation typically mandates a conformity assessment by the provider in accordance with Annex VI AI Act. In certain cases, however, an external assessment involving a notified body is required; that body scrutinizes the quality management system and the technical documentation in line with Annex VII AI Act.

c) Certificates, EU Declaration of Conformity and CE conformity marking

Notified bodies issue certificates of conformity for a specified period in accordance with Annex VII AI Act (Art. 44(2) AI Act). Should a notified body determine that an AI system fails to comply with the requirements of Art. 8 et seq. AI Act, it will suspend, revoke or restrict the certificate as outlined in Art. 44(3) AI Act, unless the provider implements corrective measures within a reasonable time.

Providers are required to draw up an EU declaration of conformity for each high-risk AI system, as mandated by Art. 47(1) AI Act. This declaration affirms that the high-risk AI system meets the requirements of Art. 8 et seq. AI Act and includes the details specified in Annex V AI Act (Art. 47(2) AI Act). By drawing up the declaration, the provider assumes responsibility for compliance with the requirements of Art. 8 et seq. AI Act (Art. 47(4) AI Act).

By affixing the CE conformity marking, providers indicate that an AI system complies with the requirements of Art. 8 et seq. AI Act, as well as with other applicable Union harmonization legislation that provides for this marking (Art. 3(24) AI Act). The CE marking must be affixed to the AI system in a visible, legible and indelible manner (Art. 48(3) AI Act). If the nature of the high-risk AI system precludes this, the marking should be placed on the packaging or the accompanying documentation (Art. 48(3) sentence 2 AI Act).

5. Enforcement

a) Sanctions

The AI Act, like the GDPR, provides for significant fines for non-compliance, with the ceiling varying based on the infringement category. For the most severe breaches – violations of the prohibited AI practices – fines can reach up to EUR 35 million or 7% of a company's total worldwide annual turnover from the previous financial year, whichever is higher (Art. 99(3) AI Act). Member States can also impose their own sanctions, including fines, for breaches of the AI Act (Art. 99(1) AI Act).

b) Responsible authorities

The European Parliament successfully advocated for the establishment of an Office for Artificial Intelligence. The AI Office will primarily be responsible for monitoring general-purpose AI models (Art. 3(47) and Art. 88 et seq. AI Act).

In addition, national authorities will enforce the AI Act. Each Member State is required to establish or designate at least one notifying authority and at least one market surveillance authority (Art. 70(1) AI Act). Market surveillance authorities are responsible for enforcement and sanctions under the AI Act (Recital 156, Art. 3(26) AI Act). According to Art. 85 AI Act, natural or legal persons have the right to file complaints with the relevant market surveillance authority regarding infringements of the AI Act.

From a company's standpoint, it would be beneficial if a single authority oversaw both data protection and the AI Act. The effectiveness of the AI Act's enforcement and implementation will largely depend on the resources and staffing that Member States allocate to their authorities.

To ensure uniform application across the EU, the AI Act includes several measures. For example, the AI Office will support and coordinate with national authorities. Additionally, the European Artificial Intelligence Board is created for this purpose (Art. 65(1) AI Act). The AI Board comprises one representative from each Member State, with the AI Office participating in its meetings without voting rights (Art. 65(2) AI Act). In particular, the AI Board is intended to coordinate and align national authorities, issuing recommendations and written opinions on implementation issues and uniform application (Art. 66 AI Act).

Despite the diverse structure of authorities across Member States, it is hoped that a uniform interpretation and enforcement practice will be established soon. This will be challenging, as the AI Act is new and the highest courts have yet to clarify questions of interpretation. However, the situation also has an advantage: unlike in data protection law, where national laws and materially differing interpretations and enforcement practices existed before the GDPR, no established national practice exists for the AI Act. This could lead to impartial coordination among authorities on interpretation issues from the outset.

6. Effective date of the AI Act

The AI Act includes a two-year transitional period from its entry into force, which occurs 20 days after its publication in the Official Journal of the EU, until it becomes generally applicable (Art. 113 AI Act). However, shortened transitional periods apply to certain provisions (converted into concrete dates in the sketch after this list):

  • Bans on AI Practices: The transitional period is six months (Art. 113(a) AI Act). Given the limited scope of application, this will likely affect most companies only peripherally.
  • General-Purpose AI Models: The transitional period is 12 months (Art. 113(b) AI Act), forcing affected companies to immediately start setting up compliance structures.
  • High-Risk AI Systems: The transitional period is extended to 36 months for high-risk AI systems in accordance with Art. 6(1) and Annex I of the AI Act (Art. 113(c) AI Act).
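For orientation, here is a minimal sketch in Python that converts these periods into concrete dates. The publication and entry-into-force dates follow from the facts above; the application dates themselves are the ones fixed in Art. 113 of the final text, which is why they fall on the second day of the month rather than exactly 6, 12 or 24 months after entry into force.

```python
from datetime import date, timedelta

# Entry into force: 20 days after publication in the Official Journal of the EU.
publication = date(2024, 7, 12)
entry_into_force = publication + timedelta(days=20)  # -> 2024-08-01

# Application dates as fixed in Art. 113 of the final text:
milestones = {
    "Bans on AI practices (Art. 113(a))": date(2025, 2, 2),               # ~6 months
    "General-purpose AI models (Art. 113(b))": date(2025, 8, 2),          # ~12 months
    "General application (Art. 113)": date(2026, 8, 2),                   # ~24 months
    "High-risk systems under Art. 6(1) (Art. 113(c))": date(2027, 8, 2),  # ~36 months
}

print(f"Entry into force: {entry_into_force.isoformat()}")
for provision, applies_from in milestones.items():
    print(f"{provision}: applies from {applies_from.isoformat()}")
```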

IV. Key Requirements of the AI Act

1. Requirements for high-risk AI systems

The AI Act primarily focuses on a framework for high-risk AI systems (see III. 2. b) above).

a) Requirements

Art. 8 et seq. AI Act outline the primary requirements for high-risk AI systems. In meeting these requirements, one must consider the system’s intended purpose and implement the risk management system referred to in Art. 9 AI Act. Specifically, the requirements include:

aa) Risk management system

Art. 9(1) AI Act mandates the establishment, application, documentation and maintenance of a risk management system for high-risk AI systems. This system requires planning, implementation and regular updates throughout the AI system’s lifecycle.

The process encompasses a risk analysis and the adoption of appropriate, targeted risk management measures. These aim to mitigate risks to a level deemed acceptable under Art. 9(5) AI Act. Additionally, testing protocols must ensure the systems operate as intended and meet the requirements in Art. 8 et seq. AI Act.

bb) Data and data governance

Under Art. 10(1) AI Act, high-risk AI systems must be developed with training, validation and testing datasets that meet the quality criteria of Art. 10(2) to (5) AI Act. This includes making strategic decisions on concepts and data collection procedures (Art. 10(2) AI Act). Additionally, the datasets must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete (Art. 10(3) AI Act). Such rigorous standards are designed to mitigate the risk of discrimination against certain groups of people (Recital 67 AI Act).

cc) Technical documentation

Art. 11(1) AI Act mandates the preparation of comprehensive technical documentation for a high-risk AI system before it is placed on the market or put into operation. This documentation must be kept up to date. The essential content requirements are detailed in Annex IV AI Act.

The documentation must demonstrate the high-risk AI system’s compliance with the requirements of Art. 8 et seq. AI Act.

dd) Recording obligations

High-risk AI systems must facilitate automatic event logging during their lifecycles in accordance with Art. 12(1) AI Act. This includes capturing events relevant for facilitating post-market surveillance by the provider in accordance with Art. 72 AI Act (see Section IV. 1. d) aa) below) and allowing operators to monitor the operation of the high-risk AI systems in accordance with Art. 26(6) AI Act.

ee) Transparency and provision of information for operators

Art. 13(1) AI Act mandates that high-risk AI systems be designed and developed in such a way that their operation is sufficiently transparent. This should help operators and providers meet their obligations under Art. 16 et seq. AI Act (i.e. Chapter III Section 3 of the AI Act). The goal is to ensure that operators can interpret and appropriately use the system's outputs (Art. 13(1) AI Act).

In addition, these systems must come with instructions for use (Art. 13(2) AI Act). Art. 13(3) AI Act prescribes mandatory content, such as the name and contact details of the provider as well as the features, capabilities and performance limits of the high-risk AI system.

ff) Human supervision

Furthermore, high-risk AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons during their period of use (Art. 14(1) AI Act). This is intended to prevent or minimize risks to health, safety or fundamental rights (Art. 14(2) AI Act).

In addition, Art. 14(4) AI Act stipulates, among other things, that the persons to whom oversight is assigned must be able to understand the capabilities and limitations of the AI system and to intervene in its operation.

gg) Accuracy, robustness and cybersecurity

According to Art. 15(1) AI Act, high-risk AI systems must also be designed and developed so that they achieve an appropriate level of accuracy, robustness (i.e. resilience against errors, faults and inconsistencies) and cybersecurity consistently throughout their lifecycle.

b) Obligations of providers, operators and other parties

Under Art. 16 et seq. AI Act, providers of high-risk AI systems, operators and other parties have the following key obligations:

aa) Obligations for providers

Providers of high-risk AI systems must adhere to several obligations under Art. 16 AI Act. They must:

  • Fulfil the requirements of Art. 8 et seq. AI Act (i.e. Chapter III Section 2) and demonstrate compliance to a competent national authority upon request.
  • Supply their contact details and take necessary corrective measures as per Art. 20 AI Act.
  • Establish a quality management system to ensure compliance with the AI Act, as specified in Art. 17(1) AI Act. This system must be documented with written rules, procedures and instructions. Art. 17(1) outlines minimum requirements, such as:
    • A strategy for regulatory compliance.
    • Techniques, procedures and systematic measures for development, quality control and quality assurance.
    • A procedure for reporting serious incidents in accordance with Art. 73 AI Act.
  • Retain documentation related to the quality management system for 10 years (Art. 18 AI Act).
  • Retain logs automatically generated by high-risk AI systems under the provider’s control (Art. 19 and 12(1) AI Act).
  • Comply with the registration obligations in Art. 49(1) AI Act regarding the EU database (see Section IV. 1. c) below).
  • Affix the CE marking to the high-risk AI system (Art. 48 AI Act).

Providers established outside the EU must also appoint an authorized representative within the EU in writing before making their systems available in the EU (Art. 22(1) AI Act). According to Art. 22(3) AI Act, the authorized representative has several mandatory tasks, including:

  • Ensuring that the EU declaration of conformity and the technical documentation have been prepared (Art. 11 AI Act).
  • Verifying that the provider has conducted an appropriate conformity assessment.
  • Cooperating with authorities (Art. 22(3)(d) AI Act).

Importers, distributors and other third parties can be considered providers under certain conditions (Art. 25 AI Act). Consequently, they are subject to the same obligations as providers under Art. 16 AI Act. This applies if they:

  • Make a substantial modification to a high-risk AI system such that it remains a high-risk AI system, or
  • Alter the intended purpose of an AI system already on the market in such a way that it becomes a high-risk AI system.

bb) Obligations of importers

According to Art. 3(6) AI Act, an "importer" is "a natural or legal person located or established in the EU who places on the market in the Union an AI system bearing the name or trademark of a natural or legal person established in a third country." Importers of high-risk AI systems have specific obligations under Art. 23 AI Act, including:

  • Before placing the AI system on the market, importers must verify that:
    • The provider has conducted the conformity assessment as per Art. 43 AI Act.
    • The provider has supplied technical documentation in accordance with Art. 11 AI Act and Annex IV.
    • The system bears the required CE conformity marking and is accompanied by the EU Declaration of Conformity and instructions for use.
    • The provider established in the third country has appointed an authorized representative (Art. 22 AI Act).
  • If there is sufficient reason to assume a high-risk AI system does not comply with the AI Act, is falsified or accompanied by falsified documentation, importers must not place the system on the market until it conforms.
  • Additionally, importers must indicate their contact details on the packaging or, if applicable, in the accompanying documentation of the high-risk AI system.

cc) Obligations of distributors

Art. 3(7) AI Act defines a "distributor" as "a natural or legal person in the supply chain, other than the provider or the importer, who makes an AI system available on the Union market." Distributors of high-risk AI systems have obligations under Art. 24 AI Act that include:

  • Before making the AI system available on the market, distributors must check:
    • Whether the high-risk AI system bears the required CE conformity marking.
    • Whether it is accompanied by a copy of the EU declaration of conformity and instructions.
    • Whether the provider or, where applicable, the importer has fulfilled its obligations regarding the indication of its name and contact details (Art. 16(b) and (c), Art. 23(3) AI Act).
  • If the distributor believes or has reason to believe that a high-risk AI system does not meet the requirements of Art. 8 et seq. AI Act, it may make the system available on the market only after conformity has been established. If the system is already on the market, the distributor must take the necessary corrective measures.

dd) Obligations of the operator

Art. 3(4) AI Act defines an "operator" as "a natural or legal person, public authority, agency or other body which uses an AI system under its own authority." Operators of high-risk AI systems are subject to the following obligations under Art. 26 AI Act:

  • Use the systems in accordance with the enclosed instructions.
  • Ensure the input data corresponds to the intended purpose of the system and is sufficiently representative.
  • If there is reason to believe that using the system as instructed may pose a risk to health, safety or fundamental rights, the operator must inform the provider or distributor and the market surveillance authority and suspend the use of the system.
  • Establish competent human supervision.
  • Monitor the operation of the high-risk AI system based on the instructions for use.
  • Retain automatically generated logs to the extent that the logs are under their control.
  • For high-risk AI systems listed in Annex III AI Act that make or support decisions related to natural persons, inform those individuals that they are subject to the use of this system.

c) EU database for high-risk AI systems listed in Annex III AI Act

Providers of high-risk AI systems listed in Annex III AI Act (with the exception of point 2) and, where applicable, their authorized representatives must register themselves and their systems in an EU database before placing them on the market, putting them into service or testing them (Art. 49 AI Act).

The Commission, in cooperation with Member States, will establish an EU database containing information on registered high-risk AI systems (Art. 71(1) AI Act). Annex VIII and Annex IX of the AI Act detail the information registrants must enter into the database (Art. 71(2) AI Act). This includes the contact details of the provider and information about the high-risk AI system, such as its intended purpose and, in the case of Art. 60 AI Act, details about the test.

d) Requirements for post-market surveillance and reporting of serious incidents

After placing high-risk AI systems on the market, providers have several obligations, particularly regarding post-market observation:

aa) Post-market observation

According to Art. 72(1) AI Act, providers must set up and document a monitoring system appropriate to the type of AI technology and the associated risks.

The system is designed to collect, document and analyse data on the performance of high-risk AI systems. The system should enable the provider to continuously evaluate compliance with the requirements outlined in Art. 8 et seq. AI Act.

The monitoring system must be based on a corresponding plan, which is part of the technical documentation listed in Annex IV AI Act (Art. 72(3) AI Act).

bb) Reporting serious incidents

Providers of high-risk AI systems placed on the market in the EU must report serious incidents.

According to Art. 3(49) AI Act, a serious incident is an incident or malfunction of an AI system that directly or indirectly results in severe consequences, such as the death of a person or serious harm to a person's health. The notification must be made immediately after the provider establishes the causal link between the AI system and the serious incident, or the reasonable likelihood of such a link, and in any event no later than 15 days after the provider becomes aware of the incident (Art. 73(2) AI Act). For particularly serious incidents, even shorter deadlines apply (Art. 73(3) and (4) AI Act).
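As a simple illustration of these time limits, consider the following minimal sketch in Python. The 15-day outer period follows from Art. 73(2) AI Act as described above; the shorter periods for widespread infringements and for the death of a person reflect our reading of Art. 73(3) and (4) AI Act and are simplified here.

```python
from datetime import date, timedelta

# Outer reporting deadlines after becoming aware of a serious incident.
# The duty to report "immediately" after establishing the causal link
# always takes precedence; these are only the latest permissible dates.
DEADLINE_DAYS = {
    "general serious incident (Art. 73(2))": 15,
    "widespread infringement / critical infrastructure (Art. 73(3))": 2,
    "death of a person (Art. 73(4))": 10,
}

def latest_report_date(awareness: date, case: str) -> date:
    """Latest permissible reporting date for the given incident category."""
    return awareness + timedelta(days=DEADLINE_DAYS[case])

print(latest_report_date(date(2026, 3, 1), "general serious incident (Art. 73(2))"))
# -> 2026-03-16
```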

2. Requirements for general-purpose AI models

The AI Act includes provisions for general-purpose AI models (see III. 2. c) above), especially where a systemic risk exists.

According to recital 97 AI Act, general-purpose AI models are essential components of AI systems but do not constitute AI systems by themselves. Additional components, such as user interfaces, are required to transform them into complete AI systems. Recital 97 AI Act states: "AI models are generally integrated into and form part of AI systems."

Recital 99 AI Act provides that large generative AI models are typical examples of general-purpose AI models. These models can generate content, such as text, audio, images or video, and can easily be adapted to a wide range of tasks.

a) AI models with a general purpose

aa) Requirements

Providers of general-purpose AI models must adhere to specific requirements as outlined in Art. 53(1) AI Act. These include:

  • Prepare and maintain up-to-date technical documentation for the AI model. This documentation must include at least the information specified in Annex XI AI Act, such as a general description of the AI model.
  • Prepare, update and make available information and documentation to providers of AI systems who intend to integrate the AI model into their AI systems. This information should include at least the details specified in Annex XII AI Act.
  • Ensure that provided information enables AI system providers to thoroughly understand the capabilities and limitations of the AI model.
  • Implement a strategy to comply with EU copyright law, including through state-of-the-art technologies.
  • Create and publish a summary of content used to train the AI model based on the template provided by the AI Office.

The first two requirements – preparing technical documentation and providing information to downstream providers – do not apply to AI models released under a free and open-source license whose parameters, including weights, are made publicly available (Art. 53(2) AI Act).

The Commission is authorized to adapt the requirements in Annexes XI and XII through delegated acts (Art. 53(6) AI Act).

bb) Obligation to appoint an authorized representative for non-EU providers

Before providers established in a third country place a general-purpose AI model on the market in the EU, they must appoint an authorized representative established in the EU in writing (Art. 54(1) AI Act). The representative’s tasks include mandatory responsibilities as specified by the AI Act, such as ensuring that a general-purpose AI model complies with requirements and cooperating with the AI Office (Art. 54(2) AI Act).

b) AI models with general purpose and systemic risk

In addition to the general requirements, further obligations apply to providers of such AI models that pose a systemic risk (see III. 2. c) above).

aa) Classification

The AI Act outlines criteria for determining when a general-purpose AI model poses a systemic risk. This is particularly the case if it has been determined, using suitable technical instruments and methods, that the AI model has high-impact capabilities, or if the Commission has established the existence of such capabilities by decision (Art. 51(1) AI Act). Under Art. 51(2) AI Act, high-impact capabilities are presumed when the cumulative amount of computation used for training the model, measured in floating-point operations, exceeds 10^25. In the former case, the provider must notify the Commission without delay and within two weeks at the latest (Art. 52(1) AI Act). The provider then has the opportunity to present arguments to the Commission that counter the classification of the AI model as posing a systemic risk (Art. 52(2) AI Act). The Commission will examine these arguments and make a final determination (Art. 52(3) AI Act).
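For the compute-based presumption, a minimal sketch in Python: the 10^25 threshold is taken from Art. 51(2) AI Act, while the function name and interface are purely illustrative.

```python
# Art. 51(2) AI Act: high-impact capabilities are presumed when the cumulative
# compute used for training exceeds 10^25 floating-point operations.
PRESUMPTION_THRESHOLD_FLOP = 1e25

def presumed_high_impact(training_compute_flop: float) -> bool:
    """Check whether the Art. 51(2) presumption of high-impact capabilities applies."""
    return training_compute_flop > PRESUMPTION_THRESHOLD_FLOP

print(presumed_high_impact(5e25))  # True: the provider must notify the Commission
```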

bb) Requirements

Providers of general-purpose AI models that pose a systemic risk must adhere to additional requirements as specified in Art. 55(1) AI Act. These include:

  • Perform an evaluation of the AI model based on standardized protocols and tools, including conducting and documenting adversarial testing in order to identify and mitigate systemic risks.
  • Identify potential systemic risks at the EU level, including their causes, arising from developing, placing on the market or using the model.
  • Investigate, document and promptly report serious incidents to the AI Office and, where appropriate, to national authorities. This report should include information on possible remedial action.
  • Ensure an adequate level of cybersecurity protection for the AI model and its physical infrastructure.

c) Codes of practice

The AI Office is tasked with promoting and facilitating the creation of codes of practice at the EU level (Art. 56(1) AI Act). These codes are intended to enable providers to demonstrate compliance with the requirements for general-purpose AI models (Recital 117 AI Act).

3. Transparency obligations for certain AI systems

Irrespective of any requirements for high-risk AI systems, Art. 50 AI Act imposes transparency obligations for providers and operators with regard to certain AI systems. These obligations aim primarily to ensure that natural persons know when they interact with an AI system or content generated by it. According to Art. 50(1) to (4) AI Act, the transparency obligations apply to AI systems that:

  • Are intended for direct interaction with natural persons.
  • Are emotion recognition systems or biometric categorization systems.
  • Create so-called deepfakes.

Companies, especially those in the end-customer sector, should ensure they provide appropriate information to users if these requirements apply. This includes making clear when users are dealing with an AI system or content generated by such a system.

4. Sanctions under the AI Act

The AI Act establishes graduated fines for infringements, as outlined in Art. 99(3) to (5) AI Act. The ceiling is the higher of a fixed amount or a percentage of the company's total worldwide annual turnover in the preceding financial year:

  • For violations of the bans on certain AI practices under Art. 5 AI Act: EUR 35 million or 7%.
  • For violations of various provisions, such as the obligations of providers, operators and other parties involved in high-risk AI systems as listed above under IV. 1. b) or the transparency obligations for certain AI systems (see IV. 3): EUR 15 million or 3%.
  • For providing false, incomplete or misleading information to notified bodies and national authorities in response to requests for information: EUR 7.5 million or 1%.

According to Art. 99(7) AI Act, fines will be based on "all relevant circumstances of the specific situation" of the individual case. Regulators will take several criteria into account, including the nature, gravity and duration of the infringement.

The Commission may impose fines of EUR 15 million or 3% on providers of general-purpose AI models for certain intentional or negligent infringements. This provision will come into effect two years after the AI Act enters into force (see Art. 113(b) AI Act).
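To make the interaction of fixed amounts and turnover percentages concrete, here is a minimal sketch in Python. The tier values follow Art. 99(3) to (5) AI Act as listed above; the function name and interface are illustrative only.

```python
# Fine ceilings under Art. 99(3) to (5) AI Act: (fixed amount in EUR, share of turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art. 99(3)
    "other_obligations": (15_000_000, 0.03),      # Art. 99(4)
    "misleading_information": (7_500_000, 0.01),  # Art. 99(5)
}

def fine_ceiling(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine: the higher of the fixed amount and the turnover share;
    for SMEs and startups, the lower of the two applies (Art. 99(6) AI Act)."""
    fixed, share = FINE_TIERS[tier]
    candidates = (fixed, share * worldwide_turnover_eur)
    return min(candidates) if is_sme else max(candidates)

# Example: a company with EUR 2 billion worldwide turnover violating an Art. 5 ban:
print(f"EUR {fine_ceiling('prohibited_practices', 2_000_000_000):,.0f}")  # EUR 140,000,000
```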

V. Overall Conclusion and Outlook

The AI Act adopts a risk-based approach and establishes clear requirements and procedures for market participants across various areas. This approach necessitates an individual assessment to determine whether and to what extent AI Act requirements apply to an AI system or general-purpose AI model.

It is possible that some companies, despite being very active in AI, may be subject only to limited obligations. At the same time, the AI Act includes some obligations for general-purpose AI models that were not part of the original Commission draft.

Given that almost all companies of a certain size develop, use or are considering using AI systems, it is crucial for every company to establish compliance processes. Companies should conduct an initial review to classify AI systems or models under the categories provided for in the AI Act. They also should ensure ongoing compliance with the corresponding requirements of the AI Act.

It will be interesting to observe whether the EU can achieve its goal of creating "a global standard for the regulation of AI in other jurisdictions" through the AI Act.[21] If it does, that could offer significant advantages for companies that have adapted their structures and processes to comply with the AI Act.

Dr. Daniel Ashkar, attorney at law, is counsel in the cyber and data law practice group of Orrick in Munich.

Dr. Christian Schröder, lawyer, is a partner and head of the European cyber and data law practice group at Orrick in Düsseldorf.

 


[1] European Commission, Law on Artificial Intelligence, 21.4.2021, at https://eur-lex.europa.eu/legal-content/DE/TXT/HTML/?uri=CELEX:52021PC0206 (accessed: 18.3.2024), hereinafter "KI-VO-E Kom".

[2] Recital 1 KI-VO-E Kom.

[3] European Commission, European Approach to Artificial Intelligence, at https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (accessed: 18.3.2024).

[4] European Commission, AI Act, at https://digital-strategy.ec.europa.eu/de/policies/regulatory-framework-ai (accessed: 18.3.2024).

[5] European Commission, AI Act, at https://digital-strategy.ec.europa.eu/de/policies/regulatory-framework-ai (accessed: 18.3.2024).

[6] European Commission, AI Act, at https://digital-strategy.ec.europa.eu/de/policies/regulatory-framework-ai (accessed: 18.3.2024).

[7] European Parliament, Amendments of the European Parliament of 14.6.2023 to the Act on Artificial Intelligence and amending certain Union acts (COM(2021)0206 - C9-0146/2021 - 2021/0106(COD)), at https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html (accessed: 18.3.2024), hereinafter "KI-VO-E Parl".

[8] Recital 60g KI-VO-E Parl.

[9] Council of the EU, Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union acts, 25.11.2022, at https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/de/pdf (accessed: 18.3.2024), hereinafter "Council AI Act".

[10] Council of the EU, Artificial Intelligence Act: Council and Parliament agree on world's first regulation of AI, 9.12.2023, at https://www.consilium.europa.eu/de/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/ (accessed: 18.3.2024).

[11] Rinke, Exclusive: Germany, France and Italy reach agreement on future AI Act, 20.11.2023, at https://www.reuters.com/technology/germany-france-italy-reach-agreement-future-ai-regulation-2023-11-18/ (accessed: 18.3.2024).

[12] Council of the EU, Artificial Intelligence Act: Council and Parliament agree on world's first regulation of AI, 9.12.2023, at https://www.consilium.europa.eu/de/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/ (accessed: 18.3.2024).

[13] Cf. Bracy/Andrews, EU countries vote unanimously to approve AI Act, 2.2.2024, at https://iapp.org/news/a/eu-countries-vote-unanimously-to-approve-ai-act/ (accessed: 18.3.2024); Lomas, EU AI Act secures committees' backing ahead of full parliament vote, 13.2.2024, https://ca.style.yahoo.com/eu-ai-act-secures-committees-102423323.html (accessed: 18.3.2024).

[14] European Commission, EU schafft Blaupause für vertrauenswürdige KI in der ganzen Welt, press release dated 13.3.2024, at https://germany.representation.ec.europa.eu/news/eu-schafft-blaupause-fur-vertrauenswurdige-ki-der-ganzen-welt-2024-03-13_en?prefLang=en (accessed: 18.3.2024).

[15] European Parliament, Adopted text of the Artificial Intelligence Act, 13.3.2024, at https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_DE.pdf (accessed: 18.3.2024). As the final version of the AI Act was not yet available at the time of going to press, definitions and terminology in the final version of the AI Act may differ from those used in this article.

[16] https://eur-lex.europa.eu/eli/reg/2024/1689/oj.

[17] According to Art. 3 (8) of the AI Act, the term "actor" refers to the provider, the product manufacturer, the operator, the user, the authorized representative, the importer or the distributor.

[18] Section IV. 4.1 of the preliminary remarks and Art. 2(3) Council AI Act.

[19] Point 5.2.1 of the explanatory memorandum to the KI-VO-E Kom.

[20] According to its Art. 2, the Product Safety Regulation "lays down essential requirements for the safety of consumer products placed or made available on the market". According to Article 3(1), this regulation applies to "any article which ... is intended for consumers or is likely to be used by consumers under reasonably foreseeable conditions, even if it is not intended for them".

[21] Council of the EU, Artificial Intelligence Act: Council and Parliament agree on world's first regulation of AI, 9.12.2023, at https://www.consilium.europa.eu/de/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/ (accessed: 18.3.2024).