
RegFi Episode 68: What the EU AI Act Means for Global Businesses

Christian Schröder, leader of Orrick’s European Cyber, Privacy & Data Innovation practice, joins RegFi co-hosts Jerry Buckley and Caroline Stapleton to break down the EU Artificial Intelligence Act. The conversation explores the act’s global reach, regulatory framework and what companies can do to prepare as some provisions become effective on Aug. 2, 2025.


    Jerry Buckley:

    Hello, this is Jerry Buckley, and I am here with co-host Caroline Stapleton and our partner Christian Schröder, who is resident in Orrick’s Düsseldorf office. Christian leads Orrick’s Cyber, Privacy and Data Innovation Group in Europe. We have asked him to join us to discuss the EU AI Act.

    That act is the first comprehensive legislative initiative to regulate the use of artificial intelligence. For the benefit of our listeners who have not been following the EU AI Act developments, we’ve asked Christian to walk us through the principal features of the Act. And with this background, in subsequent episodes, we will be discussing the current status of the EU AI Act and what it will mean for global companies. We’ll also be talking about how to build a governance framework for that act.

    And we’re going to be talking about prohibited AI and what businesses are at risk and more. But to start us off, Christian, could you give our listeners a brief overview of the EU AI Act? Specifically, what is its main purpose? To whom does it apply? And why should organizations be paying close attention to it?
    Christian Schröder:  Happy to do so. So, let’s maybe start with the purpose. So, the EU Artificial Intelligence Act, that’s the long form of the EU AI Act, is the world’s first comprehensive legal framework specifically regulating artificial intelligence. Its primary objectives are to ensure that AI systems placed on the EU market are safe, trustworthy and respect fundamental rights, democracy and the rule of law, and to foster innovation and investment in AI while providing clear rules and legal certainty for businesses. This may sound a little bit counterintuitive. However, the EU believes that if there are clear rules, transparency, and, for example, for high-risk AI systems, a CE marking, AI systems can find better market acceptance. For the EU, due to its legal framework and its culture, it is clear that new systems, which may have a significant impact on individual rights and freedoms but also on our society as a whole, need to be regulated. It is then better to have one set of rules at the EU level than 27 different sets of rules from the EU member states.

    Whether this objective will be achieved remains to be seen. The AI Act introduces a tiered, risk-based regulatory approach imposing different obligations depending on the potential risks posed by an AI system. It also aims to support the development of a single EU market for AI, encourage responsible innovation, and protect individuals from harmful or unethical uses of AI. What does it apply to? The EU AI Act applies broadly to a range of actors involved in the development, deployment and use of artificial intelligence systems and general-purpose AI models, the GPAIMs. Its scope is both territorial and extraterritorial, meaning it can apply to organizations and individuals both inside and outside the European Union under certain circumstances. This obviously covers all companies that reside within the European Union.

    And by the way, when I speak about the EU, I always also speak about the European Economic Area, the EEA, which includes Iceland, Norway, and Liechtenstein. Switzerland is not included, and the United Kingdom is about to develop its own rules for AI.

    However, it also applies to companies residing outside the EU but who either place an AI system onto the European market or put it into service. Or, for example, providers or deployers of AI systems located outside the EU, where the output produced by an AI system is used in the EU. As you can see, many organizations, even outside the EU, can potentially fall within the scope of the EU AI Act and should thus carefully consider whether they are subject to it. Why should organizations pay attention to it? Apart from the fines, which we will discuss later on, compliance with the EU AI Act will determine whether products have access to the EU market.

    It also determines whether the use of an AI system leads to significant risks, for example, product liability. But many businesses also value their reputation and the trust in their brand. The Act emphasizes transparency, accountability and human oversight in AI systems. By adhering to these principles, organizations can build trust with consumers, business partners and regulators.
    Demonstrating a commitment to responsible AI can enhance an organization’s reputation and foster long-term customer loyalty. Even though many requirements under the AI Act are not yet in force, it is highly recommended to take a look at it now, because product compliance may take some time, in particular for providers of high-risk AI systems. Going through the entire effort of achieving compliance and the respective conformity assessment will take time.

    Further, companies procuring AI may find it helpful to ensure that they’re buying products and services which they can also continue using in a few years, as compliance with the requirements may have an impact on their overall liability, not only from an EU AI perspective. Think about high-risk AI HR tools. Compliance will have a significant impact on workforce acceptance.
    Caroline Stapleton:  Christian, thanks so much for being with us today. And as you’ve previewed just now, the EU AI Act has such a broad and far-reaching scope that I think at times it can be challenging for organizations to even conceptualize how much of it applies to them. Are there heightened expectations for particular industries or types of companies? And so I’ll put it to you. What types of technology providers and organizations need to pay particular attention to this act? And are there any specific roles or industries that should be especially focused on their obligations here?
    Christian:  Yeah, so generally it is industry agnostic. The AI Act, as you already mentioned, has a very broad scope of application and affects a wide range of stakeholders. It primarily governs two types of technology. First, AI systems, that is, any machine-based system, for example, software, that is designed to operate with varying levels of autonomy, that infers from the input it receives how to generate outputs, such as predictions, content, recommendations or decisions, that can influence physical or virtual environments, and that may continue to adapt after deployment.

    And then we have got the general-purpose AI models, the GPAIMs, which I already mentioned before. That’s an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks and can be integrated into a variety of downstream services or applications.

    So, think about the various LLMs that are on the market. They’re not built for a specific use purpose. They are, therefore, general-purpose AI models. The obligations imposed on AI systems vary based on whether the system or its use qualifies as prohibited AI, high-risk AI or otherwise involves AI interacting directly with individuals or exposing individuals to AI-generated content, for example, individual user-facing AI.

    There are also certain exceptions available for AI systems and GPAIMs that are subject to free- and open-source licenses or currently in the pre-market research, testing or development stages. The AI Act imposes obligations on organizations based on their role in relation to the covered technology. Most obligations apply to providers, of course, the parties that build the AI system, and to deployers, the users, while importers and distributors mainly face regulatory compliance verification and documentation obligations.

    However, the AI Act is not only aimed at leading technology companies who are developing well-known AI applications but also at companies across various industries that must adapt to the requirements of the AI Act. Actually, we need to go a little bit through the definitions, but they’re really key for understanding which obligations apply.

    So, let’s start with the providers. These are parties that develop or have another party develop on their behalf an AI system or GPAIM and either place the AI system or GPAIM on the market or put the AI system into service in the EU under their own name or trademark, whether for payment or free. That doesn’t matter.

    Or the output from their AI system is used in the EU. Placing on the market means the first making available of an AI system or GPAIM on the EU market, and putting it into service means supplying an AI system for first use to a deployer, or for own use, in the EU for its intended purpose. An importer, that’s a party located or established in the EU that places an AI system on the market in the EU under the name or trademark of a person or legal entity outside the European Union.

    Then we’ve got the distributors. These are parties that make an AI system available on the market in the EU but do not qualify as a provider or importer. Making it available on the market means supplying an AI system or GPAIM for distribution or use in the EU as part of a commercial activity, whether or not for payment. And then last, the deployers. These are most of the companies. That’s basically everyone. All the parties that use an AI system in connection with a professional or commercial activity, that are established or located in the EU, or that use an AI system to generate outputs that are used in the EU.

    So very similar to the GDPR, the use of an AI system for purely private purposes is exempt. So that’s not covered under the AI Act. However, be aware, under certain circumstances, when a party becomes heavily involved in the technical development and/or placing onto the market of a high-risk AI system, such as, for example, by substantially modifying the system, modifying the intended purpose of the system, or — and that’s very important to remember here — putting their name or trademark on the system, the party may be deemed the provider of that system.

    So, a product manufacturer may be deemed a high-risk AI system provider where the high-risk AI system is placed on the market in the EU together with the manufacturer’s product under the manufacturer’s name or trademark. So that’s a very important point. Before you affix your own trademark or name, you should be very careful and understand which obligations will be incumbent on you.
    Jerry:  Well, there’s a lot there for people to think about, and there are many ways in which you can come under the Act and its requirements. You know, it really is the first comprehensive attempt to regulate AI worldwide, and it’s created without a prior blueprint. You have to imagine the challenge that existed for the creators of this legislation. How does the Act aim to balance the need for regulation with the goal of not stifling innovation? Could you walk us through the structure of the Act and how it addresses future developments in AI and manages different levels of risk for AI?
    Christian:  Yeah, Jerry, as you already mentioned, it’s not easy, and it wasn’t easy for the legislator and the EU as well. That’s something that became very clear when looking at the genesis of the European AI Act. When the legislative process of the EU AI Act was almost finished, suddenly LLMs came into the market, and due to their significant impact on the ways we now, or even more so in the future, will work, the legislators had to quickly amend the Act and introduce the specific requirements for GPAIMs. So that was quite an interesting thing that really came at the very end.

    How does the EU intend to ensure that the EU AI Act will be accepted and not outdated once it is finally fully in force? It has tried to do so by several means. First of all, it takes a technology-neutral approach. Regulating AI comprehensively is challenging due to the complexity and dynamic nature of its applications and developments. One major concern is the potential to hinder innovation. To address this, the AI Act adopts a technology-neutral approach, aiming to avoid frequent overhauls as technology rapidly evolves.

    The European Commission will thus have the authority to specify and update some of the regulation’s provisions through delegated acts. That’s a fairly new way of drafting a law. The legislator obviously foresaw that updates would be needed on a regular basis and thus didn’t even try to regulate everything from the beginning in the act. For example, the list of high-risk AI systems in Annex III can be amended by a legal act of the European Commission at any time. This approach allows the EU to respond quickly to future AI developments.

    However, from a company’s perspective, it carries the risk of needing to regularly adapt compliance systems, potentially also affecting investment security. Small- and medium-sized businesses can expect lower fines. And by explicitly mentioning startups, the EU acknowledges their high innovation potential and aims to avoid disadvantaging them in international competition. Finally, and most importantly, the AI Act uses a risk-based approach to classify systems into different risk levels.

    So first and foremost, it’s really important to know the AI Act does not prohibit AI generally. So unlike the GDPR, which simply says everything is prohibited — any processing is prohibited — unless you have a clear legal justification for it, the AI Act takes a totally different approach and says only selected AI systems are prohibited, and then there are varying levels of obligations depending on the risk posed by the different AI systems. The classification determines the requirements and legal consequences, which can include a ban if there’s an unacceptable risk, or further significant regulations. So, for example, prohibited AI practices. This is, by the way, already in place, already enforced. AI systems with unacceptable risks are prohibited. However, the scope of this ban is very limited under Article 5 of the AI Act, meaning most companies’ businesses are unlikely to be affected.

    Prohibitions include, for example, social scoring systems, which can have a significant impact on individuals and on our society; subliminally influencing a person with the objective or effect of significantly impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken, in a manner that causes, or is reasonably likely to cause, significant harm to that person or another person or group; and the untargeted scraping of facial images from the internet or CCTV footage to create or expand databases.

    Extensive guidance on what prohibited AI currently encompasses has already been published, which we’ll discuss in a separate podcast. And then there are the high-risk AI systems. They are heavily regulated but not prohibited; they are permissible. High-risk AI systems are systems that pose a risk to the health, safety or fundamental rights of a person or are used in critical infrastructure. High-risk AI is determined by the annexes to the AI Act, and the Commission is required to issue guidelines on the practical implementation of these requirements no later than February 2, 2026.

    High-risk AI systems include, for example, biometric identification of natural persons; AI systems to be used for the recruitment or the selection of natural persons; high-risk systems intended to be used for emotion recognition. However, this does not apply to AI systems that do not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons and do not materially influence the outcome of decision-making. In other words, even if the system may initially fall within the bucket of a high-risk AI system, a specific analysis of its potential impact on the health or fundamental rights can exclude the system from being considered a high-risk system.

    Many of our clients are currently going through this analysis, mapping their AI systems generally, and in particular those that could be considered high-risk AI systems, to understand which bucket each system falls into. And then we have got the GPAIMs, right? Unlike the requirements for high-risk AI systems, the essential requirements for general-purpose AI models are only aimed at providers.

    However, additional requirements apply if systemic risks exist. A systemic risk is one that is specific to the high-impact capabilities of general-purpose AI models and has a significant impact on the European Union market due to its reach or due to actual or reasonably foreseeable negative effects on public health, safety, public security or fundamental rights. Basically, these are very large LLMs.

    And then we’ve got, and it’s a very key principle, the transparency — or transparency obligations — for providers and deployers of certain AI systems. Irrespective of any requirement for high-risk AI systems, Article 50 of the Act imposes transparency obligations for providers and deployers with regard to certain AI systems. Transparency is a key principle, as I mentioned already. These obligations aim primarily to ensure that natural persons know when they interact with an AI system or content generated by it. The transparency obligations apply to AI systems that are either intended for direct interaction with natural persons, for example, a chatbot, or that recognize or categorize emotions based on biometric information, or that create so-called deepfakes.
    Jerry:  You know, I’m just going to make one observation, Christian. It is a highly complex regulation, and it must have taken a lot of thought. Quickly, when did the process start? When did the thought about legislating in this area start in Europe?
    Christian:  For the specific AI Act, right?
    Jerry: Yes.
    Christian:  A long time ago, really. But then it was done very quickly. Off the top of my head, the entire legislative process took about three years; I would need to review the specifics. But this was really pushed through very quickly because, I mean, the development is so dynamic anyway. So you’ve got to start somewhere and then quickly get it through. And at the same time, pretty much all the other European acts, such as the Data Act, the DSA, everything else, were created as well, which is why right now everyone feels a little bit overwhelmed. And there’s little integration or adaptation of all these different laws, little alignment of these laws with each other.

    So the AI Act, for example, sets out lots of requirements for AI systems, but it does not regulate at all how to use personal data when using an AI system, right? They didn’t want to touch the “Bible,” the “European Bible,” the GDPR, also for political reasons. And that actually makes the AI Act difficult to implement and use. So it’s just a sectoral approach in a certain way. And companies feeding their AI systems with personal data still struggle with the question of how and when they may use personal data. That’s something that we could discuss in a separate podcast on how to reconcile the requirements of the AI Act with the GDPR. It’s an entirely different story.
    Caroline:  So, Christian, I think it’s such an important point that you made about how this act is not a prohibition. It’s more of a risk-based regulation that, rather than looking at different industries, looks at different use cases and different roles, staying industry agnostic in how it treats the way organizations work when thinking about what the different requirements are. But the issue is the complexity. I think as companies are looking to comply with the EU AI Act, many may be unsure about the specific obligations that they’re going to face. And so I’ll put it to you.

    What would you say are the most important requirements for companies to be aware of, especially for those that are dealing with the high-risk AI systems that you’ve described? And I think our listeners will be really interested in just a couple examples of key action items that you think are really important for providers, importers, distributors, deployers to know about and the practical steps that companies are taking or at least should be taking to ensure compliance.

    So again, not asking for everything, but what do you think are the most important things that absolutely companies should know about and be taking active steps to comply with?
    Christian:  That varies depending on the different functions, so whether you’re a provider or the deployer of a high-risk system. Generally, Article 8 et seq. of the AI Act specify the requirements. And many of the requirements are yet to be clarified, so it is not easy for companies to prepare at the moment.

    First of all, they need to implement, for a high-risk AI system, a risk management system. So the AI Act mandates the establishment, application, documentation and maintenance of a risk management system. The system requires planning, implementation and regular updates throughout the AI system’s lifecycle. The process encompasses a risk-based analysis and the adoption of appropriate, targeted risk management measures. These aim to mitigate risk to a level deemed acceptable under the AI Act. Additionally, testing protocols must ensure that the systems operate as intended, meet the requirements and continue to meet them.

    Then we need to implement data and data governance. AI systems classified as high-risk must have training, validation and testing datasets that adhere to the quality standards set out in the AI Act. This includes making strategic decisions on concepts and data collection procedures. Then they need to have technical documentation. The AI Act mandates the preparation of comprehensive technical documentation for a high-risk AI system before it is placed on the European market or put into service. The documentation must be maintained and updated consistently. That’s a general thing, right? It’s not just about initially meeting these requirements but meeting them throughout the entire lifecycle of the AI system. Then there are record-keeping obligations. High-risk AI systems must facilitate automated event logging during the lifecycle so that one can actually check, record and see what the system has produced.

    Transparency, I mentioned already, is really, really key in particular for high-risk systems. High-risk AI systems must be designed and developed in such a way that their operation is sufficiently transparent.

    In addition, these systems must come with instructions for use. In this regard, the AI Act prescribes mandatory content, such as the name and contact details of the provider, as well as the technical capabilities and characteristics of the high-risk AI system. Another key principle of the AI Act is human oversight. So high-risk AI systems must be developed in such a way that they can be effectively supervised by natural persons for the duration of their use. This is intended to prevent or minimize risks to health, safety or fundamental rights. This does not mean that every step the system performs must be closely monitored. However, its outcome needs to be reviewed on a regular basis to ensure that one identifies and reacts to flaws. Last but not least, accuracy, robustness and cybersecurity are very important requirements. A high-risk AI system must be developed in such a way that it achieves an appropriate level of accuracy, robustness and resistance to errors, malfunctions or inconsistencies, and cybersecurity must be maintained consistently throughout its lifecycle.

    However, all these obligations not only apply at the time, as I mentioned, the system is put onto the market; they apply throughout the entire value chain. Providers of high-risk AI systems, deployers and other parties have the following key obligations. Providers have the main obligations, obviously, right? They must adhere to several obligations. On the one side, they must generally fulfill the obligations set out by the AI Act. They must establish a quality management system to ensure compliance with the AI Act is met and documented. The system must be documented with written rules, procedures and instructions. So you need to build up a bit of a compliance function there, which outlines, at a minimum, the strategy for regulatory compliance, the techniques, procedures and systematic actions. They must retain the documentation related to the quality management system for 10 years. They must retain logs automatically generated by high-risk AI systems under the provider’s control and, under certain circumstances, comply with the registration obligations regarding the EU database for certain high-risk AI systems.

    So there’s a kind of database that they must be registered in. And then, and this is actually really important for market acceptance, they must affix a CE marking to the high-risk AI system. In the end, when you have met all these requirements, you must undergo a conformity assessment. And once you have this assessment, you have a CE compliance marking on your product, again with a view to actually enabling the sale and use of your AI system on the EU market. And that’s actually fairly well recognized and understood in the European market.

    Providers established outside the EU must also appoint an authorized representative in the EU before making their systems available in the EU. So that’s not a new concept, right? That’s something that we know already from the GDPR, and from the DSA as well. As already mentioned, importers, distributors and other parties can be considered providers under certain conditions. So, for example, if they put their name and trademark on it, then they are also subject to the same obligations.

    And as I mentioned, there’s an obligation to not only look at it at the beginning but also throughout the lifecycle. So the providers are required to conduct post-market surveillance and reporting of serious incidents. So after placing a high-risk AI system on the market, providers have several obligations. Providers must set up and document a monitoring system appropriate to the nature of the AI technology and the associated risks. They must report any serious incidents that they see.

    Then we’ve got the obligations of importers and distributors. They have more limited requirements, with specific obligations before placing an AI system on the market. For example, distributors making a system available on the market must verify that the provider or the importer has fulfilled its obligations, for example regarding contact details. They must also ensure that the provider has fulfilled obligations such as supplying the technical documentation, that the system bears the required CE marking and that an authorized representative has been designated by a provider established in a third country.

    And if there’s sufficient reason to assume that a high-risk AI system does not comply with the AI Act, importers must not place the system onto the market. So, I mean, they have obligations that they can realistically adhere to, because they may not necessarily have deep insight into how the system works, but they can, for example, look and check — are the contact details there, is the CE marking there, is the representative appointed? And then we’ve got the obligations of the deployers, of all of us using AI systems, right? These companies must use the systems in accordance with the enclosed instructions from the provider. They must monitor the operation of the high-risk AI system based on the instructions for use.

    So again, that’s documented, that’s something that they know or should know. They must ensure that the input data corresponds to the intended purpose of the system and is sufficiently representative. And if there is any reason to believe that using the system as instructed may pose a risk to health, safety or fundamental rights, the deployer must inform the provider or distributor and the market surveillance authority, and suspend the use of the system.

    Finally, they must establish human oversight. So, I mean, we’ve already seen in the market that sometimes AI systems have been used by some companies and then, in the end, if the outputs weren’t very convincing, the company said, “But it wasn’t us.” That’s not possible, right? I mean, you need to take a look. And if you are not convinced that the output is actually what the system promised to deliver, then you need to not only inform the provider but also actually stop using it. Then we have got the specific requirements for GPAIMs, but as this is fairly detailed and does not apply to most companies, I will skip it for today.

    But as I mentioned already before, transparency is really key. So companies, especially those serving end customers, should ensure that appropriate information is given to users. This includes making clear to users when they are dealing with an AI system or with content generated by one. So even though a chatbot may interact very naturally, the chatbot must say, you know, “I am an AI system” or the like, in a way that is very visible. That’s fine, too.
    Caroline:  You know, I think you’ve emphasized, Christian, that transparency is really a bedrock of this legislation, and a requirement that, to me at least, seems to dovetail with that interest in transparency is the requirement to implement AI literacy measures. Since February 2, 2025, all users of AI systems covered by the Act are required to implement AI literacy measures. Could you tell us a little bit about what AI literacy means in this context? And again, what steps are organizations taking to ensure that they comply with this requirement?
    Christian:  Yeah, it is very important to know that this requirement is already in place; many businesses don’t know that. Companies need to train. So what does it mean? It does not mean learning the EU AI Act by heart. It means developing holistic training on how AI systems should be used by a company.

    You go to the workforce and preferably have very targeted training in place, which reflects your products and services, how your departments work and collaborate, and which reflects the AI organization already in place, so that the workforce really understands that this is targeted and geared towards them. And then, through the training, they need to understand that when using an AI system, you can, for example, easily violate third-party IP rights. For example, if a system generates an image with a brand that looks like a very well-known brand, you cannot say, “Look, this is a new kind of thing that the AI system has created.”

    It may still violate and interfere with the IP rights of a third party, so you need to be cognizant of that and take appropriate measures to ensure there is no such infringement. Then secondly, it is very important to understand that AI systems can also create risks to your own IP rights or trade secrets, for example, right? And then one should also understand how to ensure compliance with the European AI Act, but let me stop there, because I would really emphasize that most of our clients operate in different jurisdictions.

    So the AI Act does not require — and in my view, shouldn’t really require — providing training on the specific requirements of one specific jurisdiction only. You need to take a holistic view and be cognizant and aware of the other requirements in the different jurisdictions you’re in. So again, going back to what it means, AI literacy is really about providing holistic training on how to use AI in your company, taking into consideration third-party IP rights and your own IP rights, but also compliance with the different laws, preferably in all the jurisdictions you’re in.
    Jerry:  Well, Christian, compliance with the EU AI Act is clearly critical, given the significant fines and enforcement mechanisms that are in place. Could you explain the main risks for an organization that fails to comply and how enforcement will work? Who are the key authorities involved? And what potential sanctions should a company be aware of?
    Christian:  Yeah, when we talk about enforcement, when advising clients I typically start by noting that it’s not likely that we will see significant fines coming up very soon. That’s also what we hear from the regulators, for various reasons. First of all, you know, many of the provisions under the European AI Act are not yet in force, right? Then separately, everyone in the market is still learning how AI works and how the European AI Act is supposed to be implemented. We will receive further guidance, and the supervisory authorities are cognizant of this difficult situation many companies are in.

    But eventually fines will come, and they have teeth, right? The AI Act establishes graduated fines for infringements. The fines are based on a company’s global annual turnover in the previous financial year and are categorized as follows. For violations of the bans on certain AI practices — prohibited AI basically — it can be up to €35 million or 7 percent of the total worldwide annual turnover of the entire undertaking, whichever is higher. So it is not about the revenue that a company generates in the European Union. They would take into consideration the entire revenue of the entire undertaking worldwide.
    Jerry:  That's pretty significant, isn't it? Yeah.
    Christian:  Yeah. Absolutely. And I mean, that’s the same kind of mechanism as under the DSA and under the GDPR as well. And some of those fines were already fairly significant. So that’s definitely something one should consider. Then for violations of various other provisions, such as the obligations of providers, deployers or other parties involved with high-risk AI systems, or the transparency obligations for certain AI systems, it is up to €15 million or 3% of the total worldwide annual turnover of the undertaking, whichever is higher.

    And then, for providing false, incomplete or misleading information to notified bodies and national authorities in response to requests, it is up to €7.5 million or 1% of the total worldwide annual turnover. And according to Article 99 of the AI Act, fines will be based on all relevant circumstances of the specific situation in the individual case. Regulators will take several criteria into consideration, including the nature, gravity and duration of the infringement.

    So again, as with all the other laws that most businesses are subject to, it absolutely makes sense to prepare and to document the compliance preparations, because any such efforts will certainly show a willingness to comply with the Act and, in case of a violation, reduce the risk of getting hit with a very high fine.
    The Commission may impose fines of up to €15 million or 3% of the total worldwide annual turnover on providers of general-purpose AI models for certain intentional or even negligent infringements.
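    As a rough illustration of how these tiered caps combine a fixed amount with a share of worldwide turnover under the “whichever is higher” rule, here is a minimal Python sketch. The tier labels, the turnover figure and all names are hypothetical editorial assumptions added for illustration; only the percentages and fixed sums come from the discussion above, and none of this is legal advice.

```python
# Illustrative sketch only: maximum fine caps under the AI Act's tiered
# "fixed amount or % of worldwide annual turnover, whichever is higher" rule.
# Tier names and the example turnover are assumptions made for this sketch.

TIERS = {
    "prohibited_ai_practices": (35_000_000, 0.07),        # up to €35M or 7%
    "other_operator_obligations": (15_000_000, 0.03),     # up to €15M or 3%
    "misleading_info_to_authorities": (7_500_000, 0.01),  # up to €7.5M or 1%
}

def max_fine_cap(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the theoretical upper bound of a fine for a given tier."""
    fixed_amount, turnover_share = TIERS[tier]
    return max(fixed_amount, turnover_share * worldwide_annual_turnover_eur)

if __name__ == "__main__":
    # Hypothetical undertaking with €2 billion worldwide annual turnover.
    turnover = 2_000_000_000
    for tier in TIERS:
        print(f"{tier}: up to €{max_fine_cap(tier, turnover):,.0f}")
```

    For such a hypothetical undertaking with €2 billion in worldwide turnover, the 7 percent tier would cap out at €140 million, well above the €35 million floor, which is why the worldwide-turnover point Christian makes matters so much.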

    And then we’ve got different systems of authorities. On the one side, we have a newly established AI Office at the Commission level. It will be primarily responsible for monitoring and enforcing the European AI Act for general-purpose AI models. It is basically the center of AI expertise across the European Union and thus plays a key role in implementing the AI Act, especially for general-purpose AI models, fostering the development and use of trustworthy AI as well as international cooperation.

    And then we have, on the national member states level, the national authorities enforcing the AI Act. Each member state is required to establish at least one notifying authority and one market surveillance authority, or an authority that performs both functions. The market surveillance authorities are responsible for enforcing and sanctioning under the AI Act.

    And according to Article 85 of the AI Act, natural and legal persons have the right to file complaints with the relevant market surveillance authority regarding any infringements. In addition, there are potentially damage claims that individuals can bring, but the main enforcement mechanism will be the fines.
    Jerry:  Well, in light of the enforcement mechanisms and the potential fines under the EU AI Act, do you have any personal recommendations for companies to consider in case of a worst-case scenario? For example, are there lessons from existing laws and best practices that organizations should think of, analogizing from other acts to how you would prepare for and comply with the EU AI Act?
    Christian:  So most compliance investigations and enforcement actions under the AI Act will be conducted by the national market surveillance authorities. However, there’s a significant exception for providers of general-purpose AI models. As I mentioned, this will be done by the European Commission, right? So unlike with the GDPR, where we had the single point of contact and one supervisory authority in a specific country could, for example, be the sole competent authority to supervise and enforce violations under the GDPR all across the European Union, for GPAIMs this is taken away from the national level and given to the Commission level, for various reasons. And it’s worth noting that until now, the Commission has exercised such investigative and enforcement powers only in the field of competition law.

    So, what can we learn from competition law? European case law provides a special rule for investigations by the European Commission: the legal professional privilege between external EU-qualified lawyers and their clients. That’s something that, when dealing with such investigations, is oftentimes overlooked, and companies should take a look at it.

    So, while legal privilege in the United States is very different from legal privilege in the European Union, it is really important to maintain the best defense posture, which means that in the case of such an investigation, you really need to make sure that you’ve got the appropriate, qualified professionals retained.

    And according to the European LPP, European Commission inspectors are not allowed to review any documents which are covered by the LPP. The conditions for the LPP as applied in competition law are that it covers written communications between a company and its lawyer, the lawyer must be registered with a bar in the European Union, and it must be an external lawyer, so somewhat similar to the U.S. perspective. It’s worth remembering that although the European LPP rules originated from competition investigations, they’re likely applicable in other contexts, due to their foundation in the Charter of Fundamental Rights of the European Union. And providers of GPAIMs may also be able to benefit from this privilege if the Commission conducts an investigation.

    So that’s a very important point to make.
    Caroline:  Christian, as we wrap up, just thinking about everything that we’ve talked about and learned today about this act, and again, just the number of different requirements based on how you’re using AI and, you know, the different types of risk that are presented by different use cases. What are the key takeaways for our listeners regarding the overall impact of the Act? Are there one or two things that everyone listening to this podcast should remember? And do you have any thoughts on the implications of this act for the future of AI regulation, both within the EU and as a global standard?
    Christian:  Lots of good questions, Caroline. And so I think a key principle to remember is that not everything is prohibited. The AI Act doesn’t want to prohibit AI systems generally. It wants to foster AI. Whether this will be achieved, we don’t know. It remains to be seen how it will work. But it obviously has adopted a risk-based approach. And the risk-based approach requires a case-by-case assessment to determine whether and to what extent the requirements of the AI Act apply to a specific AI system or general-purpose AI model. So very practically speaking, similar to the GDPR, and I know most companies don’t like it, but do a mapping.

    Take a look at all your AI systems and understand which legal threshold they are subject to. That really may relieve you of a lot of burden and uncertainty, and it’s absolutely necessary for understanding and building compliance. I know this is a painful task, but it’s actually a must, and one that many companies are going through right now. Given that almost all companies of a certain size develop, use or are considering using AI systems, it’s crucial for every company to establish compliance processes.
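    As a loose illustration of what such a mapping exercise might capture, here is a minimal sketch of an AI-system inventory entry. The structure, field names and categories are editorial assumptions added for illustration; they are not terminology mandated by the AI Act or discussed in the episode.

```python
# Illustrative sketch only: a minimal AI-system inventory for the mapping
# exercise described above. Field names and categories are assumptions.

from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

class RiskBucket(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk (per the annexes)"
    TRANSPARENCY = "transparency obligations (Article 50)"
    MINIMAL = "minimal / no specific obligations"

@dataclass
class AISystemRecord:
    name: str
    business_purpose: str
    role: Role                 # the organization's role for this system
    risk_bucket: RiskBucket    # outcome of the case-by-case assessment
    uses_personal_data: bool   # flags GDPR interplay for follow-up review

# Example entry for a hypothetical HR screening tool.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        business_purpose="shortlisting job applicants",
        role=Role.DEPLOYER,
        risk_bucket=RiskBucket.HIGH_RISK,
        uses_personal_data=True,
    ),
]
```

    An inventory along these lines is one way to record, per system, the organization’s role and the risk bucket resulting from the case-by-case assessment Christian describes, so that the ongoing compliance processes he turns to next have something concrete to attach to.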

    So companies should begin by conducting an initial review to classify their AI systems, which I already mentioned. They should also establish processes to ensure ongoing compliance with the applicable requirements under the AI Act. And how do they do so? What we’re currently working on with clients is building an AI governance structure, meaning understanding who is to be involved when an AI system is to be introduced or actually used in the organization. And that certainly is not just legal. It’s not just the security department. It’s not just HR. It’s all of them, including IT, right?

    The key decision makers all need to be involved, and they need to meet on a regular basis in order to keep pace with the significant developments in AI. Only this way can companies ensure that they can oversee the AI being implemented in the organization without being perceived as someone who is prohibiting AI, because that would, by default, not work anyway. So really having a good compliance structure — an oversight organization — is something that I would definitely recommend to everyone.

    It absolutely remains to be seen whether the EU will succeed in establishing the EU AI Act as a global standard for the regulation of AI in other jurisdictions. I mean, that is the goal, right? But many jurisdictions take different approaches, and it remains to be seen how the AI Act will work. There are currently lots of discussions at the European level on whether the AI Act should be postponed, and many companies have voiced such a view. But to my knowledge, the European Commission has just confirmed that it is not considering any suspension of the European AI Act. We need to see how it works out.
    Jerry:  Christian, we have run out of time, and this has been an extraordinarily comprehensive overview of the EU AI Act, its purpose, its implementation, and the practical side of getting ready for compliance. Thank you so much for joining us. As you mentioned, we’ll be looking at other issues in subsequent podcast episodes, but this has been a wonderful setting of the table for our listeners to understand the EU AI Act. Thank you.