Inside the EU AI Act: Implementation, Enforcement & What’s Next | Orrick RegFi Podcast

Episode 69: Inside the EU AI Act: Implementation, Enforcement & What’s Next
27 min listen

In our continuing coverage of the EU Artificial Intelligence Act, co-hosts Jerry Buckley and Caroline Stapleton are joined by Orrick partners Shannon Yavorsky and Julia Apostle. The conversation explores how companies can navigate emerging AI compliance requirements in both the European Union and the United States, including assessing risk, mapping AI systems and developing scalable governance frameworks. Since recording this episode, implementation of the EU AI Act has proceeded apace, with the EU Commission releasing its final guidance for developers of general-purpose AI models (GPAIM) and the Code of Practice undergoing its final approval process before the GPAIM obligations take effect on August 2.


    Jerry Buckley

    Hello. This is Jerry Buckley, and I am here with co-host Caroline Stapleton. We’re joined by two of our partners, Shannon Yavorsky and Julia Apostle.

    Shannon, who is resident in Orrick’s San Francisco office, heads Orrick’s Global Cyber, Privacy & Data Innovation Group and co-leads the firm’s Artificial Intelligence Group.

    Julia, who is resident in Orrick’s Paris office, advises clients on strategic technology transactions and the regulation of digital goods and services, including compliance with the EU AI Act, which is the subject of our podcast episode today. In our last episode, Christian Schröder gave us an overview of the EU AI Act — its purpose, its scope and its principal obligations. Today, we discuss in more detail the current state of play in Europe regarding implementation of the AI Act, as well as the regulatory landscape for AI in the United States and how companies should be reacting.

    Let’s start with you, Julia. With less than a month to go before additional parts of the European Union’s AI Act take effect, both American and European companies are calling for a pause in the application of those provisions. And this is getting support from some politicians, like Swedish Prime Minister Ulf Kristersson, who called the AI rules confusing and asked the EU to pause. With the August deadline for certain parts of the Act right around the corner, how do you think the EU Commission and the EU AI Office will respond? And for the benefit of our listeners, maybe you could describe the function of the EU AI Office.

    Julia Apostle

    Thanks, Jerry. Happy to, and happy to be here speaking with you both today. So, starting with the pause: at this point, we no longer think that the AI Act will be paused. The European Commission’s tech chief, Henna Virkkunen, said at the end of last week that there would be no pause, that the text is the text, and a law is a law. So, it will take effect as planned. Whether the law is simplified, however, is a separate question, and there is still a suggestion that certain provisions will be simplified as part of an overall European regulatory simplification agenda that is happening right now.

    So, a proposal will come out towards the end of this year addressing certain provisions of the AI Act, and there may be simplifications of some of the obligations, for example in how they apply to small- and medium-sized businesses, particularly those based in Europe. So, we can hope for some simplification, but probably not a pause.

    And in terms of what the AI Office is: it is a new regulatory body within the European Commission that is responsible for overall oversight and enforcement of the AI Act but, more specifically, has exclusive jurisdiction over developers of what are called general-purpose AI models, or foundation models, and over general-purpose AI systems, meaning AI systems that incorporate general-purpose AI models where the developer of the system and the developer of the model are the same party. So, that’s quite new. The AI Office has exclusive jurisdiction there.

    Jerry Buckley

    So, those are generally what we refer to as GenAI models. Is that right?

    Julia Apostle

    GenAI, yes, but the definition of general-purpose AI model doesn’t have the ability to generate content as a precondition. So, there can be large models that satisfy the conditions of being a general-purpose AI model but are not capable of generating text. Large classification models might be an example: models that are extremely good at classifying and identifying information but don’t actually generate new text themselves. Those are conceivably another category.

    Jerry Buckley

    Thanks for that clarification, Julia.

    Caroline Stapleton

    Thanks so much, Julia, and welcome to both of you. So, Shannon, here in the U.S., we know there’s a lot of AI-related legislative activity happening at the state level, but it seems less likely there’s going to be a federal law, at least in the near term. Many of us have heard that as part of the reconciliation bill signed on July 4th, there was an effort to place a ten-year moratorium on state AI legislation, but that was removed from the final version of the bill. So, can you tell us more about what you think is likely to happen now and what AI legislative trends you’re seeing in the U.S. at the state level but also what you might foresee at the federal level?

    Shannon Yavorsky

    Yeah, that’s a great question, Caroline, and thank you for having me on the podcast today. Legislative activity at the state level on AI is off the charts right now. Since the start of the year, there have been about a thousand AI-related bills proposed, and there’s no comprehensive federal legislation or regulation yet that specifically regulates the development or use of AI. There have been a few proposals at the federal level, but no bills have moved past the earliest stages of the lawmaking process so far. So, no federal law on the horizon just yet.

    In terms of the administration, the Trump administration has taken a more permissive approach to AI so far. In January of this year, Trump issued an executive order rescinding actions taken by the Biden administration that were inconsistent with what may be described as a strongly pro-innovation approach to federal oversight of AI. And while there are no AI-specific laws at the federal level, the various governmental agencies have made it very clear that they will regulate AI as it falls within their jurisdictional authority.

    So, for example, the DOJ, the FTC, the EEOC and the CFPB, as it then was, issued a joint statement in 2023 clarifying that their authority applies to software and algorithmic processes, including AI. So, we can expect some oversight from federal agencies where AI falls within their remit. Against that backdrop, states are taking the lead on AI-specific regulation in the U.S., and most bills and enacted laws at the state level selectively regulate certain aspects of AI development, deployment and use.

    So, relevant topics of concern for state regulators include CSAM (child sexual abuse material); obligations on social media companies to contain the distribution of non-consensual sexual content; lots of different obligations on large online platforms to remove or label certain AI-generated political content during election seasons; transparency, such as watermarking of certain GenAI content; and consumer-facing disclosures for chatbots. And that last one is probably what most consumers are seeing right now.

    If you go on a website and there is a chatbot feature, the provider of that chatbot is required to make it very clear that someone is interacting with software and not with a human. One state has been leading the way on comprehensive AI regulation, and that’s Colorado. In May 2024, Colorado enacted the first comprehensive U.S. AI law, the Colorado AI Act. It bears a lot of similarities to the EU AI Act in how it’s organized: it creates duties for developers and deployers of AI in some of the same ways that the EU AI Act regulates developers and deployers. And I know Julia is going to talk a little bit more about that as well.

    And unlike certain state privacy laws, there’s no revenue threshold for applicability of the Colorado law. The act applies to all developers and deployers of high-risk AI systems in Colorado. So, pretty broad applicability.

    The other thing that I would say is that there are 19 U.S. state privacy laws now, and many of them speak to what is called automated decision-making technology, or ADMT. That has become a sort of proxy for AI regulation coming out of the state privacy laws, which has been pretty interesting. And then, as you mentioned, Caroline, there was a proposal for a moratorium on the enforcement of the state AI laws, and that initially advanced.

    It was then watered down to a five-year moratorium, and then it finally died. So, that did not proceed, and there’s no moratorium on state AI enforcement, at least for now. So that was a lot, but that’s a high-level overview of what is happening in AI legislation on the U.S. side of the house.

    Jerry Buckley

    Shannon, just one point that I think would be of interest: most of the U.S. privacy laws have an exemption for institutions that are subject to the Gramm-Leach-Bliley Act or the Fair Credit Reporting Act. And that, of course, affects a lot of fintech companies and financial institutions. But on the AI side, because AI is not covered by those statutes, the new AI laws generally don’t carry similar exemptions, so I think financial institutions are going to have to be much more aware of what’s happening in the artificial intelligence legislative and regulatory environment.

    Shannon Yavorsky

    I think that is exactly right. The other thing that I would say about that is that the entity-level versus data-level exemption for GLBA-covered entities or GLBA-covered data has been the subject of a lot of discussion in the privacy world in the last couple of months. So, you know, there were some state privacy laws that carved out GLBA-covered entities entirely. And some of those states are walking that back, so it’s just a GLBA-covered data exemption instead of the full entity exemption, which I think is a really interesting turn of events.

    Jerry Buckley

    Turning back to you, Julia. Are we going to end up with a situation where companies have to deal with conflicting laws? Will there be a so-called Brussels Effect for the EU AI Act? And, please explain what the Brussels Effect is.

    Julia Apostle

    Thanks, Jerry. That’s a very good question. The Brussels Effect is more commonly associated with the GDPR, and the term is attributed to a law professor at Columbia University, Anu Bradford. She is the person who first talked about this concept, and she split it into two.

    There’s a de facto Brussels effect and a de jure Brussels effect. A de facto Brussels effect is one where companies are incentivized to comply with European rules everywhere, and not just in relation to the products or services they make available in Europe, because it’s more cost‑effective to do so. That’s the de facto effect. The de jure effect is where different countries decide to emulate European legislation. So, whether or not there will be a Brussels effect in relation to the AI Act is, of course, the million-dollar question.

    And my own view, which is shared by others, is that there will probably be a partial Brussels effect, but not necessarily a wholehearted one. The reasons for that are several, but one of them is that the AI Act doesn’t actually regulate all AI. If you take the GDPR, the GDPR ostensibly covers all personal data, right? If it’s any kind of personal data, it’s covered by the GDPR.

    The AI Act does not apply to all AI systems, and it does not apply to all categories of users of AI systems in the same way. For example, most of the obligations under the Act apply to systems that are classified as high-risk; systems that are not high-risk have very few obligations. So, actually, the terrain is wide open for another country, with another law, to impose a whole set of different obligations on AI systems that fall into use cases the AI Act doesn’t occupy.

    Similarly, most of the obligations under the AI Act apply to developers of those systems, so-called providers. There are not so many obligations that apply to actual users of the systems. So, there could be, and probably will be, legislation that comes in and imposes certain rules around the use of AI systems. And the European Commission is already talking about introducing new rules on the use of AI in the workplace.

    So, that’s one reason: there are gaps in the AI Act that could be filled by legislation in a different jurisdiction, and therefore the Brussels effect would not be complete. There is another reason as well: the AI Act is, at its core, product safety legislation, unlike the GDPR. And as product safety legislation, it follows the model whereby the law sets out a number of high-level principles and obligations that, very intentionally, need to be specified through the adoption and articulation of technical standards, which are not yet in place, at least in relation to high-risk AI systems.

    So, the AI Act gets criticized a lot because the obligations are so high-level and so vague that no one knows how to actually comply with them, and that’s intentional for the most part, because we’re waiting for the technical standards to be articulated.

    And this is where the AI Act is innovative: it’s trying to regulate AI in a way that reflects European values and human rights, or human rights as conceived of by the European Union. So those technical standards are meant to reflect European values and that perspective on rights. That means they will not necessarily be translatable into other jurisdictions that put an emphasis on different values and different rights, or that protect rights in a different way.

    And indeed, there’s already been discussion of one technical standard that came out of ISO around quality management. For a little while, everyone was thinking, “Oh, this will be the standard for quality management that’s adopted at the European level.” And the European AI Office has already said, “Well, we don’t actually think it does the trick. We don’t think it completely reflects the approach that we want to push forward in the EU AI Act.”

    And so that’s another reason why legislators in different jurisdictions may say, “We may take the framework, but maybe our technical standards will be different, or we won’t have technical standards.” That’s why I’m not personally convinced that we’ll have a full Brussels effect with the EU AI Act.

    Jerry Buckley

    But you do envision a semi-Brussels effect?

    Julia Apostle

    Semi, yeah. This goes to what Shannon mentioned and what you mentioned earlier: the risk-based approach. That is something we’re already seeing in other legislation, right? The Colorado Act that we talked about takes a risk-based approach. And also these obligations in relation to transparency, like the user being able to know when they’re interacting with an AI system.

    That’s in the AI Act. Whether it can be attributed to the AI Act is another question, because it’s also sort of an advertising, consumer-protection type of principle, not necessarily unique to Europe. But yes, there are some elements, I think, that are already being adopted and will continue to be.

    Jerry Buckley

    You know, we really do have to respect the European Commission and the European AI Office for providing an intellectual foundation. No one knows the answer, but they have put forth something. One can debate whether it should be implemented immediately, and the timing, but the fact that they have taken the time to think through these issues is impressive. And there will be some Brussels effect. I think there already is.

    And it’s fascinating, I think, for U.S. companies and our podcast listeners to look at this and think, “Well, here’s a foundational, if not perfect, start.” That may be what we have to build off. The danger, of course, is that if we have a proliferation of standards and there’s enough variation, it is going to be almost impossible for any company other than the largest to comply. And I think that’s what’s reflected in the comments of the Swedish prime minister and others.

    Julia Apostle

    Yeah, absolutely. I agree a thousand percent. It’s an incredibly complex piece of legislation, and in that sense, I agree, a very ambitious undertaking by the European Commission. That’s to the Commission’s credit, because it sets the bar high. It says: this is a complex area, it is a complex technology, and any law that seeks to regulate this technology will have to recognize and deal with that complexity. That is something the European Commission has done.

    And unfortunately, because of the approach that has been taken, it does leave open the possibility of conflicting approaches and conflicting applications. Within Europe, we also already have not just conflicting laws but overlapping laws. That is another issue that has caused this pushback, both from within Europe and outside of it: there are so many new laws, some of which address AI either directly or indirectly, and there’s no general recognition of all this overlap and the complexity it causes.

    That is why the simplification initiative has been started. But while that might solve some issues in Europe, it will not solve the international equivalent, which is bound to arise.

    Jerry Buckley

    Now, there is also the concern that the advance of AI in Europe may be slowed by complexity. Even though the law has a reach beyond Europe — and, as Shannon mentions, there are many laws being considered in the United States, some of them reflecting the Brussels effect — in a less regulated environment you may see more advances, and Europe could be impacted by having such a complex set of rules. Which is why they’re thinking about simplification, I believe.

    Julia Apostle

    Yes. I mean, yes, that’s why they’re thinking about simplification. Whether regulation is the only potential impediment to innovation in Europe, I’m not sure. That’s a big debate, and a super interesting one, because it’s almost too soon to tell. It’s a chicken-and-egg type scenario: we’re at the start of innovating in a massive way around new forms of AI and AI applications, and the regulation is just starting off.

    So, it’s hard to know whether the laws are really going to slow something down. The analogies that are made — which I personally have some sympathy for — are that France, for example, is known for its aerospace sector, its cars and its pharmaceuticals, and those are all heavily regulated industries that have managed to thrive and innovate. The same is true elsewhere in Europe as well. So, is that a counterexample? Time will tell.

    Jerry Buckley

    Exactly. Well, thank you. Those are very useful observations.

    Caroline Stapleton

    So, given what appears to be a fluid regulatory environment in Europe and in the U.S., how should companies approach AI regulatory compliance? And Shannon, how would you respond if this question came from a company that’s based in the U.S. but operates globally?

    Shannon Yavorsky

    Such a great question, Caroline. It’s a question that many of our clients are thinking through right now in great detail: there’s currently a somewhat limited AI regulatory environment, but everybody knows that in 18 months to two years’ time there will be an incredibly complex legislative landscape in relation to AI.

    So, we get a lot of questions around how to build an AI compliance program against this backdrop. What we recommend to our clients is building a principles-based approach to compliance, similar to how we built privacy programs around core principles: looking at the OECD AI Principles, the EU AI Act, the NIST AI Risk Management Framework and ISO 42001, figuring out the commonalities across those frameworks, and then building a broad foundation for compliance that can take jurisdiction-specific overlays.

    So, for example, an international company that operates in the U.S. and the EU might build its European program around the EU AI Act but decide not to meet that same high standard in the U.S., because it might not be required there. So it’s really about building a principles-based approach and then potentially adding jurisdictional overlays, depending on the organization’s risk tolerance and whether or not it’s in a regulated industry. Looking at some of those core questions helps build the pillars of compliance.

    Jerry Buckley

    We’re approaching the end of our time, but before we close, could each of you share two or three suggestions as to what companies should be doing now to ensure compliance with the evolving AI regulatory environment?

    Shannon Yavorsky

    Yeah, thanks, Jerry. I think the first thing that I would say to companies is to look at what AI systems are actually in use. I think a lot of companies don’t really have their arms around exactly how AI is being used within the organization, and that’s really critical for building a compliance program.

    So, my first tip would be to have a conversation with different stakeholders to better understand what AI is actually in use at the organization. Then sit down and draw up a strategy for the next couple of years for building the AI compliance program around those core principles. I think those are two things that companies can do today that will put them in a really good position for when these laws do come online over the next 18 to 24 months.

    Julia Apostle

    Thanks, Jerry. Well, I definitely agree with everything that Shannon has said. And building on that from a European perspective, given how close we are to the first of a few August 2nd deadlines, companies should also be thinking about how they’re using, specifically, general-purpose AI models or generative AI or other forms of large models, given that the code of practice for the developers of those models was just published.

    And this code of practice clarifies the obligations that take effect on the 2nd of August this year for those developers, by specifying, for the most part, their transparency obligations: all the documents that these developers have to produce and make available to the public, to regulators and, most importantly, to downstream integrators of those models. And that goes to business strategy for a lot of companies.

    A lot of companies are asking how they can use these large models for their benefit. Should we take the models and build our own applications for internal use? Should we integrate these models into systems that we then make available on the market? In either scenario, they will constitute downstream providers, and the code of practice is designed to ensure that downstream providers have available to them information about the models that will help them implement those models safely and in compliance with any obligations they might have. So, that’s something to take into consideration now: start looking at that code and asking, “Where do I benefit? What’s in here that could be a benefit to me and my intended use cases for these models?”

    Another thing that companies should be thinking about, if they’re active in Europe, is not only mapping their systems but also classifying them into the buckets recognized by the EU AI Act. Are they high-risk? Are they prohibited? Most likely not. Or are they pretty straightforward and subject to very few obligations, which, frankly, is the case for the vast majority of AI systems.

    And that will probably reassure a lot of companies: having done the mapping and the classification, many will conclude, “Geez, we just need to tell people that they’re interacting with an AI system,” right? And that will be reassuring for companies going forward over the next few months.

    Jerry Buckley

    Well, thank you so much. And both of you have shared things that are very useful and practical for those that are listening. And in fact, in combination with what Christian told us in the last episode, I think people have the basics of what has to be done.

    The challenge will be that there’s not a lot of time between now and August 2nd. It’s understood that enforcement will not be so vigorous at first, so there will be a little more time to come into compliance. But still, this is a wake-up call. If there’s not going to be a delay, then people really have to get down to business.

    Julia Apostle

    Agreed.

    Jerry Buckley

    So, again, thank you both. It’s been great having you with us, and I look forward to further discussion on this issue, which is only going to evolve more over the next few years.

    Julia Apostle

    Thank you, and thank you both for having us.