The EU AI Act’s first provisions have started to apply, including those relating to prohibited AI practices and general-purpose AI models (GPAIMs).
The bulk of the remaining obligations take effect on 2 August 2026, and authorities will be able to enforce compliance from that date.
Here are six steps to take as part of an overall AI governance strategy to ensure you are ready for the AI Act’s next phase:
- Conduct an AI mapping exercise to identify AI systems and general-purpose AI models your company is using, developing, importing or distributing in Europe.
- Clarify your company’s role in relation to each AI system and GPAIM.
- Determine whether the Act applies to AI systems and GPAIMs you identified in the mapping exercise.
- Classify AI systems and GPAIMs based on their risk level.
- Review AI-related services contracts and due diligence processes.
- With these steps completed, put in place an AI governance framework – or adapt your existing compliance program to the Act’s requirements.
6 EU AI Act Compliance Steps to Take
1. Conduct an AI mapping exercise to identify AI systems and GPAIMs your company is using, developing, importing or distributing.
- Consider a department-by-department approach to help identify less visible use cases.
- A growing number of free AI systems are accessible online via APIs and mobile apps. Keep in mind that using those systems for professional purposes can trigger transparency obligations under the Act.
- Focus on the functionality of any products identified as “AI.”
- Keep in mind the key definitions in the Act.
- AI system: The Act defines this as “a machine-based system designed to operate with varying levels of autonomy.” It “may exhibit adaptiveness after deployment and … for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
- A “key characteristic” of an AI system is its “ability to infer” to generate outputs. The European Commission has published guidelines on the definition of “AI system” that provide clarity regarding which AI systems are in scope of the regulation and which are not. [See our article about these guidelines here]
- AI systems are distinguished from systems that automatically execute operations based on rules defined solely by humans; such rules-based systems fall outside the scope of the regulation.
- GPAIM: The Act defines a GPAIM as an AI model – including one “trained with a large amount of data using self-supervision at scale” – that “displays significant generality,” is “capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market” and “can be integrated into a variety of downstream systems or applications.” Models “used for research, development or prototyping activities before they are placed on the market” are excluded.
- Examples include models in generative AI systems that create text, images, code and/or audio.
- The Act imposes obligations on GPAIM developers, so companies that develop their own AI models should determine whether they are covered.
- General-Purpose AI system: This is an AI system based on a general-purpose AI model that has the capability to serve a variety of purposes, both for direct use and for integration into other AI systems. This distinction may be relevant when determining which obligations apply.
- Guidelines published by the European Commission make clear that the Commission will consider two factors as “indicative” that a model is a GPAIM: whether the model was trained using an amount of compute exceeding 10²³ floating point operations (FLOP), and whether it can generate language (as text or audio), text-to-image or text-to-video content. A rough illustration of how training compute can be estimated against this threshold appears at the end of this step. [See our article on these guidelines here.]
- GPAIMs that do not present systemic risk and that are made available under open-source licenses satisfying the criteria set out in Article 53(2) of the AI Act (e.g., the model’s parameters, including weights, and information on its architecture and usage must be publicly available) are exempt from certain obligations under the Act. If your organization develops or integrates GPAIMs, you should assess whether the applicable license qualifies for the exemption. [For more information regarding open-source GPAIMs, see our article here.]
- Make sure your process for identifying regulated AI systems and GPAIMs is documented and replicable, since this will be an ongoing obligation.
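For readers who want a quick way to sanity-check the compute criterion during the mapping exercise, below is a minimal, back-of-the-envelope sketch in Python. It relies on the commonly used approximation that training compute is roughly 6 × parameters × training tokens – an engineering rule of thumb, not a method prescribed by the Act or the Commission guidelines – and the model names and figures shown are purely hypothetical.

```python
# Back-of-the-envelope sketch: estimate a model's training compute and compare it
# with the 10^23 FLOP figure the Commission guidelines treat as indicative of a GPAIM.
# Assumption: training compute ≈ 6 * parameters * training tokens (a common rule of
# thumb, not taken from the Act or the guidelines).

INDICATIVE_THRESHOLD_FLOP = 1e23

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute in FLOP."""
    return 6 * parameters * training_tokens

# Hypothetical example models (names and figures are illustrative only).
examples = [
    ("small in-house model", 3e8, 1e10),      # 300M parameters, 10B training tokens
    ("large foundation model", 8e9, 1.5e13),  # 8B parameters, 15T training tokens
]

for name, params, tokens in examples:
    flop = estimated_training_flop(params, tokens)
    position = "above" if flop > INDICATIVE_THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.1e} FLOP ({position} the 10^23 indicative threshold)")
```

Keep in mind that the compute figure is only indicative: the breadth of the model’s capabilities also matters, so treat any such estimate as a screening aid rather than a conclusion.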
2. Clarify your company’s role in relation to AI systems and GPAIMs.
The AI Act affects a wide range of operators along the AI value chain, including providers, deployers, importers, distributors and product manufacturers. The obligations vary depending on an organization’s role.
Organizations that provide (i.e., develop) high-risk AI systems have the most legal obligations, followed by deployers. Importers and distributors have more limited responsibilities. Only providers face obligations related to GPAIMs.
Determining the role(s) your organization plays – and the resulting responsibilities – can be tricky, for example when AI products are co-developed or when a company further develops an existing AI product. In those cases, you may need to review and amend agreements (Step 5 has more details).
3. Determine whether the Act applies to the AI systems and GPAIMs identified in the mapping exercise.
The Act applies to:
- AI systems that are “placed on the market” or “put into service” in the EU by a provider.
- GPAIMs a provider has “placed on the market.”
- The concepts of “placed on the market” and “put into service” are derived from European product safety law. They have their own definitions in the AI Act.
The Act can apply to providers based outside the EU if they meet the criteria above. It will also apply to providers and deployers of AI systems based outside the EU if the outputs produced are used in the EU, to avoid circumvention of the Act’s requirements.
The Act will not apply if AI systems are:
- Placed on the market, put into service or used exclusively for military, defense or national security purposes.
- Developed or put into service solely for scientific research and development.
- Released under free and open-source licenses, unless the system is prohibited, high-risk or subject to the Act’s transparency obligations.
Companies should determine when their AI systems and/or GPAIMs will be covered by the Act. The Act includes a grandfathering provision that gives additional time to comply for AI systems placed on the market in Europe before 2 August 2026 and for GPAIMs placed on the market before 2 August 2025.
4. Classify AI systems and GPAIMs based on their level of risk.
The Act imposes obligations in relation to AI systems and GPAIMs depending on their level of risk.
A company should determine whether any of the AI systems it uses or develops may be classified as prohibited or high-risk under the Act. Even lower-risk AI systems may still be subject to certain obligations, such as transparency requirements.
Obligations applicable to providers of GPAIMs also vary depending on the model’s risk level. GPAIMs with “systemic risk” are subject to additional risk identification and mitigation requirements, along with incident reporting. Providers of GPAIMs may sign up to the voluntary Code of Practice published in August 2025, which is particularly detailed on the risk identification and mitigation processes expected of providers of GPAIMs with systemic risk.
Understanding the risk classification of AI systems and AI models, as well as the company’s role in relation to these, is the foundation for AI governance (see Step 6).
5. Start incorporating AI Act requirements into your contracts and due diligence processes.
Make sure any in-progress agreements and acquisitions covering AI products will reflect the requirements of the AI Act as its remaining obligations take effect. This will likely require changes to contract terms, as well as revised due diligence and procurement processes.
A key issue to address for both providers and deployers is the extent to which licensed AI systems and GPAIMs can be modified and/or fine-tuned (in the case of GPAIMs) before responsibility for compliance shifts from the provider to the deployer.
You may also have executed agreements that need to be amended to reflect new and ongoing product developments that involve AI technologies.
6. Put in place an AI governance framework.
“AI governance” refers to the company’s internal policies and processes to ensure its use and development of AI aligns with its mission, risk profile, legal obligations and ethical priorities. The steps outlined above contribute to an AI governance strategy.
An effective AI governance framework also requires support from senior leadership. People with diverse profiles from different parts of the company should contribute to the framework – for example, in the form of an internal AI working group or by conducting relevant risk analyses.
A key element of AI governance is ensuring an adequate level of AI literacy among the employees and contractors who operate AI systems on behalf of your organization; this is also an obligation under the EU AI Act that took effect in February 2025. [See our article on the AI literacy obligation here.]
Finally, an AI governance framework will also help align AI-related compliance initiatives with overlapping compliance duties, such as those related to data protection, product safety and cybersecurity.
Want to know more? Contact the authors (Julia Apostle and Sarah Schaedler).