5 minute read | May.07.2026
The European co-legislators have reached a political agreement on the Digital Omnibus on AI (AI Omnibus) that will modify and simplify certain provisions of the EU AI Act (Regulation (EU) 2024/1689), ahead of the majority of the AI Act's provisions taking effect on 2 August 2026.
Here are seven of the key changes to the AI Act you need to know:
The obligations applicable to high-risk AI systems designated under Article 6(2) and Annex III of the AI Act will take effect on 2 December 2027, instead of 2 August this year. The obligations applicable to high-risk AI systems subject to existing EU sectoral legislation listed in Annex I of the AI Act and covered by Article 6(1) will take effect on 2 August 2028. The definition of “safety component” has also been modified.
In parallel, providers of AI systems that generate synthetic content will have until 2 December 2026 to comply with the content marking obligations imposed by Article 50(2). At this stage it is not clear whether grandfathering provisions will apply to AI systems placed on the market before 2 August 2026. All other Article 50 transparency obligations remain applicable from 2 August of this year. The voluntary Code of Practice on Transparency of AI-Generated Content should be finalized in the coming weeks.
Given the short extension of provider transparency obligations, both providers and deployers of generative AI systems should familiarize themselves with the Code of Practice.
The AI Omnibus introduces a sector-specific compromise to address possible overlaps between the high-risk AI obligations under the AI Act and requirements of existing sectoral legislation set out in Annex I of the Act, in particular in relation to the EU Machinery Regulation (EU) 2023/1230. A new mechanism will enable the Commission, through implementing acts, to resolve situations where sectoral law contains AI-specific requirements equivalent to those of the AI Act, by limiting the latter's application in those specific cases. The Commission will issue guidance to help economic operators comply with both the AI Act and applicable sectoral product-safety regimes.
The AI Omnibus extends several AI Act measures intended to simplify compliance for SMEs to a newly defined category of “small mid‑cap” enterprises, including simplified technical documentation templates that notified bodies must accept, more proportionate quality‑management expectations, priority access to regulatory sandboxes and more tailored penalty caps. The category of small mid-cap enterprises is made up of enterprises that are not small and medium-sized enterprises, employ fewer than 750 people and have an annual turnover not exceeding €150 million or an annual balance sheet total not exceeding €129 million.
Two new prohibitions target AI systems that generate or manipulate realistic intimate material depicting identifiable individuals without their consent (non-consensual intimate imagery, NCII/NCIM) – including so-called “nudifier” applications – and child sexual abuse material (CSAM). Within scope are systems designed for those purposes, as well as systems where such outputs are reasonably foreseeable and reproducible in the absence of reasonable, proportionate and effective safeguards. Intimate content created with the explicit consent of the persons depicted, lawful tools for detecting, investigating or moderating CSAM, and the development of the underlying generative capabilities are not banned as such. These prohibitions apply from 2 December 2026.
The AI Act is amended to extend the legal basis for processing special‑category data under the GDPR for the purpose of bias detection. The amendment builds on the existing rule in Article 10(5) (Data and Data Governance), which today covers only providers of high‑risk systems. Processing will be subject to a “strict necessity” standard and a number of mandatory safeguards, including a subsidiarity requirement (sensitive data may be used only where non‑sensitive or synthetic data would not suffice), pseudonymization, access controls, a prohibition on onward sharing and timely deletion. The amendment does not create any obligation to perform bias detection.
The AI Omnibus strengthens central EU-level enforcement by giving the AI Office supervisory competence over AI systems based on a general-purpose AI (GPAI) model developed by the same provider or the same group of undertakings, and over AI systems integrated into “very large online platforms” or “very large online search engines” as designated under the EU Digital Services Act. National authorities remain competent for specific categories listed in the agreement, including law enforcement, border management, judicial authorities and financial institutions.
The Omnibus extends the deadline for Member States to have at least one national AI regulatory sandbox operational from 2 August 2026 to 2 August 2027 and in parallel creates a separate EU‑level sandbox operated by the AI Office, with priority access for SMEs, start‑ups and small mid‑caps.
The official communications from the institutions do not mention whether the Act’s AI literacy obligations have been amended, as originally proposed by the Commission. In addition, the obligation for providers to register AI systems that they consider exempt from high-risk classification under Article 6(3) is maintained.
The trilogue negotiations held on 7 May resulted in a deal, but the amending regulation must still be legally reviewed, formally adopted and published in the Official Journal of the European Union before 2 August 2026 for the amendments to take effect.
For more information about how the AI Omnibus amendments will impact AI governance efforts, contact Julia Apostle.
Prepared with the assistance of Antoine Allard.