
The EU Just Rewrote Key Parts of Its AI Law. Here's What Changed.

AIntelligenceHub
12 min read

On May 7, 2026, EU lawmakers agreed on the Digital Omnibus on AI, amending the AI Act with timeline extensions, new deepfake prohibitions, and simplified compliance rules. Here's what changed.

At 4:30 in the morning on May 7, 2026, negotiators from the European Parliament and the Council of the EU reached a provisional agreement that will change how thousands of AI companies operate across Europe. The deal, officially called the Digital Omnibus on AI, amends the EU AI Act, the world's most detailed regulatory framework for artificial intelligence.

The turnaround was fast. The EU AI Act only became enforceable in stages starting in 2024, and it has already been revised. The reason is partly competitive pressure: EU lawmakers watched the US, China, and UK take lighter approaches to AI governance, and some worried that an unmodified AI Act would slow European AI development just as the global race accelerated. The other reason is practical: companies were genuinely struggling to understand what the law required and how to comply before the original August 2026 deadline.

Here's what changed, what didn't, and what your legal and engineering teams need to know.

Every Change Made to the EU AI Act in This Deal

The most significant change in the omnibus deal is a deadline extension for high-risk AI systems.

The EU AI Act classifies AI systems by risk level. At the top of the risk classification sit high-risk systems: AI used in employment decisions, access to education, credit scoring, law enforcement, migration screening, and critical infrastructure management. These systems face the strictest obligations under the law, including mandatory technical documentation, logging requirements, human oversight mechanisms, fundamental rights impact assessments, and registration in an EU database.

Under the original AI Act, companies deploying these high-risk systems were supposed to meet full compliance requirements by August 2, 2026. That deadline is now gone. Under the omnibus deal, stand-alone high-risk AI systems classified under Annex III, covering areas like employment, education, credit, and access to essential services, now face a compliance deadline of December 2, 2027. High-risk AI embedded in products already regulated by dedicated EU safety laws, including medical devices, machinery, toys, and connected vehicles, gets until August 2, 2028. That's 16 months of additional runway for Annex III systems and a full two years for Annex I embedded products.

Why did this happen? The compliance requirements for high-risk AI are genuinely demanding. Providers have to conduct conformity assessments, maintain technical documentation for the life of the system, log every consequential decision, run bias testing, implement human oversight, and build audit trails. Platforms like ServiceNow and NVIDIA's Project Arc are early examples of what this human oversight infrastructure looks like in enterprise deployments. Many companies are far from ready. The EU Commission and regulators appear to have accepted that enforcing an August 2026 deadline against a large number of non-compliant but legitimate businesses would create chaos without improving safety outcomes. The extension is not a reprieve from compliance. It's more time to get there.

While the EU gave companies more time on the compliance side, it moved faster on prohibition. The omnibus deal adds two new prohibited practices to Article 5, which lists AI uses that are outright banned across the EU. Both new prohibitions take effect December 2, 2026.

The first new ban covers nudifier tools. These are AI systems designed to generate non-consensual intimate imagery, tools that strip clothing from photos of real people without their consent. The trigger was specific: during late 2025 and early 2026, millions of non-consensual sexual deepfake images were generated using Grok, xAI's chatbot. The volume and visibility of those cases prompted both Parliament and Council members to push for a prohibition at the AI model level, not just at the platform moderation layer.

The second new ban covers AI-generated child sexual abuse material. The original AI Act addressed synthetic content broadly but didn't name CSAM generation as an explicitly prohibited use. The omnibus closes that gap. Both bans apply to the systems themselves, not just to operators who deploy them. A company that builds a foundation model with pathways to generating non-consensual intimate imagery, even if it doesn't advertise that feature, will face scrutiny under the new provisions.

One more change surprised many compliance teams: the omnibus actually accelerated one deadline. Under the original AI Act, companies deploying AI-generated content on public platforms had a six-month grace period after the GPAI provisions entered force to implement technical solutions for marking AI-generated audio and video. The omnibus reduced that grace period to three months. The effective deadline for synthetic content disclosure and watermarking solutions is now December 2, 2026, applying to systems that entered the market before August 2, 2026. If your product generates synthetic audio, video, or images for public consumption in the EU, and you haven't started engineering a content marking solution, you're behind schedule.
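The Act doesn't mandate a specific marking technique, but one common engineering pattern is attaching a machine-readable provenance record to every generated asset. Here's a minimal sketch in Python, using an ad-hoc JSON manifest whose field names are purely illustrative; a real deployment would follow an industry standard such as C2PA rather than invent its own schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(asset_bytes: bytes, generator: str) -> str:
    """Build a machine-readable disclosure record for an AI-generated asset.

    The schema below is illustrative only; production systems should use
    an established provenance standard (e.g. C2PA) instead.
    """
    manifest = {
        "ai_generated": True,  # the explicit disclosure flag
        "generator": generator,  # which system produced the asset
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # Hash binds the manifest to this exact asset's bytes.
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    return json.dumps(manifest, indent=2)

# Example: mark a (fake) synthetic image payload.
payload = b"\x89PNG...synthetic image bytes..."
print(build_provenance_manifest(payload, "example-video-model-v2"))
```

A sidecar manifest like this is the easy half of the problem; embedding the marker in the media container itself, so it survives re-encoding and platform uploads, is where most of the engineering effort goes.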

One of the more contested items in the negotiations was whether to eliminate the registration requirement under Article 6(3). The AI Act includes a public EU database where AI system providers must register their systems. This registration applies even to systems that the provider has self-assessed as not meeting the high-risk threshold, as long as those systems fall within the scope of Annex III categories. If you're building an AI tool that touches employment decisions, credit scoring, or other regulated categories, you may need to register it with the EU database even if you've concluded it doesn't technically qualify as high-risk. The European Commission proposed eliminating this obligation as part of its simplification push. The Parliament pushed back. In the final deal, the registration obligation survived intact. Your self-assessment decision, which would have remained internal under the Commission's proposal, now becomes a public database record that regulators and litigants can examine and challenge.

The deal also includes genuine simplification wins. Certification and audit procedures have been streamlined. For sectors with dedicated EU product safety frameworks, the omnibus reduces duplicative compliance requirements between the AI Act and existing sectoral regulation. The clearest beneficiary is the machinery sector, which now satisfies AI safety requirements through the existing machinery regulation's delegated acts framework rather than running a parallel AI Act compliance program. Other Annex I sectors, including medical devices, toys, and vehicles, may see similar relief through implementing acts, though those details haven't been finalized yet.

Enforcement is also being centralized: rather than leaving primary enforcement to 27 different national authorities interpreting the same rules differently, the deal moves more authority to the AI Office, a pan-EU body created specifically to oversee AI regulation. For GPAI model oversight in particular, the AI Office becomes the lead enforcement authority. A permanent EU-level regulatory sandbox is established under the deal as well, with a deadline for national authorities to establish their own sandboxes of August 2, 2027. Companies can use the sandbox to test AI systems with regulators before commercial deployment, reducing the risk of building products that fail compliance review at launch.

Small and medium enterprises had dedicated carve-outs in the original AI Act. The omnibus extends those privileges to small mid-cap companies with fewer than 1,500 employees, covering AI startups that have grown past SME thresholds but aren't large enough to run enterprise-scale compliance programs. Depending on how the AI Office interprets the implementing guidance, some growth-stage AI companies may now qualify for more flexibility in their compliance approach, including simplified conformity procedures and lighter documentation obligations. This is meaningful for a category of company that often got squeezed: past the SME threshold but not resourced like an enterprise.

Put the full picture together and you get a deal that's simultaneously more permissive on implementation timelines and more restrictive on prohibited uses. Companies that were counting on the high-risk AI deadline to stay at August 2026 now have more time. Companies that were operating nudifier tools or similar CSAM-generating systems don't. The EU drew a clear line between "we'll give you more time to comply" and "there are things you just can't do, and those rules are tightening." That's a meaningful distinction that gets lost when observers read the omnibus as simply a softening move.

Core Requirements the Omnibus Left Unchanged

One of the most important things to understand about this deal is what it didn't touch.

General-purpose AI model obligations, defined in Articles 50 through 55 of the AI Act, are completely unchanged. These provisions cover the large foundation models that power most modern AI products, including GPT-class systems, Claude, Gemini, Llama, and their equivalents. The GPAI rules require model providers to maintain technical documentation, register models with authorities, disclose copyright training data, and for the most capable models with systemic risk, conduct adversarial testing and report serious incidents.

These obligations were already in force before the omnibus. Article 4 AI literacy requirements entered into force on February 2, 2025, and the GPAI provisions in Articles 50 through 55 on August 2, 2025. They were not extended. If you're a foundation model provider, the omnibus didn't give you extra time on anything. If you're an application developer building on top of a major foundation model, your provider's GPAI compliance situation is unchanged.

That the GPAI obligations survived untouched also matters for the systemic risk tier, which applies to models trained using more than 10^25 FLOPs of compute. These high-capability models face enhanced obligations including red-teaming, incident reporting to the AI Office, and mandatory cybersecurity measures. None of that changed.
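For a rough sense of where that threshold sits, training compute is often estimated with the common heuristic of roughly 6 × parameters × training tokens (one forward plus backward pass); this rule of thumb is not part of the Act's text, and actual accounting is the provider's responsibility. A sketch:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # the Act's presumption threshold for systemic risk

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the ~6*N*D heuristic.
    This is an approximation, not the Act's measurement methodology."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)  # 6.3e24, below the threshold
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

Under this heuristic, today's largest frontier training runs land on either side of 10^25, which is exactly why the tier's survival matters to a small but growing set of providers.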

The core requirements for high-risk AI systems also remain substantively unchanged. The obligations around technical documentation, conformity assessments, bias testing, human oversight, and fundamental rights assessments still exist. The omnibus extended timelines, not requirements. Companies working toward compliance should continue against the same technical standard, just with a later deadline.

The omnibus also doesn't resolve what many AI companies are quietly more worried about: the Data Omnibus. Separately from AI governance, the EU is negotiating amendments to GDPR and related data protection rules to make it easier for AI companies to process personal data for model training. Current GDPR enforcement has created real friction for companies trying to fine-tune or build models on European user data. The Data Omnibus aims to address some of that, but it's still being negotiated and was not part of this week's agreement. If your biggest compliance challenge is data for AI training rather than model deployment governance, this week's deal doesn't help you directly. Watch the Data Omnibus negotiations over the coming months.

The AI Office, established within the European Commission in February 2024, is worth understanding given its expanded role. It operates as a pan-EU body with responsibility for overseeing GPAI model compliance, coordinating enforcement across member states, and developing technical standards under the AI Act. It's distinct from national market surveillance authorities, which handle high-risk AI enforcement within their own member states. Under the omnibus, it becomes the lead enforcement authority for GPAI model obligations and gets more centralized authority over certain cross-border enforcement actions. For companies operating at EU scale, this means a single institutional voice interpreting GPAI rules, rather than 27 national interpretations that could diverge. The AI Office has been publishing guidance on the AI Act's technical requirements, including draft model evaluation frameworks and documentation templates. These documents aren't legally binding yet, but they signal what the AI Office expects. Companies preparing for compliance should be following these publications closely.

Practical Steps for AI Companies in Europe Now

The omnibus deal isn't legally final yet. The provisional political agreement still needs formal endorsement by both the Council and Parliament, then legal-linguistic revision, then publication in the Official Journal. Formal adoption is expected within weeks. But companies shouldn't wait.

The deadline extensions on high-risk AI give you more time, but they don't change what you'll ultimately need to build. December 2, 2027 for Annex III systems is roughly 19 months away. That's not a long runway for organizations that need to design risk management systems, human oversight mechanisms, technical documentation practices, and data governance processes from scratch. Companies that treat the extension as a reason to delay will hit the same compliance cliff at a later date.

The watermarking deadline doesn't give you more time. December 2, 2026 is seven months away. If your product generates synthetic audio, video, or images for public audiences in the EU, disclosure infrastructure needs to be in engineering now.

The new nudifier and CSAM prohibitions also take effect December 2026. If your model or platform has any pathway to generating non-consensual intimate imagery, close it now. The technical safeguards required aren't aspirational; they're legally mandatory within months.

The registration obligation means your self-assessment of whether your product is high-risk needs to be documented and defensible, because it will become a public record. Companies that have been treating their AI Act classification decisions as informal internal judgments need to treat them as public commitments backed by documentation.
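One way to make that self-assessment defensible is to capture it in a structured, reviewable record from day one rather than in scattered emails. A minimal sketch of such a record in Python; the schema and field names here are my own invention for illustration, not the EU database's actual format:

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AnnexIIIAssessment:
    """Internal record of an AI Act risk-classification decision.

    Hypothetical schema for illustration; the EU database will define
    its own required fields once implementing acts are published.
    """
    system_name: str
    annex_iii_category: str   # e.g. "employment", "credit scoring"
    classified_high_risk: bool
    rationale: str            # the reasoning regulators and litigants may examine
    assessed_on: date
    reviewers: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        record = asdict(self)
        record["assessed_on"] = self.assessed_on.isoformat()
        return json.dumps(record, indent=2)

record = AnnexIIIAssessment(
    system_name="resume-screening-assistant",
    annex_iii_category="employment",
    classified_high_risk=False,
    rationale="Tool ranks documents for human review; final decisions "
              "remain with recruiters under documented oversight.",
    assessed_on=date(2026, 6, 1),
    reviewers=["legal", "ml-platform"],
)
print(record.to_json())
```

The point isn't the format; it's that the rationale field exists, is written contemporaneously, and is signed off by named reviewers before the decision becomes a public database entry.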

For teams building internal governance programs, AIntelligenceHub's Enterprise AI Governance Checklist for 2026 covers the categories companies operating under the AI Act need to track: risk classification, documentation, human oversight, incident response, and regulatory registration.

The omnibus deal will be read by some observers as evidence that the EU is backing away from AI regulation. That reading is too simple. The core requirements of the AI Act remain unchanged. What changed is the timeline and some procedural requirements. The EU didn't walk back what it's asking of AI companies. It gave them more time to get there, while simultaneously adding new prohibitions and tightening the watermarking timeline.

Compared to the US approach, which released a national policy framework in March 2026 built around voluntary industry agreements and federal preemption of state laws, the EU still looks like the more prescriptive regulator. Compared to the UK's sector-led, non-binding approach, the EU is far more detailed. The omnibus doesn't change that comparative picture.

The AI Act's core structure remains intact: risk classification, prohibited practices, GPAI obligations, fundamental rights assessments, human oversight requirements, and enforcement authority. Companies that have been quietly hoping the EU would abandon the AI Act should update their expectations. The EU just invested significant political energy in making it work. The question is no longer whether it will be enforced. The question is whether your products will be ready when enforcement begins.

For compliance teams working through the AI Act now, the near-term priority order looks like this. First, classify every AI product against the Annex III list and document the reasoning; do this before anything else, because your classification becomes public. Second, if any product generates synthetic content for EU audiences, start watermarking and disclosure engineering immediately. December 2, 2026 is closer than it looks. Third, scan your model or platform for any pathway to non-consensual intimate imagery generation and close it. The legal exposure from violating a clear prohibition is different in kind from missing a compliance deadline. Fourth, if you're in a regulated product category like medical devices or machinery, contact your regulatory counsel about whether the omnibus's sectoral carve-outs apply to your specific products before assuming you're covered.

The provisional agreement reached in Brussels at 4:30 AM on May 7 represents the EU's answer to critics who said the AI Act was unworkable: a targeted set of timeline adjustments, new prohibitions, and streamlined procedures, while keeping the substance of the most ambitious AI governance framework in the world. EU lawmakers didn't retreat. They recalibrated. The companies that treat this as a signal to ease up on compliance preparation will be the ones scrambling again when the next deadline approaches, except next time there won't be an omnibus to rescue them.
