BestAIFor.com

EU AI Act Digital Omnibus: What the Proposed Risk Assessment Rollback Actually Removes

Alice Thornton
April 13, 2026 · 11 min read

TL;DR: The EU's Digital Omnibus package, currently under Parliamentary review, would remove the requirement for AI companies to publish risk assessments for high-risk AI systems — including systems used in hiring, performance monitoring, and credit decisions. The August 2026 compliance deadline has been removed with no fixed replacement date. Amnesty International and EDRi are calling it the most significant rollback of digital rights at the EU level in a generation.

  • Under the Digital Omnibus, companies deploying high-risk AI — including employment AI systems — would no longer need to publish risk assessments under the AI Act.
  • The August 2026 compliance deadline for high-risk AI is removed; a new date depends on "harmonized standards" with no fixed timeline.
  • The Corporate Europe Observatory documented how Big Tech lobbied article-by-article for the specific changes now included in the Omnibus package.
  • A separate GDPR amendment in the same package redefines what constitutes personal data, potentially expanding AI training data pools.
  • Amnesty International published a statement in April 2026 calling the Omnibus "an unprecedented rollback of rights online" across the EU.
  • Business groups and several EU member states support the package; civil society — Amnesty, EDRi, Corporate Europe Observatory — opposes it.


Who benefits when the EU decides not to require AI companies to disclose their own risk assessments? That's the question embedded in the Digital Omnibus, the European Commission's "simplification" package proposed in November 2025 and currently moving through the European Parliament and Council of the EU.

One of its central provisions would remove the requirement for companies deploying high-risk AI systems to publish risk assessments for those systems. A second provision delays the August 2026 compliance deadline for high-risk AI to an undefined future date, tied to the production of harmonized standards that do not yet exist. A third amendment, buried in accompanying GDPR reforms, redefines what constitutes personal data in ways civil society groups say will expand the data pool available for AI training.

These are not fringe concerns raised by critics who oppose AI development. They are documented policy changes in a published legislative proposal, and they affect a specific category of AI systems — the ones that make decisions about employment, credit, education, and access to essential services. Understanding what the Omnibus removes requires understanding what the original AI Act required, who it required it from, and why those requirements existed in the first place.

What "High-Risk AI" Means in the Original AI Act

The EU AI Act's original framework classified AI systems into risk tiers. High-risk systems — defined in Annex III — include AI used in hiring and recruitment, employee monitoring and performance evaluation, credit and insurance scoring, educational assessment, and administration of social benefits. The risk classification was not a symbolic label. It carried concrete obligations.

Under the original August 2026 compliance deadline, companies deploying high-risk AI were required to maintain documentation on how those systems were designed, tested, and validated; to conduct conformity assessments before deployment; and, critically, to publish risk assessments that would give regulators and affected individuals access to information about how automated decisions were being made.

The affected use cases span labor markets at scale. AI-driven applicant screening tools are in active use across European employers in financial services, logistics, healthcare, and tech. AI performance monitoring systems that track worker productivity, flag under-performance, or generate the data used in termination decisions are deployed in warehouses, call centers, and knowledge work environments. The risk assessment requirement was the mechanism through which a worker, job applicant, or loan applicant could — in principle — understand what criteria an automated system applied to a decision that directly affected them.
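The tiering logic described above can be sketched as a simple lookup. This is a hypothetical illustration only: the category keys and tier labels below are this example's own shorthand, not legal terms from the Act or Annex III.

```python
# Illustrative sketch of the EU AI Act's risk-tier logic.
# Category names are simplified shorthand, not statutory definitions.

ANNEX_III_HIGH_RISK = {
    "hiring_and_recruitment",
    "employee_monitoring",
    "credit_and_insurance_scoring",
    "educational_assessment",
    "social_benefits_administration",
}

PROHIBITED = {
    "realtime_biometric_surveillance",
    "social_scoring",
    "subliminal_manipulation",
}

def risk_tier(use_case: str) -> str:
    """Return a rough AI Act risk tier for a deployment use case."""
    if use_case in PROHIBITED:
        # Banned since February 2025; untouched by the Omnibus.
        return "prohibited"
    if use_case in ANNEX_III_HIGH_RISK:
        # The category whose obligations the Omnibus would roll back.
        return "high_risk"
    # Everything else: transparency duties at most.
    return "limited_or_minimal"

print(risk_tier("employee_monitoring"))  # high_risk
```

The point of the sketch is the asymmetry the article describes: the prohibited tier and the residual tier are unchanged by the Omnibus, while the high-risk branch is where the publication and deadline obligations attach.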

What the Digital Omnibus Removes and What Stays

The Omnibus proposal modifies several provisions of the AI Act simultaneously. The most significant changes for high-risk AI are: first, companies would no longer be required to publish risk assessments for high-risk systems, giving them, in Amnesty International's phrasing, "free rein to decide the levels of risk their systems pose." Second, the August 2026 compliance date is replaced with a conditional future date tied to availability of harmonized standards and Commission guidelines — with no fixed deadline.

| Provision | Original AI Act (August 2026) | Digital Omnibus Proposal |
| --- | --- | --- |
| Risk assessments for high-risk AI | Required — must be published | Removed |
| High-risk compliance deadline | August 2, 2026 | Postponed — no fixed date |
| Conformity assessments | Required before deployment | Conditionally delayed |
| AI-generated content labelling | Required from August 2026 | Retained (Code of Practice approach) |
| Prohibited AI systems (biometric surveillance, social scoring) | Prohibited from February 2025 | Unchanged |
| GPAI transparency obligations | Required from August 2025 | Largely retained |

The changes are not uniform across the AI Act. Provisions on prohibited AI systems — real-time biometric surveillance, social scoring, subliminal manipulation — remain intact. Obligations on general-purpose AI model providers are largely retained. The rollback is concentrated in the high-risk category: precisely the category that covers automated employment decisions affecting millions of EU workers and job seekers.

How the Proposal Got Here: Big Tech's Lobbying Record

The Digital Omnibus did not emerge from a vacuum. The Corporate Europe Observatory published a detailed analysis in January 2026 tracing, article by article, which specific changes in the Omnibus package correspond to positions advanced by Big Tech lobbying groups in submissions to the European Commission. The overlap is extensive.

The mechanism is familiar: companies facing compliance costs from a specific provision submit to Commission consultations arguing the provisions are technically unworkable, disproportionately burdensome, or redundant with other frameworks. The Commission's Omnibus package, nominally a "simplification" exercise to reduce regulatory burden, reflects many of those arguments directly in its text.

The Commission's framing is competitiveness: the EU's regulatory environment is slowing AI adoption relative to the United States and China, and simplification is necessary to close that gap. That argument may have merit on some provisions. The question is whether removing the risk assessment requirement for employment AI systems — the mechanism that makes accountability possible — is a competitiveness issue or an accountability issue. The Corporate Europe Observatory's January 2026 analysis documents which interests were served by each specific change, and the answer is consistent across provisions.

The GDPR Amendment Nobody Is Discussing

The Digital Omnibus is not limited to the AI Act. The same package includes proposed amendments to GDPR that redefine what constitutes personal data. EDRi warns that the redefinition narrows the scope of data qualifying as "personal," which in practice allows more categories of data to be collected and used for AI training without triggering the consent and rights provisions that currently apply.

The two amendments work in the same direction. If AI companies no longer need to publish risk assessments for high-risk systems, and if the data those systems process is increasingly excluded from personal data definitions, the combined effect is to reduce both the visibility into how automated systems work and the legal basis for individuals to challenge decisions those systems produce. That's not speculation — it's the documented combined effect of two provisions in a published legislative text.

The European Data Protection Board has not yet published a formal opinion on the GDPR amendments as of April 2026. That process is ongoing. The timeline for a Board opinion will likely run alongside, not ahead of, Parliamentary consideration of the Omnibus — meaning the accountability gap the amendments create may be debated without the EDPB's formal technical assessment in hand.

What Civil Society Organizations Are Saying

Amnesty International's April 2026 statement describes the Omnibus as "an unprecedented rollback of rights online at the EU level." The statement identifies three specific harms: expanded corporate and state surveillance; reduced protection from AI-based discrimination; and weakened mechanisms for individuals to challenge automated decisions. EDRi — the European Digital Rights network, representing 45 civil society organizations across Europe — called the Omnibus "a major rollback of EU digital protections."

The Corporate Europe Observatory's analysis is more granular. It traced specific lobbying positions from Google, Meta, and industry associations including DigitalEurope to specific provisions in the Omnibus text, documenting that many of the most significant changes mirror positions industry groups advanced in Commission submissions. EDRi's full analysis covers both the AI Act and GDPR changes in detail.

The counter-argument from the Commission and supporting member states is that harmonized standards for high-risk AI compliance don't yet exist, that the August 2026 deadline was never practically achievable, and that delaying requirements until the compliance infrastructure is in place is more rational than enforcing requirements against standards that haven't been written. That argument has some force. But delaying an unenforceable deadline is different from eliminating accountability requirements, and the Omnibus text as currently drafted does both.

When the Digital Omnibus Changes Don't Affect Your Organization

If your organization deploys AI systems that fall outside the high-risk category — generative content tools, recommendation engines, internal productivity tools, customer-facing chatbots for non-regulated services — the Omnibus changes to the AI Act's high-risk provisions don't alter your compliance posture in the near term. Prohibited AI provisions remain in force from February 2025. GPAI transparency obligations for model providers are largely retained.

  • ☐ Does your organization deploy AI in hiring, applicant screening, or employee performance monitoring?
  • ☐ Do your AI systems affect decisions about credit, insurance, education, or access to essential services for EU users?
  • ☐ Were you planning compliance for the original August 2026 AI Act deadline?
  • ☐ Does your organization process personal data of EU citizens in any of the above contexts?

Two or more checked: the Omnibus changes are directly material to your compliance planning. A delay is not a clearance to pause compliance work — high-risk requirements will still enter into force at an unspecified future date. The internal audit, documentation, and governance processes you build now won't be wasted if the deadline moves; they're the foundation that makes a credible compliance posture possible when standards do arrive.
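The checklist and its two-or-more threshold can be expressed as a short triage function. This is a sketch for illustration only: the question keys and naming are this example's own, not terms from the Act or the Omnibus text.

```python
# Hypothetical encoding of the self-assessment checklist above.
# Question keys paraphrase the four checklist items; the >= 2 threshold
# mirrors the "two or more checked" rule stated in the article.

CHECKLIST = {
    "deploys_employment_ai": "AI in hiring, screening, or performance monitoring?",
    "affects_regulated_decisions": "Decisions on credit, insurance, education, or essential services for EU users?",
    "planned_aug_2026_compliance": "Planning for the original August 2026 deadline?",
    "processes_eu_personal_data": "Processing EU personal data in those contexts?",
}

def omnibus_materiality(answers: dict) -> bool:
    """True if the Omnibus changes are directly material (two or more 'yes')."""
    return sum(answers.get(key, False) for key in CHECKLIST) >= 2

example = {
    "deploys_employment_ai": True,
    "affects_regulated_decisions": False,
    "planned_aug_2026_compliance": True,
    "processes_eu_personal_data": False,
}
print(omnibus_materiality(example))  # True
```

An organization scoring below the threshold still carries whatever GDPR and prohibited-AI obligations apply; the function only flags whether the Omnibus changes specifically warrant a compliance-planning review.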

FAQ

Is the Digital Omnibus final?

No. As of April 2026, the Digital Omnibus is under the ordinary EU legislative procedure and requires formal adoption by both the European Parliament and the Council. Formal adoption is expected later in 2026, but the text can be amended during that process. Civil society groups are actively lobbying MEPs to restore the accountability provisions before final adoption.

What is the current compliance status for high-risk AI systems under the EU AI Act?

The August 2026 deadline has not yet been formally removed — that depends on the Omnibus passing into law. Organizations in scope for high-risk AI should treat their compliance programs as live until the text is formally adopted. Pausing compliance work on the assumption that the delay will pass is a planning risk, not a strategy.

Why are prohibited AI systems unaffected by the Omnibus?

Prohibited systems — including real-time biometric surveillance in public spaces, social scoring, and subliminal manipulation — entered into force in February 2025 and are not part of the high-risk compliance framework. The Omnibus targets compliance burden on deployers of permitted high-risk AI, not the categorical prohibitions the EU has already enacted.

What is the GPAI Code of Practice status?

A second draft of the Code of Practice on AI-generated content marking was published on March 3, 2026. This process operates on a separate timeline from high-risk AI compliance and is not affected by the Digital Omnibus. General-purpose AI model providers' transparency obligations are largely retained in the current Omnibus text.

How does the Digital Omnibus affect workers subject to AI monitoring in the EU?

If passed as drafted, workers subject to AI-driven performance monitoring will have less legal recourse than the original AI Act intended. Without mandatory risk assessments, there is no public documentation of what criteria automated monitoring systems apply. Workers retain rights under existing EU labor law and GDPR, but the AI Act's specific accountability layer for automated employment decisions would be delayed or removed.

Conclusion: Next Steps

The Digital Omnibus is still in legislative process. The provisions that most concern civil society — removal of risk assessments for high-risk AI, the GDPR data redefinition — are not yet law. There is still a process through which they can be amended, and that process runs through the European Parliament.

For organizations operating in or selling to the EU market, the practical step is to continue AI Act compliance work on the assumption that high-risk requirements will eventually enter into force, regardless of the exact date. The documentation, audit, and governance processes that compliance requires are not wasted if the deadline moves.

The more useful observation is structural: the gap between the EU's stated identity as a global standard-setter for AI governance and the provisions currently under Parliamentary review is the most informative signal in this story. Understanding that gap — who closed it, through what process, for whose benefit — is the context behind every subsequent EU AI policy debate. The Corporate Europe Observatory's lobbying documentation and EDRi's analysis are the most detailed public records of that process currently available.

**Alice Thornton**, Editor in Chief — 20 years in tech media: the first 10 in PR and corporate comms for enterprises and startups, the latter 10 in tech media. I care a lot about whether content is honest, readable, and useful to people who aren't trying to sound smart. I'm currently very passionate about the societal and economic impact of AI and the philosophical implications of the changes we will see in the coming decades.