BestAIFor.com

EU AI Act High-Risk Compliance Is Slipping to 2027 — Here's Who Benefits

Alice Thornton
March 20, 2026 · 10 min read

TL;DR: The EU Commission missed its February 2, 2026 deadline to publish guidance on high-risk AI obligations. The EU Council has now proposed pushing enforcement for standalone high-risk AI systems to December 2, 2027 — sixteen months past the original date. Only 8 of 27 member states have designated enforcement authorities. The workers and job seekers the Act was designed to protect are looking at a minimum two-year enforcement gap.

Key Takeaways

  • EU Commission missed the February 2, 2026 guidance deadline for high-risk AI obligations under Article 6 of the AI Act — no guidance published as of mid-March 2026.
  • EU Council agreed March 13, 2026 to push standalone high-risk AI system rules to December 2, 2027; systems embedded in products face August 2, 2028.
  • The Digital Omnibus package ties enforcement activation to harmonised standards and technical tools — tools with no published timeline or availability date.
  • Only 8 of 27 EU member states have designated a national enforcement authority for the AI Act as of March 2026.
  • High-risk AI categories affected include automated hiring, credit scoring, biometric surveillance, and AI in critical infrastructure — all with documented harm to workers in peer-reviewed research.
  • During this enforcement gap, companies deploying AI in high-risk categories face no mandatory compliance obligations under the Act.


When a regulation misses its own deadline, the first question is not procedural. It is political. Who asked for the extension? Who benefits from the delay? The EU AI Act's high-risk provisions were written to govern the AI systems most likely to harm workers, job seekers, and citizens: automated hiring tools, credit scoring algorithms, biometric identification systems, AI deployed in critical infrastructure. Those obligations were scheduled to apply on August 2, 2026. On March 13, 2026, the EU Council formally agreed to a position pushing that deadline to December 2, 2027 at the earliest.

The path to this extension was not accidental. The European Commission missed its own guidance deadline — required to publish Article 6 guidance by February 2, 2026, and did not. That failure gave political cover for a larger delay already moving through the Digital Omnibus legislative package. The result is a two-year gap during which companies deploying AI in the riskiest categories face no mandatory compliance obligations under the Act. That gap has a cost. The Commission's public statements have not mentioned who bears it.

What High-Risk AI Was Supposed to Regulate

The AI Act's risk classification places AI in four tiers. The high-risk category covers systems embedded in critical infrastructure, educational credentialing, employment decisions, access to essential private services like credit and insurance, and law enforcement applications. These are not edge cases — they are the systems most workers interact with when applying for jobs, accessing credit, or navigating public services.

The regulatory logic is direct: these systems require mandatory transparency, human oversight, data quality controls, and documented technical evidence before deployment. Under the August 2026 deadline, companies using automated hiring tools or credit scoring models would have needed to demonstrate compliance. Under the proposed December 2027 deadline, they will not. There is currently no authoritative interpretation of what compliance requires, because the Commission has not published it.
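The obligations listed above are legal requirements, not software, but compliance teams often track them internally as a readiness checklist. A minimal illustrative sketch of that idea in Python — the field names here are this article's shorthand, not the AI Act's own terminology, and a real assessment would map to the Act's annexes:

```python
from dataclasses import dataclass, fields

# Illustrative only: these field names are informal shorthand for the
# obligation categories discussed above, not the AI Act's legal terms.
@dataclass
class HighRiskReadiness:
    transparency_notice: bool = False      # affected people told an AI system is in use
    human_oversight: bool = False          # a human can review and override outputs
    data_quality_controls: bool = False    # training data documented and checked
    technical_documentation: bool = False  # evidence compiled before deployment

    def gaps(self) -> list[str]:
        """Return the names of obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a deployer that has only added a transparency notice so far.
status = HighRiskReadiness(transparency_notice=True)
print(status.gaps())
```

The point of the sketch is the one the article makes in prose: until the Commission publishes guidance, there is no authoritative answer to what would make any of these fields "true."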

How the Commission Missed Its Own Deadline

Article 6 of the AI Act required the Commission to publish guidance by February 2, 2026 on how operators of high-risk AI systems should meet their obligations. No guidance was published. The IAPP reported that the Commission missed the deadline outright: no substitute document, no draft, and no new target date beyond a vague "end of 2026." Without guidance, companies cannot assess whether their systems are high-risk or what compliance requires. That ambiguity is not neutral — it operates in favor of organizations that prefer to keep deploying while the regulatory framework remains undefined.

| EU AI Act Milestone | Original Date | Status / Revised Date |
| --- | --- | --- |
| Prohibited AI practices apply | Feb 2, 2025 | In force (unchanged) |
| GPAI transparency rules apply | Aug 2, 2025 | In force (unchanged) |
| Commission guidance on high-risk systems | Feb 2, 2026 | Not published — missed |
| High-risk AI rules (standalone systems) | Aug 2, 2026 | Dec 2, 2027 (proposed) |
| High-risk AI rules (embedded in products) | Aug 2, 2026 | Aug 2, 2028 (proposed) |
| Member states with designated enforcement authority | All 27 expected | 8 of 27 as of March 2026 |

Enforcement cannot meaningfully precede guidance. And guidance now has no firm delivery date. The architecture of delay is structurally in place.

Who Asked for the Extension — The Digital Omnibus Trail

The EU Digital Omnibus is a legislative package framed as administrative simplification. One provision links the effective date of high-risk AI obligations to the availability of harmonised standards and technical tools — standards that are not published and have no stated completion date. The practical effect is that the August 2026 enforcement deadline becomes conditional on a prerequisite that itself has no date. It is no longer a fixed deadline at all.

Industry associations had been arguing since 2025 that the harmonised standards timeline was unrealistic for compliance. The EU Council's March 13 position — December 2027 for standalone systems, August 2028 for embedded systems — is the institutional response to that pressure. Whether the standards timeline was genuinely unready or whether readiness was the lobbying argument is a question the Commission's official communications do not address. Tracing the beneficiaries is straightforward: the industries most subject to high-risk rules — platform hiring, consumer credit, biometric services — lobbied for this extension. The Commission missed the deadline that would have made delay harder to justify. The Council agreed to the extension the industry wanted.

The 8-of-27 Enforcement Gap

Eight member states have designated a national single point of contact for the AI Act, according to the European Parliament think tank's enforcement analysis published March 18, 2026. That is eight of twenty-seven. The remaining nineteen have no designated authority to receive complaints, investigate violations, or coordinate cross-border enforcement.

The AI Act's enforcement model is decentralized — each member state designates its own national market surveillance authorities. Without those authorities in place, there is no practical mechanism to act on violations even if the rules were currently in force. For workers in the nineteen member states without a designated authority, there is no regulatory contact point even on paper. The structural enforcement gap exists independently of the timeline delay — and the timeline delay removes any deadline pressure to close it.

What the Delay Costs Workers and Citizens

The high-risk AI tier covers systems that determine whether you get called back for a job interview, whether your credit application is approved, whether a law enforcement algorithm marks you as a risk. Those systems are operating across the EU today without mandatory transparency or human oversight requirements. The August 2026 deadline was designed to change that. The December 2027 date preserves the current situation for sixteen more months at minimum.

The cost is not easy to aggregate — there is no centralized register of AI system deployments in high-risk categories across EU member states. That is partly because the Act has not yet required one. What is documentable is the pattern: algorithmic hiring tools have demonstrated demographic bias in peer-reviewed research; automated credit decisions have been successfully challenged for opacity in multiple EU jurisdictions. The Act was designed to create enforceable standards against those failure modes. The Commission has not published an impact assessment for the enforcement delay. That absence is itself a data point about whose interests the delay analysis was not designed to serve.

When You Should NOT Expect EU AI Act Protection Yet

  • If you are applying for jobs through an automated hiring platform — the rules requiring human oversight and transparency do not apply until December 2027 at the earliest.
  • If you are subject to AI-assisted credit or insurance decisions — mandatory data quality and documentation requirements for these systems are not yet in force.
  • If you work in a sector using AI for performance monitoring or task allocation — the high-risk rules governing these systems have no fixed enforcement date.
  • If you expected regulatory pressure to push employers to disclose how AI affects hiring or promotion decisions — that pressure has moved to 2027 at the earliest.
  • If you live in one of the 19 EU member states without a designated enforcement authority — there is currently no national body to receive complaints about AI Act violations.

FAQ

Is the EU AI Act completely unenforced until 2027?

No. Prohibited AI practices — social scoring, real-time biometric surveillance in public spaces, subliminal manipulation — have applied since February 2025. Transparency obligations for general-purpose AI models apply now. The delay specifically affects high-risk AI system obligations under Article 6.

What are harmonised standards and why does their absence block enforcement?

Harmonised standards are technical specifications defining exactly how to meet regulatory requirements — data quality thresholds, audit documentation formats, testing protocols. Without them, compliance is a legal assertion without a verifiable technical basis. Enforcement against undefined standards is legally fragile and practically unworkable.

Who designated the 8 national enforcement contacts so far?

The EU AI Act requires member states to designate a national single point of contact for coordination. As of March 2026, eight have done so. The European Parliament think tank's March 18, 2026 analysis documents this as a significant readiness gap with no stated remediation timeline.

Can companies voluntarily comply with high-risk rules before the new deadline?

Yes. The Act permits voluntary compliance. There is no regulatory incentive to do so before the mandatory date, and no public registry of companies that are voluntarily meeting the high-risk requirements. Voluntary compliance without verification is not meaningfully distinguishable from a compliance claim.

What is the next formal step in the AI Act timeline process?

The EU Council's March 13 position opens trialogue negotiations between the Council, the European Parliament, and the Commission. The Parliament has not yet taken a position on the Digital Omnibus provisions affecting the AI Act timeline.

Conclusion: Next Steps

The EU AI Act was a political commitment to put enforceable standards around AI systems that affect workers' economic lives — hiring, credit, surveillance, performance monitoring. The Commission missed its first guidance deadline. The Council agreed to a sixteen-month extension for the rules most relevant to those systems. Nineteen member states have no enforcement authority in place. No impact assessment has been published on what these delays cost the people the Act was written to protect.

The practical steps depend on your role. If you work in labor policy or advocacy, the trialogue negotiation between the Council and Parliament is the next decision point — and the Parliament has not yet taken a public position on the Digital Omnibus AI Act provisions. If you work in compliance, there is no auditable definition of high-risk AI compliance to work toward yet; your effort is better directed at internal governance documentation that will matter when mandatory rules arrive. Read the EU Council's March 13 press release and the Parliament think tank's March 18 enforcement analysis directly. The gap between the institutional language and the situation on the ground is a useful signal about the distance between regulatory commitment and regulatory capacity.

Alice Thornton, Editor in Chief. 20 years in tech: the first 10 in PR and corporate comms for enterprises and startups, the latter 10 in tech media. I care a lot about whether content is honest, readable, and useful to people who aren't trying to sound smart. I'm currently very passionate about the societal and economic impact of AI and the philosophical implications of the changes we will see in the coming decades.