The EU's Digital Omnibus: What the Proposed AI Act Rollback Would Remove
How a "simplification" package under Parliamentary review affects risk assessments, compliance deadlines, and the definition of personal data

TL;DR: The EU's Digital Omnibus package, currently under Parliamentary review, would remove the requirement for AI companies to publish risk assessments for high-risk AI systems — including systems used in hiring, performance monitoring, and credit decisions. The August 2026 compliance deadline has been removed with no fixed replacement date. Amnesty International and EDRi are calling it the most significant rollback of digital rights at the EU level in a generation.
Who benefits when the EU decides not to require AI companies to disclose their own risk assessments? That's the question embedded in the Digital Omnibus, the European Commission's "simplification" package proposed in November 2025 and currently moving through the European Parliament and Council of the EU.
One of its central provisions would remove the requirement for companies deploying high-risk AI systems to publish risk assessments for those systems. A second provision delays the August 2026 compliance deadline for high-risk AI to an undefined future date, tied to the production of harmonized standards that do not yet exist. A third amendment, buried in accompanying GDPR reforms, redefines what constitutes personal data in ways civil society groups say will expand the data pool available for AI training.
These are not fringe concerns raised by critics who oppose AI development. They are documented policy changes in a published legislative proposal, and they affect a specific category of AI systems — the ones that make decisions about employment, credit, education, and access to essential services. Understanding what the Omnibus removes requires understanding what the original AI Act required, who it required it from, and why those requirements existed in the first place.
The EU AI Act's original framework classified AI systems into risk tiers. High-risk systems — defined in Annex III — include AI used in hiring and recruitment, employee monitoring and performance evaluation, credit and insurance scoring, educational assessment, and administration of social benefits. The risk classification was not a symbolic label. It carried concrete obligations.
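The tiered structure above can be sketched as a small lookup. This is an illustrative toy only, not an official taxonomy or API: the tier names, the `RISK_TIERS` mapping, and the `classify` helper are all hypothetical, and the use-case strings are just the examples cited in this article.

```python
# Toy sketch of the AI Act's risk tiers as described above.
# Names and structure are illustrative, not drawn from the Act's text.
RISK_TIERS = {
    "prohibited": [
        "real-time biometric surveillance",
        "social scoring",
        "subliminal manipulation",
    ],
    # Annex III examples cited in the article
    "high_risk": [
        "hiring and recruitment",
        "employee monitoring and performance evaluation",
        "credit and insurance scoring",
        "educational assessment",
        "administration of social benefits",
    ],
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; everything else falls
    into the Act's minimal/limited-risk residual category."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal_or_limited"

print(classify("credit and insurance scoring"))  # high_risk
```

The point of the sketch is the asymmetry the article goes on to describe: the obligations the Omnibus rolls back attach only to the `high_risk` tier, while the `prohibited` tier is untouched.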
Under the original August 2026 compliance deadline, companies deploying high-risk AI were required to maintain documentation on how those systems were designed, tested, and validated; to conduct conformity assessments before deployment; and, critically, to publish risk assessments that would give regulators and affected individuals access to information about how automated decisions were being made.
The affected use cases span labor markets at scale. AI-driven applicant screening tools are in active use across European employers in financial services, logistics, healthcare, and tech. AI performance monitoring systems that track worker productivity, flag under-performance, or generate the data used in termination decisions are deployed in warehouses, call centers, and knowledge work environments. The risk assessment requirement was the mechanism through which a worker, job applicant, or loan applicant could — in principle — understand what criteria an automated system applied to a decision that directly affected them.
The Omnibus proposal modifies several provisions of the AI Act simultaneously. The most significant changes for high-risk AI are: first, companies would no longer be required to publish risk assessments for high-risk systems, giving them, in Amnesty International's phrasing, "free rein to decide the levels of risk their systems pose." Second, the August 2026 compliance date is replaced with a conditional future date tied to availability of harmonized standards and Commission guidelines — with no fixed deadline.
| Provision | Original AI Act (August 2026) | Digital Omnibus Proposal |
|---|---|---|
| Risk assessments for high-risk AI | Required — must be published | Removed |
| High-risk compliance deadline | August 2, 2026 | Postponed — no fixed date |
| Conformity assessments | Required before deployment | Conditionally delayed |
| AI-generated content labelling | Required from August 2026 | Retained (Code of Practice approach) |
| Prohibited AI systems (biometric surveillance, social scoring) | Prohibited from February 2025 | Unchanged |
| GPAI transparency obligations | Required from August 2025 | Largely retained |
The changes are not uniform across the AI Act. Provisions on prohibited AI systems — real-time biometric surveillance, social scoring, subliminal manipulation — remain intact. Obligations on general-purpose AI model providers are largely retained. The rollback is concentrated in the high-risk category: precisely the category that covers automated employment decisions affecting millions of EU workers and job seekers.
The Digital Omnibus did not emerge from a vacuum. The Corporate Europe Observatory published a detailed analysis in January 2026 tracing, article by article, which specific changes in the Omnibus package correspond to positions advanced by Big Tech lobbying groups in submissions to the European Commission. The overlap is extensive.
The mechanism is familiar: companies facing compliance costs from a specific provision submit to Commission consultations arguing the provisions are technically unworkable, disproportionately burdensome, or redundant with other frameworks. The Commission's Omnibus package, nominally a "simplification" exercise to reduce regulatory burden, reflects many of those arguments directly in its text.
The Commission's framing is competitiveness: the EU's regulatory environment is slowing AI adoption relative to the United States and China, and simplification is necessary to close that gap. That argument may have merit on some provisions. The question is whether removing the risk assessment requirement for employment AI systems — the mechanism that makes accountability possible — is a competitiveness issue or an accountability issue. The Corporate Europe Observatory's January 2026 analysis documents which interests were served by each specific change, and the answer is consistent across provisions.
The Digital Omnibus is not limited to the AI Act. The same package includes proposed amendments to GDPR that redefine what constitutes personal data. EDRi warns that the redefinition narrows the scope of data qualifying as "personal," which in practice allows more categories of data to be collected and used for AI training without triggering the consent and rights provisions that currently apply.
The two amendments work in the same direction. If AI companies no longer need to publish risk assessments for high-risk systems, and if the data those systems process is increasingly excluded from personal data definitions, the combined effect is to reduce both the visibility into how automated systems work and the legal basis for individuals to challenge decisions those systems produce. That's not speculation — it's the documented combined effect of two provisions in a published legislative text.
The European Data Protection Board has not yet published a formal opinion on the GDPR amendments as of April 2026. That process is ongoing. The timeline for a Board opinion will likely run alongside, not ahead of, Parliamentary consideration of the Omnibus — meaning the accountability gap the amendments create may be debated without the EDPB's formal technical assessment in hand.
Amnesty International's April 2026 statement describes the Omnibus as "an unprecedented rollback of rights online at the EU level." The statement identifies three specific harms: expanded corporate and state surveillance; reduced protection from AI-based discrimination; and weakened mechanisms for individuals to challenge automated decisions. EDRi — the European Digital Rights network, representing 45 civil society organizations across Europe — called the Omnibus "a major rollback of EU digital protections."
The Corporate Europe Observatory's analysis is more granular. It traced specific lobbying positions from Google, Meta, and industry associations including DigitalEurope to specific provisions in the Omnibus text, documenting that many of the most significant changes mirror positions industry groups advanced in Commission submissions. EDRi's full analysis covers both the AI Act and GDPR changes in detail.
The counter-argument from the Commission and supporting member states is that harmonized standards for high-risk AI compliance don't yet exist, that the August 2026 deadline was never practically achievable, and that delaying requirements until the compliance infrastructure is in place is more rational than enforcing requirements against standards that haven't been written. That argument has some force. But delaying an unenforceable deadline is different from eliminating accountability requirements, and the text of the Omnibus, as currently drafted, does more than the former.
If your organization deploys AI systems that fall outside the high-risk category — generative content tools, recommendation engines, internal productivity tools, customer-facing chatbots for non-regulated services — the Omnibus changes to the AI Act's high-risk provisions don't alter your compliance posture in the near term. Prohibited AI provisions remain in force from February 2025. GPAI transparency obligations for model providers are largely retained.
If your organization's systems do fall within the high-risk category, the Omnibus changes are directly material to your compliance planning. A delay is not clearance to pause compliance work — high-risk requirements will still enter into force at an unspecified future date. The internal audit, documentation, and governance processes you build now won't be wasted if the deadline moves; they're the foundation that makes a credible compliance posture possible when standards do arrive.
The Digital Omnibus is not yet law. As of April 2026, it is under the ordinary EU legislative procedure and requires formal adoption by both the European Parliament and the Council. Formal adoption is expected later in 2026, but the text can be amended during that process. Civil society groups are actively lobbying MEPs to restore the accountability provisions before final adoption.
The August 2026 deadline has not yet been formally removed — that depends on the Omnibus passing into law. Organizations in scope for high-risk AI should treat their compliance programs as live until the text is formally adopted. Assuming the delay will pass before beginning compliance work is a planning risk, not a strategy.
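The planning logic in the two paragraphs above can be summarized as a small triage helper. This is a hypothetical sketch of the article's reasoning, not legal advice: the `Deployment` fields and the returned postures are invented labels for illustration.

```python
# Hypothetical compliance-triage sketch of the article's guidance.
# Field and posture names are illustrative only.
from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    high_risk: bool   # falls within an Annex III high-risk category
    eu_market: bool   # deployed in, or sold to, the EU market

def compliance_posture(d: Deployment, omnibus_adopted: bool) -> str:
    """Return a planning posture given the Omnibus's legislative status."""
    if not d.eu_market or not d.high_risk:
        return "monitor"  # out of scope for the high-risk changes
    if not omnibus_adopted:
        # The August 2026 deadline has not been formally removed
        return "treat August 2026 deadline as live"
    # Deadline postponed, not eliminated: keep building the foundation
    return "continue documentation and governance work; deadline unset"

print(compliance_posture(Deployment("CV screener", True, True), False))
```

The key design point is that no branch returns "stop": under the article's reasoning, adoption of the Omnibus changes the timeline, not the eventual obligation.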
The prohibitions on banned AI systems — including real-time biometric surveillance in public spaces, social scoring, and subliminal manipulation — entered into force in February 2025 and are not part of the high-risk compliance framework. The Omnibus targets compliance burden on deployers of permitted high-risk AI, not the categorical prohibitions the EU has already enacted.
A second draft of the Code of Practice on AI-generated content marking was published on March 3, 2026. This process operates on a separate timeline from high-risk AI compliance and is not affected by the Digital Omnibus. General-purpose AI model providers' transparency obligations are largely retained in the current Omnibus text.
If passed as drafted, workers subject to AI-driven performance monitoring will have less legal recourse than the original AI Act intended. Without mandatory risk assessments, there is no public documentation of what criteria automated monitoring systems apply. Workers retain rights under existing EU labor law and GDPR, but the AI Act's specific accountability layer for automated employment decisions would be delayed or removed.
The Digital Omnibus is still in legislative process. The provisions that most concern civil society — removal of risk assessments for high-risk AI, the GDPR data redefinition — are not yet law. There is still a process through which they can be amended, and that process runs through the European Parliament.
For organizations operating in or selling to the EU market, the practical step is to continue AI Act compliance work on the assumption that high-risk requirements will eventually enter into force, regardless of the exact date. The documentation, audit, and governance processes that compliance requires are not wasted if the deadline moves.
The more useful observation is structural: the gap between the EU's stated identity as a global standard-setter for AI governance and the provisions currently under Parliamentary review is the most informative signal in this story. Understanding that gap — who closed it, through what process, for whose benefit — is the context behind every subsequent EU AI policy debate. The Corporate Europe Observatory's lobbying documentation and EDRi's analysis are the most detailed public records of that process currently available.