The Unfinished Rescue
The Digital Omnibus Fails to Sever the AI Act from the Gravitational Field of Data Protection Constitutionalism
This is the first in a short series of posts in which I try to do two things at once. First, to explain what is actually going on beneath the surface of the Digital Omnibus debate, stripped of the press releases, slogans, and ritual invocations of “fundamental rights”. Second, to explain why the reaction to the Omnibus has been so unusually tense, so juridified, and so frankly hostile. Much of that tension only really makes sense once you read the EDPB-EDPS Joint Opinion 1/2026 (Digital Omnibus on AI) closely. It is not just a technical disagreement about simplification or safeguards. It is a struggle over who gets to define the governing logic of EU digital regulation going forward, and who gets to remain at the centre of it. In that sense, the Omnibus debate is as much about institutional power and regulatory identity as it is about AI, data protection, or innovation.
The European Commission’s proposal for a Digital Omnibus Regulation, introduced in November 2025, marks a pivotal moment in the evolution of European technology governance. Presented as a package of targeted simplification measures intended to reduce administrative burdens and smooth the implementation of the AI Act Regulation (EU) 2024/1689, the Omnibus has instead become the focal point of a deeper constitutional struggle. This dispute is not primarily about compliance schedules or documentation formalities. It reflects a clash between two fundamentally different legal epistemologies: the market-oriented, technocratic logic of product safety associated with the New Legislative Framework, and the rights-centred, procedural logic of data protection constitutionalism.
This Substack offers an analysis of that conflict. It argues that the AI Act was structurally conceived as a product safety instrument, a technical regulatory regime governing market access, risk classification, and conformity assessment under Article 114 TFEU. By contrast, the European Data Protection Board and the European Data Protection Supervisor, acting through their Joint Opinion 1/2026, are engaged in a strategic and sophisticated effort to reconstitutionalise the AI Act from within. By asserting the primacy of data protection principles such as strict necessity, data minimisation, and the central supervisory role of data protection authorities, the EDPB seeks to anchor AI governance firmly within the gravitational field of fundamental rights law, thereby reshaping and effectively overriding the Act’s product safety foundations.
What is being resisted is not simplification as such, but the loss of interpretive and supervisory primacy that simplification implies.
The analysis suggests that the Commission’s Omnibus proposal, while taking some necessary steps towards simplification, including the removal of registration obligations for specific lower-risk systems and adjustments to the legal basis for bias detection, does not go far enough to insulate the AI Act from this normative overhang. Accordingly, this post tries to explain how the EDPB’s interpretive strategies risk paralysing the AI Act’s enforcement architecture by imposing a surveillance-oriented logic on a framework intended to operate under a safety-oriented one. It argues that the Commission should have adopted a far more robust approach by explicitly codifying statistical necessity as a form of legal necessity, removing data protection authorities from co-governance roles within regulatory sandboxes, and introducing a supremacy clause designed to shield the product safety regime from the chilling effects of the GDPR.
The Constitutional Coup: The AI Act’s Identity Crisis
To understand the stakes of the Digital Omnibus and the EDPB’s Joint Opinion, one must first dissect the structural identity of the AI Act. The friction we observe today is not accidental; it is the result of a legislative experiment that attempted to graft a fundamental rights narrative onto a product safety chassis. This section explores the “legal DNA” of the Act and the conflicting “regulatory grammars” that are now colliding.
The Legal DNA: Product Safety and the New Legislative Framework
The AI Act is, in its skeleton and muscle, a piece of product safety legislation. It relies on the “New Legislative Framework” (NLF), a regulatory model established by Decision 768/2008/EC and Regulation (EC) 765/2008. This framework was designed to facilitate the free movement of goods within the Single Market while ensuring high levels of user safety. Its primary legal basis is Article 114 of the Treaty on the Functioning of the European Union (TFEU), which empowers the EU to adopt measures to approximate national provisions to ensure the functioning of the internal market. The “regulatory grammar” of the NLF is specific, technocratic, and ex-ante. It consists of:
Essential Requirements: The law sets high-level safety objectives (e.g., “the device must not overheat,” “the AI must be accurate”). These are performance goals, not moral prescriptions.
Harmonised Standards: Private standardisation bodies (CEN/CENELEC) write the detailed technical specifications that presume conformity with the law. This delegates the “how” of compliance to engineering experts.
Conformity Assessment: Manufacturers (or third-party notified bodies) verify that the product meets the standards before it enters the market (ex-ante).
CE Marking: The physical (or digital) signal that the product is compliant, serving as a passport for entry into the 27 Member States.
Market Surveillance: National authorities (Market Surveillance Authorities or MSAs) monitor products after they are sold, ordering recalls or withdrawals if they prove unsafe.
This system is inherently technocratic and risk-based. In the NLF, “risk” is a probabilistic calculation of physical harm or non-compliance, managed through engineering controls. It is not typically a moral adjudication of subjective rights violations. When a toy manufacturer self-certifies a doll under the Toy Safety Directive, they check for loose parts (choking hazards) and the doll’s chemical composition. They are not conducting a fundamental rights impact assessment on the child’s right to play or freedom from manipulation.
The “Fundamental Rights” Injection
The AI Act represents a mutation of the NLF. It takes this machinery, designed for elevators, pressure vessels, and toys, and applies it to “high-risk” algorithmic systems that impact fundamental rights (migration, employment, justice, democracy). Here lies the core tension. In the AI Act, fundamental rights function operationally as risk vectors. A “risk to fundamental rights” is treated as a safety defect, analogous to a risk of electric shock in a toaster or a brake failure in a car. The Act attempts to translate constitutional claims (e.g., non-discrimination, privacy, due process) into engineering and governance constraints (e.g., data governance, bias mitigation, robustness, human oversight). The mechanism for protecting rights under this model is technical compliance: if the system is built correctly according to harmonised standards, appropriately documented in the technical file, and marked with a CE, the legal assumption is that rights are protected. The manufacturer has discharged their duty.
The EDPB’s Counter-Narrative: Data Protection Constitutionalism
The EDPB, representing the collective will of the EU’s national data protection authorities (DPAs), operates within a fundamentally different legal framework. Their authority stems from Article 16 TFEU (the right to data protection) and Article 8 of the Charter of Fundamental Rights. In the view of the EDPB (as articulated in their guidance and the Joint Opinion 1/2026), “risk” is not a safety defect to be managed, but an interference with a fundamental right that must be continuously justified. This perspective is rooted in what scholars call “Data Protection Constitutionalism,” characterised by:
Strict Necessity: Processing is prohibited unless it is essential for a specific purpose. Convenience, cost-saving, or “better performance” are not valid justifications.
Minimisation: Using the least amount of data possible is a legal imperative, not an efficiency metric. The burden is always on the controller to prove they could not have achieved the result with less data.
Proportionality: Every interference must be weighed against the objective, subject to strict judicial scrutiny.
Justiciability: Individuals have subjective rights (access, deletion, objection) that can be enforced directly against the provider, regardless of technical certification.
The EDPB’s approach to the AI Act is to reject the “technicisation” of rights. They argue that compliance with technical standards (even harmonised ones) does not exhaust the obligation to protect fundamental rights. They seek to “reconstitutionalise” the AI Act from the inside by insisting that AI governance must remain tethered to the logic of the GDPR, where specific consent, strict necessity, and DPA oversight reign supreme.
The Digital Omnibus is the battlefield where these two logics collide: product safety (market efficiency, technical compliance) and data protection (rights restriction, strict justification). The Commission’s proposal aims to simplify the Act to make it workable; the EDPB’s opinion seeks to entrench its own interpretive authority to ensure that “simplification” does not mean “deregulation” of rights.
The Digital Omnibus Proposal: A Timid Step Toward Autonomy
The EC’s “Digital Omnibus” proposal did not emerge in a vacuum. It was a response to a crisis of competitiveness and regulatory complexity identified by the Draghi Report (2024) and the Commission’s own Competitiveness Compass (2025). These reports warned that the cumulative weight of the GDPR, AI Act, Data Act, and Cyber Resilience Act was stifling European innovation, creating a “regulatory thicket” that made it nearly impossible for SMEs to scale. The Omnibus aims to “simplify” the implementation of the AI Act. However, a close reading reveals that “simplification” is a euphemism for a tactical retreat from some of the Act’s more unworkable overlaps with the GDPR. The Commission is attempting to carve out space for the AI Act to function as a product safety regime, independent of the paralysing scrutiny of data protection formalism (some might say extremism).
The Rationale: Competitiveness and Coherence
The explanatory memorandum of the Omnibus explicitly frames the initiative as a “stress test” of the digital rulebook. The goal is to reduce administrative burdens by at least 25% (35% for SMEs) by 2029. The proposal identifies specific friction points where the AI Act’s requirements, when combined with the GDPR and other laws, create duplication or legal uncertainty. For example:
Duplicative Reporting: Companies currently face incident reporting obligations under GDPR, NIS2, and the AI Act. The Omnibus proposes a “single entry point”.
Overlapping Competence: Both DPAs and MSAs claim jurisdiction over AI systems involving personal data. The Omnibus seeks to clarify the AI Office’s “exclusive competence” in some instances.
Impossible Timelines: The delay in harmonised standards (CEN/CENELEC) meant companies would be forced to comply with high-risk rules without the necessary technical specifications.
Key Simplification Measures in the Omnibus
The proposal introduces several amendments critical to this Substack’s analysis:
Registration Relief: Removing the obligation to register in the EU database for providers who rely on the Article 6(3) derogation (i.e., systems listed in Annex III that do not pose a “significant risk”).
Bias Detection Standard: Changing the requirement for processing special category data for bias detection from “strictly necessary” to “necessary” (New Article 4a).
AI Literacy: Downgrading the obligation for providers/deployers to ensure AI literacy to an obligation for Member States to “encourage” it.
Timeline Adjustments: Delaying the application of high-risk rules (Annex III and I) to 6–12 months after the availability of harmonised standards and support measures, with a backstop of Dec 2027/Aug 2028.
SME/SMC Benefits: Extending regulatory privileges (reduced fines, simplified documentation) from SMEs to “Small Mid-Caps” (SMCs).
These changes are not merely administrative; they are structural attempts to loosen the grip of the “rights-first” approach. By removing registration for low-risk systems, the Commission asserts that not every AI system needs to be visible to the public—a classic product-safety stance (we don’t track every safe toaster). By removing “strict” from necessity, it acknowledges that engineering reality often requires broad data usage to find bias, contradicting the GDPR’s minimisation dogma. However, as the subsequent analysis will show, the Commission’s “rescue operation” stops short of the necessary surgery. It leaves the “normative overhang” intact, allowing the EDPB to counter-attack through interpretation.
What makes the Joint Opinion so revealing is the way it repeatedly reframes every proposed simplification as a latent competence loss that must be clawed back through doctrinal insistence. Across the document, the EDPB and EDPS do not simply argue that data protection concerns must be respected. They insist that GDPR logic, supervisory presence, and interpretive primacy must remain structurally embedded at every critical junction of the AI Act, even where the Omnibus explicitly seeks to reallocate authority. This is most obvious in three moves.
First, on registration, the Board insists that even systems expressly deemed non-high risk under Article 6(3) must remain publicly registered, not because the AI Act requires it for safety oversight, but because registration enables anticipatory scrutiny by DPAs and fundamental rights bodies, complete with reputational pressure and early enforcement triggers. This is not about risk management; it is about preserving surveillance visibility and intervention capacity.
Second, on sandboxes, the Opinion treats EU-level innovation spaces as intolerable unless DPAs are formally “associated” with supervision and unless the EDPB itself acquires an advisory role and observer status on the AI Board. The argument is telling: because sandboxes may involve personal data, full GDPR governance must follow, even though the legal consequence is to neutralise the sandbox as a space of regulatory experimentation.
“Power doesn’t corrupt. Power reveals.”
— House of Cards
Third, on AI Office competence, the EDPB nominally accepts centralisation, only to hollow it out by insisting on constant coordination with DPAs whenever privacy or data protection risks are present, a condition so broad that it effectively preserves parallel jurisdiction over most general-purpose AI systems. Throughout the Opinion, competence claims are reinforced by an elastic use of “fundamental rights” language that collapses AI Act obligations into GDPR supervision, for example, by asserting that DPAs are “first and foremost competent” wherever personal data processing occurs, even when the AI Act has created a separate product safety regime.

Read together, these positions do not reflect a narrow concern about safeguards. They reflect an institutional strategy to remain indispensable by ensuring that no meaningful simplification, centralisation, or decoupling can occur without reaffirming the authority of the DPA and the EDPB. In that sense, the Joint Opinion is less a response to the Omnibus than a defensive manoeuvre against regulatory displacement, using necessity, transparency, and rights rhetoric to reassert relevance and hold ground in an AI governance architecture that is slowly moving beyond them.
What this first post has tried to show is that the Digital Omnibus is not a marginal clean-up exercise, but a stress test of the EU’s entire digital regulatory settlement. The friction it has generated is not accidental, nor is it well explained by appeals to administrative burden or abstract rights protection alone.
The Digital Omnibus is not a clean-up exercise. It is a stress test of whether EU AI governance will operate as a product-safety regime or remain trapped within the constitutional logic of data protection.
It reflects a deeper conflict between two regulatory grammars that were never fully reconciled in the AI Act, and which the Omnibus now forces into the open. Legal arguments here are doing double duty as institutional defences. Simplification is framed as a constitutional threat not only for what it changes, but also for what it displaces. In the next post, I turn directly to the reaction itself. I unpack the EDPB-EDPS Joint Opinion 1/2026 as an entrenchment move, one that reveals a deeper anxiety about relevance and authority. Faced with an AI regime that reallocates competence and weakens traditional points of control, the EDPB doubles down on expansive competence claims and rights-based rhetoric to stay central to enforcement. The Opinion reads less like neutral guidance and more like a bid to reclaim gravitational pull, keeping AI governance tethered to data protection by insisting that nothing meaningful can happen without it.