Simplification and Governance Trade-Offs in the Omnibus
Data as Anchor and Practice-Based Regulation
This third instalment builds directly on the analytical foundations established in Parts I and II. Part I set out the clash between expansive and disciplined conceptions of personal data, while Part II examined how SRB-aligned simplification helps recover the true aim of data protection by preventing the GDPR from drifting into the “law of everything”. This post turns to the next layer of the argument: the governance consequences of that conceptual recalibration. Once the definition of personal data is narrowed to the domain in which it has legitimate regulatory traction, the entire architecture of EU digital regulation begins to look different. The key question becomes not merely what personal data is, but how the boundary around it redistributes regulatory functions across the wider system. That is the trade-off at the heart of this post: whether a refined data anchor stabilises governance, or whether a shift toward practice-based regulation is needed to absorb the digital harms that fall outside that anchor. Part I of the series can be found here. Part II can be found here.
Introduction
A recalibrated conception of personal data does more than tidy up definitional ambiguity. It forces a reconsideration of the governance architecture in which the GDPR operates and, by extension, the role that data protection can realistically play in Europe’s broader digital regulatory constitution. For years, the GDPR has functioned not just as a privacy statute but as the gravitational centre of EU digital law. Competition authorities, consumer protection agencies, media regulators, labour inspectorates, and cybersecurity bodies have interpreted their mandates through, or against, the conceptual structure of the GDPR. This was never intentional. It was an emergent consequence of a legal system whose definitional trigger expanded to cover almost all modern data processing. As the GDPR expanded beyond its design, it warped the surrounding regulatory ecosystem, producing the phenomenon Purtova warned about: the “law of everything”.[1] The system began to treat personal data as the universal substrate of digital life and the GDPR as the universal solvent for digital harms.
This post examines the governance implications of stepping back from that universalism. The move from an absolute notion of personal data to a contextual, SRB-aligned[2] understanding is not simply a doctrinal refinement. It is a redistribution of regulatory authority. It determines which instruments govern which harms, how institutional competencies are allocated, and whether Europe ends up with a coherent, polycentric[3] digital regulatory system or a fragmented patchwork in which multiple instruments compete for jurisdiction.
Anchors and Alternatives: Two Models of Regulatory Intervention
The central debate can be framed as a question of anchoring. Should regulatory intervention hinge on whether information qualifies as personal data? Or should the trigger be the nature of the practice and the harm it generates, regardless of the informational substrate involved? Purtova and Newell push hard for the latter, arguing that the data anchor inherently misaligns regulatory effort by tying intervention to a category that no longer predicts where digital harms arise.[4] In their view, the GDPR’s conceptual architecture cannot serve as the backbone of modern governance because harms now emerge from inference, modelling, and system-level dynamics, not from the handling of identifiable information as such.
What this argument underplays, however, is that the collapse they describe is not the inevitable consequence of anchoring but of overextension. It was the inflation of “personal data” into a universal solvent that destabilised the architecture: once everything became personal data, the GDPR was forced to operate simultaneously as a labour law, a consumer protection regime, an antitrust instrument, a platform governance statute, and a general-purpose algorithmic accountability framework. No legal system can carry that much conceptual freight without deforming under the strain. Yet, in this sense, collapse is not terminal. It is reversible precisely because it arises from a definitional distortion rather than a structural impossibility. By recalibrating the trigger, through SRB’s contextual identifiability[5] and the Omnibus’s clarification of entity-relativity[6], the system can be restored to functional coherence. The GDPR can return to its constitutional domain while the surrounding regulatory ecosystem reclaims the terrain that should never have been ceded to data protection in the first place.
As I have argued in my previous posts, the dichotomy (data-anchored regulation versus practice-anchored regulation) collapses under closer scrutiny. Data protection is one instrument within a multi-instrument governance framework, and the real question is how this instrument interacts with the others. The failure of the pre-Omnibus system was not that the GDPR had a data anchor. It was that the anchor had been attached to a category so vast that it rendered itself meaningless. When personal data becomes synonymous with information, the GDPR becomes structurally misaligned. It must either attempt to regulate every digital harm or contort its doctrines to address harms that fall entirely outside its normative scope. Under such conditions, any anchor would fail, not because anchoring is wrong, but because it was tied to everything at once.
SRB as the Reconstitution of a Viable Anchor
Once the definitional perimeter becomes real rather than theoretical, the anchor stabilises. Personal data becomes a legally and conceptually intelligible category again. Within that sphere, the GDPR’s architecture operates with remarkable internal coherence. Purpose limitation, fairness, transparency, minimisation, rights, obligations, and accountability hang together precisely because they are designed for the relational management of person-linked information. Outside that sphere, the GDPR’s tools lose traction. They were never intended to regulate harms that do not turn on identifiability or personal information.
Consider the now-common situation in which a public authority transmits a fully pseudonymised dataset to an academic research team that lacks any technical or legal means to re-identify the subjects. Under the pre-SRB absolutist model, this transfer triggered full GDPR compliance, from data subject rights to data protection impact assessments (DPIAs), despite the receiving actors having no capacity to link the data back to individuals. Regulators and institutions spent disproportionate time navigating theoretical privacy risks rather than evaluating the study’s substantive research ethics or societal impact. Post-SRB, that same flow can be treated as non-personal from the recipient’s perspective, allowing the GDPR to govern the authority’s upstream processing while freeing downstream actors from obligations that served no protective purpose. The result is not deregulation but doctrinal precision: the GDPR applies where its logic fits and recedes where its machinery is orthogonal to the activity’s actual risk profile.
Consider a weekly grocery shop (thanks to Peter Craddock for this example). A woman buys tampons and a pack of paracetamol. The shop’s till system inevitably produces an itemised receipt and, if the store uses a loyalty card, logs the transaction against her profile. From a purely formalistic, pre-SRB lens, the mere fact that this could reveal something about her reproductive status drags the whole thing into Article 9 territory. The data controller must suddenly imagine itself handling quasi-clinical health data because someone, somewhere, could infer a biological fact from a menstrual-related purchase.
Carried to its logical extreme, the shop would need explicit consent to print a receipt, plus a DPIA for the loyalty programme, because the metadata might whisper something about ovulation cycles. This is where legal formalism turns into a parody of itself, treating ordinary retail logistics as a high-risk health dossier. Once you reintroduce SRB’s contextual actor-perspective, the fog clears. The supermarket processes a purchase to take payment, manage stock, and show the customer what she bought. None of these purposes creates a meaningful intrusion. Nothing is being inferred, profiled, or actioned. There is no downstream exploitation of vulnerability. So even if a “health inference” is theoretically latent, the supermarket’s handling of the receipt data poses essentially no risk, and the GDPR’s special-category machinery would add no real protection.
Shift the context, though, and the legal picture changes. Imagine the same supermarket deciding to run an “early-fertility optimisation” micro-targeting programme: analysing menstrual-adjacent purchases as signals, and then pushing ads for fertility supplements, prenatal vitamins, or pregnancy-related services. This is no longer routine retail processing. This is a new purpose, an inferential leap, and a profiling use that exploits a sensitive dimension of her private life. The risk increases, autonomy is pressured, and the GDPR’s protective logic now fits the activity. The same underlying data becomes regulated because what the actor is doing with it crosses into a domain where harms become real rather than theoretical.
In other words, it is not the purchase of the tampon that matters; it is the social choreography around its use. Receipts are boring. Profiling vulnerabilities is not. The GDPR should conserve its energy for the latter. Meaningful regulation of risk, rather than reflexive regulation of inputs, actually strengthens data subject rights across the entire ecosystem. When the law stops burning cycles on phantom hazards (routine receipts, inert pseudonymised datasets, clerical artefacts), it can concentrate supervisory and institutional capacity on the domains where asymmetric power, behavioural influence, and opaque inference genuinely threaten autonomy. This is not a dilution of protection but its redistribution toward the places where rights are most at stake. A system that distinguishes low-risk processing from high-risk manipulation equips regulators to intervene earlier, design obligations more intelligently, and preserve the GDPR’s legitimacy by aligning its machinery with lived reality. In that sense, SRB doesn’t shrink data protection; it clears space for it to do the job it was always meant to do.
Polycentricity Enabled: The Governance Consequence of a Disciplined Boundary
The systemic benefit of a disciplined boundary is that it reconfigures the governance landscape into a genuinely polycentric one. Instead of allowing the GDPR to function as an accidental regulatory monopoly, expanding by default into every gap left by definitional ambiguity, a clarified perimeter redistributes authority across the digital regulatory stack. The gravitational distortion caused by an unbounded personal data concept recedes, and with it the pathological tendency for every digital harm to be linguistically reframed as a privacy harm. In its place emerges a regulatory ecosystem in which the GDPR governs the domain it was designed for and adjacent regimes can exert their intended normative force without being pulled into orbit around data protection:
Consumer protection law governs manipulation, dark patterns, and behavioural exploitation.
Competition law governs data-driven dominance, exclusionary conduct, and structural power.
Platform regulation (DSA) governs systemic risks in ranking, recommender systems, and intermediated speech.
Labour law governs algorithmic management, worker surveillance, and informational asymmetry.
Sectoral regimes (financial services, mobility, health, education) govern domain-specific risks.
None of these domains requires the convenient fiction that digital harms must constantly be rerouted through the idiom of privacy. That fiction emerged only because “personal data” had swollen into a universal trigger, forcing heterogeneous problems into the narrow frame of data protection simply because no other regime could get normative traction against a concept so overextended. Restoring proportionality by shrinking the scope of the definition is therefore not a retreat from protection, but a rebalancing of the constitutional order of digital regulation. It reanimates the distributed governance architecture the EU has been deliberately constructing, from the DSA’s systemic-risk framework to the DMA’s competition remedies and the AI Act’s risk-tiered oversight, by ensuring that each instrument can operate on the terrain it was built to govern, rather than being suffocated by the conceptual spillover of an unlimited personal data category.
Systems Efficiency and Institutional Competence
From a systems perspective, SRB-aligned simplification is not a deregulatory gesture. It is a structural enhancement of regulatory capacity, an architectural correction that restores the division of labour across the digital governance ecosystem. Once the GDPR ceases to sprawl across every informational surface and no longer acts as the default venue for all digital harms, the system’s institutional roles sharpen. Each regulator can operate on the layer of the stack it is institutionally engineered to govern, rather than being forced to reinterpret privacy law to cover gaps it was never designed to fill:
DPAs focus on genuine information rights and the risks of identifiability.
Market regulators focus on power asymmetries, data-driven advantage, and exclusionary practices.
Consumer authorities tackle deception, coercion, and behavioural exploitation.
Platform regulators address systemic risks, algorithmic curation, and societal scale effects.
The governance system becomes more legible. The conceptual fog that once blurred the boundaries between regulatory domains begins to lift. Ambiguity contracts rather than metastasises. Compliance becomes more predictable because the trigger for legal intervention is no longer a matter of metaphysics but an assessment grounded in actual identifiability, meaningful risk regulation, and institutional competence. Regulators are no longer pulled into endless, low-risk, high-volume disputes that generate administrative noise without corresponding protective value. Instead, regulatory attention reaggregates around the loci where harms are structurally anchored (market power, manipulation, algorithmic opacity, workplace asymmetries, systemic risk), allowing each regime to deploy its tools with precision rather than improvisation.
Re-Regulation, Not Deregulation
What emerges from this recalibration is not a thinner regulatory state but a smarter one. A system in which the GDPR is no longer treated as the law of everything becomes a system better equipped to enforce only what genuinely requires intervention. This is not deregulation. It is re-regulation: redistributing burdens so that each legal instrument governs what it is structurally, normatively, and institutionally equipped to govern. A well-defined personal data category is not the enemy of data protection. It is the precondition for its survival as a meaningful and enforceable field of law.
The implications of this realignment point directly toward the next step in the series. Part IV will shift from conceptual and governance analysis to system-engineering analysis, examining how SRB and the Digital Omnibus interact not simply as doctrinal adjustments but as mechanisms that reconfigure the GDPR’s operational logic. If Part III explains why a disciplined anchor is necessary for a polycentric system, Part IV will explain how the combination of judicial reasoning and legislative reform rewires the machinery of the GDPR itself, changing the allocation of burdens, the distribution of risk, and the practical incentives that shape behaviour across the data economy.
[1] Purtova, N. (2018). The law of everything. Broad concept of personal data and future of EU data protection law. Law, Innovation and Technology, 10(1), 40-81.
[2] Case C-413/23 P European Data Protection Supervisor v Single Resolution Board EU:C:2025:645 (Judgment of the Court (First Chamber), 4 September 2025).
[3] Black, J. (2008). Constructing and contesting legitimacy and accountability in polycentric regulatory regimes. Regulation & Governance, 2(2), 137-164.
[4] Purtova, N., & Newell, B. C. (2025). Against data fixation: Why ‘data’ fails as a regulatory target for data protection law and what to do about it. Oxford Journal of Legal Studies, gqaf038.
[5] Paras 75-79, EDPS v SRB (n 2).
[6] Digital Omnibus proposal, Article 3 (Amendments to Regulation (EU) 2016/679 (GDPR)): Article 4 is amended as follows: (a) in point 1, the following sentences are added: ‘Information relating to a natural person is not necessarily personal data for every other person or entity, merely because another entity can identify that natural person. Information shall not be personal for a given entity where that entity cannot identify the natural person to whom the information relates, taking into account the means reasonably likely to be used by that entity. Such information does not become personal for that entity merely because a potential subsequent recipient has means reasonably likely to be used to identify the natural person to whom the information relates.’