Calibrating Scope: SRB, the Digital Omnibus, and the Engineering of GDPR’s Next Phase
From the “Law of Everything” to a Governable System of Rights and Risks
Parts I, II, and III of this series mapped the conceptual terrain. Part I traced the clash between expansive and restrained notions of personal data; Part II argued that a disciplined, SRB-aligned conception of identifiability is a precondition for making sense of AdTech and anonymisation; and Part III examined how a refined, SRB-aligned definition of personal data repositions the GDPR within the broader digital governance ecosystem, and how that shift reframes the long-standing tension between data-anchored regulation and the emerging practice-based models that target harms irrespective of data type. Part IV shifts register. Rather than asking what personal data ought to mean in the abstract, it examines how the combination of the SRB judgment and the Digital Omnibus proposal re-engineers the GDPR as a regulatory system. The question is not only whether simplification is normatively attractive, but whether this particular configuration of case law and legislative reform can make the regime more tractable, more predictable, and more capable of sharing the regulatory load with other instruments in the EU’s digital law ecosystem.
SRB as Systems Engineering, Not Just Doctrinal Pivot
The Court’s judgment in EDPS v Single Resolution Board[1] has already been analysed for its implications for pseudonymised information. For present purposes, what matters is how SRB reconfigures the allocation of burdens under the GDPR among different actors in the same processing chain. SRB formalises a relational test. Information does not acquire “personal” status in the abstract; it becomes personal in the hands of a particular actor who has, in practice and in law, the means reasonably likely to identify the person. That might look like a narrow doctrinal point, but as a piece of systems engineering, it performs three critical moves.
First, it introduces a form of modularity into the regime. A dataset can be personal for the originating institution that holds the key and non-personal for a downstream recipient that does not. That modularity matters because it allows different layers of the system to carry different compliance obligations without pretending they are in the same risk position. It is the difference between treating everyone as a potential re-identifier and acknowledging that capability is unevenly distributed.
Second, SRB changes the informational assumptions built into enforcement. Before SRB, regulators and litigants often argued about what hypothetical adversaries might do with data. The Court instead focuses on what specific actors can realistically do, given their context, resources, and legal constraints. That shift from hypothetical to situated analysis is classic systems thinking; it ties the trigger for regulation to the actual causal structure of risk rather than to thought experiments.
Third, SRB implicitly invites a redesign of data governance inside organisations. Controllers that wish to take advantage of the relational conception of personal data must be able to demonstrate that their internal separation of roles, technical measures, and contractual arrangements genuinely prevent re-identification. Contextual identifiability is not a free pass. It is an engineering challenge: design your systems so that the boundary between identifiable and non-identifiable is real, enforceable, and auditable.
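The idea of an enforceable, auditable boundary can be made concrete with a minimal sketch. The snippet below is purely illustrative, and every name in it (the key, the record identifiers, the data structures) is invented for the example: an originating controller derives keyed pseudonyms and shares only the pseudonymised records, while the key and the reverse-lookup table never leave its hands.

```python
import hmac
import hashlib

# Held only by the originating controller; never shared downstream.
SEPARATION_KEY = b"example-key-held-by-originator"

def pseudonymise(record_id: str) -> str:
    """Derive a stable pseudonym; without the key (or the originator's
    lookup table) the original identifier cannot be recovered."""
    return hmac.new(SEPARATION_KEY, record_id.encode(), hashlib.sha256).hexdigest()[:16]

# Originator's side: source records, plus a reverse-lookup table it retains.
source = {"NL-912345": {"outcome": "approved"}}
lookup = {}   # pseudonym -> original identifier, kept by the originator only
shared = {}   # the dataset actually passed to the downstream recipient
for rid, payload in source.items():
    p = pseudonymise(rid)
    lookup[p] = rid
    shared[p] = payload

print(shared)  # pseudonyms and payloads only; no original identifiers
```

In SRB terms, `shared` is arguably non-personal in the hands of a recipient that holds neither the key nor the lookup table and is legally barred from obtaining them, while the very same records remain personal data for the originator. The compliance work lies in making that separation real: key management, access controls, and contracts that can be audited.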
Taken together, these moves point away from the image of the GDPR as a monolithic wall and toward a layered system in which duties follow from an actor’s position in the architecture. That is the landscape into which the Digital Omnibus proposal now steps.
The Digital Omnibus as Blueprint for Scoped Simplification
Where SRB operates as judicial calibration, the Digital Omnibus functions as a legislative blueprint.[2] It writes entity-specific identifiability into Article 4(1) and extends the logic of contextuality into adjacent domains, notably AI training and low-risk processing. At a design level, two aspects of the proposal are particularly significant.
The first is the attempt to tie the definition of personal data to tools and capacities. Information is not personal for a given entity if that entity lacks means reasonably likely to be used to identify the person. The controversial move here is not the notion of relativity itself, which SRB already endorsed, but the decision to codify it in a general definition rather than leave it to case law. In governance terms, codification is a commitment device: it tells controllers, DPAs, and courts that contextuality is not a marginal doctrine but a structural feature of the regime.
The second is the way the Omnibus uses that refined trigger to redesign specific compliance burdens. The headline example is AI training. By clarifying that incidental special category data in large training sets need not trigger the full panoply of GDPR obligations when those data are not used to make decisions about individuals, the proposal seeks to disentangle syntactic, large-scale modelling from semantic, person-level decision-making. From a systems perspective, this is less about favouring innovation for its own sake than about positioning the heavy artillery of data protection where it can actually hit its target.
These moves are not risk-free, and critics are right to be suspicious of legislative rhetoric about simplification. But it is essential to see that the Omnibus is not simply “shrinking” the GDPR. It is repointing its scope, using the SRB logic as a hinge. The open question is whether the surrounding architecture is strong enough to carry the functions that fall outside that scope.
Calibration Across Regimes: What the Comparisons Really Tell Us
Any discussion of definitional reform now triggers the comparative reflex: how does this look next to the UK GDPR’s “motivated intruder” test[3], or HIPAA’s twin tracks of Safe Harbor and expert determination?[4] The risk is that comparative law becomes a beauty contest about who has the strictest anonymisation standard. A systems view asks a different question: how do these standards shape the system’s behaviour as a whole?
The UK’s motivated intruder test explicitly assumes an adversary with determination, resources, and a realistic chance of success. It is a vivid heuristic that has helped practitioners think more concretely about the risk of re-identification.[5] HIPAA, by contrast, wraps de-identification in either a checklist of removed identifiers or a requirement for expert statistical assessment.[6] Both regimes embed an expectation that someone will perform and document a structured analysis.
The emerging EU model will look different if the Omnibus passes in anything like its current form. Entity-specificity, contextual assessments, and implementing acts that can set sectoral or purpose-specific benchmarks together form a more distributed architecture. Instead of a single test applied everywhere, we get a configurable set of criteria that can be adjusted over time and across domains.
The trade-off is clear. The checklist and motivated intruder models offer legibility: everyone knows the game. The SRB–Omnibus model offers plasticity: the game can be altered as technology and practice evolve. Plasticity is powerful but dangerous. It requires institutions capable of setting, updating, and enforcing standards at a pace that does not lag too far behind the technologies they govern.
What matters, in other words, is not only the letter of the definition but the governance capacity that surrounds it. The EU’s advantage is its dense web of expert bodies, from the EDPB to sectoral regulators and standardisation organisations. Its challenge is coordination.
Systems-Based Efficiency: Reallocating Scarce Attention
If we take seriously the idea that regulation is a system, then the most precious resource in that system is not legal text but attention. DPAs have finite bandwidth. So do controllers’ legal and engineering teams. The promise of SRB plus Omnibus is that, by tightening the trigger for when the GDPR applies, the system can reallocate attention away from low-risk, low-impact processing and toward the practices that actually matter.
The early evidence, although anecdotal, points in that direction. Public bodies report fewer full-scale DPIAs for routine pseudonymised consultations and surveys. Ethics committees in universities and research institutes have begun to distinguish sharply between projects in which identifiability is genuinely possible and those in which technical and organisational measures render it remote. Legal advice circulated in professional networks now treats relational identifiability as a serious consideration rather than as a speculative thought experiment.
This is not to romanticise practice. There will be controllers who use contextuality as camouflage, and regulators who are tempted to accept that camouflage at face value. But from a systems standpoint, the key question is whether the overall pattern of attention shifts. If DPAs can spend less time on high-volume, low-stakes complaints and more time on systemic infringements, the regime becomes more than the sum of its parts.
A refined scope also has knock-on effects for other areas of law. When less is pulled into the gravitational field of personal data, competition authorities, consumer protection agencies, labour inspectorates, and sectoral regulators find more room to act without worrying that they are trespassing into GDPR territory. That is the deeper efficiency gain: not that any single case becomes easier, but that the architecture as a whole can specialise.
Balancing Innovation and Rights Without Manichaean Narratives
Discussions of simplification in EU data law tend to polarise into suspicious binaries: either one is “pro-innovation” and therefore cavalier about rights, or “pro-rights” and consequently hostile to any easing of burdens. A systems-of-regulation perspective undercuts that framing.
On the innovation side, the SRB–Omnibus package reduces friction for specific processing categories. The most obvious beneficiaries are AI developers and research institutions that can credibly demonstrate they operate with pseudonymised or otherwise non-identifiable data in environments where re-identification is neither technically feasible nor legally permitted. For those actors, the shift from an absolute to a contextual definition reduces the need for elaborate compliance choreography that was never well calibrated to their actual risk profile.
On the rights side, the same package can be read as redistributing protection rather than diluting it. The ability to treat some processing as outside the scope of GDPR has meaning only if other instruments are equipped to address the harms that fall through that gap. That is where the parallel evolution of the AI Act, the Digital Services Act, and sectoral frameworks matters. If these instruments are enforced with seriousness, the net effect can be greater protection for individuals and groups, even if fewer operations are formally classified as personal data processing.
The hard work, then, is not to declare oneself on one side or the other of an innovation-versus-rights divide, but to ensure that the overall regulatory system does not leave pockets of risk unaddressed. That is an institutional design problem, not a definitional one.
Governance Trade-offs and the Question of Trust
A relational definition of personal data necessarily imports relational forms of trust. If a dataset is only non-personal because a recipient is constrained from re-identifying, the system’s efficacy depends on those constraints being authentic and enforceable. That raises familiar concerns about contractual governance, technical compliance, and cross-border enforcement. From a systems angle, there are at least three trade-offs to acknowledge:
The first concerns verification. Regulators will need tools and authority to audit claims about identifiability. That may mean more frequent use of on-site inspections, technical testing, and independent certification schemes. It also suggests a larger role for ex post enforcement: punishing breaches of contextual constraints rather than relying solely on ex ante categorial rules.
The second concerns fragmentation. As contextuality hardens into practice, there is a risk that different sectors develop divergent standards for when information ceases to be personal. Health data spaces, financial services, mobility ecosystems, and platform environments may each craft their own understanding. Some diversity is healthy; it allows regulation to track sector-specific risk. Too much diversity, however, creates uncertainty and incentives for forum shopping. The Omnibus’s reliance on implementing acts and EDPB input is an attempt to keep that diversity within a coordinated framework, but the test will be institutional agility.
The third concerns legitimacy. A recalibrated GDPR will only command public trust if people can see that simplification is not a euphemism for abandoning protection. That is a narrative challenge as much as a legal one. Explaining why specific flows are treated as low risk and which other instruments guard against harm will become a central task for regulators and policymakers. Silence will be read as weakness.
Toward Part V: Simplification as Precondition for Enhanced Regulation
This fourth instalment has treated SRB and the Digital Omnibus as components of a systems update. Together, they refine the scope of the GDPR, alter how identifiability is understood in practice, and redistribute regulatory attention across the broader digital law ecosystem. They do not resolve, once and for all, the tension between expansive and restrained visions of personal data that Part I mapped, nor do they settle the sector-specific controversies explored in Part II. What they do is create the preconditions for a different style of regulation, one that is less fixated on data as such and more comfortable with practice-based, output-oriented control. The next post in the series picks up that thread directly. Part V, “The Necessary Retreat: Why the Digital Omnibus Saves European Regulation from Itself”, develops the argument that this scoped simplification is not merely tolerable but essential. It makes the case that only by stepping back from the law of everything can European data protection and digital regulation mature into a modular, resilient, and genuinely effective system.
[1] Case C-413/23 P European Data Protection Supervisor v Single Resolution Board EU:C:2025:645 (Judgment of the Court (First Chamber), 4 September 2025).
[2] European Commission, ‘Digital Omnibus Regulation Proposal’, https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal
[3] ICO, “How do we ensure anonymisation is effective?”, https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/data-sharing/anonymisation/how-do-we-ensure-anonymisation-is-effective/#motivatedintruder
[4] Modes of De-identification, https://pmc.ncbi.nlm.nih.gov/articles/PMC5977668/
[5] Office for National Statistics, ‘Guidance on intruder testing’, https://www.ons.gov.uk/methodology/methodologytopicsandstatisticalconcepts/disclosurecontrol/guidanceonintrudertesting; see also the guidance: “the ‘motivated intruder’ is reasonably competent, has access to resources such as the internet, libraries, and all public documents, and would employ investigative techniques such as making enquiries of people who may have additional knowledge of the identity of the data subject or advertising for anyone with information to come forward”.
[6] U.S. Department of Health & Human Services, ‘Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule’ (HHS, 3 Feb 2025) https://www.hhs.gov/hipaa/for-professionals/special-topics/de-identification/index.html accessed 12 December 2025.


