Beyond AdTech and Anonymisation
What is the Real Aim of Data Protection?
This post continues the ongoing series that examines the conceptual and structural tensions exposed by the Digital Omnibus proposals and by the arguments advanced by Stalla-Bourdillon and Purtova/Newell. The previous instalment set out the need for a deeper interrogation of the assumptions that have shaped the modern understanding of personal data. This second entry turns explicitly to a systems-based perspective and argues that the controversies surrounding identifiability, anonymisation, and the proper scope of the GDPR cannot be resolved at the level of doctrine alone. They must be understood through the broader regulatory ecosystem in which data protection operates and through how seemingly technical definitions recalibrate the distribution of responsibilities across that ecosystem. A systems-centred approach is therefore essential for evaluating the trajectory of reform and for determining what a sustainable, coherent model of EU data governance should look like.
Theoretical Foundations: Systems of Regulation and Data Fixation
Any meaningful assessment of the Digital Omnibus proposals and their interaction with the SRB judgment[i] must begin with the theoretical foundations of regulatory design. A systems perspective refuses to treat legal definitions in isolation. It examines how rules interact with technological architectures, market incentives, and organisational behaviour to determine whether regulatory aims can be achieved efficiently and coherently. This broader frame is familiar to scholars of digital regulation. Lawrence Lessig’s modal analysis of law, norms, architecture, and markets illustrates that regulation succeeds only when the chosen lever aligns with the causal mechanisms of behaviour.[ii] Christopher Hood’s NATO model of nodality, authority, treasure, and organisation similarly highlights that states regulate through multiple instruments whose effectiveness depends on the appropriateness of the target at which they are aimed.[iii] When these frameworks are brought into conversation with contemporary data protection debates, it becomes clear that definitional choices about personal data shape the entire regulatory ecosystem. They determine not only what the law governs but also which institutional actors gain or lose responsibility.
It is in this context that Purtova and Newell’s critique of data fixation resonates.[iv] As discussed in my previous post, their paper argues that anchoring regulatory intervention to the concept of personal data introduces imprecision by conflating two analytically distinct problems. Semantic problems arise where meaning is constructed around identifiable persons and therefore connect naturally to the GDPR’s core normative concerns. Syntactic problems arise where harm emerges from structural features of information processing, such as inference, aggregation, or modelling, irrespective of whether the underlying data can be linked back to identifiable individuals. Their central claim is that the GDPR attempts to govern both domains simultaneously through a single definitional trigger and therefore ends up mismatched to the causal pathways that produce many digital harms. They propose shifting regulatory attention away from data categories and toward the practices and effects of processing.
This critique is compelling, but a systems analysis reveals that the SRB judgment[v] and elements of the Omnibus reform already represent an incremental move in precisely that direction. SRB introduces contextuality into the identifiability analysis by clarifying that pseudonymised data is not personal for entities that lack the means to re-identify, either because re-identification is prohibited by law or because it is impossible in practice.[vi] This relational model directly counters the over-inclusiveness that Purtova warned against, where the definitional expansion of personal data created what she described as a law of everything.[vii] Under SRB, identifiability turns on actual organisational position, access, and capacity, rather than speculative possibilities external to the actor. This reduces the strain on regulators, aligns the legal test with real-world incentives, and prevents the system from becoming unmanageably broad.
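To make the relational logic concrete, the sketch below models the contextual test as a toy decision procedure in Python. This is purely illustrative: the actor attributes and the boolean structure are assumptions made for exposition, not elements of the judgment, which treats identifiability as a contextual legal assessment rather than a formula.

```python
# Illustrative toy model of an SRB-style relational test. The attributes
# below are expository assumptions, not criteria drawn from the judgment.

from dataclasses import dataclass

@dataclass
class Actor:
    """The entity from whose perspective identifiability is assessed."""
    holds_reidentification_key: bool  # e.g. retains the pseudonymisation table
    reidentification_lawful: bool     # re-identification is not prohibited by law
    practicable_without_key: bool     # other means reasonably likely to be used exist

def pseudonymised_data_is_personal_for(actor: Actor) -> bool:
    """Data counts as personal *for this actor* only if the actor has
    lawful and practicable means of re-identification."""
    has_means = actor.holds_reidentification_key or actor.practicable_without_key
    return has_means and actor.reidentification_lawful

# The same dataset receives different classifications for different actors:
controller = Actor(holds_reidentification_key=True,
                   reidentification_lawful=True,
                   practicable_without_key=False)
recipient = Actor(holds_reidentification_key=False,
                  reidentification_lawful=True,
                  practicable_without_key=False)

print(pseudonymised_data_is_personal_for(controller))  # True
print(pseudonymised_data_is_personal_for(recipient))   # False
```

The point of the sketch is simply that the function takes the actor, not just the data, as input: identifiability becomes a property of the actor-data pair rather than of the data in the abstract.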
Stalla-Bourdillon cautions that codifying SRB within the Omnibus risks creating a déjà vu effect by weakening thresholds and neglecting state-of-the-art controls, such as motivated intruder testing.[viii] Yet this critique overstates the rigidity of the legislative architecture. The Omnibus introduces new mechanisms for adaptability, including implementing acts under Article 41a, which allow the Commission to integrate evolving methodologies and empirical evidence into technical standards. From a systems perspective, this dynamic capacity is essential. It enables the regime to absorb innovations in anonymisation, threat modelling, and statistical disclosure control without becoming ossified or doctrinally brittle.
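For readers unfamiliar with what such state-of-the-art controls involve in practice, the following is a minimal sketch of one elementary statistical disclosure control measurement, the k in k-anonymity, which a motivated intruder test might use as a first signal. The dataset, the choice of quasi-identifiers, and the threshold are illustrative assumptions; real assessments rest on far richer threat models.

```python
# Toy k-anonymity check: how unique are records once grouped by the
# attributes an intruder could plausibly match against auxiliary data?

from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Return the size of the smallest equivalence class over the chosen
    quasi-identifiers (the k in k-anonymity). k == 1 means at least one
    record is unique and could be singled out by an intruder holding
    matching auxiliary information."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

records = [
    {"age_band": "30-39", "postcode_prefix": "SW1", "diagnosis": "A"},
    {"age_band": "30-39", "postcode_prefix": "SW1", "diagnosis": "B"},
    {"age_band": "40-49", "postcode_prefix": "EC2", "diagnosis": "A"},  # unique combination
]

print(k_anonymity(records, ["age_band", "postcode_prefix"]))  # 1: this release fails the check
```

Implementing acts of the kind the Omnibus contemplates could, in principle, update which measurements and thresholds count as adequate without reopening the statute, which is precisely the adaptability the paragraph above describes.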
Empirical evidence emerging in the post-SRB landscape already shows that contextual identifiability reduces unnecessary compliance burdens for low-risk processing, such as public consultations and non-sensitive administrative exchanges, while preserving transparency and accountability where they are needed.[ix] The Omnibus extends this rationalisation by excluding incidental special category data encountered in AI training when that data is not being used to make decisions about individuals. This addresses over-inclusiveness in syntactic scenarios such as synthetic data production or bulk ingestion, where subjects are not meaningfully implicated, yet compliance obligations remain disproportionately heavy. The result aligns with Purtova and Newell’s semantic and syntactic divide. It shifts regulatory attention away from blanket assumptions about data types and toward the practices that actually generate risk, thereby improving the coherence and precision of data governance.
A more precise and more disciplined trigger for when the GDPR applies reshapes the system in ways that extend far beyond definitional housekeeping. By grounding identifiability in the real capacities and constraints of specific actors rather than in abstract hypotheticals, the law aligns its scope with the causal structures that actually produce risk. This stabilises expectations for controllers and regulators alike, reduces noise in enforcement priorities, and prevents the GDPR from being stretched across domains it was never designed to govern. At the same time, embedding adaptive mechanisms into the definitional architecture ensures that the system can evolve with emerging technologies and threat models without losing internal coherence. What emerges is a regime that can focus its normative force where personal implications truly arise, while allowing other parts of the regulatory ecosystem to govern the broader range of digital practices that do not hinge on identifiability.
Therefore, the reforms do not abandon the data anchor. They refine it. They operationalise contextuality. They incorporate systems adaptability into definitional architecture. In practical terms, this means the GDPR’s trigger becomes more tightly coupled to the real-world conditions under which identifiability emerges, rather than to hypothetical or universalised assumptions about what data might mean in the abstract. A contextual and adaptive anchor enables the law to distinguish between processing that implicates people and processing that merely passes through information structures without personal consequence. It also ensures that the GDPR can absorb future technical developments through mechanisms that update interpretive standards without reopening the statute each time. In this configuration, the data anchor becomes not a rigid boundary but a calibrated regulatory hinge that opens or closes depending on the risks presented in a given setting. This is what allows the regime to move toward a more functional, practice-oriented model while preserving the conceptual core that makes data protection coherent: the commitment to regulate information that is meaningfully personal wherever it genuinely connects to the individual.
The systems analysis outlined above reframes the role of personal data within a broader regulatory ecosystem and reveals how SRB-aligned contextuality stabilises the definitional trigger on which the GDPR depends. Yet nowhere are the stakes of this recalibration more visible than in the AdTech ecosystem, which has long functioned as the gravitational centre of debates about identifiability, pseudonymisation, and the limits of anonymisation. AdTech is frequently treated as the acid test for any proposed reform, and it is often invoked as the reason the definition of personal data must remain as expansive as possible. The following section turns directly to this terrain, examining whether AdTech truly requires an unbounded personal data concept or whether a systems-grounded SRB approach can provide stronger, more targeted protection without perpetuating the dysfunctions of the law of everything.
Beyond AdTech and Anonymisation: What is the Real Aim of Data Protection?
A systems-centred reading of EU data protection reveals something that tends to disappear in doctrinal debates about identifiability and anonymisation. The GDPR was never designed to be a general law of digital conduct. It was intended as a structural safeguard for a specific category of activity, namely the processing of information that relates to an identified or identifiable natural person.[x] That is the terrain in which its principles, its rights, and its institutional machinery make sense. What has happened in the intervening years is that interpretive expansion, driven by a combination of CJEU jurisprudence[xi] and academic insistence that anything conceivably linked to a person is personal data[xii], has pushed the GDPR far beyond the functional perimeter for which it was designed.
Stalla-Bourdillon, Purtova, and Newell all recognise that the landscape is dysfunctional, but they narrate the symptoms rather than the underlying systemic failure. Stalla-Bourdillon worries that a narrow definition will limit the law’s reach in contexts such as AdTech, where identification is technically obscured, but profiling harm is real. Purtova and Newell diagnose a different pathology, namely that the GDPR has swollen into a regime so conceptually inflated that it cannot meaningfully target digital harms. Both perspectives assume that the broadness of the personal data definition is a virtue. Both assume that the Commission’s simplification risks undoing the GDPR’s protective fabric. Yet both positions falter when examined through a systems lens.
The central problem is not that the definition of personal data has become capacious. It is that the definition became conceptually unbounded. Recital 26’s open-textured reference to “all the means reasonably likely to be used” evolved into an interpretive invitation to imagine increasingly remote identification scenarios. The outcome was a drift toward a quasi-metaphysical conception of personal data, in which the analytical question became not “is this data meaningfully about a person in this context” but “could someone somewhere with unknown auxiliary information and unspecified motivation treat this as linkable to a person”. That approach may satisfy academic exercises in hypotheticals, but it does not produce a stable or workable regulatory system. It produces precisely what Purtova calls the “law of everything”.[xiii] It produces a regime that, in theory, must regulate every byte of information that traverses a modern computing environment. No legal system can survive that burden without collapsing into incoherence or shallow proceduralism.
This is where the SRB judgment becomes transformative.[xiv] The Court did not dismantle the GDPR’s protective architecture. It reanchored it. SRB clarifies that identifiability must be understood in relation to the specific actor and in light of the means that are reasonably likely to be used by that actor in the concrete circumstances of processing.[xv] This is not a deregulatory gesture. It is a return to functionalism. It restores the relational character of personal data. It ensures that the concept tracks actual capacity, intention, and context, rather than theoretical possibility.
Appropriately understood, an SRB-aligned definition does not weaken protection in AdTech or similar environments. That is because some of the relevant actors in those ecosystems, the ones capable of targeting individuals and of causing them harm, do in fact possess the means, incentives, and business models that render re-identification both technically feasible and commercially routine. The SRB framework requires that identifiability be assessed honestly rather than abstractly. It prevents actors from disclaiming identifiability where their organisational structure, data assets, or technical tools plainly contradict such claims. Far from opening loopholes, SRB closes the gap between legal doctrine and empirical reality.
Where SRB makes a decisive systemic contribution is in removing the conceptual fog that previously forced the GDPR to behave as a universal regulatory solvent. Once personal data is tied to actual identifiability in context, the GDPR returns to its proper constitutional domain. It ceases to absorb every conceivable informational activity. It ceases to stand in as the de facto law governing algorithmic accountability[xvi], competition harms[xvii], consumer deception[xviii], platform power[xix], workplace surveillance[xx], and every other digitally mediated phenomenon that scholars previously attempted to squeeze through the frame of “personal data”.
What this reveals, once you step back from doctrinal skirmishes, is that the real cost of the old, inflated definition was not merely analytical confusion but a kind of regulatory imperialism. By treating almost every digitally mediated harm as a problem of personal data, the system quietly recoded structural questions about algorithmic accountability into GDPR disputes about lawful bases and DPIAs; it translated competition harms into arguments about data portability and consent; it reframed consumer deception as transparency and information duties; it squeezed platform power into discussions of profiling and automated decision making; it tried to domesticate workplace surveillance through notice and proportionality tests designed for quite different settings. In each case, the underlying harm migrated into the privacy vocabulary, which is elegant but poorly matched to issues such as market structure, design manipulation, systemic risk, or labour exploitation. The result was not more protection but thinner, more distorted protection, because the wrong normative toolkit was being applied to the wrong layer of the stack. SRB’s contextual turn breaks that spell: by insisting that the GDPR only bites where identifiability is real, it forces other regimes to step up on their own terms instead of hiding behind the language of data protection.
A simplified and disciplined definition, therefore, strengthens the system. It does not shrink the protective perimeter. It sharpens it. It frees the GDPR to perform its function with greater intensity and with greater conceptual integrity. It shifts the regulatory burden for non-identification-based harms away from a single overextended statute and toward the broader digital governance ecosystem where those harms properly belong. AdTech remains regulated where it should be regulated. But the GDPR is no longer forced to act as an all-purpose remedy simply because the concept of personal data was allowed to float without an anchor.
The fundamental aim of data protection, when viewed through a systems lens, has never been to police every act of information processing. Its purpose is far more precise. It is to intervene only where data meaningfully attaches to an identifiable person and where the handling of that data creates risks that warrant legal constraint. Everything beyond that perimeter falls under other regulatory logics and institutional competencies. The SRB-aligned conception of identifiability clarifies this boundary. It rescues the GDPR from the gravitational pull of the law of everything and repositions it as a targeted, high-intensity regime for genuinely personal processing. In doing so, it recovers the coherence, proportionality, and normative force that the GDPR was intended to possess, and which it can only exercise once its scope is disciplined rather than universalised.
[i] Case C-413/23 P European Data Protection Supervisor v Single Resolution Board EU:C:2025:645 (Judgment of the Court (First Chamber), 4 September 2025).
[ii] Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999.
[iii] Hood, Christopher. The Tools of Government. London: Macmillan, 1983.
[iv] Nadezhda Purtova & Bryce Newell (2024), “Against Data Fixation: Why ‘Data’ Fails as a Regulatory Target for Data Protection Law and What to Do About It,” discussion draft (27 June 2024).
[v] Case C-413/23 P, EDPS v SRB: https://curia.europa.eu/juris/documents.jsf?num=C-413/23%20P
[vi] Ibid.
[vii] Nadezhda Purtova, “The Law of Everything: Broad Concept of Personal Data and Future of EU Data Protection Law” (2018) 10 Law, Innovation and Technology 40: https://www.tandfonline.com/doi/full/10.1080/17579961.2018.1452176
[viii] Sophie Stalla-Bourdillon (2025), “Déjà vu in data protection law: the risks of rewriting what counts as personal data,” Privacy & Data Protection 26(2), 9–13.
[ix] European Law Institute, Towards a Targeted Revision of EU Data Protection Law – ELI Response to the European Commission’s Call for Evidence “Simpler, fairer and more effective – strengthening the foundations of the digital transition” (14 October 2025); Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein, 42. Tätigkeitsbericht 2024 (23 April 2024) 48–49 (anticipating revised anonymisation guidance and referencing EDPS v SRB as a key case for practice) at https://www.datenschutzzentrum.de/uploads/tb/uld-42-taetigkeitsbericht-2024.pdf
[x] Article 4(1), GDPR.
[xi] CJEU Case C-582/14 (Breyer) (criteria for identifiability); Nowak v Data Protection Commissioner (C-434/16);
[xii] Purtova & Leenes (2023), “Code as Personal Data: Implications for Data Protection Law and Regulation of Algorithms,” International Data Privacy Law 13(4): 245–261.
[xiii] Purtova, The Law of Everything (n vii).
[xiv] CURIA, EDPS v SRB Press Release (September 2025): https://curia.europa.eu/jcms/upload/docs/application/pdf/2025-09/cp250107en.pdf
[xv] EDPS v SRB (n i), paras 75–79.
[xvi] Busuioc M et al, ‘Accountable Artificial Intelligence: Holding Algorithms to Public Account’ (2020) Public Administration Review 81(3) 601–615.
[xvii] Klaudia Majcher, “Interactions between EU Competition Law and Data Protection in Digital Markets: Striving for Coherence” in Research Handbook on Competition and Technology (2024).
[xviii] Calo R, ‘Digital Market Manipulation’ (2014) 82 George Washington Law Review 995; Yeung K, ‘“Hypernudge”: Big Data as a Mode of Regulation by Design’ (2017) 20 Information, Communication & Society 118.
[xix] Petit N, Big Tech and the Digital Economy: The Moligopoly Scenario (OUP 2020); Geradin D, ‘Platforms and the Abuse of Data Dominance’ (2021) Journal of Competition Law & Practice.
[xx] De Stefano V, ‘Negotiating the Algorithm: Automation, Artificial Intelligence and Labour Protection’ (2019) 40 Comparative Labor Law & Policy Journal 471.
