Rethinking ‘Personal Data’
A Tale of Two Visions in EU Data Protection Law
This post is the first in a series that develops a countervailing claim: that, from a systems-of-regulation perspective, simplification may actually strengthen the data protection framework rather than erode it. If the excesses of the “law of everything” are pared back by a more disciplined, SRB-aligned conception of personal data, the GDPR could emerge not weakened but more coherent, more manageable, and ultimately more enforceable. The posts to come will build on this foundation, exploring how targeted recalibration can enhance the system’s overall integrity.
Introduction
Europe’s data protection regime stands at a crossroads. On 19 November 2025, the European Commission unveiled a sweeping “Digital Omnibus” package comprising two proposals: one to amend cornerstone data protection laws (including the GDPR, the ePrivacy Directive, and others), and another to revise the EU’s nascent AI Act.[i] Marketed as a simplification drive to cut red tape[ii] and to address Draghi’s recommendation that the EU boost innovation[iii], these proposals have instead ignited intense concern among privacy advocates and civil society groups. Critics warn that the changes would flout EU case law and effectively gut the GDPR’s core protections. The Austrian privacy NGO noyb magically produced a 71-page analysis comparing the Commission’s draft with existing law, flagging numerous departures from the GDPR’s logic and from CJEU jurisprudence.[iv] Alongside Schrems’ rather provocative post[v] suggesting that if you disagree with him, you must have gone to a rubbish law school, noyb’s report claims that the amendments could undermine fundamental principles, thereby weakening individuals’ rights and straining enforcement systems. Schrems even blasted the Omnibus as “the biggest attack on Europeans’ digital rights in years”, likening the myriad tweaks to a “death by a thousand cuts” to privacy.[vi][vii] To gauge professional sentiment, noyb also launched a survey to collect early feedback from data protection officers, privacy lawyers, and other experts on the implications of these reforms.[viii]
In this moment of regulatory upheaval, engaging with scholarly critiques becomes especially urgent. Sophie Stalla-Bourdillon’s recent article, tellingly titled “Déjà vu in data protection law”[ix], directly addresses the Omnibus proposals and cautions that the amendments risk reviving old pitfalls. By ignoring the state of the art in anonymisation and statistical disclosure control, she warns, the reform could invite dangerously permissive interpretations of what counts as “personal data,” thereby eroding hard-won safeguards. Indeed, one of the Commission’s most striking moves is to narrow the very definition of personal data. According to Stalla-Bourdillon, under the proposal, whether data are “personal” would depend on what a specific entity claims it can reasonably do with the information.[x] In her view, this marks a sharp departure from the current standard (which looks to whether anyone could identify a person) and cherry-picks a recent court ruling while ignoring many others.[xi] Such an entity-specific test opens a backdoor for companies to deem data “non-personal” by technicality, a shift that, as Stalla-Bourdillon suggests, feels like déjà vu to privacy experts and could significantly weaken protection in practice. Meanwhile, Nadezhda Purtova and Bryce Newell’s “Against Data Fixation”[xii] challenges an even deeper assumption: they argue that an obsession with “data” as the object of regulation leads to imprecision and ineffectiveness in data protection law.[xiii] The concept of personal data drives the GDPR’s entire framework, and Purtova & Newell contend that this focus stands in the way of addressing digital harms through other legal avenues. Their critique urges regulators to rethink the very target of regulation, a provocative stance at a time when the EU is attempting to tweak definitions and exemptions in the hope of easing compliance and fostering AI innovation.
Why critical engagement now?
The significance of this regulatory moment cannot be overstated. Stalla-Bourdillon’s and Purtova/Newell’s contributions offer vital lenses for scrutinising these developments. Stalla-Bourdillon’s argument underscores the danger of weakening conceptual and technical rigour in the law, reminding us that how we define personal data and anonymity is pivotal to preserving privacy. Purtova and Newell invite us to ask whether the Commission’s fixes are addressing the right problem at all, or simply rearranging the deck chairs on the proverbial sinking ship.
In short, a critical engagement with both positions is necessary right now to frame the debate: Will the Omnibus reforms fortify data protection principles in line with technological realities, or do they signal a step back, i.e., a retreat from the very principles that made the GDPR a global benchmark? My analysis of Stalla-Bourdillon’s and Purtova/Newell’s arguments aims to answer that question in light of the current regulatory crossroads. This piece marks the beginning of a broader series unpacking the deeper conceptual, legal, and regulatory tensions at stake in these reforms. Each post will tackle a different facet of the arguments raised by Stalla-Bourdillon, Purtova, and Newell, tracing their implications for the future architecture of EU data governance. However long the series ultimately becomes, this first instalment sets the foundation for a sustained, critical engagement with the shifting terrain of data protection law.
Competing Approaches to Defining ‘Personal Data’
What is personal data, and who decides? At the heart of both articles lies this question, but Stalla-Bourdillon and Purtova/Newell approach it from opposite directions. Stalla-Bourdillon’s piece is prompted by a very concrete development: the Omnibus proposal to rework the GDPR under the banner of “simplification and competitiveness”.[xiv] Among the changes, the Commission seeks to redefine the GDPR’s material scope by clarifying when information is (or is not) “personal data” for a given entity. In essence, the proposal would codify a relative concept of personal data:
“Information ... is not personal for a given entity where that entity cannot identify the natural person to whom the information relates, taking into account the means reasonably likely to be used by that entity. Such information does not become personal for that entity merely because a potential subsequent recipient has means reasonably likely to identify the person”.
This is presented as aligned with the EU courts’ case law (notably the General Court’s SRB v EDPS judgment from 2023[xv]) and aimed at reassuring data controllers that if they cannot identify individuals in a dataset, they need not treat it as personal data, even if someone else (now or later) could identify those individuals.
Stalla-Bourdillon views this development with deep scepticism. In her view, the Commission, driven by a “pro-innovation” agenda, is effectively trying to rewrite the GDPR’s most fundamental concept in a way that “strikes at the very heart of data protection law”. She notes a sense of déjà vu: the UK’s post-Brexit attempts to trim the GDPR’s definitions (initially floated in the UK Data Protection and Digital Information Bill) foreshadowed this move. Those UK plans were ultimately dropped from the final Data (Use and Access) Act 2025, but now the EU Commission appears to be treading a similar path. By seeking to impose its own interpretation of what counts as personal data (under the guise of codifying CJEU rulings), the Commission risks undermining the coherence of the entire framework, according to Stalla-Bourdillon.
Crucially, she argues the new definition could be read in “several ways, some considerably more radical than others”. The most concerning interpretation would significantly narrow the GDPR’s scope, making it easier for companies to claim data is “anonymous” or “non-personal” in their hands and thus escape GDPR obligations. For example, an online advertising vendor might argue that because it only has pseudonymous user IDs and no direct names or emails, it “cannot identify” the individuals; therefore, the behavioural data it holds is not personal data at all, even though another entity (say, an ad exchange or identity broker) could re-identify those individuals by linking identifiers. Stalla-Bourdillon warns that such an approach, if endorsed, would create dangerous inconsistencies: “the legal test would then fall below the threshold established by some other privacy laws”, and it “would become highly artificial to justify any form of restriction on international data transfers”. In other words, if Europe lowers its bar for what counts as personal data, it not only undercuts its own high standards but may unravel mechanisms (like cross-border data transfer rules) predicated on robust data protection.
By contrast, Purtova and Newell come from a more theoretical angle, questioning whether using “data” (and specifically the personal/non-personal data dichotomy) as the trigger for regulation is wise at all. They observe that over the past decade, Europe has seen an “avalanche of new ‘data law’” – the GDPR, the Data Governance Act, the Data Act, the AI Act, the Digital Services Act, etc. These are all premised on controlling data in various ways. The GDPR, in particular, is a “broad range” omnibus regime that attempts to tackle myriad digital issues (from privacy to security to fairness) through rules “triggered by the concept of personal data”. This approach, they argue, has led to regulatory imprecision and ineffectiveness. In their words, “framing digital problems as data problems” is a category error: it diverts attention from the actual causes of harm and “stands in the way of modernising other legal domains, such as consumer, administrative, or labour law” for the digital age. By forcing all sorts of issues into the personal data mould, we risk both over-regulating trivial or non-risky activities and under-regulating serious harms that happen to fall outside the personal data net. (As a point of note – this author has long argued that the GDPR should have had a structure akin to the UCPD, with a banned-practices list modelled on Annex I of that law, so I’m kind of happy to see Purtova et al. come around to my thinking 😉)
But again, I digress.
Purtova is already well known for articulating the “law of everything” critique of personal data. Back in 2018, she warned that the concept of personal data had become so broad that “everything will be or will contain personal data”, turning the GDPR into an almost universal regulation of the digital world. The CJEU’s expansive interpretations (e.g., treating even dynamic IP addresses, cookie strings, or licence plate numbers as personal data), coupled with Recital 26’s mandate to consider “all means reasonably likely” to identify, mean that “there is no information that by definition cannot be or become ‘personal data’” under EU law. Indeed, Purtova’s scholarship has demonstrated how even weather measurements[xvi] or computer code might meet the definition in context. For example, sensor data on local temperature could be personal data in a smart city if it is linked to identifiable household energy usage, and software code can be personal data if it “relates to” individuals (e.g. code that encapsulates someone’s behaviour or is used to make decisions about a person). Her 2023 work (with Ronald Leenes) argued that “all software is information and so, in principle, all software may become personal data” if it can be linked to an individual by content, purpose or effect.[xvii]
The overinclusive reach of “personal data” troubles Purtova and Newell not merely as a theoretical purity issue, but because it dilutes the effectiveness of data protection law. If literally everything is personal data, the GDPR’s requirements must either be applied to every digital operation (which is infeasible and would make data protection an “uneconomic exercise”), or organisations will start treating the rules as pesky formalities to be bypassed. In practice, as they note, many controllers already take a narrow, often incorrect, view of what personal data encompasses, either out of ignorance or as a strategy. They highlight phenomena like “transient data processing,” “synthetic data,” and “confidentiality computing” as techniques used to evade GDPR coverage. For instance, companies might claim that if they only process data in encrypted form or only for a split second without storing it, it’s not “personal data” subject to GDPR, a grey area some exploit. Similarly, labelling datasets as “anonymous” because direct identifiers are removed (while leaving unique profiles intact) is a common tactic to skirt the scope of the law. Purtova and Newell point out that the concept of personal data itself is riddled with “uncertainties” (terms such as “information,” “relating to,” and “identification” still lack definitive definitions from courts), which further undermines enforcement. If it’s unclear at the margins what data is in or out of scope, controllers can rationalise non-compliance, and regulators struggle to draw bright lines.
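To see why “remove the names and call it anonymous” so often fails, consider a deliberately tiny, made-up illustration (mine, not Purtova and Newell’s): drop the direct identifier from a handful of records and then count how many of them remain unique on perfectly ordinary quasi-identifiers. All field names and values below are invented for the purpose of the sketch.

```python
from collections import Counter

# Toy records: "name" is a direct identifier; postcode, birth year and gender
# are the kind of quasi-identifiers typically left behind after a naive
# "anonymisation by deleting names".
records = [
    {"name": "A. Jansen",   "postcode": "1011", "birth_year": 1984, "gender": "F"},
    {"name": "B. de Vries", "postcode": "1011", "birth_year": 1984, "gender": "F"},
    {"name": "C. Bakker",   "postcode": "3512", "birth_year": 1971, "gender": "M"},
    {"name": "D. Visser",   "postcode": "9712", "birth_year": 1993, "gender": "F"},
]

# Step 1: "anonymise" by dropping the direct identifier only.
released = [{k: v for k, v in r.items() if k != "name"} for r in records]

# Step 2: count how often each quasi-identifier combination occurs.
profiles = Counter(tuple(sorted(r.items())) for r in released)

# Any record whose combination occurs exactly once is still singled out.
unique_rows = sum(1 for count in profiles.values() if count == 1)
print(f"{unique_rows} of {len(released)} released records remain unique "
      "on (postcode, birth_year, gender) alone.")
```

On toy data like this, half the rows stay unique; on real datasets the effect is far stronger, which is why re-identification research has long shown that a large share of a population can be picked out from little more than date of birth, gender and postcode. That is precisely the gap between “no direct identifiers” and the Recital 26 test that some controllers exploit when labelling such data anonymous.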
In sum, Stalla-Bourdillon champions the classic broad scope of personal data as essential to the GDPR’s protective mission, cautioning against a regulatory rollback that could create gaps. Purtova & Newell, on the other hand, critique that very breadth as a sign of misaligned regulatory design, arguing that GDPR’s identity as a catch-all data law is both overinclusive and underinclusive – too broad in theory, yet too easily dodged or not covering collective and non-identifiable harms in practice. Next, we delve deeper into each perspective: the risks of narrowing “personal data” too far versus the dangers of expanding it to cover nearly everything.
The Risks of Narrowing vs Expanding the Notion of Personal Data
Striking the right balance in defining personal data is a classic Goldilocks problem. Define “personal data” too narrowly, and harmful data practices may fall outside the law’s scope entirely. Define it too broadly, and the law either overburdens benign data uses or becomes so stretched that it loses focus. Both articles grapple with these trade-offs, albeit from different ends.
Normative and Practical Risks of Narrowing (Too Much Exclusion): Stalla-Bourdillon’s critique of the Commission’s Omnibus proposal highlights the dangers of tilting the balance toward excessive exclusion. If organisations can easily deem data “not personal to us” because they lack direct identifiers or claim limited means, they can escape GDPR obligations by design. This raises several concerns:
Loopholes for Pseudonymisation: The GDPR currently treats pseudonymised data as still within scope (albeit subject to somewhat relaxed provisions) because pseudonyms can often be re-linked to identities. The Commission’s approach, however, suggests pseudonymised data might “no longer be considered personal data for certain entities” under certain circumstances.[xviii] Without very stringent conditions, this could become a massive loophole. Stalla-Bourdillon notes the proposal’s silence on safeguards: it “makes no explicit reference to purpose” or obligations on third-party recipients. In contrast, other regimes impose strict criteria for treating data as deidentified; for example, California’s CPRA defines “deidentified” information as that which cannot reasonably be linked to a consumer and requires the business to publicly commit not to reidentify it and to bind any recipients to the same contractually.[xix] The EU proposal, as described, would allow an entity to declare data non-personal merely by looking at its own perspective, regardless of what others could do. Stalla-Bourdillon argues this is “hard to reconcile with a threat modelling approach” that considers motivated adversaries and modern re-identification techniques. In effect, it could reward wilful blindness: companies might avoid learning of any methods or auxiliary data that could identify individuals, so they can claim ignorance and treat data as exempt. (A short, purely illustrative sketch of how such re-linking works in practice follows this list of concerns.)
Undermining Technical Standards: By de-emphasising “state-of-the-art statistical disclosure control” (the technical and organisational measures to truly anonymise data), a narrow approach might disincentivise robust anonymisation efforts. Stalla-Bourdillon contrasts the Commission’s low-bar approach with the higher standards elsewhere. She points to the UK Information Commissioner’s Office guidance and US privacy laws as having stronger tests for when data is deemed anonymous. For instance, under HIPAA (U.S. health privacy law), health data is only considered de-identified if either an expert applies rigorous statistical methods to certify minimal re-identification risk, or if a long list of direct identifiers is removed and the entity has no actual knowledge of residual identification risk. These standards acknowledge that anonymisation is hard and contextual. The Digital Omnibus draft, as summarised by Stalla-Bourdillon, seems to assume anonymisation is a simple binary state and that identity risk can be localised to each holder alone. The “dangerous oversimplification” she warns of is that regulators will accept superficial anonymisation claims without requiring the “rigour and transparency” needed to substantiate them. Indeed, she stresses that anonymisation is always a trade-off. It can protect privacy but at the cost of data utility, and its robustness should be proportionate to the sensitivity of the data and the purposes of processing. Declaring data “not personal” too readily could short-circuit this careful balancing.
Enforcement and Coherence Risks: Narrowing the scope of personal data could hamper enforcement in areas such as online tracking and AdTech. These domains are where companies often argue that they do not really know the identities of users they track. Stalla-Bourdillon is clearly concerned that AdTech players will seize on a relaxed definition to claim that their massive profiling databases are outside the scope of the GDPR. Notably, she cites the recent CJEU ruling in IAB Europe (regarding the online advertising Transparency & Consent Framework), which held that a user’s consent preference string, stored in a cookie, is personal data because it can be tied to a user via a unique identifier and used to build a profile. In other words, even opaque identifiers can become personal data when used for “evaluating or predicting” individuals.[xx] If the law were narrowed, there is a risk that such data might be incorrectly deemed non-personal, allowing invasive profiling to continue unchecked by the GDPR. This bleeds into broader systemic concerns: data protection law, as it stands, provides baseline rules (transparency, legal basis, purpose limitation, etc.) whenever personal data is processed. If large swathes of data (e.g. pseudonymised clickstream data or aggregated location trails) are declared out of scope, we might see a regulatory race to the bottom, with companies adjusting their practices just enough for their data to escape classification as personal data, and with it any oversight. Stalla-Bourdillon explicitly notes that endorsing the Commission’s formulation would put EU law below other frameworks and call into question restrictions on data exports. Her conclusion: the attempt to codify case law in this manner “appears rushed” and risks incoherence in pursuit of a pro-innovation agenda. In short, be careful what you cut out – narrowing definitions could invite exactly the kinds of problems the GDPR was meant to forestall.
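Pulling the pseudonymisation and AdTech concerns together, here is the short sketch promised above: a purely hypothetical ad vendor that holds “only” pseudonymous IDs and behavioural segments, and an equally hypothetical identity broker that holds the mapping from those IDs to contact details. Every name, identifier and value is invented; the point is how little it takes to join the two.

```python
# Hypothetical ad vendor dataset: pseudonymous IDs only, "no names or emails".
ad_vendor_events = [
    {"uid": "u_8f3a", "segment": "gambling-interest", "visits": 42},
    {"uid": "u_91bc", "segment": "fertility-clinics", "visits": 7},
]

# Hypothetical identity broker / ad exchange: the same IDs keyed to identities.
broker_graph = {
    "u_8f3a": {"email": "alex@example.com"},
    "u_91bc": {"email": "sam@example.com"},
}

# Re-identification is one dictionary lookup away once the two datasets meet.
for event in ad_vendor_events:
    identity = broker_graph.get(event["uid"])
    if identity:
        print(f"{identity['email']} -> {event['segment']} ({event['visits']} visits)")
```

Under an entity-specific reading of “means reasonably likely”, the vendor could insist that its event data are non-personal in its hands, even though the ecosystem it sells into is built around exactly this kind of join. That is the wilful-blindness and threat-modelling worry in concrete form, and it is also why the IAB Europe reasoning treats such identifier-keyed profiles as personal data.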
Normative and Practical Risks of Expanding (Overinclusive)
On the flip side, Purtova and Newell illuminate the perils of an ever-expanding concept of personal data. An overbroad scope can be just as problematic, in their analysis, because it blurs the regulatory mission and imposes costs or complications without commensurate benefit. Some key risks of expansion include:
“Law of Everything” – Loss of Focus: If virtually all information is or can be linked to a person, and thus becomes subject to the GDPR, the law risks becoming a victim of its own ambition. Purtova earlier coined the phrase “the law of everything” to describe this scenario. The danger is that when a law is seen as applying to every interaction or every piece of data, it may end up regulating nothing well. Resources (both for regulators and for organisations) are finite. An overinclusive scope means that trivial or low-risk processing (e.g., innocuous data about weather patterns or machine performance that only tangentially relates to individuals) formally requires the same compliance steps as high-risk processing of sensitive personal data. This can breed cynicism and compliance fatigue. Organisations may go through the motions of GDPR paperwork for harmless data, while hazardous processing doesn’t get the careful, case-by-case scrutiny it warrants. Purtova and Newell note that many commentators feel the concept of personal data has grown “too broad at the expense of the effectiveness and identity of data protection law.” If the GDPR tries to be everything, it might end up being “ineffective”. In other words, a jack of all trades, master of none.
Opportunity Costs – Neglecting Other Legal Tools: A subtler, but important, point in Against Data Fixation is that the primacy of personal data in EU law may have stunted the development of other regulatory approaches. The authors argue that treating all “digital problems” as “data problems” has “stood in the way of modernising other legal domains”. For example, issues of online manipulation or discrimination could be addressed through consumer protection or anti-discrimination law; workplace surveillance matters might be better addressed in labour law; competition law might tackle abuses of data dominance. If policymakers rely on the GDPR to solve everything, those domains do not get updated for the digital age. An overexpansive personal data regime can thus act as a form of regulatory overreach that paradoxically leaves gaps, because the GDPR, even if broad, is not a panacea for problems like algorithmic bias or manipulation that only partly involve personal data. The authors specifically mention that the GDPR’s focus on data may not map well to “modern data analytics and profiling [that] happen at the population level”, where harm can occur even without identifying specific individuals; for instance, an AI system could infer traits or make decisions affecting groups or anonymous profiles, in which case the GDPR might not clearly apply because no individual is singled out. Yet, the impact on people can be tangible (think of credit scoring models or targeted ads that discriminate without using names). Over-reliance on personal data as the hook means these “group privacy” or collective harms remain underregulated – a point some scholars have raised as a weakness of the individual-centric GDPR.
Practical Under-Enforcement: Purtova and Newell also observe that in practice, controllers often do not follow the expansive letter of the law, sometimes out of confusion, sometimes intentionally. If the law says everything is personal data, but a company decides, for example, that IP addresses or device identifiers are not really personal data “in context,” they might simply not apply GDPR to those. Unless regulators catch and correct that (which is difficult at scale), the overinclusive definition may exist “on the books” but not on the ground, leading to patchy enforcement. The authors cite reports that many organisations lack guidance on when AI-related data is personal or not, leading to inconsistent application. Furthermore, the temptation to label data as “anonymous” increases when the definition is comprehensive – giving rise to what they call “undertheorized uses of information concepts in law”. Purtova has characterised some debates as a “false debate” between anonymous vs personal data, because almost any dataset can potentially be traced back to people.[xxi] Nonetheless, clinging to the idea that some data is “not personal data, therefore no harm” can be perilous; it may cause missed protections when they’re needed (the under-inclusiveness problem). In short, an overinclusive stance can prompt either overreaction (treating mundane data use as high risk) or evasion (ignoring the law due to its perceived overbreadth). Neither outcome is desirable.
In evaluating these two extremes, it’s clear there is a tension: Stalla-Bourdillon fears the erosion of data protection via narrowing, whereas Purtova/Newell fear the dilution or misapplication of data protection via overexpansion. Both perspectives agree on one thing: the way “personal data” is delineated is crucial to the efficacy of the regulatory system. The sweet spot must protect individuals’ rights without either leaving loopholes or drowning everything in red tape. How to find that balance is where their prescriptions differ markedly, as the next Substack post will show when it examines the underlying assumptions each makes about data protection’s role: AdTech and anonymisation in one case, and the very structure of regulation in the other.
The next instalment in this series, titled “Beyond Adtech and Anonymisation: What’s the Real Aim of Data Protection?”, will begin to outline the systems-based critique in earnest. That post will explore how a more disciplined and SRB-aligned conception of personal data can counterbalance the excesses of the so-called law of everything and ultimately produce a GDPR that is more coherent, more enforceable, and better able to fulfil its intended role. By repositioning data protection within a broader regulatory ecosystem rather than treating it as a universal solution, the series will begin to explain why simplification, properly understood, may be precisely what the system requires.
[i] https://www.reuters.com/sustainability/boards-policy-regulation/critics-call-proposed-changes-landmark-eu-privacy-law-death-by-thousand-cuts-2025-11-10/
[ii] https://commission.europa.eu/document/download/8556fc33-48a3-4a96-94e8-8ecacef1ea18_en?filename=250201_Simplification_Communication_en.pdf
[iii] https://commission.europa.eu/topics/competitiveness/draghi-report_en
[iv] https://noyb.eu/en/digital-omnibus-first-analysis-select-gdpr-and-eprivacy-proposals-commission
[v] https://www.linkedin.com/feed/update/urn:li:activity:7401709269638709248/?originTrackingId=n76LegQAh2rGT60Bh5DbDw%3D%3D
[vi] https://iapp.org/news/a/european-commission-proposes-significant-reforms-to-gdpr-ai-act
[vii] https://www.reuters.com/sustainability/boards-policy-regulation/critics-call-proposed-changes-landmark-eu-privacy-law-death-by-thousand-cuts-2025-11-10/
[viii] https://survey.noyb.eu/index.php?r=survey/index&sid=973679&lang=en
[ix] https://researchportal.vub.be/files/142369597/Deja_vu_in_data_protection_the_risks_of_rewriting_what_counts_as_personal_data_by_Sophie_Stalla-Bourdillon_Privacy_Data_Protection_Volume_26_Issue_2.pdf
[x] https://www.eff.org/deeplinks/2025/12/eus-new-digital-package-proposal-promises-red-tape-cuts-guts-gdpr-privacy-rights
[xi] https://www.eff.org/deeplinks/2025/12/eus-new-digital-package-proposal-promises-red-tape-cuts-guts-gdpr-privacy-rights
[xii] Purtova, N., & Newell, B. (2024). Against Data Fixation: Why ‘Data’ Fails as a Regulatory Target for Data Protection Law and What to Do About It. SSRN: ssrn.com/abstract=4878564.
[xiii] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4878564
[xiv] https://www.whitecase.com/insight-alert/gdpr-under-revision-key-takeaways-from-digital-omnibus-regulation-proposal
[xv] https://curia.europa.eu/juris/liste.jsf?language=en&td=ALL&num=T-557/20
[xvi] Purtova, The Law of Everything (2018): https://www.tandfonline.com/doi/full/10.1080/17579961.2018.1452176
[xvii] https://academic.oup.com/idpl/article/13/4/245/7308779?login=false
[xviii] https://www.whitecase.com/insight-alert/gdpr-under-revision-key-takeaways-from-digital-omnibus-regulation-proposal
[xix] https://www.consumerprivacyact.com/section-1798-140-definitions/
[xx] GDPR, Recital 30
[xxi] https://academic.oup.com/idpl/article/13/4/245/7308779
