Brokering Safety
For victims of abuse, safety means hiding. Not just hiding themselves, but also their contact details, their address, their workplace, their roommates, and any other information that could enable their abuser to target them. Yet today, no number of name changes and relocations can prevent data brokers from sharing a victim’s personal information online. Thanks to brokers, abusers can find what they need with a single search, a few clicks, and a few dollars. For many victims, then, the best hope for safety lies in obscurity—that is, making themselves and their information harder to find.
This Article exposes privacy law’s complicity in this phenomenon of “brokered abuse.” Today, victims seeking obscurity can ask data brokers to remove their online information. But a web of privacy laws props up a fragmented and opaque system that forces victims to navigate potentially hundreds of distinct opt-out processes, wait months for their information to be removed, and then repeat this process continuously to ensure their information doesn’t resurface. The status quo compels victims to manage their own privacy, placing the burden of maintaining obscurity on already overburdened shoulders.
In response, this Article proposes a new regulatory regime premised on a transformative reallocation of responsibility. In short, it proposes a techno-legal system that would enable victims to obscure their information across all data brokers with a single request, redistributing the burden away from victims and onto brokers. Such a system is justified, feasible, and constitutional. The data broker industry is eager to assert that it has a First Amendment right to exploit people’s data, but this Article develops a trio of arguments to confront this controversial claim of corporate power. By blending theory, policy, and technical design, this Article charts a path toward meaningful privacy protections for victims and, ultimately, a more empathetic legal landscape for those most at risk.
Introduction
Ella was in college when the abuse began.[1] She dated Nick for a while before trying to end their relationship. At that point, he “turned rather obsessive, showing up at my school, then showing up at my work.” Before long, he came to her home to threaten her. She went to the police but wasn’t taken seriously. Then, things escalated. Nick showed up with a weapon. “I was almost killed.” After the incident, what stood between her and almost getting killed again? A few dollars and a few clicks on the internet. This is the plight of brokered abuse—the phenomenon of how data brokers enable and exacerbate stalking, harassment, and violence.[2]
Ella had left her abusive relationship, sought help, and fought for a restraining order. But none of that protected her when her abuser could still track her. “I’m not sure how, but [he] found information for my parents and made threatening calls to [them] as well. . . . [W]e knew it was him, but we were never able to do anything about it.” With police unwilling or unable to intervene, Ella tried to erase herself—a defense mechanism that victims of brokered abuse know all too well. She abandoned the internet, moved cities, and changed her name, number, and career. And yet every time she tried to rebuild her life, her stalker found her again. The terror of knowing that digital breadcrumbs could lead him back to her consumed her. “If I Google my name and I’m showing up on Whitepages, People Finder, Spokeo, TrueIdentity—the list goes on and on and on—it’s scary.” Scrubbing her data became a constant, exhausting necessity. “This act of shielding myself became part of my everyday life.”
Unfortunately, Ella’s story is not unique. Countless others are trapped in cycles of fear and vigilance, their safety undermined by the ruthless machinery of the data broker economy.[3] Data brokers are entities that collect personal information from public records, such as voter registrations and court filings, as well as private sources, including online purchases, social-media activity, and GPS location data.[4] They use this information to create comprehensive dossiers, detailing intimate aspects of individuals’ lives—addresses, phone numbers, financial histories, family relationships, and more.[5] Brokers then sell these dossiers to businesses, government agencies, and even individuals.[6] The entire industry thrives on eliminating obscurity by systematically dismantling the practical difficulty of accessing and compiling personal information.[7] Victims like Ella are not seeking obscurity for the sake of secrecy but rather as a matter of safety and wellbeing. Yet the data-broker economy makes personal information widely accessible, fueling cycles of interpersonal abuse.[8]
The harms inflicted by data brokers are twofold: primary and secondary.[9] Primary harms arise from the immediate danger victims face when their personal information is exposed.[10] For victims like Ella, the knowledge that their abuser can easily track their movements, find their home, or access their contact details creates a constant state of fear and vulnerability. Despite the many steps Ella took to disappear, she could not escape the reach of data brokers that repeatedly exposed her location. This persistent threat can force victims to withdraw from social, professional, and community life, sacrificing opportunities and relationships in an effort to stay safe.[11]
Beyond these primary harms, victims endure secondary harms as they attempt to protect themselves in a broken system. The fragmented and opaque processes required to remove information from broker databases are arduous and retraumatizing.[12] Victims must locate and contact hundreds of brokers, verify compliance, and continuously monitor whether their data resurfaces.[13] Each step exacts an emotional and financial toll, forcing victims to revisit their trauma and confront the very systems that profit from their exposure.[14] These secondary harms compound victims’ suffering.
The law is complicit in these harms. The American legal tradition of individual rights has left an indelible mark on U.S. privacy law.[15] Today’s privacy landscape defaults to a system of privacy self-management that Daniel Solove has long critiqued as both unrealistic and inequitable.[16] Privacy self-management assumes that individuals can and should navigate complex systems to manage their own privacy, making informed choices about how their data is collected, stored, and shared.[17] While this concept aligns with a legal tradition rooted in individual rights and personal autonomy, it falls woefully short in the face of today’s sprawling data ecosystems. Building on Solove’s work, Ella Corren has effectively shown that privacy self-management places an impossible burden on individuals; her work offers an empirical rebuttal to the presumption that people generally have the resources, expertise, and bandwidth to make meaningful decisions about their privacy.[18] For victims of brokered abuse, the framework of privacy self-management is especially harmful. It forces them to bear the weight of achieving obscurity, navigating labyrinthine systems, and negotiating with powerful data brokers—all while managing the immediate risks posed by their abusers.[19] By defaulting to privacy self-management, the legal system fails to protect the vulnerable and allows brokers to profit from the commodification of personal information.[20] For abuse victims, obscurity is not a luxury—it can be a necessity for survival. The failure of privacy self-management to deliver this obscurity underscores the urgent need for systemic reform.
At least in this narrow context, a fundamental shift in privacy law is needed to center victims and redistribute the burden of achieving obscurity. As Ella’s story demonstrates, even without the need to request the removal of personal information, victims might already be overwhelmed by the challenges of escaping and surviving abuse.[21] The law should instead shift the responsibility for achieving obscurity to the parties that create and profit from the risk: data brokers. These entities possess the resources, technology, and expertise to manage the logistical and technical challenges of obscuring sensitive information. Unlike victims, brokers are well positioned to systematically remove identifying data from their systems and prevent its reappearance.[22] Redistributing this burden would not only be fair but also represent a more effective and sustainable solution to the problem of brokered abuse. By compelling brokers to take responsibility for the risks they create, the law can begin to rectify the systemic injustices that leave victims like Ella fighting for their safety alone.[23]
Redistributing this burden requires a centralized regulatory and technical solution. A fragmented, decentralized approach has proven incapable of addressing the pervasive and evolving threats posed by the broker industry.[24] Victims should be able to invoke their right to obscurity with a single request and expect each broker to honor an ongoing responsibility to identify and remove all their relevant information across every database and ensure that information never resurfaces.[25]
This idea has already entered the regulatory imagination, at least in part. California recently passed the DELETE Act,[26] several other states have proposed their own versions,[27] and a federal DELETE Act has been proposed in Congress.[28] These efforts reflect a small but growing consensus that individuals should not have to navigate hundreds of opaque data broker opt-out processes on their own.
Yet these laws suffer from two fundamental flaws. First, they broadly regulate everyone’s data without tailoring protection to those most at risk, opening them up to viable First Amendment challenges.[29] Scholars like Robert Post, Frederick Schauer, and Amanda Shanor have documented how companies are increasingly wielding the First Amendment to serve a deregulatory agenda,[30] and data brokers have left little mystery as to their willingness to challenge privacy regulations on First Amendment grounds.[31] Accordingly, policymakers should expect fierce First Amendment opposition to privacy regulation and proactively address constitutional scrutiny. Second, the California and federal statutes delegate core technical questions of implementation to future rulemaking,[32] leaving open the risk of ineffective or even injurious compliance mechanisms, which entrench broker control rather than alleviate the burden on victims.[33] In contrast, this Article offers a detailed regulatory approach that would strengthen both the legal durability and practical efficacy of broker regulation.
To develop and justify our central proposal, this Article builds on scholarship framing obscurity as a discrete privacy interest,[34] highlighting the unique vulnerabilities of abuse victims,[35] critiquing the framework of privacy self-management,[36] and confronting the First Amendment’s deregulatory “Lochnerian” turn.[37] In so doing, this Article makes three main contributions. First, it demonstrates how the combination of decentralized broker opt-out systems and privacy self-management subjects victims to harms beyond those arising from the abuse itself: arduous and relentless vigilance to maintain their obscurity, retraumatization from repeatedly revisiting their abuse, and withdrawal from society for fear of generating more identifying data.[38]
Second, the Article proposes a novel regulatory framework that centers victims and harnesses obscurity to protect human safety. By blending theory, policy, and technical design, the Article presents and justifies a centralized system that would transfer the burden of obscurity onto the billion-dollar industry profiting from brokered abuse.[39] While lawmakers have begun flirting with this idea, the devil is in the details, and this Article answers complex questions about how regulators could implement such an approach—and why they should.[40]
Third, this Article tackles a doctrinal question often avoided by privacy and First Amendment scholars (and by many privacy laws): Can a data privacy law that covers information gathered from government records and other public sources be constitutional?[41] Through a trio of arguments, the Article anticipates and confronts any claim that the Constitution insulates the broker industry from a centralized obscurity system mandated by law. As an initial matter, the Article challenges the assumption that broker practices are covered by the First Amendment, building on emerging scholarship that questions whether the commodification of personal information serves First Amendment values.[42] It then contests arguments that data dossiers constitute noncommercial speech.[43] Finally, it details why legislating a centralized obscurity system for victims should survive strict scrutiny.[44] In doing so, it joins the work of scholars critiquing how expansive interpretations of the First Amendment undermine regulatory efforts and privilege corporate interests over human dignity.[45]
The Article proceeds in four parts. Part I outlines the mechanics of brokered abuse, detailing the inner workings of the broker industry and why its practices endanger people facing various forms of interpersonal harm. It argues that the harms of brokered abuse are not inevitable; rather, they are partly the result of a legally constructed and broken system that asks too much of the most vulnerable.
Part II then explains why a victim-centered solution demands a paradigm shift in privacy law—one that redistributes responsibility from individuals to the data brokers who profit from their exposure. This Part justifies such a redistribution by emphasizing principles of fairness and efficiency, arguing that brokers, as the least-cost avoiders, are uniquely equipped to shoulder the logistical and technical obligations of obscurity.[46] It concludes by drawing lessons from the private and public sectors to demonstrate the feasibility and efficacy of a centralized approach to obscurity.[47]
Part III provides the statutory and technical blueprint for a centralized system that would allow people like Ella to obscure their information with a single request. It begins by identifying critical gaps in the state and federal DELETE Acts that undermine their ability to deliver meaningful protection.[48] It then presents a detailed framework that integrates a centralized victim opt-out registry, rigorous compliance obligations, and advanced technical tools like cryptographic matching to ensure robust enforcement.[49]
Finally, Part IV confronts thorny First Amendment questions that such a regulatory regime would face. It casts doubt on brokers’ claims that a law like the one we propose would even trigger constitutional scrutiny. It then argues that, at most, our intervention should be assessed under intermediate scrutiny, the same standard reserved for regulations of commercial speech. Regardless, this Part concludes by detailing why a centralized obscurity system for abuse victims should nevertheless survive strict scrutiny as a narrowly tailored regulation to achieve a compelling government interest. This doctrinal analysis not only fortifies the proposal against constitutional attack but also contributes to broader debates on the role of the First Amendment in data privacy regulation.[50]
The status quo requires victims like Ella to manage their own privacy, placing the burden of maintaining obscurity on already overburdened shoulders. This Article offers a path forward that transforms obscurity from an unobtainable ideal into an enforceable reality.
I. Existing Law and Technology Put the Obscurity Burden on Victims
I was spending hundreds of hours online just looking and searching and going through everything. . . . It’s like playing whack-a-mole . . . . And it’s frustrating because it’s such a huge waste of time as well—such a burden on your daily life.
—Ella
The responsibility of achieving personal safety through obscurity—grounded in the principle of protecting privacy by making personal information difficult to access—currently rests almost entirely on the shoulders of victims of abuse.[51] For these individuals, obscurity is not a theoretical concept; it can be their best, and often their only, defense against abusers who exploit the data-broker ecosystem to locate, surveil, and harm them.
Despite the potential life-or-death stakes, victims must navigate a fragmented and convoluted system of opt-out processes to secure this obscurity, shouldering the dual burden of primary and secondary harms.[52] Primary harms arise directly from the loss of obscurity—the stalking, harassment, and violence enabled by data brokers who make sensitive personal information easily accessible to abusers.[53] Secondary harms, by contrast, are inflicted by the legally constructed privacy system itself, which forces victims to undertake the grueling and often futile task of privacy self-management.[54]
This dual burden—the risk of exposure on one hand and the demands of self-management on the other—defines the plight of victims in the brokered-data economy. These burdens are not merely onerous; they reflect a system devoid of empathy, one that fails to take seriously the experiences of many victims.[55] This regulatory failure is a systemic injustice that prioritizes corporate convenience over human safety. It is complicit in the harm perpetrated by abusers and data brokers.[56]
The remainder of this Part examines these intertwined harms in detail and critiques current privacy laws for their failure to prioritize the safety and dignity of victims.
A. The Harms of Brokered Abuse
The primary harms of brokered abuse—the stalking, harassment, and physical and psychological threats—are exacerbated by the secondary harms tied to privacy self-management. Ostensibly designed to protect privacy, the self-management framework instead inflicts further injury on victims. Obscurity is critical to victims’ safety, but achieving it has become tantamount to fighting a broker hydra while equipped with little more than desperate conviction.
1. The Primary Harms of Data Exposure
For victims of abuse, obscurity is not an abstract privacy ideal; it can be their best, and often only, defense against abusers.[57] The primary harms of brokered abuse are rooted in the destruction of obscurity.[58] By making sensitive personal information easily accessible, data brokers empower abusers to locate and target their victims.[59] This commodification of obscurity can inflict physical, psychological, financial, and social harms on those who are vulnerable or in life-threatening situations.
The broker industry is a sprawling, multibillion-dollar ecosystem that harvests and sells personal information.[60] There is little oversight or accountability.[61] These companies build their businesses by acquiring and repackaging information from both public records and private sources, often without explicit permission from the people whose lives they catalog.[62] At a minimum, brokers aggregate public documents accessible to anyone with the means to request them, such as property deeds, voter rolls, and marriage licenses.[63] Many exploit transparency tools, such as the Freedom of Information Act (FOIA), to obtain government-held information,[64] while some partner with third parties to collect data about online behaviors from apps, e-commerce, social media, and subscription services.[65] Advanced technologies, such as facial recognition and real-time geolocation tracking, carry the potential to supercharge datasets with unprecedented accuracy and granularity.[66] The proliferation of machine learning models enables brokers to infer new data points—including religion, sexual orientation, and even mental health—from existing datasets.[67] These curated dossiers form the core of the broker business model.[68] While there are arguable benefits to data brokerage, such as its use in journalism,[69] law enforcement,[70] and reconnecting with lost relatives,[71] the potential for harm is both pervasive and severe.[72]
The erosion of obscurity underlies many of these harms. In the context of interpersonal abuse, unquantifiable risk and injury stem from people exploiting brokered data and from the data brokers enabling them.[73] The broker ecosystem arms malicious actors with the ability to reconstruct a target’s personal history, locate their current whereabouts, or predict their movements.[74] Through these services, abusers can gain access to information that would otherwise be difficult or impossible to acquire. For example, a victim may relocate, change phone numbers, enroll in address-confidentiality programs, and take other steps to disappear; yet all this will be in vain when a broker finds and sells their updated information.
Worse still, brokers make these dossiers cheaper and easier to access than ever,[75] paving the way for nefarious use.[76] Tracking someone once required significant financial and logistical effort—hiring private investigators, obtaining court orders, or waiting for public records to update.[77] Today, abusers can purchase detailed reports on a victim’s location, family connections, and employment history for as little as a few dollars.[78] This democratization of surveillance tools transforms victims’ lives into open books, indexed for convenience and accessible to anyone with a computer, an internet connection, and a credit card. Brokers can render even the most robust protective measures futile.[79]
The pervasive and persistent availability of brokered data undermines victims’ ability to rebuild their lives, forcing them into cycles of isolation and hypervigilance. Obscurity is not just about safety; it is often the prerequisite for healing and stability.[80] Without it, victims might live in constant fear of discovery, unable to feel secure in their surroundings or relationships. Advanced tools turn fleeting interactions into lasting vulnerabilities and loved ones into unwitting accomplices to abuse. For example, facial recognition databases can turn a single photo uploaded to social media by a friend into a surveillance data point. Similarly, the decision to kill time on Candy Crush can generate real-time location data for the taking.[81] Even attempts to adopt new routines can be thwarted by behavior-modeling tools that allow abusers to anticipate a victim’s actions or locations based on historical patterns.[82]
The erosion of obscurity forces victims to withdraw from social and professional life to avoid leaving traces that could expose them to harm. Fear of exposure can lead victims to delete social media accounts, avoid professional networking, decline opportunities that might publicly associate them with a new location, and even refrain from voting.[83] While these steps may reduce immediate risk, they can come at a steep cost, cutting victims off from the support systems and opportunities necessary to rebuild their lives.[84] This isolation might not only deepen the psychological wounds inflicted by abuse but also amplify the societal stigma that can accompany such experiences, leaving victims feeling abandoned and unsupported.[85]
Further, the economic impact of losing access to opportunities and resources exacerbates the damage inflicted by brokered abuse. Victims often face significant financial costs associated with escaping abuse, including relocation expenses, legal fees, and lost wages.[86] Forgoing opportunities to earn income or receive external help can make these costs insurmountable. For marginalized individuals, these economic challenges are even more pronounced, as systemic inequities compound the difficulties of navigating both abuse and the exploitative practices of data brokers.[87]
2. The Secondary Harms of Privacy Self-Management
The secondary harms of brokered abuse arise from the expectation that victims are wholly responsible for managing their own privacy to achieve safety, a framework referred to as “privacy self-management.”[88] This approach, rooted in the belief that individuals should control how their personal data is collected, shared, and accessed, assumes that individuals are best positioned to make decisions about their privacy and can protect themselves by asserting their rights.[89] In practice, however, self-management places an overwhelming and unfair burden on people who are poorly equipped to bear it, particularly in the context of brokered abuse.[90] The cumulative toll of this dynamic is profound, and it encompasses logistical, psychological, and financial costs that victimize individuals anew.[91]
Privacy self-management demands that individuals navigate a convoluted system of data brokers, each with unique and burdensome opt-out procedures.[92] First, victims must scour the internet to identify the brokers that hold their personal information.[93] Then, victims must generally contact each broker individually to request removal of their data.[94] The removal process often involves submitting detailed forms, verifying their identity, and in many cases, providing sensitive personal documentation, such as copies of government-issued IDs.[95] In a cruel irony, the opt-out processes can force victims to hand brokers more sensitive information than brokers had in the first place.[96]
From a technological perspective, brokers’ opt-out systems often lack uniformity or automation, preventing scalable privacy self-management.[97] Some brokers require physically mailed requests, while others mandate the use of proprietary online portals with arcane navigation.[98] Many broker review processes are manual, relying on individual contractors to process opt-out requests.[99] This lack of technical sophistication not only makes the process unpredictable but also ensures that it is both labor intensive and error prone.[100]
After victims successfully submit requests, there is still no guarantee of obscurity. Even when victims comply with every byzantine requirement, brokers may refuse to act on requests, citing legal exemptions or internal policies.[101] For victims lucky enough to succeed, the same data can still resurface, or new identifying information might emerge.[102]
For some individuals, third-party services offer a partial reprieve. These services, such as DeleteMe or Privacy Bee, act as intermediaries, navigating the complex web of brokers on behalf of their clients.[103] By consolidating the opt-out process, they reduce victims’ direct interaction with brokers, allowing victims to work through a single point of contact. However, these services come with significant shortcomings.[104] They can be prohibitively expensive.[105] Their removal processes, too, can require the disclosure of sensitive personal information, like proof of address. And even the best-intentioned services cannot guarantee permanent removal of data, because brokers can exploit legal loopholes to retain as much information as possible and often enforce their own removal policies poorly.[106]
The demands of privacy self-management force victims into an unrelenting cycle of labor and stress. Victims must monitor the internet, follow up on pending requests, submit new ones, and continually search for additional brokers.[107] These efforts consume significant time, money, and emotional bandwidth—resources many victims lack.[108] Opting out diligently often requires taking time off work, incurring costs such as postage fees, and reliving the trauma of abuse through repeated interactions with brokers.[109] Forcing victims to become full-time stewards of their own obscurity creates a two-tiered system where only those with the means to pay can access meaningful protection.[110]
This systemic burden is compounded by the inherent power imbalance between individuals and the broker industry. Brokers operate vast, interconnected networks that aggregate and sell personal information with minimal oversight or accountability.[111] Victims, by contrast, are left to navigate this labyrinthine system alone, often without guidance or assurance that their efforts will result in meaningful protection.[112] The asymmetry of information, resources, and power sets victims up to fail by leaving them exposed and disempowered—robbed of agency over their own safety.[113]
The retraumatization caused by engaging with these processes can compound the psychological harm victims endure.[114] To protect themselves, victims must repeatedly reopen old wounds for brokers, entities that are neither trauma-informed professionals nor concerned with victims’ dignity.[115] Some brokers even demand detailed documentation of abuse, such as restraining orders, police reports, or affidavits. Each form submitted, each identity verified, and each explanation of abuse can drag victims back into the shadows of their trauma.[116] The very act of putting these experiences into words can be deeply triggering, confronting victims with the fear, pain, and humiliation they might hope to leave behind.[117] The demand to hand over personal information can feel eerily reminiscent of the privacy invasions they experienced at the hands of their abusers.[118]
By leaving victims with no option other than pursuing obscurity through privacy self-management, the law neglects the unique vulnerabilities and lived experiences of abuse victims. This framework prioritizes corporate convenience over human safety, creating systemic barriers that retraumatize and disadvantage victims while failing to provide meaningful or lasting protection. Addressing this injustice requires a paradigm shift away from placing the burden of safety on victims and toward holding brokers accountable for the risks their practices create.
B. The Inadequacy of Existing Laws
The patchwork of privacy laws in the United States fails to adequately address the primary and secondary harms of brokered abuse, often to the point of complicity.[119] These laws either curb brokered abuse narrowly and indirectly[120] or craft interventions that impose new burdens on victims.[121] Together, they amount to a system where victims must shoulder the overwhelming responsibility of managing their own privacy, while abusers and brokers exploit the gaps with impunity.[122]
Laws addressing abuse fall into two categories: those targeting abusers directly, by criminalizing stalking, harassment, or violence; and those aimed at brokers that knowingly facilitate abuse, such as through doxing.[123] While vital in theory, both types are deficient in practice. Laws targeting abusers directly require waiting until brokered data is used to perpetrate harm. On the other hand, while laws targeting broker activities come closer to addressing the root cause of the problem, they sometimes rely on scienter requirements, such as proving intent or knowledge of harm.[124] This effectively immunizes brokers that sell data dossiers at scale.[125] For instance, California’s antidoxing provisions prohibit sharing registered stalking victims’ data with intent to incite harm, but brokers can evade liability by disclosing data without vetting purchasers.[126]
Anti-abuse laws are impractical in procedure as well as substance. Their legal processes typically require victims to interact with police, prosecutors, lawyers, or judges, which can deter many from pursuing claims due to intimidation, financial barriers, or distrust of institutions. Moreover, proceedings are often too slow to address the immediate dangers of brokered abuse, and even successful cases fail to address the systemic issue of brokers continually replenishing their data stockpiles. Victims would need to file repeated claims against brokers, new and old, whenever their information reemerges online, creating an untenable cycle of litigation that does little to disrupt the larger ecosystem.
In contrast to anti-abuse laws, laws targeting data brokers often fall into three categories: (1) transparency laws, (2) laws restricting data collection, and (3) laws restricting data disclosure. Transparency laws aim to address harms caused by the broker industry by shedding light on broker practices to inform regulators (administrative transparency) or empower individuals (popular transparency).[127] For example, Vermont and California require brokers to register with state agencies and disclose information about brokers’ data sources and practices,[128] while California’s “right to know” laws allow individuals to access data brokers hold about them.[129] However, more information about the mechanisms of brokered abuse does little to protect victims from it. And, in addition to being ineffective, these laws might even sap political will from stronger proposals, allowing brokers to hide harmful practices under the veneer of compliance.[130]
Another approach involves stemming the tide of personal data at the source. Longstanding laws prohibit deceptive practices, hacking, and unauthorized scraping,[131] while newer laws, such as the California Consumer Privacy Act (CCPA), seek to limit nonconsensual data collection.[132] However, these measures are riddled with loopholes.[133] The CCPA, for example, exempts “publicly available information” and “lawfully obtained, truthful information that is a matter of public concern”[134]—categories encompassing “vast troves of brokered data.”[135] Brokers need not resort to illegal practices when a plethora of information is available legally.
While restricting data disclosure is perhaps the most promising approach, it remains fraught with challenges. Some regulations, such as tort liability for disclosing sensitive information or bans on selling location data, address the issue indirectly and incompletely.[136] More direct measures, like California’s right to opt out, allow individuals to prevent businesses from selling their data.[137] For abuse victims, California’s Safe at Home program provides more robust protections, requiring brokers to conceal registered victims’ home addresses and phone numbers for four years.[138] Victims can also seek damages for intentional violations.[139] However, victims must still approach brokers individually, submit forms repeatedly, and monitor compliance over time.[140]
Ultimately, by focusing narrowly on isolated aspects of data brokerage, the existing regulatory responses fail to disrupt the systemic features of brokered abuse.[141] Worse, they impose an untenable burden on victims, making the law complicit in the harm it purports to address.
II. Protecting Safety Through Obscurity Demands Redistributing Responsibilities
I felt like it was my responsibility to do the opting-out. . . . [I]t was this thought that if I left any kind of stone unturned, that would cause harm to me or my family. . . . Why should this be a responsibility that I need to bear?
—Ella
This Part contends that addressing brokered abuse requires a paradigm shift in privacy law that prioritizes meaningful protections for victims by redistributing responsibility to those who profit from exposing a victim’s information. The consequences of inaction are dire: If we wait for an elusive privacy panacea to cure all the ills of “informational capitalism,”[142] abusers and data brokers will continue to exploit the gaps in current law, exposing victims to primary harms like stalking and harassment as well as the retraumatizing secondary harms of navigating a fractured system to protect themselves. Recasting the pursuit of privacy as the pursuit of safety underscores the need for a centralized obscurity system for victims.[143] Drawing on models from the private and public sectors, this Part illustrates the feasibility and urgency of holding brokers accountable while relieving victims of unsustainable and unjust burdens.
A. Justifying Redistribution
To address the systemic failures of brokered abuse and privacy self-management, we propose reframing victim privacy as a shared responsibility to promote safety and redistributing the labor of achieving obscurity from victims to data brokers. The law’s overreliance on privacy self-management is, at least in this context, an indefensible abdication of policymaking responsibility.[144] This framework is poorly suited to serving anyone’s privacy, let alone the especially vulnerable among us—people fleeing violence, harassment, and exploitation.[145] A safety-focused lens for obscurity demands a paradigm shift.
The concept of privacy self-management assumes that victims can and should be responsible for interacting with thousands of data brokers, each with their own processes, policies, and pitfalls.[146] It demands vigilance, technical sophistication, and both time and resources that are rare even among the most privileged.[147] For victims of abuse, this model is not just burdensome; it is retraumatizing.[148] Many victims will need to play some role in their own protection, and asserting the right to obscurity might be a necessary initial step. However, this invocation of their right to obscurity should mark the end—not the beginning—of their involvement.
Principles of fairness and efficiency support this reallocation of responsibility. From a fairness perspective, data brokers are the most appropriate entities to bear this burden. These brokers profit directly from the dissemination of data dossiers that put vulnerable populations at risk. Holding brokers accountable for alleviating brokered abuse aligns with societal norms that require industries to mitigate the risks they create, much like environmental regulations that compel polluters to bear cleanup costs. Danielle Citron has made an analogous fairness argument in the context of justifying strict-liability regulations when companies’ private databases are hacked.[149] As she contends, “[w]hen an organization engages in reasonable risky behavior—that is, nonwrongful conduct where an injurer’s freedom to impose the risk is more valuable than a victim’s forgone security . . . —fairness requires that the injurer pay for the victim’s harm.”[150] While our proposal is concerned more with obscurity than compensation, the same point rings true when applied to the broker industry.
Moreover, brokers are the least-cost avoiders: the entities best positioned to implement systemic solutions.[151] With centralized databases, established processes for managing data, and advanced technological capabilities, brokers can integrate obscurity protections far more efficiently than individual victims. Citron again provides some theoretical scaffolding for our proposal, observing that operators of corporate databases “constitute the cheapest cost avoiders vis-à-vis individuals whose information sits in a private entity’s database,” in part because they “have distinct informational advantages about the vulnerabilities in their computer networks.”[152] While individuals lack the knowledge and ability to assess a company’s cybersecurity, Citron argues that “database operators can most efficiently spread the costs of data leaks” and are “best situated to make the optimal choice of either taking additional security precautions or insuring against security-breach losses.”[153] A similar efficiency argument can be applied to the broker industry. The cost of implementing a centralized obscurity process could be modest for an industry already thriving on the commodification of personal data, while the cost to victims of managing their own obscurity is immense. For victims, this redistribution could be a lifeline; for brokers, it would be little more than a reasonable cost of doing business.
B. Required Features of Redistribution
This Section argues that a safety-focused approach to obscurity also demands both the imposition of ongoing broker obligations to keep victim data offline and a centralized governance and enforcement mechanism to ensure brokers comply.
1. Ongoing Duties
Obscurity calls for more than a one-time response to an opt-out request. Redistributing responsibility to brokers requires imposing ongoing obligations on them to keep victim information perpetually offline. The reality of data brokerage is that information constantly flows through partnerships, secondary markets, and automated data scrapers.[154] Without robust mechanisms to prevent and deter the re-collection and redistribution of data, any initial removal will be rendered meaningless.[155]
Brokers should be compelled to implement systems that proactively guard against re-exposure. This could include closing loopholes that allow data to re-enter their networks, monitoring compliance through periodic audits, and collaborating to eliminate weak points in the broader ecosystem.[156] Treating obscurity as a one-time obligation ignores the nature of the threat; victims’ safety depends on sustained vigilance.
2. Centralized Governance and Oversight
Even if brokers were forced to bear greater responsibility for obscuring victims’ data, effective protection for victims requires centralized governance to coordinate and enforce compliance. Individual brokers cannot be trusted to regulate themselves in a decentralized system riddled with gaps and inconsistencies.[157] A centralized framework, overseen by government regulators, would provide the necessary structure to ensure that brokers fulfill their obligations.
This system would shift the burden of oversight away from victims, who are currently forced to monitor their own exposure and pursue opt-out processes individually. Instead, the government would take on the responsibility of systemic oversight, creating mechanisms for victims to report noncompliance and for regulators to periodically audit brokers. By centralizing these functions, the framework would provide victims with a single point of recourse, relieving them of the painful and painstaking task of managing their own privacy across a fragmented landscape.
Centralized governance could also ensure accountability at a systemic level, addressing gaps in enforcement that allow brokers to evade meaningful consequences.[158] By integrating oversight into a unified framework, policymakers can create a cohesive system that reinvigorates online obscurity as a meaningful protection for victims while streamlining compliance for brokers.
C. The Case for a Centralized, Coordinated Intervention
A centralized, coordinated system would most effectively reallocate the burden of maintaining obscurity away from victims and onto data brokers, while also ensuring the system has ample oversight. Such a system could enable victims to reclaim their obscurity with a single request, reducing the labor currently required to achieve even temporary relief. Once a request is submitted, brokers—not victims—would bear the responsibility for obscuring personal information promptly and permanently. Existing models in both the private and public sectors suggest that this approach is feasible and effective.
The private sector has already experimented with using centralized coordination to address systemic threats to privacy. A compelling example is Stop Non-Consensual Intimate Image Abuse (StopNCII), a global initiative that helps individuals prevent the spread of intimate images that have been shared without their consent on online platforms.[159] StopNCII empowers individuals to leverage the technological capabilities and informational advantages of online platforms to proactively prevent nonconsensual intimate images from circulating across participating platforms.[160] Through a centralized system, individuals can generate hash values—unique digital fingerprints—of their intimate images without sharing the images themselves.[161] Platforms such as Facebook, Instagram, and TikTok use this database to identify and block these images before they are further distributed, mitigating the harm that people might otherwise experience due to the repeated appearance of their images.[162] This approach minimizes the labor required to protect people’s privacy, eliminating the need for individuals to request takedowns across multiple platforms and shifting some responsibility onto the companies to prevent harm.[163]
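For readers curious about the mechanics, the following Python sketch models the hash-matching pattern in simplified form. It is an illustration rather than StopNCII’s actual implementation: the real system uses perceptual hashing, which can also match near-duplicate images, while this toy version uses an exact cryptographic hash, and all function and variable names here are hypothetical.

```python
import hashlib

# Central database of submitted fingerprints (hypothetical stand-in).
blocked_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Compute a digital fingerprint of an image on the user's own device.

    Only this hash is ever transmitted; the image itself never leaves
    the device. (StopNCII uses perceptual hashes, which also match
    near-duplicate images; SHA-256 here is an exact-match simplification.)
    """
    return hashlib.sha256(image_bytes).hexdigest()

def register_image(image_bytes: bytes) -> None:
    """Victim-side step: submit an image's fingerprint to the registry."""
    blocked_hashes.add(fingerprint(image_bytes))

def should_block_upload(upload_bytes: bytes) -> bool:
    """Platform-side step: check each new upload against the shared registry."""
    return fingerprint(upload_bytes) in blocked_hashes
```

The salient design choice is where the checking burden falls: the individual acts once, and every participating platform consults the shared database from then on.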
Policymakers could learn lessons from StopNCII as they craft a centralized mechanism to address brokered abuse. Just as StopNCII enables individuals to take preemptive steps to protect themselves, a centralized obscurity system could allow abuse victims to submit a single request to remove personal information across all covered data brokers. Brokers would then bear the responsibility of ensuring that the flagged data is removed and does not reappear on their platforms. StopNCII also exemplifies how an empathetic system should be sensitive to the way people invoke its protections. Just as StopNCII allows people to generate hashes of the photos they want to obscure instead of sharing the actual photos, a system addressing brokered abuse could ask victims to submit only the minimum personal information brokers need to identify the data points to obscure.
StopNCII relies on users to generate hashes for the images they seek to take down, but victims of abuse cannot be expected to identify every single piece of information that puts them at risk and requires removal.[164] However, other examples from the private sector show that companies can coordinate to identify potentially harmful content without relying entirely on individual submissions. For example, companies like Pinterest,[165] Instagram,[166] and YouTube[167] collaborate to detect and remove self-harm and suicide-related material.[168] These platforms use centralized tools such as machine-learning algorithms to identify patterns in flagged keywords, imagery, and behavior, and they share insights across platforms to ensure that content removed from one site is unlikely to reappear on another.[169] Instead of requiring victims to identify every piece of data that endangers them, data brokers could similarly use pooled technological resources to identify and suppress information related to particular individuals.
The public sector also provides precedents for centralized frameworks that redistribute responsibility from individuals to entities better equipped to manage systemic risks. The National Center for Missing & Exploited Children (NCMEC) operates a centralized database of hashed child sexual abuse material (CSAM), and online platforms are legally required to report such content to NCMEC promptly upon discovery.[170] The NCMEC database then feeds back into tools that companies use to detect and block CSAM proactively across cooperating platforms.[171] This system alleviates some of the burden on victims by shifting monitoring and reporting responsibilities onto the companies that host or distribute CSAM. Similarly, the Federal Trade Commission’s (FTC) Do Not Call Registry enables consumers to invoke their right to opt out of telemarketing calls by registering their phone numbers in a central database once.[172] The onus of compliance then transfers to telemarketers, who must design and implement systems that continuously monitor the database and ensure that the registered numbers are not contacted, requiring no further action by consumers invoking the right.[173]
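The Do Not Call model’s division of labor can be captured in a few lines of code. In this minimal sketch (with placeholder numbers and hypothetical names), the consumer’s only act is registration; suppressing registered numbers before every campaign is the caller’s recurring duty:

```python
# Registered numbers, refreshed periodically from the central registry
# (placeholder values, not real numbers).
do_not_call: set[str] = {"+15550100", "+15550101"}

def scrub(call_list: list[str]) -> list[str]:
    """Caller-side duty: remove every registered number before dialing.

    The consumer's involvement ends at registration; consulting the
    registry before each campaign is the caller's recurring obligation.
    """
    return [number for number in call_list if number not in do_not_call]
```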
The General Data Protection Regulation’s (GDPR) “right to be forgotten” also offers valuable lessons for a centralized obscurity system.[174] European Union residents may invoke this right to request the deletion of their personal data from a specific company. Once an individual submits a request, the GDPR mandates that the company delete the individual’s data and notify any third parties to whom the data has been disclosed to do the same.[175] Although this process is decentralized, it creates a network effect of data deletion. A centralized obscurity system can harness the same benefits by mandating that brokers notify upstream suppliers and downstream customers that they may be illegally disseminating victim information.
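The deletion cascade the GDPR contemplates can likewise be sketched as a propagating notification: each entity deletes its own copy of the data and forwards the request to everyone downstream. The following simplified Python model is an assumption-laden illustration, not a description of any real compliance system:

```python
from dataclasses import dataclass, field

@dataclass
class Broker:
    name: str
    records: dict = field(default_factory=dict)   # subject_id -> dossier
    partners: list = field(default_factory=list)  # downstream data recipients

    def delete_and_notify(self, subject_id: str, seen: set | None = None) -> None:
        """Delete the subject's data, then propagate the request downstream."""
        seen = seen if seen is not None else set()
        if self.name in seen:
            return  # data supply chains can loop; notify each entity only once
        seen.add(self.name)
        self.records.pop(subject_id, None)
        for partner in self.partners:
            partner.delete_and_notify(subject_id, seen)
```

Even in this toy form, the network effect is visible: one request to a single entity ripples outward to every recipient of the data.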
The existence of these initiatives from the private and public sectors suggests that centralized systems can provide a feasible way to address systemic harms. But these examples also suggest that legal mandates might be necessary when private companies fail to act. Data brokers seem unlikely to develop centralized harm-reduction tools on their own initiative. Their profit model thrives on the mass aggregation, sale, and dissemination of personal information, and they operate with minimal interaction with or visibility to the individuals affected by their practices.[176] Brokers also face less reputational risk because their operations are largely opaque to the public, and their incentives are fundamentally misaligned with user safety.[177] Unlike some online platforms that might be more sensitive to public backlash, brokers have continually profited despite the harmful consequences their data sales have had on individuals. This absence of market-driven incentives makes voluntary coordination among brokers highly unlikely, necessitating regulatory intervention to enforce harm-reduction practices. The next Part addresses how such an intervention should be constructed.
III. Designing a Centralized Obscurity System
I’m like, “Why?”—so much unwanted contact and just more headaches, more calling companies, more procedures to just go through. . . . Are you going to have 200 bookmarks of data brokers?
—Ella
In a world where data brokers exist, greater protection for abuse victims could be achieved through a centralized system designed to ensure their personal information remains inaccessible to those who aim to exploit it. This Part evaluates existing and proposed state and federal regulations to create a version of this system, highlighting their gaps and inefficiencies. Building on these insights, we then outline a statutory framework specifically tailored to safeguard abuse victims. Unlike broad policy prescriptions that neglect practical implementation, this Part emphasizes the necessity of aligning regulatory design with operational feasibility and the policy’s overarching goals. To that end, it explores the technical architecture of our proposed centralized system, illustrating how it can effectively shift the burden of managing obscurity from victims to data brokers while promoting robust oversight and accountability.
A. Limitations of Current and Proposed Interventions
Regulatory proposals to protect abuse victims from the risks of the broker industry need not start from scratch. Initiatives at the state and federal levels—especially California’s DELETE Act[178] and the proposed federal DELETE Act[179]—represent important attempts to streamline data removal processes and recognize the burdens placed on individuals.[180] However, both fall short in critical ways, whether through express provisions, omissions, or uncertainties left to future rulemaking. By examining these gaps, this Section lays the groundwork for designing a harmonized, comprehensive statutory framework to offer victims greater protection and more effectively redistribute the burden of achieving obscurity from individuals to data brokers.
1. Common Strengths
The California DELETE Act and the proposed federal DELETE Act aim to address the privacy challenges posed by data brokers by creating centralized systems that simplify how individuals manage their personal information.[181] Both measures provide consumers with a streamlined process to request the deletion or cessation of the sale of their personal data, replacing the current fragmented and burdensome approach of contacting multiple data brokers individually. These efforts acknowledge the need to reduce logistical barriers to achieving obscurity in a complex and pervasive data ecosystem.
Under both acts, data brokers—defined as entities that collect personal information from third-party sources and sell or license it—are the primary covered entities. This excludes first-party data collectors who use information solely for their own business purposes, limiting the scope of regulation. In California, the DELETE Act builds on the definitions and obligations established under the CCPA and the California Privacy Rights Act (CPRA). Brokers must register annually with the California Privacy Protection Agency (CPPA), which administers the state’s centralized deletion portal.[182] Similarly, the federal DELETE Act proposes a nationwide broker registry and centralized opt-out system managed by the FTC.
Both measures share several features aimed at improving consumer privacy and accountability in the data broker industry. They provide a centralized portal for consumers to submit a single request for data deletion or cessation of data sales, shifting some responsibility away from individuals.[183] Additionally, both require brokers to maintain compliance records and undergo audits every three years.[184] Both also task the administering agencies with verifying the identity of requesters to guard against fraudulent deletions.[185] These provisions acknowledge the systemic nature of data broker harms and represent a partial shift toward holding brokers accountable.
2. Differences
The California DELETE Act and the proposed federal DELETE Act take different approaches to regulating data brokers, revealing critical strengths and weaknesses when evaluated against the twin goals of protecting victims of brokered abuse and shifting the responsibility for achieving obscurity from individuals to brokers.
One major difference lies in the treatment of public information. The California DELETE Act explicitly excludes publicly available data—such as property ownership records, voter registration, or court filings—from the scope of personal information that must be removed. This exclusion, based on definitions established by the CCPA and CPRA, leaves significant holes that undermine protections for victims.[186] By contrast, the proposed federal DELETE Act does not categorically exclude publicly available information, leaving room for future rulemaking by the FTC to include such data within its scope.[187] This difference could make the federal approach significantly more protective, depending on how the FTC defines the boundaries of “covered” data.
The exclusion of publicly available information under California law is particularly problematic for abuse victims. Abusers frequently exploit publicly accessible records to locate or stalk victims, leveraging details like addresses, phone numbers, or workplace information.[188] Although California’s law provides safeguards for certain types of personal information, excluding publicly available data allows brokers to continue amplifying sensitive details, putting victims at risk.[189] Closing this gap is essential for achieving meaningful obscurity and addressing the systemic exploitation of public records by abusers.
A second difference between the two acts is the compliance timeline for data brokers—neither of which adequately protects victims. The California DELETE Act requires brokers to check the registry every forty-five days,[190] while the federal DELETE Act mandates a thirty-one-day compliance period.[191] Although both laws establish ongoing obligations, these timelines are excessively long for individuals in danger, giving abusers ample time to exploit personal information before it is removed. These delays fail to account for the urgency victims face, particularly in situations of imminent threat.
Such long timelines are technologically unnecessary. Data brokers already operate advanced systems capable of processing vast quantities of information quickly.[192] The centralized registries envisioned by these laws are designed to simplify compliance, and brokers could act on deletion requests within far shorter timeframes—potentially within days, if not hours. By allowing brokers such extended leeway, both laws dilute their effectiveness and maintain the burden on victims to remain vigilant in the interim. Shortening these timelines would not only enhance protections for abuse victims but also hold brokers accountable for leveraging their technological capabilities to ensure privacy and safety.
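To illustrate why much shorter timelines are technically plausible, consider a sketch of an automated compliance pass that a broker could schedule daily or even hourly. Everything here is hypothetical: the registry’s hashed format, the matching key, and the record fields are illustrative assumptions rather than a specification.

```python
import hashlib

def normalized_hash(value: str) -> str:
    """Hash a canonicalized identifier so the registry need not expose raw data."""
    canonical = "".join(value.lower().split())
    return hashlib.sha256(canonical.encode()).hexdigest()

def compliance_pass(registry_hashes: set[str], records: list[dict]) -> list[dict]:
    """One automated pass: suppress every record matching a registry entry.

    Real matching would compare multiple identifiers, not just a name;
    scheduled daily or hourly, a job like this would shrink the window
    between a victim's request and actual removal from weeks to hours.
    """
    return [r for r in records
            if normalized_hash(r["full_name"]) not in registry_hashes]
```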
3. Limitations
Although both the California DELETE Act and the proposed federal DELETE Act make strides in regulating data brokers, they each suffer from critical limitations that undermine their effectiveness for abuse victims. Key gaps—such as the lack of a private right of action,[193] the absence of an appeals process for denied requests,[194] and the failure to impose ongoing duties on brokers to ensure deleted data remains off their systems[195]—leave individuals with limited protection and perpetuate the burden of achieving obscurity.
One of the most significant omissions is the lack of a private right of action. Both laws delegate enforcement to government agencies.[196] This setup forces victims to rely on slow and resource-intensive government investigations to address noncompliance, delaying relief for individuals who might face imminent risks.[197] For victims of brokered abuse, whose safety often depends on immediate action, this reliance can result in prolonged exposure to harm.[198] Allowing individuals to directly sue noncompliant brokers would provide an immediate remedy and serve as a stronger deterrent, encouraging brokers to prioritize compliance.[199]
Another critical omission is the lack of an appeals process for denied deletion requests. Broad exceptions—covering legal obligations, fraud prevention, and First Amendment–protected activities—give brokers considerable discretion to deny people’s requests.[200] But, at the same time, neither bill establishes a clear and accessible mechanism for individuals to challenge such denials.[201] Instead, they defer the issue to future rulemaking by the CPPA and FTC, leaving victims with little recourse in the meantime.[202] This gap is especially damaging for abuse victims, as it forces them to navigate a system where unjustified denials can leave their sensitive information exposed indefinitely. A robust appeals mechanism, one complete with defined timelines and requirements for brokers to justify denials, could empower individuals to contest decisions, reducing delays and enhancing accountability.
4. First Amendment Vulnerabilities
Due to the broad scope of their regulatory frameworks, the California DELETE Act and the proposed federal DELETE Act face substantial First Amendment challenges.[203] These vulnerabilities arise from a combination of the Acts’ universal scope, treatment of publicly available information, and selective targeting of data brokers, which collectively weaken the Acts’ ability to withstand constitutional scrutiny.
One critical issue is the universal application of both Acts to all individuals, regardless of their unique need for protection. While this broad scope is intended to promote consumer privacy, it risks overreach by regulating the dissemination of truthful, at times public, information without distinguishing between individuals facing significant risks—such as abuse victims—and those with lesser privacy concerns. Courts might find this lack of tailoring problematic under the First Amendment, as the laws could be deemed more speech restrictive than necessary to achieve a compelling governmental interest.[204]
Additionally, both Acts selectively target data brokers while excluding other entities that make similar commercial sales of identifiable information, such as social media platforms and e-commerce companies. The statutes narrowly define a “data broker” as “a business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship.”[205] This definition excludes platforms like Facebook or Google that collect personal information directly from their users and sell identifiable versions of this information to third parties—a gap that allows entities engaged in significant privacy-compromising activities to evade the law’s mandates.[206] The omission is particularly troubling given that these platforms’ sale of user data can contribute toward the same harms the statutes aim to address, such as enabling harassment, stalking, or other forms of abuse.[207]
This selective targeting raises concerns about underinclusivity, as the laws impose obligations on traditional data brokers while allowing other companies that engage in comparable privacy-compromising behaviors to operate without restriction. Courts have previously scrutinized such regulatory disparities under the First Amendment, particularly when they involve the dissemination of truthful information.[208] By failing to cover entities that compile and sell data dossiers, the statutes risk undermining their own objectives and inviting legal challenges.
Together, these issues highlight the DELETE Acts’ vulnerability to First Amendment challenges. While their broad application reflects a commitment to consumer privacy, their lack of precision and gaping exceptions expose them to legal risks. By contrast, a more narrowly tailored approach—such as the centralized obscurity system proposed in this Article—that closes these problematic gaps can address the harms of brokered abuse more effectively while passing constitutional scrutiny.
B. Regulatory Design
This Section proposes a new statutory scheme to address the pernicious harms posed by data brokers and to protect abuse victims more effectively than existing legislative efforts. While the California DELETE Act and the proposed federal DELETE Act represent meaningful progress, their shortcomings leave abuse victims vulnerable.[209] This proposal builds on their strengths while tailoring protections to the specific needs of abuse victims. By narrowing the scope and integrating technical feasibility into regulatory design, this approach reduces First Amendment vulnerabilities, redistributes the burden of obscurity from individuals to data brokers, and offers more effective and enforceable protections. After introducing the centralized system, this Section explores how to design a system with vulnerable populations in mind—from how it should be accessed, to which entities it should cover, how it should be monitored, and how it should be implemented.
1. The Centralized System: An Overview
This statutory intervention creates a centralized system that requires brokers to take on the technical and logistical burden of complying with victim requests to obscure their personal information. At the heart of the system is a central registry maintaining records of individuals who have invoked their obscurity rights. Data brokers must query this database and take immediate, proactive steps to remove or deidentify covered data. This eliminates the fractured, piecemeal nature of existing mechanisms, replacing them with a single point of coordination and enforcement.
Given the interstate nature of the data broker industry and the reality that victims often cross state lines to escape abusers, a federal framework would be preferable to promote comprehensive and uniform protections.[210] Federal legislation avoids jurisdictional gaps and creates consistency across states, preventing brokers from exploiting discrepancies in state laws. Additionally, tying the framework to existing federal statutes like the Violence Against Women Act (VAWA) and the Safe Connections Act could leverage established definitions and enforcement mechanisms, offering a cohesive legal landscape while enhancing support for victims. This approach could also allow victims to take advantage of the system seamlessly while engaging with other victim-protection services, such as changing their address through a state protection program, obtaining a court order, filing a police report, or seeking support from a domestic-abuse hotline or shelter. These points of interaction provide practical opportunities for victims to assert their rights under this statute without additional procedural burdens.
However, in the face of federal inaction, states could still adopt similar statutory proposals to protect their constituent victims. By tailoring this model to fit within state-level programs—such as existing Safe at Home initiatives or domestic-violence protections—states can achieve meaningful reform and establish localized solutions that address the urgent safety needs of their residents.
2. Invoking the Obscurity Right: Whom & How
The statute should endeavor to protect individuals whose safety and well-being are endangered by data brokers disseminating their personal information. The goal is to craft a narrowly tailored yet inclusive framework that prioritizes the needs of the most vulnerable victims while avoiding unnecessary exclusions. Although sound reasons exist to extend similar protections to other groups, our proposal focuses on people who have experienced particular forms of interpersonal abuse.[211]
By drawing on established federal protections, and perhaps filling gaps through independent definitions, the statute could offer both clarity and flexibility to address emerging forms of abuse. Federal law already creates special protections for vulnerable individuals who experience abuse in intimate or familial relationships, while occasionally extending protections to include interpersonal abuse between strangers. VAWA, for example, targets physical, sexual, or psychological harm by intimate partners,[212] while the Safe Connections Act addresses domestic violence, dating violence, sexual assault, stalking, and sex trafficking.[213] Frameworks like these provide definitions that a statute creating a centralized obscurity process could leverage and incorporate.
However, gaps remain. VAWA’s focus on intimate partner violence excludes victims abused by coworkers, acquaintances, or strangers,[214] while both VAWA and the Safe Connections Act arguably neglect newer forms of interpersonal abuse like doxing and nonconsensual sharing of intimate images, at least in certain circumstances.[215] To address gaps and protect victims in diverse and nontraditional abuse contexts, the statute could define additional qualifying acts that fall outside the purview of existing federal laws. By grounding eligibility in established federal definitions and supplementing them with independently enumerated harms, the statutory intervention could provide both consistency and flexibility.
Once eligibility is defined, the next question is how individuals demonstrate that they qualify for the right to opt out. The statute should adopt self-attestation as the preferred method, whereby victims submit a sworn statement affirming their eligibility without further evidentiary requirements.[216] This approach minimizes barriers to accessing this protection and empowers victims to invoke their rights without requiring documentation that may be difficult, dangerous, or retraumatizing to obtain. It therefore aligns with the statute’s goal of avoiding the major bureaucratic and emotional burdens that obtaining documentation places on the shoulders of vulnerable victims.
While requiring documentation—such as police reports, protective orders, or affidavits from counselors—might theoretically add a layer of verification, it is not an ideal solution. Victims often face significant hurdles in obtaining these materials,[217] whether due to distrust of law enforcement, safety concerns, or the sheer difficulty of navigating bureaucratic systems while coping with trauma. Imposing documentation requirements would create an inequitable system where only those with the resources and ability to produce proof are protected. Furthermore, documentation requirements disproportionately exclude individuals in emergency situations or those who fear retaliation for seeking help. Privacy protections for abuse victims should minimize, not compound, the labor these individuals must undertake.
Self-attestation does not pose a significant risk of misuse in this context for several reasons. First, if an ineligible individual invokes an obscurity right, this does not prejudice anyone else’s legal interests (unlike scenarios involving shared property like vehicles, where granting access to one party necessarily deprives the other). Second, nonvictims are unlikely to exploit this system at scale because the process still requires submitting a sworn statement attesting to their eligibility, which serves as a deterrent to frivolous claims or insufficiently motivated individuals.
Even if there are some risks of misuse, the risks of imposing barriers to access are significant. Raising the threshold for eligibility could disqualify people who do not fit traditional or clear-cut narratives of abuse. The psychology and dynamics of relational abuse—where roles of victim and abuser may shift or overlap—demand a nuanced approach that errs on the side of accessibility. By allowing self-attestation, the statute would ensure that people in complex or nontraditional abuse situations can still seek obscurity. If concerns about misuse arise after implementation, they can be addressed through periodic reviews of the system’s operations, as is done for other comparable interventions.[218] However, the statute should prioritize removing barriers for victims rather than preemptively creating hurdles based on hypothetical concerns.
3. Covered Brokers & Data
The statute should aim to shift the labor of protecting vulnerable individuals onto the entities profiting from the collection and sale of personally identifiable data. To achieve this goal, the statute should employ broad definitions of “data broker” and “covered data” and impose carefully crafted compliance obligations.
Under this statute, a “data broker” should be defined as “any entity that sells or licenses personally identifiable information (PII) to third parties in a non-deidentified form, regardless of whether the entity has a direct relationship with the individual from whom the data was initially collected.”[219] This definition ensures coverage of platforms like Google, which collect data directly from users but sell it in various formats to third parties.[220] A narrow definition would create dangerous gaps, allowing entities to avoid accountability by operating under alternative business models or by selling smaller quantities of data.[221] Information that undermines people’s obscurity must be comprehensively purged from brokered datasets to ensure safety.[222]
The statute’s definition of “covered data” should take a similarly expansive approach to include both direct and indirect information. Direct PII encompasses traditional identifiers like names, addresses, phone numbers, emails, and Social Security numbers.[223] However, the statute should also cover indirect data that could endanger victims by providing alternative avenues for harm. For example, records related to family members, employment locations, or roommates can enable abusers to locate or target victims indirectly.[224] To address these risks, brokers should be required to use clustering techniques to identify and act on indirect data,[225] guided by thresholds and methodologies developed by experts at agencies like the FTC or the National Institute of Standards and Technology. This would reduce the risk that abusers could target victims through more tangential connections, aligning with the statute’s goal of providing comprehensive protection.[226]
Including indirect data in the statute’s scope would impose a minimal burden on third parties, such as family members, whose information may also be removed. The statute should place the responsibility on brokers, streamlining the process and shifting the burden away from victims and their families.[227] Given that brokers already use clustering techniques for commercial purposes,[228] such as creating consumer profiles and linking datasets,[229] this requirement should be feasible. The statute would simply compel brokers to repurpose their existing tools and expertise toward protecting vulnerable individuals, rather than solely pursuing profit.
Covered data should also include publicly available data to seal gaps that would undermine the statute’s protective goals.[230] Information that is technically public can be weaponized by abusers to locate or harm victims. While public records like voter registrations or property deeds might not be inherently harmful, brokers exacerbate the risks they pose by aggregating and centralizing this information, making it instantly accessible at scale.[231] The statute would not erase public records but rather restore the practical obscurity that once limited their accessibility to abusers.
4. Adherence to a Standard of Care
The statute should impose rigorous obligations on data brokers to enhance protections for abuse victims. Key obligations could include prohibitions on dissemination, proactive monitoring, supply chain accountability, and robust compliance measures.
As a threshold requirement, covered data brokers should be compelled to register with the agency tasked with overseeing the statute’s implementation. This builds on models like those in Vermont[232] and California[233] and could provide regulators with insights into the broker ecosystem, facilitating enforcement and the potential for future regulation.
Brokers should be explicitly prohibited from publishing, selling, or disseminating identifiable data tied to registered victims. Like the Do Not Call Registry,[234] this prohibition would not require outright deletion of data; it would mandate that covered data not be disclosed in any identifiable form. Even pseudonymous data, which risks reidentification, should be restricted, with dissemination permitted only in fully deidentified formats.
The statute should also impose a continuing obligation on brokers to monitor their systems to prevent reemergence of covered data. Automated processes should compare newly acquired data against records of registered victims, removing any flagged information before further dissemination. These monitoring obligations should extend beyond previously deleted data to include new details like updated addresses, phone numbers, or employment information tied to registered victims. Such mandates could provide victims long-term, dynamic protection, which would go far beyond the temporary relief offered by one-time removals.
In addition to refraining from disseminating non-deidentified victim data, brokers should also be required to notify entities in their data supply chain—both those they acquire data from and those they sell data to—when a dataset contains records about a registered victim. Inspired by GDPR principles,[235] this would promote compliance throughout the data ecosystem. For example, if a downstream buyer unknowingly receives covered data, the seller would be legally compelled to inform them to prevent further circulation. This would create a cascading effect that reduces the risk of victim information persisting in the ecosystem.
While the statute could allow certain exceptions for lawful obligations, fraud prevention, or protected First Amendment activities, brokers should be required to maintain an appeals process for disputes over opt-out requests. In contested cases, brokers should default to concealing the data, notifying the victim of their intent to invoke an exception, and providing an opportunity for the victim to challenge the exception’s applicability. Critically, the burden of proof would shift to the broker to justify the exception, reducing the procedural burden on victims.
Finally, brokers should be required to self-attest to compliance, make their books and systems available for impromptu inspection by governing agencies, and submit regular compliance reports detailing actions taken to honor opt-out requests, including records of flagged, deleted, or deidentified data. These measures would allow agencies such as the FTC or Consumer Financial Protection Bureau (CFPB) to audit broker activities, identify patterns of noncompliance, and enforce penalties where necessary.
5. Implementation
The successful implementation of this statutory regime depends on a robust and well-funded infrastructure, overseen by a competent government agency capable of managing its many responsibilities.[236] The agency should handle critical tasks such as maintaining central registries, verifying opt-out requests, monitoring broker compliance, and adapting to emerging risks in the data broker ecosystem. Two central registries are necessary: one for data brokers and another for individuals opting out of data dissemination. The agency should monitor compliance by requiring brokers to regularly query the registry, auditing their activity logs to detect anomalies, and addressing complaints from victims and brokers reporting noncompliance. Additionally, the agency could invest in ongoing research to refine key processes like hash matching, clustering, and deidentification, ensuring the system evolves alongside advancements in the broker ecosystem.
Given the wide-ranging responsibilities of the implementing agency, substantial funding will be essential. However, relying on government appropriations is impractical in the current political climate, especially at the federal level. To address this challenge, the statute should require brokers to pay tiered registration fees, scaled by their size or the volume of data they handle. These fees could create a sustainable revenue source to support the agency’s work, including audits, enforcement, public education campaigns, and even grants for domestic violence shelters or legal aid organizations that assist victims in navigating the opt-out process. Penalties collected from noncompliant brokers could also supplement this fund. This fee-based funding model, inspired by the Universal Service Fund in telecommunications, can support a comprehensive regulatory framework without relying on direct congressional appropriations.[237]
The statute’s penalty scheme should seek to protect victims and promote compliance from brokers. Civil penalties should escalate with the frequency and severity of violations, ensuring that repeat offenders face increasingly harsh consequences. Equally important is the inclusion of a private right of action, which could empower victims to hold brokers directly accountable. A private right of action should ideally allow victims to recover compensation from brokers for the harm they suffer due to the broker’s noncompliance. Even if victims are unable to sue for damages, however, a private right of action could allow victims to seek injunctive relief from noncompliant brokers—a faster avenue than waiting for agency action and cheaper than a civil suit for damages. If a private right of action proves to be politically or legally unfeasible, the tort system could offer an alternative mechanism for accountability. Though this system is imperfect, brokers who fail to meet their obligations might still be held liable under common-law tort doctrines for breaching a duty of care to victims.
C. Technical Design
This proposed statutory scheme provides the foundation for redistributing the burden of achieving obscurity from individuals to brokers, but its success hinges on the implementation of proper technical infrastructure. Without a reliable and carefully designed system to operationalize these rights and protections, even the best-intentioned law risks regulatory impotence. This Section outlines the technical design considerations of a centralized obscurity system predicated upon a government-maintained database of registered victims and interoperable standards to ensure uniform compliance across a fragmented broker ecosystem. The proposed system aims to avoid the pitfalls of a piecemeal approach to privacy self-management, offering a scalable and resilient centralized pathway to meaningful reform.
1. The Need for a Prescriptive Technical Solution
Allowing brokers to design and implement compliance systems independently would almost certainly lead to inconsistency, inefficiency, and opportunities for brokers to circumvent these rules in bad faith. Data brokers operate within a competitive market where incentives to comply rigorously with privacy protections often clash with motives to maximize profits.[238] Historically, self-regulation in industries with significant public interest has resulted in systems designed with inefficiencies—intentional or not—that become entrenched over time.[239] In some cases, companies purposefully design systems that are cumbersome and opaque to use.[240] Once these systems are in place, the architecture ossifies, and companies exploit the narrative that compliance is expensive and burdensome as a way to resist further regulation or even challenge existing mandates.[241]
A striking example is the ongoing failure to mandate true interoperability in health data across electronic health record (EHR) systems.[242] Despite laws like the Health Information Technology for Economic and Clinical Health Act[243] and subsequent interoperability initiatives, the EHR industry has built fragmented systems full of incompatible standards and data silos.[244] This lack of seamless interoperability is not accidental; it is a calculated feature of self-regulation, aimed at maintaining vendor lock-in and avoiding competition.[245] Similarly, the rollout of the CCPA[246] saw companies implement patchwork compliance mechanisms that confused consumers and obstructed the exercise of their rights under the statute.[247] These systems were later cited by industry advocates as evidence that compliance is too complicated or costly,[248] fueling lobbying efforts to water down subsequent enforcement or legislative expansion of the CCPA and other privacy statutes.
By learning from these examples, the proposed statutory scheme prioritizes a centralized technical framework to ensure consistency, eliminate inefficiencies, and prevent evasion. A legislated framework for implementation avoids the pitfalls of industry-designed systems, promoting compliance mechanisms that are transparent, effective, and resistant to weaponized inefficiency.
2. Central Victim Opt-Out Registry
The centralized database lies at the heart of this system, maintaining records of individuals who have opted out of having their personal information distributed by data brokers. Its design must prioritize data minimization, balancing functionality with privacy and security. The database should hold only what brokers need to identify and act on relevant records in their own systems—such as hashed combinations of names, dates of birth, Social Security numbers, and aliases—rather than a comprehensive repository of every data point a victim wishes to be removed. By limiting the scope of stored data, the database would be less attractive to hackers while still enabling brokers to meet their obligations. The system should also employ prevailing cybersecurity best practices to further secure information in the database.[249]
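To make this design concrete, consider a minimal sketch of how a registry record might be derived. The sketch is illustrative only—the statute would leave the precise scheme to the standards process described below—and the salt value, field choices, and helper names are our own assumptions. A deployed system would likely use a keyed or oblivious construction (discussed in the next subsection) so that tokens for common names cannot be guessed offline.

```python
import hashlib

# Illustrative sketch only: the salt value, field choices, and names are
# hypothetical assumptions, not statutory requirements.
PROTOCOL_SALT = b"public-protocol-salt-v1"

def normalize(value: str) -> str:
    """Canonicalize a field before hashing (lowercase, strip punctuation)."""
    return "".join(ch for ch in value.lower().strip() if ch.isalnum())

def token(*fields: str) -> str:
    """Derive a one-way hash over a combination of identifiers."""
    message = b"|".join(normalize(f).encode("utf-8") for f in fields)
    return hashlib.sha256(PROTOCOL_SALT + message).hexdigest()

# The registry stores only tokens, never raw identifiers.
registry_record = {
    "name_dob": token("Jane Q. Doe", "1990-01-31"),
    "email":    token("jane.doe@example.com"),
    "ssn":      token("123-45-6789"),
}
```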
3. Data Broker Queries to a Centralized Database
Data brokers would interact with the centralized database via a secure application programming interface (API), matching records in a way that minimizes strain on the central system and protects victim privacy. The challenge lies in allowing brokers to see whether the data they possess matches the data in the central registry without either learning the contents of the central registry or sharing all the personal information they possess with the central registry. API queries can address this challenge by employing advanced cryptographic techniques that allow brokers to compare the contents of their databases with the content in the central victim registry without exposing the information they possess or accessing information from the registry they do not already have.[250] Some techniques could enable brokers to identify matches without the central victim registry seeing their datasets,[251] while others could ensure the database comparisons occur only in encrypted form.[252] These methodologies would enable brokers to fulfill their compliance obligations without further compromising victims’ privacy.
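The following sketch illustrates the shape of such a query in deliberately simplified form: it substitutes a plain token-matching request for the private-set-intersection-style protocols described above, and the endpoint URL, field names, and authentication scheme are hypothetical placeholders.

```python
import requests

API_URL = "https://registry.example.gov/v1/match"  # hypothetical endpoint

def check_tokens(broker_id: str, api_key: str, tokens: list[str]) -> dict[str, bool]:
    """Ask the registry which of the broker's tokens match opted-out records."""
    response = requests.post(
        API_URL,
        json={"broker_id": broker_id, "tokens": tokens},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    matched = set(response.json()["matched"])  # registry returns matches only
    return {t: t in matched for t in tokens}
```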
To accommodate the variability in personal data records—such as nicknames, typos, or alternate spellings—the database should support approximate matching techniques.[253] Algorithms like fuzzy hashing[254] and Levenshtein distance[255] could allow brokers to identify close matches rather than relying on exact matches, promoting comprehensive compliance without forcing victims to list all possible variations of their data. Importantly, these approximate matching methods are compatible with the advanced cryptographic protocols the database could use to ensure API queries do not further compromise victim privacy. Brokers could use approximate matching locally with hashed data to identify variations without revealing their full dataset or accessing unrelated records in the central database. This would ensure that variability in data formats does not impede compliance while maintaining robust privacy protections.
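A worked example helps illustrate approximate matching. The sketch below implements the classic Levenshtein edit-distance algorithm and applies it locally, on the broker’s side, to values canonicalized with the normalize helper from the registry sketch above; the two-edit threshold is an arbitrary placeholder that, under our proposal, the overseeing agency would calibrate.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

def is_close_match(candidate: str, target: str, max_edits: int = 2) -> bool:
    """Treat small variations ('Jon Smith' vs. 'John Smith') as matches."""
    return levenshtein(normalize(candidate), normalize(target)) <= max_edits
```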
Each interaction between a broker and the central database would be logged, capturing timestamps, query metadata, and the broker’s unique identifier. These activity logs would create transparency and accountability, enabling the central database to monitor compliance. Alongside these logs, brokers should also submit compliance reports detailing queries conducted, matches identified, and actions taken, such as records deleted or deidentified. These broker-generated compliance reports would allow the central database to audit broker activities and identify discrepancies or patterns of noncompliance to reinforce the integrity of the framework and the protection of victims’ data.
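An audit record might look something like the following sketch; the fields shown track the metadata described above but are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass(frozen=True)
class QueryLogEntry:
    """One immutable audit record per broker query; fields are illustrative."""
    broker_id: str
    tokens_queried: int
    matches_returned: int
    query_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```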
To streamline the process of pushing registry updates to data brokers, the central database could offer webhook integration.[256] Brokers could subscribe to receive notifications when a registered victim updates or expands their covered data. These notifications would not disclose sensitive information but instead include a broker-specific reference ID and a directive to re-query the database. This approach would foster efficient compliance without exposing unrelated victim data.
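A webhook notification might take the following shape. The payload contains no victim data—only an opaque reference and a directive to re-query—and every name and value here is a hypothetical placeholder.

```python
# Hypothetical payload pushed by the registry; no sensitive data included.
example_payload = {
    "event": "registry.update",
    "broker_ref_id": "b7f3c2e1",   # opaque, broker-specific reference
    "action": "requery",           # directive: re-run the matching query
    "issued_at": "2026-01-15T12:00:00Z",
}

def schedule_requery(ref_id: str) -> None:
    """Stub: in practice this would enqueue a compliance job."""
    print(f"re-query scheduled for reference {ref_id}")

def handle_webhook(payload: dict) -> None:
    """Minimal handler: verify the event type, then schedule a re-query."""
    if payload.get("event") == "registry.update" and payload.get("action") == "requery":
        schedule_requery(payload["broker_ref_id"])
```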
4. Identifying Covered Victim Data
Brokers, upon receiving hashed identifiers for individuals who have opted out, should use these hashes to locate and remove or deidentify records containing direct PII, such as Social Security numbers, names, email addresses, and dates of birth. The ability to pinpoint an individual often arises from a combination of elements, such as a name paired with a date of birth or an email address tied to a Social Security number.[257] Using the provided hashed values, brokers could deploy automated matching algorithms to accurately locate and expunge these direct identifiers, ensuring victims’ key identity markers are no longer accessible within their systems.
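A broker-side pass over its own records might look like the sketch below, which reuses the token helper from the registry sketch above; the row structure and field names are hypothetical, and outright deletion is shown where deidentification would be an equally valid response.

```python
def scrub_direct_pii(rows: list[dict], matched_tokens: set[str]) -> list[dict]:
    """Drop rows whose identifier combinations match an opted-out record."""
    kept = []
    for row in rows:
        row_tokens = {
            token(row.get("name", ""), row.get("dob", "")),
            token(row.get("email", "")),
            token(row.get("ssn", "")),
        }
        if row_tokens & matched_tokens:
            continue  # suppress the record (deidentification is an alternative)
        kept.append(row)
    return kept
```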
However, obscurity cannot be achieved by removing direct PII alone. Abusers are often intimately familiar with their victims and can exploit otherwise vague or innocuous data to harm them.[258] To provide meaningful victim obscurity, brokers should also be compelled to identify and address indirect data points that, while not explicitly identifying an individual, could still expose them to harm.[259] Indirect data might include records tied to family members, roommates, or frequent contacts—information that an abuser could exploit to track or target a victim.[260] For instance, even if a victim’s personal address is removed, their residential location could be revealed through records associated with a roommate.
To achieve this kind of comprehensive coverage, brokers could employ the hashed identifiers from the central database as anchor points in their datasets to locate and address indirect or nonobvious data risks. By analyzing patterns and associations, such as shared addresses, linked phone numbers, or overlapping network connections, brokers could identify records indirectly tied to victims who have opted out. For example, if a hashed email address corresponds to a victim, brokers could identify other accounts registered at the same physical address or other individuals linked through shared data points. The government agency should also establish clear, actionable thresholds for clustering proximity, ensuring that brokers strike a balance between privacy and technical feasibility without overreaching into unrelated data.
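A deliberately simple version of this association analysis appears below: records sharing an address or phone number with a flagged victim record are themselves flagged, one hop out, reusing the normalize helper from the registry sketch. Field names are hypothetical, and real traversal depth and proximity thresholds would be set by the overseeing agency, as noted above.

```python
from collections import defaultdict

def flag_indirect(rows: list[dict], victim_row_ids: set[int]) -> set[int]:
    """Flag records one hop from a victim record via shared data points."""
    by_link: dict[str, set[int]] = defaultdict(set)
    for i, row in enumerate(rows):
        for key in ("address", "phone"):
            if row.get(key):
                by_link[f"{key}:{normalize(row[key])}"].add(i)
    flagged = set(victim_row_ids)
    for linked_ids in by_link.values():
        if linked_ids & victim_row_ids:  # one hop only; deeper traversal needs agency-set limits
            flagged |= linked_ids
    return flagged
```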
Placing the burden on brokers to identify and remove indirect data is both practical and justified. Brokers have unparalleled access to vast quantities of data, advanced analytical tools, and the technical expertise required to perform this task.[261] Victims, in contrast, lack both the resources and the visibility into the complex networks of data maintained by brokers, which limits their ability to achieve meaningful obscurity on their own. Moreover, brokers already use sophisticated clustering techniques for commercial purposes, such as building consumer profiles and linking related data across datasets.[262] Applying similar methods to identify indirect information tied to victims is not only feasible but also fair given brokers’ role in undercutting victim obscurity.
Brokers must also prevent obscured data from reentering their systems. Newly ingested datasets should be automatically compared against hashed identifiers already in their possession. If a match with previously removed information is detected, the system should trigger automatic obscurity workflows and notify the central database. This would promote ongoing compliance and protect victims from reemerging risks over time.
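An ingestion gate capturing this obligation might look like the sketch below, which composes the earlier scrubbing helper with a hypothetical reporting hook standing in for the notification to the central database.

```python
def notify_registry(broker_id: str, suppressed: int) -> None:
    """Stub for the compliance-reporting hook described above."""
    print(f"{broker_id}: {suppressed} re-emergent records suppressed")

def ingest(rows: list[dict], matched_tokens: set[str], broker_id: str) -> list[dict]:
    """Screen newly acquired data before it enters production systems."""
    clean = scrub_direct_pii(rows, matched_tokens)
    suppressed = len(rows) - len(clean)
    if suppressed:
        notify_registry(broker_id, suppressed)
    return clean
```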
5. Deidentification Standards
While deletion of personal data is a powerful tool for achieving victim obscurity, it is not the only means of protecting individuals. In some cases, deidentification could serve as an alternative that balances the need to conceal victims’ data with brokers’ business interests. To be a viable alternative to deletion, deidentification must render data irreversibly unlinkable to any individual and incapable of reidentification through direct or indirect methods.[263] This requires adhering to rigorous benchmarks, such as differential privacy standards, which introduce controlled randomness to obscure individual data points while maintaining the statistical integrity of datasets. In reality, though, especially with machine learning, deidentification is often imperfect. The question then becomes: How much deidentification should the law require?
By providing enforceable guidance on acceptable deidentification practices, government agencies could promote consistency across the industry and account for technological advancements that might otherwise render older techniques obsolete. Given that abusers often possess intimate knowledge of their victims,[264] the standards must be designed with the utmost care. Well-informed abusers may be able to reidentify information with fewer specific data points than the average person.[265] Brokers must therefore ensure that deidentification is robust enough to foil the most dedicated and sophisticated abusers, and the government should periodically audit broker deidentification methodologies to ensure brokers are employing state-of-the-art practices.[266] To further streamline the process and improve compliance, the FTC or a similar state agency could offer a centralized validation tool or API that brokers could use to test their deidentification methods against established benchmarks. This tool could promote deidentification practices that are robust, consistent, and aligned with regulatory expectations, providing both accountability and operational clarity.
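For a sense of what such benchmarks contemplate, consider the textbook Laplace mechanism from the differential privacy literature, sketched below for a simple counting query. The epsilon parameter is the privacy budget: smaller values add more noise and yield stronger protection. This is a standard construction offered for illustration, not a statutory requirement or a complete deidentification pipeline.

```python
import random

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to the privacy budget epsilon."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Example: a dataset release reports roughly 1,000 records at epsilon = 0.5.
print(laplace_count(1000, epsilon=0.5))
```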
6. Standards Development Process
The development of technical standards for this framework is just as important as the implementation of the framework itself. A well-structured standards development process not only promotes the system’s technical efficacy but also lends legitimacy and trust to its implementation. To this end, convening a diverse and knowledgeable standard-setting body is paramount. This body should include technical experts, privacy advocates, industry representatives, and government staff, cultivating a balanced approach that reflects the interests of all stakeholders while prioritizing victim protection and privacy. The technical community’s work to mitigate the misuse of Apple AirTags for stalking highlights the importance of involving subject-matter experts.[267] These experts would bring critical insights into how decisions involving technical design have real-world outcomes and can also anticipate potential risks and challenges.
Open standards should be adopted for API protocols, data formatting, clustering, deidentification, and cryptography to ensure cross-industry interoperability. The stakes for this process are particularly high given the immense resources and coordination required to build such a system. Once implemented, the framework will likely become entrenched, making significant redesigns or reversals exceedingly difficult. This reality underscores the importance of designing a solution that is robust and future-proof. An open, transparent, and inclusive standards development process safeguards against industry capture or arbitrage while ensuring that the system’s design is robust, fair, and adaptable to future challenges.
IV. Negotiating First Amendment Challenges
I believe that our concept of records and what needs to be public is not quite keeping up with the pace of technology. What these brokers are offering is not just something that you could go to the courthouse and get; it’s like an aggregation of everything that I didn’t necessarily provide . . . .
—Ella
Implementing a centralized obscurity system for abuse victims entails not only legislative and technical challenges but also constitutional ones. Even if legislators aspire to address brokered abuse, they might fear that constitutional doctrine will thwart their efforts. The regulation of information flows inevitably awakens the First Amendment Balrog.
Data brokers would have lawmakers and the public believe that laws—like the one proposed in this Article—regulating publicly available information face wholesale invalidation, or at the very least must face strict scrutiny.[268] While this Part ultimately maintains that a centralized obscurity proposal should survive even under strict scrutiny, along the way it also challenges the purportedly unavoidable assumption that the First Amendment even requires laws regulating brokers’ use of publicly available information to clear such a hurdle.
First Amendment analysis can be broken into two cascading inquiries: coverage and protection.[269] The coverage inquiry determines whether the First Amendment is even in play, while the protection inquiry subsequently assesses the law’s constitutionality when coverage is established.[270] Asserting that the First Amendment “covers” particular conduct means that First Amendment analysis is required to determine the constitutionality of a law regulating such conduct.[271] Asserting that the First Amendment “protect[s]” such conduct means that the law is unconstitutional.[272] This Part begins by evaluating the coverage inquiry, casting skepticism on the presumption of First Amendment coverage for the regulation of brokers’ sale of publicly available information. It then moves to the protection inquiry to contemplate whether such speech, if covered, is commercial or noncommercial—that is, whether it warrants protection under intermediate or strict scrutiny. Finally, even if subject to strict scrutiny, this Part argues that the proposed centralized obscurity system would survive that standard despite its regulation of publicly available information.
A. Constitutional Coverage: Data Brokers as Navigational Maps?
While the Supreme Court has explained that First Amendment coverage should adapt to evolving media of communication,[273] the data economy raises new questions about what activities the First Amendment covers.[274] As Robert Post theorizes, “First Amendment coverage is triggered by those forms of social interaction that realize First Amendment values . . . [and] extends to [media] that realize First Amendment values.”[275] Scholars have spilled much ink over the animating values of free speech, often centering the protection on three general ideals:[276] (1) marketplace of ideas,[277] (2) individual autonomy,[278] and (3) participatory democracy.[279] Any constitutional challenge to our proposal should begin by assessing whether data brokerage represents a medium that realizes First Amendment values, and thus warrants coverage.[280]
Regardless of where one locates free-speech values, “listener-based educative theory underlies much First Amendment doctrine.”[281] In the context of regulating brokers, listeners’ rights are particularly salient. Listeners’ rights go hand in hand with access to information,[282] and data brokers market themselves as the keyholders to the Library of Alexandria.[283] However, marketplace-of-ideas and participatory-democracy theories of free speech view listeners’ rights in meaningfully different ways that affect the coverage inquiry.
Jane Bambauer contends that data’s potential to inform justifies its classification as speech.[284] According to Bambauer, the coverage question is not whether data is speech in a metaphysical sense, but rather whether the regulation “deliberately interferes with an individual’s effort to learn something new . . . .”[285] In her view, First Amendment coverage should extend to laws that “target[] information-gathering for the very purpose of disrupting it.”[286] While some courts have effectively adopted this view,[287] this coverage analysis arguably privileges a particularly expansive marketplace-of-ideas theory,[288] often to the detriment of public discourse.[289]
Bambauer’s scientific-method framing offers a compelling take on a marketplace-of-ideas theory,[290] but contemporary courts might be increasingly concerned that this will lead to coverage creep and sanitize First Amendment values. Regulating the public’s access to information might not always trigger First Amendment scrutiny under a participatory-democracy view of the First Amendment. Post stresses the importance of the relationship between speaker and listener.[291] To truly serve First Amendment values, he argues, media of communication must do more than “facilitate the communication of particularized messages,” and “the facilitation of communication is not by itself a sufficient reason for social conventions to be valued by the First Amendment.”[292] Under a Postian participatory-democracy theory of free speech, data dossiers might not receive First Amendment protection. Akin to how navigation charts communicate “monologically to their audience,” data brokers’ dossiers speak monologically to their clientele of private parties.[293] Rote conveyance of personal data functions in a similar fashion to a map or other reference source. The consumer, or audience, “assume[s] a position of dependence” and relies on the data as unadulterated fact.[294] Facts, or information, alone do not necessarily constitute constitutionally salient speech.[295] Unless a speaker imbues such facts with an expressive or communicative “use” to express a message, facts alone might not constitute covered speech.[296] Data dossiers, like navigation charts, arguably function as products that lack the kinds of social interactions that realize First Amendment values.[297]
Rather than focusing narrowly on information flows, Post emphasizes the constitutional salience of public discourse.[298] This notion, too, could affect the coverage analysis for data brokerage. Drawing on Supreme Court doctrine,[299] Post raises the “paradox of public discourse,”[300] which posits that public discourse can only perform its constitutional function “if it is conducted with a modicum of civility.”[301] Although demanding civility may constrain speech, sufficiently abusive and alienating public discourse could lead individuals to recoil from engaging in public discourse to influence the construction of public opinion.[302] If incivility is left to fester, public discourse will fail to foster a sense of legitimacy and participation, and the rationale for safeguarding the principle will wane.[303] It is precisely this line of thought that leads Post to the conclusion that the “right to be forgotten” is compatible with the democratic function of public discourse.[304]
Sometimes when you wield a constitutional hammer, everything looks like a nail. And no constitutional right possesses more social and rhetorical power than the First Amendment and freedom of speech.[305] Frederick Schauer refers to this phenomenon as First Amendment magnetism.[306] First Amendment magnetism characterizes the “accelerating attempt to widen the scope of First Amendment coverage to include actions and events traditionally thought to be far removed from any plausible conception of the purposes of a principle of free speech.”[307] However, in an age of rapid First Amendment expansionism, some courts might scrutinize the coverage question to avoid First Amendment creep serving as a tool of deregulation.[308]
It is precisely these deregulatory “perils of Volokhner” that underpin Neil Richards’s contention that privacy regulation and speech regulation need not be in tension.[309] Richards challenges the assumption that information flows constitute speech and therefore fall within the ambit of the First Amendment.[310] In his view, such an absolutist approach to First Amendment coverage fails to adequately question the “constitutional metaphysics of ‘speech.’”[311] Calling “things ‘speech’ or ‘not speech’” might spike judicial anxiety,[312] but courts might be persuaded by the chorus of scholars calling on them to police the boundaries of coverage given the First Amendment’s deregulatory expansion.[313] While privacy must be squared with First Amendment interests, privacy often gets the short end of the stick.[314]
First Amendment questions raised by the digital age invite us to set aside our casebooks and let more elemental constitutional inquiries come to the fore.[315] Even if some contemporary doctrine suggests that data dossiers might be covered, courts should interrogate whether such a conclusion serves the First Amendment’s animating values. Laws like the one proposed in this Article force us to reckon with the costs of First Amendment expansionism, yet they might also provide an opportunity to pump the brakes and demand greater introspection on how constitutional coverage reflects socio-constitutional values.
B. Constitutional Protection: The Commerciality Conundrum
Given the expansion of First Amendment coverage,[316] courts may well extend coverage to data brokers’ sites. Presuming coverage, the protection inquiry begins. Before assessing the constitutionality of a law regulating the dissemination of abuse victims’ data, courts would need to determine the proper level of constitutional scrutiny. If they deem the dissemination of abuse victims’ data to be noncommercial speech, the law must survive strict scrutiny rather than the intermediate scrutiny applied to commercial speech.[317]
1. Commercial Speech: Dossiers v. News
Historically, the First Amendment did not protect commercial speech.[318] However, the Court determined in Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Inc. that commercial speech warrants constitutional protection, albeit lesser protection than noncommercial speech.[319] The Court laid out the contours of this diminished protection in Central Hudson, articulating a four-part test.[320] First, commercial speech “must concern lawful activity and not be misleading.”[321] If the speech clears this initial threshold, then the state may only regulate if (1) the “government interest is substantial,” (2) the regulation “directly advances the governmental interest asserted,” and (3) the regulation is “not more extensive than is necessary to serve that interest.”[322] While the Court in Sorrell v. IMS Health Inc. hinted at heightened scrutiny for content-based and viewpoint-based regulation of data flows,[323] the Court’s silence on the contours of this heightened scrutiny has often led lower courts to continue to apply a version of the Central Hudson test.[324]
Meanwhile, noncommercial speech sits at the core of canonically protected speech, and once information has entered the public sphere, the First Amendment generally precludes the government from restricting its subsequent use.[325] Once information has been “publicly revealed”[326] and “widely disseminated”[327] to the general public, it is unconstitutional to “restrain its dissemination”[328] and retract it from the “public domain.”[329]
Cox Broadcasting Corp. v. Cohn[330] and Florida Star v. B.J.F.[331] present two key cases in this regard.[332] In Cox, the television station broadcast a rape and murder victim’s name that court records had already “publicly revealed,”[333] while the newspaper in Florida Star published a rape victim’s name derived from a “publicly released police report.”[334] In both cases, the Court held that after the information entered “the public domain,” the First Amendment protected the use of that information.[335]
The Court came to a similar conclusion in Smith v. Daily Mail Publishing Co.,[336] even though the information came from a nongovernmental source. In Smith, two reporters learned the name of a teenage boy who killed his classmate from individuals present at the crime scene.[337] Following the airing of the boy’s name by several radio stations, newspapers printed the name and were indicted under a state law that made it a crime to publish the names of juvenile arrestees without a court’s written approval.[338] The Court held that the First Amendment prohibited the state from punishing the publication of the information.[339] Despite recognizing that prior cases involved the governmental release of information, the Court downplayed this distinction, explaining that the public “cannot be made to rely solely upon the sufferance of government to supply it with information.”[340] The information’s source did not matter as much as the fact that the information had already entered the public domain.[341]
Applying this doctrinal backdrop to this Article’s central proposal, does data brokers’ commercial dissemination of personal data warrant protection as commercial speech or noncommercial speech? At least according to one court, the answer might hinge on whether data brokers serve a newsgathering function that informs the public.[342]
On the one hand, data brokers collect information from diverse sources, collate it, and share it publicly. They charge for access to this information, but the New York Times and countless other publications also disseminate information for profit. Framed in this way, data brokers’ sale of personal information might seem eligible for protection as noncommercial speech, akin to that enjoyed by newspapers. Courts might doubt, however, that the private sale of data dossiers represents a journalistic endeavor. While private dossiers have the potential to inform, they are rarely (if ever) “an effort to engage public opinion.”[343]
The court in Brooks v. Thomson Reuters Corp.[344] expressed these very doubts in a case related to brokered dossiers:
Thomson Reuters is not a journalist performing a “public benefit” by making Plaintiffs’ personal information available to the public. Rather, the company’s dissemination of this information only benefits the private parties who purchase the [company’s] dossiers. All the other cases cited by Thomson Reuters to suggest that there is no privacy right in speech derived from public records are similarly inapposite because they involve journalists disclosing publicly available information to the general public.[345]
The Brooks court draws an appealing distinction between brokers selling data dossiers to private parties and journalists disclosing information to the general public.[346] The Supreme Court drew a similar distinction in Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc.,[347] where a nonmedia information distributor sought the same First Amendment protections as media defendants in defamation actions.[348] The Court found that the sale of credit reports was “speech solely in the individual interest of the speaker and its specific business audience” and was “solely motivated by the desire for profit.”[349] Accordingly, the speech did not address a matter of public concern and received only the diminished protection afforded to commercial speech.[350] Likewise, data brokers—the largest of which are credit reporting agencies[351]—sell dossiers purely from a place of financial self-interest.[352]
Data brokers’ central practice is to sell a product to private parties, not to engage public opinion as a journalistic purveyor of information. Common sense counsels us to look past any journalistic façade that brokers might suggest. Despite the industry’s claims to the contrary, it is anything but clear that brokers should receive noncommercial speech protection rather than the diminished protection afforded to commercial speech.
2. Noncommercial Speech: Passing Strict Scrutiny
Even if courts determine that data brokers’ dissemination of abuse victims’ information—especially publicly collected information—should undergo strict scrutiny, this Article’s central proposal meets this constitutional bar.
To pass strict scrutiny, the government must first demonstrate a compelling government interest.[353] Public health and safety is a classic example of a compelling government interest,[354] and the protection of abuse victims from the primary harms of brokered abuse certainly fits within the ambit of these core governmental concerns.[355]
The government also has a compelling interest in protecting victims from the secondary harms of brokered abuse, including the pressure they feel to withdraw from the public sphere. The privacy and expressive interests of abuse victims too often go unnoticed, as with other marginalized communities.[356] While our proposal might implicate the First Amendment rights of data brokers and those who benefit from information capitalism, obscurity rights might also empower abuse victims to engage in First Amendment activity, rather than suffer the chilling effect of withdrawing from society in hopes of safety.[357]
For abuse victims, privacy might be a necessary precondition for self-expression.[358] While First Amendment doctrine does not generally employ balancing tests,[359] we need not ignore the competing First Amendment interests at play, including their relation to democratic self-governance.[360] Indeed, abuse victims might even refrain from voting—the fundamental right undergirding participatory democracy—for fear that data brokers will scrape their information from public voting rolls and make them readily accessible to their abusers.[361] There is surely a formidable governmental interest in ensuring all citizens feel safe exercising their right to vote. Ultimately, a strict scrutiny analysis will likely hinge on whether the regulation is narrowly tailored rather than whether a compelling government interest exists,[362] but it is vital to foreground the stakes at play here.
Turning to whether our proposal is narrowly tailored, brokers would likely challenge it as both overinclusive and underinclusive.[363] The former will likely rest on a claim that our proposal restricts more speech than necessary to advance its aim of protecting victims from brokered abuse. Brokers might point to the broad definitions of “data broker” and “covered data” to demonstrate the overinclusive sweep of the regulation. While it is true that the proposal includes capacious definitions, it does so to effectively achieve the aim of protecting victims from brokered abuse. To promote safety through obscurity, the regulation must cover the entire supply chain to minimize the potential for data leakage, especially given the potentially severe consequences of such a leak for abuse victims. The same logic justifies expansive coverage of data, given how indirect data might allow determined abusers to locate victims through a proxy. Narrower measures would leave abuse victims at continued risk.
The sweep of our proposed statute is significantly limited by its verification requirement and aggregated, deidentified data carveout. Brokers have raised the lack of verification requirements to argue existing nondisclosure laws are overinclusive.[364] Our proposal, however, restricts access to the centralized obscurity system remedy in two ways. First, the proposal would limit protected persons to those who have experienced specific forms of abuse.[365] Second, the proposal implements a self-attestation regime where victims submit sworn statements affirming their eligibility to access the centralized obscurity system. This dual-layered approach balances the competing need to provide abuse victims unencumbered access to the system’s protections while also ensuring the system adequately limits this obscurity remedy to abuse victims. The proposal also limits covered data to PII, carving out aggregated, deidentified data entirely. Bulk, deidentified transactions do not meaningfully implicate abuse victims’ safety, and they are core to the lucrative marketing and advertising data economy. Therefore, the proposal covers data that meaningfully implicates abuse victim safety while balancing brokers’ business interests.
Brokers have also argued that laws solely regulating commercial disclosures of personal data are underinclusive because public agencies can often still disclose the same personal data.[366] While our proposed statute would not restrict all governmental disclosures of covered personal data, a regulation need not be perfectly tailored to pass strict scrutiny.[367] Here, the functional aim of the regulation is practical obscurity for abuse victims.[368] Data brokers provide frictionless personal data dossiers as a service. The same publicly available personal data may be accessed through FOIA requests, but such processes demand tailored requests—often requiring specification of the desired data and the agency that should field the request—and take time to process. Convenience is as central to the product as the data itself. The regulation does not strive to prevent all access to abuse victims’ data but rather to stem the tide of abuse, and re-abuse, that arises from instantaneous digital access to troves of frequently refreshed personal data at the click of a button for a nominal expense.
In a related domain, the Supreme Court has recognized the significance of practical obscurity, coining the term itself in United States Department of Justice v. Reporters Committee for Freedom of the Press.[369] Writing for the Court, Justice Stevens echoed Ella’s concern about a new age of instantaneous access to collated public information:
Plainly there is a vast difference between the public records that might be found after a diligent search of courthouse files, county archives, and local police stations throughout the country and a computerized summary located in a single clearinghouse of information.[370]
In recognizing this, the Court seemed to consider the medium of dissemination to be as important as the information itself when determining whether “the compilation of otherwise hard-to-obtain information alters the privacy interest implicated by disclosure of that information.”[371] The question is less about limiting access to public information and more about the ease of that access. Similarly, here, while the regulation would still allow for targeted data requests from public agencies, its reversion to the practical-obscurity status quo—where discrete information must be accessed through FOIA requests to specific public agencies—materially advances the government’s compelling interest in protecting victims from abuse.
At bottom, under any application of the First Amendment, this Article’s proposal passes constitutional muster. The First Amendment does not serve as an impenetrable shield for an industry whose business model amplifies danger rather than democratic discourse. When regulation targets the near-frictionless compilation and dissemination of information that places abuse victims in harm’s way, it vindicates rather than violates core First Amendment values.
Conclusion
Brokered abuse represents a fundamental failure of privacy law—an abdication of policymakers’ responsibility to prioritize human safety over corporate profit. Victims of abuse should not have to navigate an insurmountable maze of data broker opt-out processes to achieve the basic security of online obscurity. This Article underscores the urgent need for an enforceable, centralized obscurity system that redistributes the burden of achieving obscurity from victims to the data brokers profiting off their vulnerability. By mandating an obscurity system that leverages brokers’ insights into the informational ecosystem and their sophisticated data-processing technologies, regulatory intervention can provide a sustainable solution that promotes victims’ safety without retraumatizing them.
However, any regulatory intervention must be designed with constitutional resilience in mind, particularly in the face of inevitable First Amendment challenges. The broker industry would surely argue that restrictions on the dissemination of data dossiers, particularly their publicly available components, violate brokers’ right to free speech. To ensure a legally durable regulatory solution to brokered abuse, policymakers must craft a system that is narrowly tailored to achieve the compelling government interest of protecting individuals from stalking, harassment, and violence.
Looking ahead, privacy law should do more to center victims of abuse. If lawmakers fail to consider their unique vulnerabilities, the cycle of harm will only deepen, leaving countless people at risk. Our proposal is no panacea, but it offers regulators a concrete measure to begin addressing the harms of brokered abuse.
Copyright © 2026 Chinmayi Sharma*, Thomas E. Kadri** & Sam Adler***
* Associate Professor of Law, Fordham Law School.
** Associate Professor, University of Georgia School of Law; Affiliate Faculty, University of Georgia Institute for Women’s & Gender Studies; Legislative & Policy Director, Clinic to End Tech Abuse at Cornell University.
*** J.D. Candidate, Fordham Law School. For feedback on earlier versions of this project, we thank Ariana Aboulafia, RonNell Andersen Jones, Jane Bambauer, Elettra Bietti, Hannah Bloch-Wehba, Ryan Calo, Ignacio Cofone, Julie Cohen, Nicki Dell, Amy Gajda, Michael Goodyear, Yael Grauer, Nikolas Guggenberger, Woodrow Hartzog, Mike Hintze, Leigh Honeywell, Ido Kilovaty, Anne Klinefelter, Mark Lemley, Lyrissa Lidsky, Andrew Miller, Christopher Morten, Mark Nottingham, Paul Ohm, Natalia Pires de Vasconcelos, Robert Post, Chris Riley, Ani Satz, Evan Selinger, Scott Skinner-Thompson, Olivier Sylvain, Eugene Volokh, Rachel Vrabec, Ari Waldman, George Wang, Rebecca Wexler, Felix Wu, and Carly Zubrzycki, as well as other participants at the Consumer Law Scholars Conference at Boston University School of Law, Freedom of Expression Scholars Conference at Yale Law School, Privacy Law Scholars Conference at UCLA School of Law and at Northeastern University, UGA-Emory Faculty Workshop, UGA Institute for Women’s Studies Lunch Series, University of North Carolina School of Law Faculty Workshop, Southeastern Junior/Senior Legal Scholars Conference, Journal of Free Speech Law Symposium on Media and Society After Technological Disruption, Junior Law & Tech Scholars Workshop, and UCLA School of Law Institute for Technology Law & Policy Panel on Anonymity and Tech. Authors are listed from most to least Kafkaesque.
[1]. Interview with Ella (May 31, 2022) [hereinafter Ella Interview]. All subsequent quotations and statements related to Ella’s story are from this interview and will not be cited repeatedly for readability. To protect her anonymity, Ella is a pseudonym.
[2]. See Thomas E. Kadri, Brokered Abuse, 3 J. Free Speech L. 137, 138–39 (2023); Sam Adler, Thomas E. Kadri & Chinmayi Sharma, Brokered Violence: Safety for Sale in the Free Marketplace of Data, Lawfare (Aug. 8, 2025), https://www.lawfaremedia.org/article/brokered-violence--safety-for-sale-in-the-free-marketplace-of-data [https://perma.cc/EE9C-3APW].
[3]. See, e.g., Danielle Keats Citron, Mainstreaming Privacy Torts, 98 Calif. L. Rev. 1805, 1817–19, 1834–35 (2010) (discussing the physical harms that can be associated with information disclosures made by data brokers and other online platforms); Kaveh Waddell, How FamilyTreeNow Makes Stalking Easy, Atlantic (Jan. 17, 2017), https://www.theatlantic.com/technology/archive/2017/01/the-webs-many-search-engines-for-your-personal-information/513323 [https://perma.cc/854H-SBBL] (reporting on how brokered data can facilitate stalking).
[4]. See Danielle Keats Citron, A New Compact for Sexual Privacy, 62 Wm. & Mary L. Rev. 1763, 1788–89 (2021); Margaret B. Kwoka, FOIA, Inc., 65 Duke L.J. 1361, 1376–1401 (2016); David E. Pozen, Transparency’s Ideological Drift, 128 Yale L.J. 100, 125 (2018); Theodore Rostow, What Happens When an Acquaintance Buys Your Data?: A New Privacy Harm in the Age of Data Brokers, 34 Yale J. on Regul. 667, 669 (2017).
[5]. Amy Gajda, Seek and Hide: The Tangled History of the Right to Privacy 231–41 (2022); Andy Z. Wang, Network Harms, 91 U. Chi. L. Rev. 2093, 2094–95 (2024); Danielle Keats Citron, Reservoirs of Danger: The Evolution of Public and Private Law at the Dawn of the Information Age, 80 S. Cal. L. Rev. 241, 246–51 (2007) (discussing the relationship between private-sector databases and commercial data brokers); Citron, supra note 4, at 1788.
[6]. Chris Jay Hoofnagle, Big Brother’s Little Helpers: How ChoicePoint and Other Commercial Data Brokers Collect and Package Your Data for Law Enforcement, 29 N.C. J. Int’l L. & Com. Regul. 595, 595 (2004); Citron, supra note 4, at 1789.
[7]. See generally Woodrow Hartzog & Evan Selinger, Surveillance as Loss of Obscurity, 72 Wash. & Lee L. Rev. 1343 (2015) (exploring how obscurity as a privacy interest rests on the difficulty and probability of discovering or understanding information). See also Ignacio N. Cofone & Adriana Z. Robertson, Privacy Harms, 69 Hastings L.J. 1039, 1053 (2018) (reminding us that “informational privacy is really about levels of privacy,” rather than “about having privacy or not”); infra Part I.
[8]. See Ignacio Cofone, Privacy Standing, 2022 U. Ill. L. Rev. 1367, 1403–07 (interrogating the connection between privacy and physical harm); see also Danielle Keats Citron & Daniel J. Solove, Privacy Harms, 102 B.U. L. Rev. 793, 831–34 (2022) (offering examples of information disclosures by data brokers and other entities that resulted in violence).
[9]. See Kadri, supra note 2, at 138.
[10]. See infra Part I.A.1; Kadri, supra note 2, at 150. See generally Cofone, supra note 8, at 1401–07 (outlining how privacy invasions can cause “a distinct set of harms in addition to privacy harms,” including reputational, financial, discriminatory, bodily, and autonomy harms); Danielle Keats Citron, Sexual Privacy, 128 Yale L.J. 1870 (2019) (discussing how networked technologies have facilitated various forms of interpersonal abuse).
[11]. See Daniel J. Solove & Danielle Keats Citron, Risk and Anxiety: A Theory of Data-Breach Harms, 96 Tex. L. Rev. 737, 763 (2018).
[12]. Mara Hvistendahl, I Tried to Get My Name off People-Search Sites. It Was Nearly Impossible., Consumer Reps. (Aug. 20, 2020), https://www.consumerreports.org/electronics/personal-information/i-tried-to-get-my-name-off-peoplesearch-sites-it-was-nearly-a0741114794 [https://perma.cc/BF54-Q4KD].
[13]. Id.
[14]. Kadri, supra note 2, at 153. Danielle Citron and Daniel Solove have offered a similar argument about economic harms resulting from privacy violations. See Citron & Solove, supra note 8, at 835–86 (discussing how some information disclosures can result in a “loss of productivity or time to deal with privacy violations”). There is a subtle difference between the privacy harm in Citron and Solove’s taxonomy and our conception of secondary harms from brokered abuse. While they rightly focus on how privacy violations can cause financial injuries and then also cause a “loss of quality time” as people struggle to deal with the fallout from those privacy violations, we highlight the emotional and financial burdens caused by the legally constructed process of vindicating one’s privacy rights and otherwise engaging in privacy self-management. Compare id., with infra Part I.A.2.
[15]. See Ari Ezra Waldman, Privacy’s Rights Trap, 117 Nw. U. L. Rev. Online 88, 91–92 (2022).
[16]. Daniel J. Solove, Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880, 1880–83, 1888–93 (2013).
[17]. Id. at 1880.
[18]. See generally Ella Corren, Gaining or Losing Control? An Empirical Study on the Real Use of Data Control Right and Policy Implications, 109 Iowa L. Rev. 2017 (2024). See also Solove, supra note 16, at 1883–93; Ella Corren, The Consent Burden in Consumer and Digital Markets, 36 Harv. J.L. & Tech. 551, 564–67 (2023) [hereinafter Corren, The Consent Burden].
[19]. See infra Part I.
[20]. See Woodrow Hartzog, What is Privacy? That’s the Wrong Question, 88 U. Chi. L. Rev. 1677, 1683 (2021) (lamenting that few privacy laws “are aimed at disrupting power disparities between people and companies” or “protecting individuals from harassment”).
[21]. See generally Danielle Keats Citron, Hate Crimes in Cyberspace (2014) (documenting the myriad burdens experienced by victims of technology-enabled abuse).
[22]. See infra Part II; cf. Citron, supra note 5, at 283–87 (discussing how the operators of private databases are the least-cost avoiders to protect people’s information from hacks and leaks). For foundational work on the law-and-economics concept of least-cost avoiders, see generally Guido Calabresi, The Costs of Accidents: A Legal and Economic Analysis (1970) and Guido Calabresi & Jon T. Hirschoff, Toward a Test for Strict Liability in Torts, 81 Yale L.J. 1055 (1972).
[23]. It is important to stress that our proposal is not the sole legal or technological measure that could disrupt the broker industry or address its harms. Other regulatory measures can—and likely should—be pursued in tandem. See, e.g., Danielle Keats Citron, Intimate Privacy in a Post-Roe World, 75 Fla. L. Rev. 1033, 1062–71 (2023) (outlining reforms to curb the corporate collection and sale of intimate data); Helen Nissenbaum, Katherine Strandburg & Salomé Viljoen, The Great Regulatory Dodge, 37 Harv. J.L. & Tech. 1231, 1261–64 (2023) (proposing approaches to creating more contextually sensitive and comprehensive privacy laws). We will return to this point at the very end of this Article.
[24]. See generally Fed. Trade Comm’n, Data Brokers: A Call for Transparency and Accountability (2014) (examining the lack of transparency and regulation to address risks posed by the broker industry). See also infra Part I.A.1.
[25]. Other scholars have proposed centralized processes to regulate information flows in related contexts. Danielle Citron, for example, has suggested that a “one-stop shop for deletion” could apply more narrowly to “intimate data” and any company “holding intimate data.” Danielle Keats Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age 164–65 (2022); see also Citron, supra note 4, at 1768 (defining “intimate privacy” as being concerned with “information about, and access to, the body, particularly the parts of the body associated with sex, gender, sexuality, and reproduction”). Lauren Willis, meanwhile, has raised the idea of a do-not-track default rule that would bar websites from tracking consumer internet use unless the consumer had signed up for a government-run “Track Me” registry. See Lauren E. Willis, When Nudges Fail: Slippery Defaults, 80 U. Chi. L. Rev. 1155, 1218–19 (2013).
[26]. S.B. 362, 2023 Leg., Reg. Sess. (Cal. 2023) (California’s DELETE Act); see also infra Part III.A.
[27]. Legis. B. 602, 109th Leg., 1st Sess. (Neb. 2025) (proposed Data Elimination and Limiting Extensive Tracking and Exchange Act); S.B. 2121, 89th Leg., Reg. Sess. (Tex. 2025); S.B. 1343, 89th Leg., Reg. Sess. (Tex. 2025); H.B. 4, 88th Leg., Reg. Sess. (Tex. 2023); H.B. 121, 2024 Gen. Assemb., Reg. Sess. (Vt. 2024) (vetoed by Vermont Governor June 2024); see also Suzanne Smalley, Delete-Your-Data Laws Have a Perennial Problem: Data Brokers Who Fail to Register, Record (Oct. 17, 2023), https://therecord.media/state--registries-california-vermont [https://perma.cc/3MXN-J6ZD].
[28]. H.R. 4311, 118th Cong. § 2(b) (2023); see also infra Part III.A.
[29]. See infra Parts III.A.4, IV.
[30]. See Frederick Schauer, The Politics and Incentives of First Amendment Coverage, 56 Wm. & Mary L. Rev. 1613, 1617 (2015); Amanda Shanor, First Amendment Coverage, 93 N.Y.U. L. Rev. 318, 322 (2018); Robert Post & Amanda Shanor, Commentary, Adam Smith’s First Amendment, 128 Harv. L. Rev. F. 165, 166–67 (2015) (“Across the country, plaintiffs are using the First Amendment to challenge commercial regulations, in matters ranging from public health to data privacy.”).
[31]. See Letter from Philip Recht, Partner, Mayer Brown LLP, to Kesha Ram Hinsdale, Sen., Vt. Gen. Assemb. 4–7 (Apr. 2, 2024).
[32]. See Cal. Civ. Code § 1798.99.86 (West 2025); H.R. 4311 § 2(a)(1)(A).
[33]. See generally Ari Ezra Waldman, Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power (2021) (interrogating the corporate influence that can weaken privacy legislation).
[34]. See, e.g., Evan Selinger & Woodrow Hartzog, Obscurity and Privacy, in Spaces for the Future: A Companion to Philosophy of Technology 119 (Joseph C. Pitt & Ashley Shew eds., 2018); Woodrow Hartzog & Frederic Stutzman, Obscurity by Design, 88 Wash. L. Rev. 385 (2013) [hereinafter Hartzog & Stutzman, Obscurity by Design]; Woodrow Hartzog & Frederic Stutzman, The Case for Online Obscurity, 101 Calif. L. Rev. 1 (2013) [hereinafter Hartzog & Stutzman, Case for Online Obscurity].
[35]. See generally, e.g., Thomas E. Kadri, Networks of Empathy, 2020 Utah L. Rev. 1075; Kadri, supra note 2; Janet X. Chen, Allison McDonald, Yixin Zou, Emily Tseng, Kevin Roundy, Acar Tamersoy, Florian Schaub, Thomas Ristenpart & Nicola Dell, Trauma-Informed Computing: Towards Safer Technology Experiences for All, 2022 Procs. CHI Conf. on Hum. Factors Computing Sys. 1; Diana Freed, Jackeline Palmer, Diana Minchala, Karen Levy, Thomas Ristenpart & Nicola Dell, “A Stalker’s Paradise”: How Intimate Partner Abusers Exploit Technology, 2018 Procs. CHI Conf. on Hum. Factors Computing Sys. 1.
[36]. See, e.g., Neil Richards & Woodrow Hartzog, Taking Trust Seriously in Privacy Law, 19 Stan. Tech. L. Rev. 431, 444 (2016) (arguing that people cannot adequately make choices to protect their information); Solove, supra note 16, at 1882–83.
[37]. Genevieve Lakier, The First Amendment’s Real Lochner Problem, 87 U. Chi. L. Rev. 1241, 1241 (2020); see also Evelyn Douek & Genevieve Lakier, Lochner.com?, 138 Harv. L. Rev. 100, 103 (2024); Amanda Shanor, The New Lochner, 2016 Wis. L. Rev. 133.
[38]. See infra Part I.A.2.
[39]. See infra Part II; see also Citron, supra note 25, at 14 (asserting that the broker industry generates $200 billion annually).
[40]. See infra Parts II, III.B–C; see also Citron, supra note 25, at 164–65.
[41]. Often, but not always. See generally Thomas E. Kadri, Platforms as Blackacres, 68 UCLA L. Rev. 1184, 1222–49 (2022) (discussing First Amendment doctrine governing information that has entered the public sphere); Daniel J. Solove, Access and Aggregation: Public Records, Privacy and the Constitution, 86 Minn. L. Rev. 1137, 1200–17 (2002); Molly Cinnamon, You Have the Right to Be Deleted: First Amendment Challenges to Data Broker Deletion Laws, 9 Geo. L. Tech. Rev. 492 (2025). Courts, too, are no longer avoiding these questions, in part because legislators are calling this a First Amendment question. See generally Kratovil v. City of New Brunswick, 336 A.3d 201 (N.J. 2025); Adler, Kadri & Sharma, supra note 2.
[42]. See infra Part IV.A.
[43]. See infra Part IV.B.
[44]. See infra Part IV.C.
[45]. See id.
[46]. See Citron, supra note 5, at 283–87; sources cited supra note 22.
[47]. See infra Parts II.A.4, II.A.5.
[48]. See infra Parts III.A.1–A.4.
[49]. See infra Part III.C.
[50]. See infra Part IV.
[51]. This Article focuses on the concept of obscurity within broader privacy law discourse. Obscurity offers victims a more expansive and operational remedy to the harms of data-broker-enabled surveillance. See generally Hartzog & Stutzman, Case for Online Obscurity, supra note 34; Hartzog & Stutzman, Obscurity by Design, supra note 34; Hartzog & Selinger, supra note 7; Woodrow Hartzog & Evan Selinger, Obscurity: A Better Way to Think About Your Data Than ‘Privacy,’ Atlantic (Jan. 17, 2013), https://www.theatlantic.com/technology/archive/2013/01/obscurity-a-better-way-to-think-about-your-data-than-privacy/267283/ [https://perma.cc/PJ5Y-VME4].
[52]. See generally Waldman, supra note 15 (critiquing the law’s reliance on individualistic privacy protections); Solove, supra note 16, at 1880 (raising concerns about privacy self-management).
[53]. See Cofone & Robertson, supra note 7, at 1049–55; Cofone, supra note 8, at 1367; Citron & Solove, supra note 8, at 830–61; see also Scott Skinner-Thompson, Agonistic Privacy & Equitable Democracy, 131 Yale L.J.F. 454, 456 (2021).
[54]. See Solove, supra note 16, at 1881.
[55]. See Kadri, supra note 35, at 1078–80, 1118–19 (arguing that empathy should be a guiding principle in regulating tech-enabled abuse).
[56]. See Frank Pasquale, Opinion, The Dark Market for Personal Data, N.Y. Times (Oct. 16, 2014), https://www.nytimes.com/2014/10/17/opinion/the-dark-market-for-personal-data.html [https://perma.cc/M4JQ-CHA5].
[57]. See Kadri, supra note 2, at 142.
[58]. Kadri, supra note 2, at 138; see also Hartzog & Selinger, supra note 7, at 1355–69 (building out the connection between privacy and obscurity).
[59]. Citron, supra note 3, at 1817–19, 1834–35; Kadri, supra note 2, at 150.
[60]. See Salomé Viljoen, A Relational Theory of Data Governance, 131 Yale L.J. 573, 588 n.19 (2021); see also Citron, supra note 25, at 14 (asserting that “[t]he data-brokerage industry generates 200 billion dollars annually”). For important early scholarship on brokers, see generally Hoofnagle, supra note 6; Daniel J. Solove & Chris Jay Hoofnagle, A Model Regime of Privacy Protection, 2006 U. Ill. L. Rev. 357, 367; Citron, supra note 5; Citron, supra note 3. For more contemporary reporting, see Adi Robertson, The Long, Weird History of Companies That Put Your Life Online, Verge (Mar. 21, 2017), https://www.theverge.com/2017/3/21/14945884/people-search-sites-history-privacy-regulation [https://perma.cc/Z9J8-HU9G]; Yael Grauer, What Are ‘Data Brokers,’ and Why Are They Scooping Up Information About You?, Vice (Mar. 27, 2018), https://www.vice.com/en/article/what-are-s-and-how-to-stop-my-private-data-collection [https://perma.cc/2WRK-7KB7].
[61]. Emile Ayoub & Elizabeth Goitein, Closing the Data Broker Loophole, Brennan Ctr. for Just. (Feb. 13, 2024), https://www.brennancenter.org/our-work/research-reports/closing-data-broker-loophole [https://perma.cc/DZC6-Q734].
[62]. See Justin Sherman, People Search Data Brokers, Stalking, and ‘Publicly Available Information’ Carve-Outs, Lawfare (Oct. 30, 2023), https://www.lawfaremedia.org/article/people-search-s-stalking-and-publicly-available-information-carve-outs [https://perma.cc/EE4V-2XQR]; see also Ashley Kuempel, The Invisible Middleman: A Critique and Call for Reform of the Data Broker Industry, 36 Nw. J. Int’l L. & Bus. 207, 210 (2016).
[63]. See Sherman, supra note 62.
[64]. See Kwoka, supra note 4, at 1379–414.
[65]. Gajda, supra note 5, at 231–41; Ayoub & Goitein, supra note 61.
[66]. See Amanda Levendowski, Resisting Face Surveillance with Copyright Law, 100 N.C. L. Rev. 1015, 1018, 1022–35 (2022); Neil Richards & Woodrow Hartzog, The Pathologies of Digital Consent, 96 Wash. U. L. Rev. 1461, 1485 (2019); Woodrow Hartzog & Evan Selinger, Opinion, Why You Can No Longer Get Lost in the Crowd, N.Y. Times (Apr. 17, 2019), https://www.nytimes.com/2019/04/17/opinion/data-privacy.html [https://perma.cc/TL4A-VS2E].
[67]. See Justin Sherman, How Shady Companies Guess Your Religion, Sexual Orientation, and Mental Health, Slate (Apr. 26, 2023), https://slate.com/technology/2023/04/data-broker-inference-privacy-legislation.html [https://perma.cc/45VZ-5SRL]. See generally Alicia Solow-Niederman, Information Privacy and the Inference Economy, 117 Nw. U. L. Rev. 357 (2022) (calling for privacy law to adapt considering how machine learning enables such inferences).
[68]. Gajda, supra note 5, at 231–41.
[69]. See Kadri, supra note 41, at 1184–87; Thomas E. Kadri, Digital Gatekeepers, 99 Tex. L. Rev. 951, 977–82 (2021).
[70]. See Andrew Wade, Note, The Clocks Are Striking Thirteen: Congress, Not Courts, Must Save Us from Government Surveillance via Data Brokers, 102 Tex. L. Rev. 1099, 1106 (2024).
[71]. See Dave, What Are Data Brokers?, DeleteMe, https://help.joindeleteme.com/hc/en-us/articles/8319769261203-What-are-Data-Brokers [https://perma.cc/3F88-HTND].
[72]. Urbano Reviglio, The Untamed and Discreet Role of Data Brokers in Surveillance Capitalism: A Transnational and Interdisciplinary Overview, 11 Internet Pol’y Rev. 1, 15 (2022) (figure illustrating eight different risks of data brokers being under-regulated). On the possible benefits of some types of data brokerage, see generally Jennifer Barrett Glasgow, Data Brokers: Should They Be Reviled or Revered?, in The Cambridge Handbook of Consumer Privacy 25 (Evan Selinger, Jules Polonetsky & Omer Tene eds., 2018) (surveying ostensible benefits that brokers bring to the economy, innovation, and consumers).
[73]. Cf. Selinger & Hartzog, supra note 34, at 120, 123–25 (describing how technological infrastructure and innovation remove friction to access personal information and corrode obscurity).
[74]. See The Amy Boyer Case, Elec. Priv. Info. Ctr. (June 15, 2006), https://archive.epic.org/privacy/boyer [https://perma.cc/LS52-FW3J]; Sherman, supra note 62; Adler, Kadri & Sharma, supra note 2.
[75]. See generally Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805 (1995) (predicting that the “cheap” speech enabled by digital technologies will alter information flows).
[76]. See Citron, supra note 10.
[77]. Gajda, supra note 5, at 52–66.
[78]. See id. at 231–41; Sherman, supra note 62.
[79]. Hoofnagle, supra note 6, at 595.
[80]. See Chen et al., supra note 35, at 1.
[81]. Joseph Cox, Candy Crush, Tinder, MyFitnessPal: See the Thousands of Apps Hijacked to Spy on Your Location, Wired (Jan. 9, 2025), https://www.wired.com/story/gravy-location-data-app-leak-rtb [https://perma.cc/KNN2-YXQ7].
[82]. See Justin Sherman, Credit Reporting Agencies Don’t Just Report Credit Scores, Duke Sanford Tech Pol’y Program (Nov. 9, 2022), https://techpolicy.sanford.duke.edu/blogroll/credit-reporting-agencies-dont-just-report-credit-scores [https://perma.cc/R675-DV89].
[83]. See Scottie Andrew, For Abuse Victims, Registering to Vote Brings a Dangerous Tradeoff, CNN (Oct. 27, 2020), https://www.cnn.com/2020/10/27/us/domestic-violence-voting-election-privacy-trnd/index.html [https://perma.cc/M6NZ-46XA].
[84]. Nicole Froio, Should Abuse Survivors Have to Disappear from the Internet?, Verge (Dec. 6, 2021), https://www.theverge.com/22812890/domestic-abuse-survivors-online-presence-spyware-recommendations [https://perma.cc/KFA3-WZQV].
[85]. See Kadri, supra note 2, at 151.
[86]. See id. at 143.
[87]. See Hvistendahl, supra note 12.
[88]. Solove, supra note 16, at 1880–83; see also Waldman, supra note 15, at 89–90; Kadri, supra note 2, at 151.
[89]. See Solove, supra note 16, at 1880–83.
[90]. See id. at 1888; Corren, The Consent Burden, supra note 18, at 564–67.
[91]. Kadri, supra note 2, at 143, 151–54; Solove, supra note 16, at 1880–81.
[92]. See Solove, supra note 16, at 1888.
[93]. Hvistendahl, supra note 12 (“No two of these convoluted procedures seem to be alike. People who track the problem estimate that it can take from six business days to two weeks of full-time work to delete your data from data brokers’ sites.”).
[94]. Id. (“Some sites asked me to enter a current phone number or email address to remove my data, which felt like extortion. Others asked me to register and create a password to ‘control’ my information, without giving me the option to delete it entirely. A few even required me to pick up the phone, send snail mail, or—get this—fax in my request. Where do you even find a fax machine these days?”).
[95]. See Kejsi Take, Kevin Gallagher, Andrea Forte, Damon McCoy & Rachel Greenstadt, “It Feels Like Whack-a-Mole”: User Experiences of Data Removal from People Search Websites, 3 Procs. on Priv. Enhancing Techs. 159, 166–69 (2022).
[96]. See id. at 167 (noting that by requiring victims to provide information like their legal names, opt-out processes “disproportionately affect[] some people more than others, for example, those who change their name to better fit their gender identity”).
[97]. See Hvistendahl, supra note 12.
[98]. Id.
[99]. See id.
[100]. See id. (“I found my information reappearing online, too. Five months after opting out from one data broker, my profile reappeared. When I clicked on my name, the page showed a satellite photo of a house where I had once lived.”); Yael Grauer, Victoria Kauffman & Leigh Honeywell, Consumer Reps., Data Defense: Evaluating People-Search Site Removal Services 10 (2024) (“As a whole, people-search removal services are largely ineffective. . . . [W]ithout exception, information about each participant still appeared on some of the 13 people-search sites at the one-week, one-month, and four-month intervals.”).
[101]. See Kadri, supra note 2, at 153.
[102]. See Grauer, Kauffman & Honeywell, supra note 100, at 5; Hvistendahl, supra note 12; Take et al., supra note 95, at 170.
[103]. See Privacy Bee, https://privacybee.com [https://perma.cc/BB7H-6STT]; DeleteMe Plans, DeleteMe, https://joindeleteme.com/privacy-protection-plans [https://perma.cc/HA77-3JWC].
[104]. See Grauer, Kauffman & Honeywell, supra note 100; Hvistendahl, supra note 12.
[105]. For example, the price for DeleteMe starts at $129 per year. See DeleteMe Plans, supra note 103.
[106]. See Hvistendahl, supra note 12; Kadri, supra note 2, at 153.
[107]. See Grauer, Kauffman & Honeywell, supra note 100, at 5; Hvistendahl, supra note 12.
[108]. Hvistendahl, supra note 12.
[109]. See Kadri, supra note 2, at 153.
[110]. See Hvistendahl, supra note 12.
[111]. See Brittany A. Martin, The Unregulated Underground Market for Your Data: Providing Adequate Protections for Consumer Privacy in the Modern Era, 105 Iowa L. Rev. 865, 867 (2020).
[112]. See Take et al., supra note 95, at 166–70.
[113]. See id. at 171–72.
[114]. See Kadri, supra note 2, at 153.
[115]. See Hvistendahl, supra note 12 (reporting how one victim “started her quest” to remove her data “hoping to distance herself from a traumatizing situation, but instead she was continually forced to relive it” and reshare her story).
[116]. See id.
[117]. See Kadri, supra note 2, at 153.
[118]. See Hvistendahl, supra note 12.
[119]. See Corren, The Consent Burden, supra note 18, at 551 (arguing that various privacy laws’ reliance on self-managed consent “enables and legitimizes digital surveillance and other consumer exploitations”). See generally Daniel J. Solove, A Brief History of Information Privacy Law, in Proskauer on Privacy 1-1 (Kristen J. Mathews ed., 2d ed. 2016) (discussing the development of the patchwork of privacy laws in the United States).
[120]. See Kadri, supra note 2, at 142–48; Michael Kans, Data Brokers and National Security, Lawfare (Apr. 29, 2021), https://www.lawfaremedia.org/article/data-brokers-and-national-security [https://perma.cc/8QHR-BMXH].
[121]. See Kadri, supra note 2, at 152.
[122]. See Corren, The Consent Burden, supra note 18, at 551.
[123]. See, e.g., Ga. Code Ann. § 16-5-90 (criminalizing the offense of “stalking”); Cal. Penal Code § 653.2 (criminalizing doxing).
[124]. See Kadri, supra note 2, at 142.
[125]. Id. at 142–43.
[126]. Cal. Gov’t Code § 6208.1; see Kadri, supra note 2, at 142–43. Without scienter requirements, however, these laws risk running afoul of constitutional protections such as the First Amendment. See Kadri, supra note 2, at 142–43.
[127]. Kadri, supra note 2, at 144–45.
[128]. See Vt. Stat. Ann. tit. 9, § 2446; Cal. Civ. Code § 1798.99.82.
[129]. Cal. Civ. Code § 1798.115.
[130]. See Waldman, supra note 15, at 88 (arguing that “the history of using individual [privacy] rights to solve structural problems proves how rights crowd out necessary reform”); Pozen, supra note 4, at 135–41 (contending that soft-touch and targeted transparency mandates have “evolved into a stock substitute for more robust and direct regulation” to protect consumers).
[131]. See, e.g., Cal. Penal Code § 502 (outlawing “[u]nauthorized access to computers, computer systems, and computer data”).
[132]. See Cal. Civ. Code §§ 1798.100–.199.100; Cal. Code Regs. tit. 11, §§ 7000–7304 (collectively, CCPA).
[133]. See Cal. Civ. Code § 1798.140.
[134]. Id.
[135]. Kadri, supra note 2, at 146.
[136]. See, e.g., Citron, supra note 3, at 1826–28 (discussing the limits of the four major privacy torts to address broker disclosures); Neil M. Richards & Daniel J. Solove, Prosser’s Privacy Law: A Mixed Legacy, 98 Calif. L. Rev. 1887, 1919 (2010) (explaining that disclosing a person’s home address would likely fail to satisfy the “highly offensive to a reasonable person” requirement of the disclosure tort); see also GM Agrees to 5-Year Ban on Selling Drivers’ Location Data, Reuters (Jan. 17, 2025), https://www.reuters.com/business/autos-transportation/ftc-bans-gm-disclosing-driver-consumer-data-consumer-reporting-agencies-2025-01-16 [https://perma.cc/G9WB-MCBR].
[137]. Cal. Civ. Code § 1798.120.
[138]. See Cal. Gov’t Code § 6208.1(b)(1); see also Kadri, supra note 2, at 148–49.
[139]. See Cal. Gov’t Code § 6208.1(a)(2)(B); see also Kadri, supra note 2, at 148–49.
[140]. See Kadri, supra note 2, at 148–49.
[141]. See Kadri, supra note 2, at 142, 153; Solove & Hoofnagle, supra note 60, at 367.
[142]. For key conceptual and critical work on informational capitalism, see generally Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (2019); see also Waldman, supra note 33; Amy Kapczynski, The Law of Informational Capitalism, 129 Yale L.J. 1460 (2020).
[143]. To be clear, we are far from the first to propose connecting privacy and safety. See generally, e.g., Citron, supra note 21; A. Michael Froomkin & Zak Colangelo, Privacy as Safety, 95 Wash. L. Rev. 141 (2020); A. Michael Froomkin, Phillip J. Arencibia & P. Zak Colangelo-Trenner, Safety as Privacy, 64 Ariz. L. Rev. 921 (2022). We hope to build on prior scholarly foundations for this connection and propose a legislative implementation of it.
[144]. See Kadri, supra note 2, at 154.
[145]. Seeid.
[146]. See Solove, supra note 16, at 1881; Corren, The Consent Burden, supra note 18, at 551; Take et al., supra note 95, at 166–70.
[147]. See Take et al., supra note 95, at 166–70.
[148]. Kadri, supra note 2, at 152–54.
[149]. See Citron, supra note 5, at 290–92.
[150]. Citron, supra note 5, at 291.
[151]. See sources cited supra note 22.
[152]. See Citron, supra note 5, at 283–87 (supplementing the fairness argument described in supra text accompanying note 150 with one sounding in economic efficiency).
[153]. Citron, supra note 5, at 285–86.
[154]. See Citron, supra note 25, at 11–14; Rostow, supra note 4, at 674.
[155]. See Reviglio, supra note 72, at 12 (warning that “once personal information has been packaged, sold and resold, it may live indefinitely in the servers run by the data broker industry”).
[156]. See S.B. 362, 2023 Leg., Reg. Sess. (Cal. 2023) (“Beginning January 1, 2028, and every three years thereafter, a data broker shall undergo an audit by an independent third party to determine compliance with this section.”).
[157]. See generally Fed. Trade Comm’n, supra note 24.
[158]. See id. at 53.
[159]. See How StopNCII.org Works, StopNCII.Org, https://stopncii.org/how-it-works/ [https://perma.cc/DZM5-WRHH]; see also Brenda Dvoskin & Thomas E. Kadri, Safe Sex in the Age of Big Tech Feminism, 39 Harv. J.L. & Tech. 59 (2025); Thomas E. Kadri, Juridical Discourse for Platforms, 136 Harv. L. Rev. F. 163, 200–01 (2022).
[160]. See About Us, StopNCII.Org, https://stopncii.org/about-us [https://perma.cc/6L27-936F].
[161]. How StopNCII.org Works, supra note 159.
[162]. See Industry Partners, StopNCII.Org, https://stopncii.org/partners/industry-partners [https://perma.cc/ZS6W-4LQV].
[163]. See How StopNCII.org Works, supra note 159.
[164]. See id.
[165]. See Suicide, Self-Harm, and Domestic Violence Prevention, Pinterest: Help Ctr., https://help.pinterest.com/en/article/suicide-and-self-harm-prevention [https://perma.cc/3JQ3-MQ9T].
[166]. See Adam Mosseri, Changes We’re Making to Do More to Support and Protect the Most Vulnerable People Who Use Instagram, Instagram (Feb. 7, 2019), https://about.instagram.com/blog/announcements/supporting-and-protecting-vulnerable-people-on-instagram [https://perma.cc/36X6-X5KD].
[167]. See Suicide, Self-Harm, and Eating Disorders Policy, YouTube Help, https://support.google.com/youtube/answer/2802245?hl=en [https://perma.cc/7VZK-J3XP].
[168]. Kalhan Rosenblatt & Maya Eaglin, Meta Teams up with Snap and TikTok to Address Self-Harm Content, NBC News (Sept. 12, 2024), https://www.nbcnews.com/tech/social-media/meta-teams-snap-tiktok-address-self-harm-content-rcna170838 [https://perma.cc/3LNQ-2BBD].
[169]. Suicide Prevention, Meta: Safety Ctr., https://about.meta.com/actions/safety/topics/wellbeing/suicideprevention [https://perma.cc/9WYZ-KJB6].
[170]. See 18 U.S.C. § 2258A; see also United States v. Keith, 980 F. Supp. 2d 33, 37–39 (D. Mass. 2013).
[171]. See The Tech Coalition Empowers Industry to Combat Online Child Sexual Abuse with Expanded PhotoDNA Licensing, Tech Coal. (Jan. 27, 2025), https://technologycoalition.org/news/the-tech-coalition-empowers-industry-to-combat-online-child-sexual-abuse-with-expanded-photodna-licensing [https://perma.cc/6PLA-ES4W].
[172]. National Do Not Call Registry, Fed. Trade Comm’n, https://www.donotcall.gov [https://perma.cc/N4AU-2B2P]; Lauren E. Willis, Why Not Privacy by Default?, 29 Berkeley Tech. L.J. 61, 108 (2014).
[173]. Chris Jay Hoofnagle, Privacy Self-Regulation: A Decade of Disappointment, in Consumer Protection in the Age of the ‘Information Economy’ 379, 380–83 (Jane K. Winn ed., 2006).
[174]. Regulation 2016/679, of the European Parliament and the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data and Repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) ch. 3, art. 17 (EU) [hereinafter GDPR].
[175]. Id.
[176]. Fed. Trade Comm’n, Protecting Consumer Privacy in an Era of Rapid Change 68 (2012), https://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commission-report-protecting-consumer-privacy-era-rapid-change-recommendations/120326privacyreport.pdf [https://perma.cc/KY6E-W39M].
[177]. See id.
[178]. S.B. 362, 2023 Leg., Reg. Sess. (Cal. 2023).
[179]. H.R. 4311, 118th Cong. (2023).
[180]. Cal. S.B. 362; H.R. 4311.
[181]. H.R. 4311; Cal. Civ. Code § 1798.99.86 (added by California’s DELETE Act).
[182]. Cal. Civ. Code § 1798.99.82(a).
[183]. Cal. Civ. Code § 1798.99.86(a)(2), (b)(1); H.R. 4311 § 2(b)(1)(A)(ii); H.R. 4311 § 2(b)(1)(B)(i).
[184]. Cal. Civ. Code § 1798.99.86(e)(1) (added by California’s DELETE Act); H.R. 4311 § 2(b)(2)(C)(i).
[185]. Cal. Civ. Code § 1798.99.85 (added by California’s DELETE Act); H.R. 4311 § 2(b)(2)(A)(i).
[186]. See Cal. Civ. Code § 1798.140(v)(2).
[187]. See H.R. 4311 § 2(b)(2)(A)(ii).
[188]. Sherman, supra note 62.
[189]. See Cal. Civ. Code § 1798.140(v)(2).
[190]. Id. § 1798.99.86(c)(1)(A).
[191]. H.R. 4311 § 2(b)(1)(C)(i).
[192]. See Kuempel, supra note 62, at 219–21.
[193]. See Wade, supra note 70, at 1129–30 (“Because the Delete Act lacks a private cause of action, residents cannot hold non-compliant brokers accountable themselves; they must trust that the California Privacy Protection Agency will do it for them—a needlessly risky bet.”).
[194]. See generally S.B. 362, 2023 Leg., Reg. Sess. (Cal. 2023); H.R. 4311.
[195]. See generally Nicole A. Ozer, Golden State Sword: The History and Future of California’s Constitutional Right to Privacy to Defend and Promote Rights, Justice, and Democracy in the Modern Digital Age, 39 Berkeley Tech. L.J. 963, 1069 (2024) (arguing for more restrictive data protection laws).
[196]. Specifically, the CPPA and the California Attorney General for the DELETE Act, and the FTC and potentially state attorneys general for the federal DELETE Act. Cal. Civ. Code § 1798.99.82; H.R. 4311 § 2(c); see also Analysis of the California Delete Act (SB 362) – Signed by Governor Newsom into Law, Tom Kemp (Oct. 10, 2023), https://www.tomkemp.ai/blog/2023/10/10/analysis-of-the-california-delete-act-sb-362-signed-into-law [https://perma.cc/J8EU-SB43].
[197]. See, e.g., Citron & Solove, supra note 8, at 822 (“The main benefit of a private right of action in a law is to encourage private enforcement of that law because government agencies often lack the resources to enforce a law rigorously and consistently enough.”); see also Ozer, supra note 195, at 1071 (“[G]overnment enforcers have limited bandwidth and sometimes-conflicting internal interests related to government surveillance and consumer privacy.”).
[198]. See Kadri, supra note 2, at 152.
[199]. When California considered the CCPA in 2018, Attorney General Xavier Becerra wrote to then-Assemblymember Ed Chau and Senator Robert Hertzberg emphasizing the need for a private right of action. See Letter from Xavier Becerra, Att’y Gen., to Ed Chau, Assemb., Cal. St. Assemb. & Robert Hertzberg, Sen., Cal. St. Senate (Aug. 22, 2018); see also Peter C. Ormerod, A Private Enforcement Remedy for Information Misuse, 60 B.C. L. Rev. 1893, 1941–46 (2019) (describing advantages of state-law private enforcement remedy for data misuse).
[200]. H.R. 4311 § 2(b)(2)(A)(ii); Cal. Civ. Code § 1798.99.86.
[201]. See H.R. 4311 § 2(b)(2)(A)(ii); Cal. Civ. Code § 1798.99.86.
[202]. See Cal. Civ. Code § 1798.99.87.
[203]. See generally Ozer, supra note 195, at 1034–35 (stating that technology companies use First Amendment claims to challenge privacy laws).
[204]. See, e.g., NetChoice, LLC v. Bonta, 113 F.4th 1101, 1121 (9th Cir. 2024) (finding a provision of the California Age-Appropriate Design Code Act likely to fail strict scrutiny, as the state could have “employed less restrictive means to accomplish its protective goals”).
[205]. Cal. Civ. Code § 1798.99.80; H.R. 4311 § 2(f)(3)(B) (excluding entities with direct relationships to individuals whose data they sell from the definition of data broker).
[206]. See Amnesty Int’l, Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights 10 (2019), https://www.amnesty.org/en/documents/pol30/1404/2019/en [https://perma.cc/Q4HN-VPC7]; Fed. Trade Comm’n, A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services 37 (2024), https://www.ftc.gov/system/files/ftc_gov/pdf/Social-Media-6b-Report-9-11-2024.pdf [https://perma.cc/6KU7-8SWZ]; Kurt Knutsson, How the Delete Act Misses Big Tech Culprits in a Law Designed to Protect Consumers, Fox News (Oct. 19, 2023), https://www.foxnews.com/tech/delete-act-misses-big-tech-culprits-law-designed-protect-consumers [https://perma.cc/E27U-KJGC] (“[T]he concerning culprits of social media companies like Meta’s Facebook and Instagram were given a pass and not included in the Delete Act signed into law by Gov. Gavin Newsom.”).
[207]. See generally Fed. Trade Comm’n, supra note 24.
[208]. See, e.g., Sorrell v. IMS Health Inc., 564 U.S. 552, 564 (2011); see also G.S. Hans, No Exit: Ten Years of “Privacy vs. Speech” Post-Sorrell, 65 Wash. U. J.L. & Pol’y 19, 32–37 (2021) (collecting cases considering challenges to privacy laws post-Sorrell).
[209]. See supra Part III.A.
[210]. See Catherine Stupp, Patchwork of State Privacy Laws Remains After Latest Failed Bid for Federal Law, Wall St. J. (Aug. 27, 2024), https://www.wsj.com/articles/patchwork-of-state-privacy-laws-remains-after-latest-failed-bid-for-federal-law-2a1a020d [https://perma.cc/J5HX-8YN3].
[211]. See Citron, supra note 25, at 164–65 (suggesting that a centralized data-deletion system could protect people when companies collect their “intimate” information related to sex, gender, sexuality, and reproduction).
[212]. Violence Against Women Act of 1994, 42 U.S.C. §§ 13925–14045d.
[213]. Safe Connections Act of 2022, 47 U.S.C. § 345.
[214]. Immigrant Legal Res. Ctr., Community Explainer: Who Is Eligible for VAWA? 1–3 (2022), https://www.ilrc.org/sites/default/files/2023-02/Who%20is%20Eligible%20for%20VAWA%3F.pdf [https://perma.cc/ZMH2-LD62].
[215]. See 42 U.S.C. §§ 13925–14045d; 47 U.S.C. § 345.
[216]. See, e.g., N.Y. Pub. Serv. Law § 48-A (providing an example of how self-attestation works in a similar statutory context).
[217]. See Calling the Police Shouldn’t Be Another Barrier, DomesticShelters.org (Nov. 7, 2016), https://www.domesticshelters.org/articles/escaping-violence/calling-the-police-shouldn-t-be-another-barrier [https://perma.cc/R3ML-NW74].
[218]. See, e.g., National Do Not Call Registry, supra note 172 (featuring a feedback provision if you still receive unwanted calls).
[219]. This proposed definition reflects the findings of FTC reports concerning commercial surveillance and data brokers. See Fed. Trade Comm’n, supra note 176, at 68; Fed. Trade Comm’n, supra note 24, at 2–3, 5.
[220]. See Chris J. Hoofnagle & Jan Whittington, Free: Accounting for the Costs of the Internet’s Most Popular Price, 61 UCLA L. Rev. 606, 628 (2014) (explaining Google’s business model of exchanging a free search engine for information); see also Alexander Tsesis, The Right to Be Forgotten and Erasure: Privacy, Data Brokers, and the Indefinite Retention of Data, 48 Wake Forest L. Rev. 101, 105 (2014) (“Popular companies like Facebook, Amazon, and Google can retain users’ data indefinitely and sell it to other companies.”).
[221]. See Samuel W. Buell, Good Faith and Law Evasion, 58 UCLA L. Rev. 611, 614 (2011) (“Narrow and hard-edged rules of law create space for evasion.”).
[222]. See Kadri, supra note 2, at 149–54.
[223]. See Jules Polonetsky, Omer Tene & Kelsey Finch, Shades of Gray: Seeing the Full Spectrum of Practical Data De-Identification, 56 Santa Clara L. Rev. 593, 605 (2016); see also Lauren A. Di Lella, Comment, Accept All Cookies: Opting-in to a Comprehensive Federal Data Privacy Framework and Opting-out of a Disparate State Regulatory Regime, 68 Vill. L. Rev. 511, 513–14 (2023) (defining PII as the focus of data privacy protections).
[224]. See Paul M. Schwartz & Daniel J. Solove, The PII Problem: Privacy and a New Concept of Personally Identifiable Information, 86 N.Y.U. L. Rev. 1814, 1841–43, 1847 (2011) (explaining how identified data can often be re-identified, especially as more data points become available).
[225]. For background on data clustering, see generally Data Clustering: Algorithms and Applications (Charu C. Aggarwal & Chandan K. Reddy eds., 2014).
[226]. See, e.g., Sherman, supra note 62 (explaining how aggregating data facilitates stalking and abuse).
[227]. See Kuempel, supra note 62, at 221–23.
[228]. See Tal Z. Zarsky, “Mine Your Own Business!”: Making the Case for the Implications of the Data Mining of Personal Information in the Forum of Public Opinion, 5 Yale J.L. & Tech. 1, 9–11 (2003).
[229]. See Kuempel, supra note 62, at 219–21.
[230]. See Rostow, supra note 4, at 674, 670–72.
[231]. See Fed. Trade Comm’n, supra note 24, at 3, 48–49.
[232]. See Vt. Stat. Ann. tit. 9, § 2446.
[233]. See Cal. Civ. Code § 1798.99.82.
[234]. National Do Not Call Registry, supra note 172.
[235]. See generally GDPR, supra note 174, art. 17 (exploring a similar idea within the GDPR’s right to erasure).
[236]. The FTC and the CFPB are well positioned to oversee a federal statutory scheme protecting personal and sensitive data. The FTC, as the primary agency protecting consumers from deceptive and unfair business practices, has already brought multiple enforcement actions against data brokers for abusive data practices. See Press Release, Fed. Trade Comm’n, FTC Takes Action Against Mobilewalla for Collecting and Selling Sensitive Location Data (Dec. 3, 2024), https://www.ftc.gov/news-events/news/press-releases/2024/12/ftc-takes-action-against-mobilewalla-collecting-selling-sensitive-location-data [https://perma.cc/7W2D-FY7M] (announcing an action against a data broker to prohibit the sale of sensitive location data); Press Release, Fed. Trade Comm’n, FTC Cracks Down on Mass Data Collectors: A Closer Look at Avast, X-Mode, and InMarket (Mar. 4, 2024), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/03/ftc-cracks-down-mass-data-collectors-closer-look-avast-x-mode-inmarket [https://perma.cc/X5SZ-HJP2] (announcing actions against data brokers for the sale of sensitive location data). Additionally, the CFPB, an agency with a specialized focus on protecting consumers’ financial information, recently proposed a rule to protect the public’s personal and financial information. See Press Release, Consumer Fin. Prot. Bureau, CFPB Proposes Rule to Stop Data Brokers from Selling Sensitive Personal Data to Scammers, Stalkers, and Spies (Dec. 3, 2024), https://www.consumerfinance.gov/about-us/newsroom/cfpb-proposes-rule-to-stop-data-brokers-from-selling-sensitive-personal-data-to-scammers-stalkers-and-spies [https://perma.cc/778Y-XC4J].
[237]. However, constitutional challenges to such fee-based regimes might undercut their feasibility moving forward. See generally Consumers’ Rsch. v. FCC, 109 F.4th 743 (5th Cir. 2024), cert. granted sub nom., Schs., Health & Librs. Broadband Coal. v. Consumers’ Rsch., 145 S. Ct. 587 (2024).
[238]. See Rostow, supra note 4, at 670; Data Brokers, Elec. Priv. Info. Ctr., https://epic.org/issues/consumer-privacy/data-brokers [https://perma.cc/3W9C-F2F4].
[239]. See Alyssa Wong, Regulatory Gaps and Democratic Oversight: On AI and Self-Regulation, U. of Toronto: Schwartz Reisman Inst. for Tech. & Soc’y (Sept. 21, 2023), https://srinstitute.utoronto.ca/news/tech-self-regulation-democratic-oversight [https://perma.cc/9FW9-HXW2]; Corporate Self-Regulation Is a Global Crisis, Hum. Rts. Watch (Nov. 14, 2017), https://www.hrw.org/news/2017/11/14/corporate-self-regulation-global-crisis [https://perma.cc/GN9H-UAU9]; Andreja Marusic & Madelynne Grace Wagner, How Companies Like Yum! Brands Can Improve Compliance Through Self-Regulation, World Bank Blogs (Feb. 20, 2018), https://blogs.worldbank.org/en/psd/how-companies-yum-brands-can-improve-compliance-through-self-regulation [https://perma.cc/AE25-5HYU].
[240]. See Marusic & Wagner, supra note 239.
[241]. Douglas C. Michael, Admin. Conf. of the U.S., Federal Agency Use of Audited Self-Regulation as a Regulatory Technique 21 n.81 (1993) (citing Eugene Bardach & Robert A. Kagan, Going by the Book: The Problem of Regulatory Unreasonableness 234–38 (1982)), https://www.acus.gov/sites/default/files/documents/1994-01%20The%20Use%20of%20Audited%20Self-Regulation%20as%20a%20Regulatory%20Technique.pdf [https://perma.cc/RX32-UZM2].
[242]. Carleen M. Zubrzycki, The Abortion Interoperability Trap, 132 Yale L.J.F. 197, 212 (2022).
[243]. Health Information Technology for Economic and Clinical Health Act, Pub. L. No. 111-5, 123 Stat. 115, 226 (2009).
[244]. See Julia Adler-Milstein & Eric Pfeifer, Information Blocking: Is It Occurring and What Policy Strategies Can Address It?, 99 Milbank Q. 303, 123 (2021) (“Among the 8 specific forms of information blocking in which EHR vendors may engage, 49% of respondents reported that vendors routinely or often deploy products with limited interoperability.”); U.S. Gov’t Accountability Off., GAO-15-817, Electronic Health Records: Nonfederal Efforts to Help Achieve Health Information Interoperability 1–2, 5 (2015) (explaining that data standards dictate technical specifications for system design and data transfers that are necessary for interoperability and identifying the lack of health data standardization as a key barrier to achieving EHR interoperability).
[245]. William J. Gordon & Kenneth D. Mandl, The 21st Century Cures Act: A Competitive Apps Market and the Risk of Innovation Blocking, 22 J. Med. Internet Rsch. 1, 1–2 (2020) (describing anticompetitive vendor behavior that might be taken to circumvent interoperability mandate); Bryan Cleveland, Note, Using the Law to Correct the Market: The Electronic Health Record (EHR) Incentives Program, 29 Harv. J.L. & Tech. 291, 311 (2015) (describing vendor lock-in by design).
[246]. Cal. Civ. Code §§ 1798.100–.199.100.
[247]. Philip N. Yannella & Timothy W. Dickens, New State Privacy Laws Creating Complicated Patchwork of Privacy Obligations, BlankRome (June 7, 2024), https://www.blankrome.com/publications/new-state-privacy-laws-creating-complicated-patchwork-privacy-obligations [https://perma.cc/M8BZ-NQCE]; Andrew Blustein, With CCPA Looming, Publishers Are Confused and Consumers Are Unlikely to Share Their Data, Drum (Oct. 3, 2019), https://www.thedrum.com/news/2019/10/03/with-ccpa-looming-publishers-are-confused-and-consumers-are-unlikely-share-their [https://perma.cc/ZW84-L58Z].
[248]. See Lauren Feiner, California’s New Privacy Law Could Cost Companies a Total of $55 Billion to Get in Compliance, CNBC (Oct. 5, 2019), https://www.cnbc.com/2019/10/05/california-consumer-privacy-act-ccpa-could-cost-companies-55-billion.html [https://perma.cc/5PUW-9LBV].
[249]. To enhance security further, all stored identifiers can be cryptographically hashed with a process known as “salting,” where each value is combined with a unique random string before hashing. This ensures that even if the database is compromised, the hashed data cannot be reverse engineered into its original form without the “salt,” or the unique random string associated with the record. Andrew Hughes, Encryption vs. Hashing vs. Salting - What’s the Difference?, PingIdentity (Dec. 19, 2024), https://www.pingidentity.com/en/resources/blog/post/encryption-vs-hashing-vs-salting.html [https://perma.cc/DJN4-LWE7].
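To make the salt-then-hash step described in the preceding note concrete, consider the following minimal Python sketch (the function names and the sixteen-byte salt length are illustrative assumptions, not drawn from any cited source):

    import hashlib
    import hmac
    import os

    def salted_hash(identifier: str) -> tuple[bytes, bytes]:
        # Combine the identifier with a unique random string ("salt")
        # before hashing, so identical identifiers yield different
        # digests and precomputed lookup tables become useless.
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + identifier.encode("utf-8")).digest()
        return salt, digest

    def matches(identifier: str, salt: bytes, digest: bytes) -> bool:
        # Recompute the hash with the stored salt and compare in constant
        # time; without the salt, a compromised digest cannot be reverse
        # engineered into the original identifier.
        candidate = hashlib.sha256(salt + identifier.encode("utf-8")).digest()
        return hmac.compare_digest(candidate, digest)

A production system would pair this structure with a slow, memory-hard hash function and careful key management; the sketch shows only the shape of the salting step.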
[250]. Advanced cryptographic methods such as Private Set Intersection, homomorphic encryption, and Bloom filters ensure privacy-preserving queries. In these processes, the database and brokers compare hashed identifiers without revealing any additional data that the broker did not already possess. See generally Mike Rosulek, Or. State Univ., A Brief Overview of Private Set Intersection (Apr. 19, 2021), https://csrc.nist.gov/presentations/2021/stppa2-psi (on file with the California Law Review); What Is Homomorphic Encryption?, IBM, https://www.ibm.com/think/topics/homomorphic-encryption [https://perma.cc/Z48A-KNLT]; Tristan Garwood, Saving Money and Protecting Privacy with Bloom Filters, Localytics Eng (Aug. 27, 2018), https://eng.localytics.com/saving-money-protecting-privacy-with-bloom-filters [https://perma.cc/R2D6-3FV7].
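As a rough, simplified illustration of one technique the preceding note cites, the Python sketch below implements a toy Bloom filter: a registry could expose only the bit array, so a broker querying it learns (probabilistically) whether an identifier it already holds appears in the protected set, and nothing else. The sizes and hash count are arbitrary assumptions, and the cited protocols offer far stronger guarantees than this sketch.

    import hashlib

    class BloomFilter:
        """Probabilistic set membership without storing identifiers."""

        def __init__(self, num_bits: int = 1 << 20, num_hashes: int = 5):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _positions(self, item: str):
            # Derive several bit positions from independent hashes.
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.num_bits

        def add(self, item: str) -> None:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item: str) -> bool:
            # May return a false positive, but never a false negative.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    registry = BloomFilter()
    registry.add("hashed-identifier-of-protected-person")
    assert "hashed-identifier-of-protected-person" in registry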
[251]. Ulf Mattsson, Privacy-Preserving Analytics and Secure Multiparty Computation, ISACA (Mar. 17, 2021), https://www.isaca.org/resources/isaca-journal/issues/2021/volume-2/privacy-preserving-analytics-and-secure-multiparty-computation [https://perma.cc/9ZLW-E85N].
[252]. What Is Homomorphic Encryption?, Supermicro, https://www.supermicro.com/en/glossary/homomorphic-encryption [https://perma.cc/ZJD5-UURH].
[253]. See generally Madhurima Nath, Fuzzy Matching Algorithms, Medium (Jan. 8, 2024), https://medium.com/@m.nath/fuzzy-matching-algorithms-81914b1bc498 [https://perma.cc/GVP7-G8RC].
[254]. See id.
[255]. See Saurabh Gupta, Piyushank Gupta, Anup Kumar & Mohd. Wasim, Privacy Preserving Optimized Fuzzy Like Search over Encrypted Data Using Phonology, 184 Int’l J. Comput. Applications 45, 46 (2022) (explaining that privacy can be preserved in large databases of sensitive information by searching for particular records with algorithms that “look for holistic patterns in data segregated from various sources” to identify a close, approximate (fuzzy) match rather than an exact match); id. at 48 (“Fuzzy matching using edit distance approach compares two strings to quantify how similar or dissimilar two strings are. Levenshtein distance technique was selected to achieve the fuzzy search which is most widely used edit distance algorithm.”).
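Because note 255 singles out the Levenshtein edit-distance algorithm, a brief Python sketch may help make the idea concrete (the example names and the notion of applying a small matching threshold are illustrative assumptions):

    def levenshtein(a: str, b: str) -> int:
        # Minimum number of single-character insertions, deletions, and
        # substitutions needed to turn string a into string b, computed
        # row by row with dynamic programming.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(
                    prev[j] + 1,               # deletion
                    curr[j - 1] + 1,           # insertion
                    prev[j - 1] + (ca != cb),  # substitution
                ))
            prev = curr
        return prev[-1]

    # levenshtein("Jon Smith", "John Smith") == 1, so a fuzzy-matching
    # system with a small edit-distance threshold would treat the two
    # records as likely referring to the same person.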
[256]. For an explainer on webhooks, see What is a webhook?, Red Hat, https://www.redhat.com/en/topics/automation/what-is-a-webhook [https://perma.cc/Z44F-AX3F] (last updated Feb. 1, 2024) (explaining that webhooks are a method by which developers can connect two systems and trigger an activity, including download of new information, when a predefined event happens such as new entries to a database).
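For a minimal sketch of the webhook pattern the cited explainer describes, the Python snippet below stands up a receiver using only the standard library; the endpoint, port, and JSON field are hypothetical stand-ins for whatever a real registry-to-broker integration would define:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SuppressionWebhook(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the JSON payload the (hypothetical) registry sends when
            # a predefined event occurs, such as a new suppression request
            # being added to the database.
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length) or b"{}")
            # A broker's system would act here, e.g., by suppressing the
            # record matching the notified identifier.
            print("suppression event:", event.get("hashed_id"))
            self.send_response(204)  # acknowledge with "204 No Content"
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), SuppressionWebhook).serve_forever()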
[257]. Jake Frankenfield, Personally Identifiable Information (PII): Definition, Types, and Examples, Investopedia (Aug. 1, 2025), https://www.investopedia.com/terms/p/personally-identifiable-information-pii.asp [https://perma.cc/76RE-DUZT].
[258]. See Kadri, supra note 2, at 138–39.
[259]. See generally Solon Barocas & Karen Levy, Privacy Dependencies, 95 Wash. L. Rev. 555 (2020) (exploring how a person’s privacy often depends on the decisions and disclosures of other people).
[260]. See Myra Luna Lucero & Kailee Kodama Muscente, Understanding Identifiable Data, Tchrs. Coll.: Colum. Univ. (June 29, 2020), https://www.tc.columbia.edu/institutional-review-board/irb-blog/2020/understanding-identifiable-data- [https://perma.cc/N6BG-Q9TQ].
[261]. Protecting Americans From Harmful Data Broker Practices, 89 Fed. Reg. 101402 (proposed Dec. 13, 2024) (to be codified at 12 C.F.R. pt. 1022) (describing the advanced technological capabilities brokers possess); Elijah Greisz, Transparency Without Teeth: An Empirical Understanding of Data Broker Regulation, 92 U. Chi. L. Rev. 1077, 1085–87 (2025) (describing the sophisticated and technologically advanced data broker economy); Pauline T. Kim & Erika Hanson, People Analytics and the Regulation of Information Under the Fair Credit Reporting Act, 61 St. Louis U. L.J. 17, 27 (2016) (describing the harvesting of large datasets for the creation of personal profiles); Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 22 (2014) (describing the broker practice of making inferences about individuals based on data).
[262]. See Off. of Oversight & Investigations Majority Staff, Comm. on Com., Sci. & Transp., A Review of the Data Broker Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes 23 (2013).
[263]. See, e.g., Simson L. Garfinkel, U.S. Dep’t of Com., Nat’l Inst. Standards & Tech., De-Identification of Personal Information iii, 1 (2015) (defining de-identification as a “collection of approaches” to remove “identifying information from a dataset so that individual data cannot be linked with specific individuals”).
[264]. See Kadri, supra note 2, at 138, 140–41 (showing how data brokers can fuel abuse with personal information, such as home addresses and even intimate images).
[265]. See, e.g., Ira S. Rubinstein & Woodrow Hartzog, Anonymization and Risk, 91 Wash. L. Rev. 703, 710–11 (2016) (demonstrating that entities with auxiliary information can link datasets to individuals more easily).
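Rubinstein and Hartzog’s point about auxiliary information can be illustrated in a few lines: quasi-identifiers that survive de-identification (here ZIP code, birth year, and sex, a stylized assumption) let anyone holding a second dataset join the two and reattach names to records.

    # Stylized linkage attack: join a "de-identified" dataset with auxiliary
    # data (e.g., a public voter roll) on surviving quasi-identifiers.
    QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

    deidentified = [{"zip": "30602", "birth_year": 1990, "sex": "F",
                     "sensitive_note": "redacted-example"}]
    auxiliary = [{"name": "Jane Doe", "zip": "30602",
                  "birth_year": 1990, "sex": "F"}]

    def link(deidentified, auxiliary):
        index = {tuple(row[q] for q in QUASI_IDENTIFIERS): row["name"]
                 for row in auxiliary}
        for row in deidentified:
            key = tuple(row[q] for q in QUASI_IDENTIFIERS)
            if key in index:
                yield index[key], row  # identity reattached to the record

    print(list(link(deidentified, auxiliary)))  # [('Jane Doe', {...})]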
[266]. See, e.g., Information for Data Brokers, Cal. Priv. Prot. Agency, https://cppa.ca.gov/data_brokers [https://perma.cc/75K5-P5SV].
[267]. Apple and Google Deliver Support for Unwanted Tracking Alerts in iOS and Android, Apple: Newsroom (May 13, 2024), https://www.apple.com/newsroom/2024/05/apple-and-google-deliver-support-for-unwanted-tracking-alerts-in-ios-and-android/ [https://perma.cc/N8G8-TU9E].
[268]. Recht, supra note 31, at 4–7.
[269]. See generally Frederick Schauer, Categories and the First Amendment: A Play in Three Acts, 34 Vand. L. Rev. 265, 267 (1981) (describing the category of coverage); Shanor, supra note 30, at 324–30.
[270]. Robert Post, Encryption Source Code and the First Amendment, 15 Berkeley Tech. L.J. 713, 714 (2000).
[271]. Id.
[272]. Id.
[273]. Brown v. Ent. Merchs. Ass’n, 564 U.S. 786, 790 (2011).
[274]. See Neil M. Richards, Reconciling Data Privacy and the First Amendment, 52 UCLA L. Rev. 1149, 1150–51 (2005).
[275]. Post, supra note 270, at 716.
[276]. See generally Alexander Tsesis, Free Speech Constitutionalism, 2015 U. Ill. L. Rev. 1015 (exploring the three dominant normative rationales for free speech in the United States).
[277]. Born from Justice Oliver Wendell Holmes’s canonical dissent in Abrams v. United States, the “marketplace of ideas” theory of free speech champions the “free trade in ideas” as the premier driver of truth in a society predicated upon democratic self-government. 250 U.S. 616, 630 (1919) (Holmes, J., dissenting). While some scholars support Justice Holmes’s view, others raise concerns about the theory’s blind spots. See Eugene Volokh, In Defense of the Marketplace of Ideas / Search for Truth as a Theory of Free Speech Protection, 97 Va. L. Rev. 595, 596–97 (2011); Robert Post, Participatory Democracy and Free Speech, 97 Va. L. Rev. 477, 478–80 (2011).
[278]. See C. Edwin Baker, Human Liberty and Freedom of Speech 47–69 (1989) (arguing that speech is protected because it “promotes both the speaker’s self-fulfillment and the speaker’s ability to participate in change”); Thomas I. Emerson, Toward a General Theory of the First Amendment, 72 Yale L.J. 877, 879 (1963) (explaining freedom of expression’s role in “the achievement of self-realization”).
[279]. This Article focuses on a participatory democracy theory of free speech rather than the broader democratic self-governance theory. Robert Post complicates the democratic self-governance theory by introducing his own nuanced offshoot of the theory rooted in participatory democracy. See Post, supra note 277, at 478. While democratic self-governance and participatory democracy may seem interchangeable, Post distinguishes between a Meiklejohnian view of democratic self-governance and his own theory of participatory democracy. See Robert Post, Reconciling Theory and Doctrine in First Amendment Jurisprudence, 88 Calif. L. Rev. 2353, 2368–69 (2000) [hereinafter Post, Reconciling Theory and Doctrine].
[280]. Why scrutinize First Amendment values rather than simply examine and apply First Amendment doctrine? First Amendment doctrine is notoriously incoherent, and many view this incoherence as a product of doctrinal divergence from animating First Amendment values. See Robert Post, Recuperating First Amendment Doctrine, 47 Stan. L. Rev. 1249, 1249–50 (1995); Post, Reconciling Theory and Doctrine, supra note 279, at 2365; Shanor, supra note 30, at 322–23. A more elemental inquiry is warranted before presuming First Amendment doctrine extends coverage to brokers’ platforms as new media of communication. See Reno v. ACLU, 521 U.S. 844, 885 (1997); Packingham v. North Carolina, 582 U.S. 98, 107–08 (2017).
[281]. Thomas E. Kadri, Drawing Trump Naked: Curbing the Right of Publicity to Protect Public Discourse, 78 Md. L. Rev. 899, 917 (2019).
[282]. See id. at 912–17; Leslie Kendrick, Are Speech Rights for Speakers?, 103 Va. L. Rev. 1767, 1789 (2017); Meir Dan-Cohen, Freedoms of Collective Speech: A Theory of Protected Communications by Organizations, Communities, and the State, 79 Calif. L. Rev. 1229, 1233 (1991); Morgan N. Weiland, Expanding the Periphery and Threatening the Core: The Ascendant Libertarian Speech Tradition, 69 Stan. L. Rev. 1389, 1451 (2017).
[283]. See Matthew Crain, The Limits of Transparency: Data Brokers and Commodification, 20 New Media & Soc’y 88, 90 (2018).
[284]. Jane Bambauer, Is Data Speech?, 66 Stan. L. Rev. 57, 61 (2014).
[285]. Id. at 60.
[286]. Jane R. Bambauer, The Empirical First Amendment, 78 Ohio St. L.J. 947, 955 (2017).
[287]. See Ashutosh Bhagwat, Sorrell v. IMS Health: Details, Detailing, and the Death of Privacy, 36 Vt. L. Rev. 855, 862–63 (2012); see also, e.g., IMS Health Inc. v. Sorrell, 630 F.3d 263, 271–72 (2d Cir. 2010).
[288]. Bambauer’s coverage analysis privileges an expansive view of the “marketplace of ideas” theory. Meiklejohn’s approach to democratic self-governance is “quite analogous to the theory of the marketplace of ideas.” Post, Reconciling Theory and Doctrine, supra note 279, at 2369. But Meiklejohn’s approach foregrounds the aphorism that “[w]hat is essential is not that everyone shall speak, but that everything worth saying shall be said.” Alexander Meiklejohn, Political Freedom: The Constitutional Powers of the People 26 (1960). A Meiklejohnian approach aims to distinguish “between cognitive and noncognitive aspects of speech” and to extend “less constitutional protection” to the latter. Cass R. Sunstein, Pornography and the First Amendment, 1986 Duke L.J. 589, 603.
[289]. For background on the concept of “public discourse” as an animating principle of the First Amendment, see Robert C. Post, Citizens Divided: Campaign Finance Reform and the Constitution 49 (2014) (“I shall use the term public discourse to describe the communicative processes by which persons participate in the formation of public opinion.”); Robert C. Post, Constitutional Domains: Democracy, Community, Management 7 (1995) (defining public discourse as “an open structure of communication” in which there can be “reconciliation of individual and collective autonomy”); Robert C. Post, The Constitutional Status of Commercial Speech, 48 UCLA L. Rev. 1, 7 (2000) [hereinafter Post, Constitutional Status of Commercial Speech] (“Public discourse is comprised of those processes of communication that must remain open to the participation of citizens if democratic legitimacy is to be maintained.”); Robert Post, Meiklejohn’s Mistake: Individual Autonomy and the Reform of Public Discourse, 64 U. Colo. L. Rev. 1109, 1115–16 (1993) (using the term “public discourse” to refer to the “communicative processes sufficient to instill in citizens a sense of participation, legitimacy, and identification”).
[290]. See Bambauer, supra note 286, at 948–50.
[291]. See Post, supra note 280, at 1254–55.
[292]. Id. at 1254.
[293]. See id. (invoking navigation charts as an example of media that communicate particularized messages that do not get First Amendment protection). For an analysis and rejection of First Amendment protection for software navigation and digital map programs, see Tim Wu, Machine Speech, 161 U. Pa. L. Rev. 1495, 1525 (2013) (“I believe that these technologies are still unprotected tools, because their communications perform a function unrelated to the communication of ideas, namely, telling someone how to get from A to B.”).
[294]. See Post, supra note 280, at 1254.
[295]. See Raymond Shih Ray Ku, Free Speech and Abortion: The First Amendment Case Against Compelled Motherhood, 43 Cardozo L. Rev. 2105, 2126–27 (2022) (articulating that, generally, selling information is not considered speech and that the Sorrell Court relied upon a “simplistic and reductionist interpretation of the First Amendment” which “ignores the reality that the Amendment does not protect all speech or apply simply because an activity may be labeled as speech”).
[296]. See Spence v. Washington, 418 U.S. 405, 409–10 (1974).
[297]. Cf. Post, supra note 280, at 1253–55 (explaining that First Amendment values presuppose a dialogic and independent relationship between speaker and audience, and that navigation charts, for example, lack First Amendment protection because they speak monologically to their audience); see also Ashutosh Bhagwat, Details: Specific Facts and the First Amendment, 86 S. Cal. L. Rev. 1, 40 (2012) (“[W]hile personal details sometimes play a key role in forms of self-governance, the relationship is often far more distant.”).
[298]. See Post, Reconciling Theory and Doctrine, supra note 279, at 2371–72.
[299]. Boos v. Barry, 485 U.S. 312, 322 (1988) (plurality opinion).
[300]. See Robert C. Post, Data Privacy and Dignitary Privacy: Google Spain, The Right to Be Forgotten, and the Construction of the Public Sphere, 67 Duke L.J. 981, 1009 (2018) [hereinafter Post, Data Privacy and Dignitary Privacy] (citing Robert C. Post, The Constitutional Concept of Public Discourse: Outrageous Opinion, Democratic Deliberation, and Hustler Magazine v. Falwell, 103 Harv. L. Rev. 601, 640–44, 680–84 (1990) [hereinafter Post, The Constitutional Concept of Public Discourse]); Post, The Constitutional Concept of Public Discourse, supra, at 624–26, 640–44, 680–84 (elucidating the concept of public discourse and civility norms present in First Amendment doctrine).
[301]. Post, Data Privacy and Dignitary Privacy, supra note 300, at 1009.
[302]. Kadri, supra note 281, at 948–49.
[303]. Id.
[304]. See Post, Data Privacy and Dignitary Privacy, supra note 300, at 1008–09.
[305]. See Frederick Schauer, Boundaries of the First Amendment, 117 Harv. L. Rev. 1765, 1790 (2004).
[306]. Id. at 1789.
[307]. Schauer, supra note 30, at 1617.
[308]. See Shanor, supra note 30, at 322.
[309]. See Richards, supra note 274, at 1165–66 (criticizing the deregulatory effect of First Amendment arguments advanced by Eugene Volokh).
[310]. Id. at 1169.
[311]. Id. at 1168.
[312]. Id. at 1171.
[313]. See Shanor, supra note 30, at 322; Post & Shanor, supra note 30, at 166–67.
[314]. Richards, supra note 274, at 1179.
[315]. See Schauer, supra note 305, at 1777–78.
[316]. See Shanor, supra note 30, at 322.
[317]. See Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 562–64 (1980).
[318]. See Valentine v. Chrestensen, 316 U.S. 52, 54 (1942) (“[T]he Constitution imposes no such restraint on government as respects purely commercial advertising.”).
[319]. See 425 U.S. 748, 770–71 (1976).
[320]. See 447 U.S. at 566.
[321]. Id.
[322]. Id.
[323]. See 564 U.S. 552, 570 (2011); Bhagwat, supra note 287, at 859–60.
[324]. See Robert L. Kerr, Desperately Seeking Coherence: The Lower Courts Struggle to Determine the Meaning of Sorrell for the Commercial Speech Doctrine, 7 U. Balt. J. Media L. & Ethics 1, 4 (2019) (“[L]ower courts today are drawing upon a dizzying array of approaches and emphases in seeking to articulate and apply Sorrell.”); see also N.J. Dep’t of Lab. & Workforce Dev. v. Crest Ultrasonics, 82 A.3d 258, 268 (N.J. Super. Ct. App. Div. 2014) (“[T]he Court has not clearly elucidated what that ‘heightened scrutiny’ might entail. In the wake of the Supreme Court’s post-Sorrell silence and inaction, many federal and state courts are continuing to apply the standard set forth in Central Hudson.”).
[325]. Cf. Eugene Volokh, No Take-Backs, No Do-Overs, No Data Replevin, Reason: Volokh Conspiracy (June 13, 2019), https://reason.com/2019/06/13/no-take-backs-no-do-overs-no-data-replevin [https://perma.cc/ZW63-NLTW] (remarking on a case where the court held that the plaintiff couldn’t rely on the remedy of replevin to “take back” what he already said in a digital recording).
[326]. Cox Broad. Corp. v. Cohn, 420 U.S. 469, 471 (1975).
[327]. Okla. Publ’g Co. v. Dist. Ct., 430 U.S. 308, 310 (1977) (per curiam).
[328]. Smith v. Daily Mail Publ’g Co., 443 U.S. 97, 103 (1979).
[329]. Fla. Star v. B.J.F., 491 U.S. 524, 538 (1989).
[330]. See generally 420 U.S. 469.
[331]. See generally 491 U.S. 524.
[332]. Cox, 420 U.S. at 471, 474 (assessing the constitutionality of imposing civil liability under a state law recognizing tortious invasion of privacy); Fla. Star, 491 U.S. at 526 (assessing the constitutionality of imposing civil liability under a state law making it illegal to “print, publish, or broadcast” the names of victims of sexual offenses (quoting Fla. Stat. § 794.03 (1987))).
[333]. Cox, 420 U.S. at 471, 473–74.
[334]. Fla. Star, 491 U.S. at 526–27.
[335]. Cox, 420 U.S. at 495–96 (finding that the information entered “the public domain” because the records containing the information were “open to public inspection” and had been “released to the public”); Fla. Star, 491 U.S. at 524, 527 (determining that because the department did not “restrict access either to the pressroom or to the reports made available therein” the information entered “the public domain”).
[336]. 443 U.S. 97 (1979).
[337]. See id. at 99.
[338]. See id. at 99–100.
[339]. See id. at 104.
[340]. Id. at 104.
[341]. See id.
[342]. Brooks v. Thomson Reuters Corp., No. 21-cv-01418-EMC, 2021 WL 3621837, at *9 (N.D. Cal. Aug. 16, 2021).
[343]. See Post, Constitutional Status of Commercial Speech, supra note 289, at 8 (suggesting that the distinction between public discourse and commercial speech rests upon a commonsense evaluation as to whether “the utterance of a particular speaker should be understood as an effort to engage public opinion or instead simply to sell products”).
[344]. 2021 WL 3621837, at *9.
[345]. Id.
[346]. See id.
[347]. See generally 472 U.S. 749 (1985).
[348]. See id. at 753.
[349]. Id. at 762.
[350]. See id.
[351]. See Justin Sherman, Credit Reporting Agencies Don’t Just Report Credit Scores, Duke Sanford Tech Pol’y Program (Nov. 9, 2022), https://techpolicy.sanford.duke.edu/blogroll/credit-reporting-agencies-don’t-just-report-credit-scores [https://perma.cc/R675-DV89].
[352]. See Data Brokers, supra note 238 (“For these companies, consumers are the product, not the customer.”).
[353]. See, e.g., Austin v. Mich. Chamber of Com., 494 U.S. 652, 655 (1990).
[354]. See J. Morris Clark, Guidelines for the Free Exercise Clause, 83 Harv. L. Rev. 327, 330–31 (1969) (“The purpose of almost any law can be traced back to one or another of the fundamental concerns of government: public health and safety, public peace and order, defense, revenue.”).
[355]. See supra Part I.B.1.
[356]. See Scott Skinner-Thompson, Privacy at the Margins 8 (2021); see also Faye Vasilopoulos, Hanging by a Thread: Meta’s New Platform, Threads, Sheds Light on the Slow Unraveling of Individual Privacy, 58 U. Ill. Chi. L. Rev. 473, 482 (2024).
[357]. See Danielle Keats Citron & Jonathon W. Penney, When Law Frees Us to Speak, 87 Fordham L. Rev. 2317, 2318–20 (2019); see also Jonathon W. Penney, Understanding Chilling Effects, 106 Minn. L. Rev. 1451, 1478–79 (2022); Mary Anne Franks, Free Speech Black Hole: Can the Internet Escape the Gravitational Pull of the First Amendment?, Knight First Amend. Inst. at Colum. Univ. (Aug. 21, 2019), https://knightcolumbia.org/content/the-free-speech-black-hole-can-the-internet-escape-the-gravitational-pull-of-the-first-amendment [https://perma.cc/T4KQ-RUW2].
[358]. Cf. Danielle Keats Citron, Intimate Privacy’s Protection Enables Free Speech, 2 J. Free Speech L. 3, 3 (2022) (“[I]ntimate privacy is an essential precondition for self-expression.”).
[359]. In the limited public employer-employee context, the Court utilizes the Pickering test. See Rankin v. McPherson, 483 U.S. 378, 384–85 (1987) (balancing the speech interests of a public employee in commenting on matters of public concern with the State’s interest, as an employer, in promoting the efficiency of public services (citing Pickering v. Bd. of Educ., 391 U.S. 563, 568 (1968))); see also Anna Tichy, Gillis v. Miller, 64 N.Y.L. Sch. L. Rev. 115, 129 (2020); Abby Ward, In Defense of Pickering: When A Public Employee’s Social Media Speech, Particularly Political Speech, Conflicts with Their Employer’s Public Service, 108 Minn. L. Rev. 1643, 1700 (2024).
[360]. See Skinner-Thompson, supra note 53, at 456 (“Although visibility comes with risks for members of marginalized groups, controlled visibility through privacy protections has the potential to serve important antisubordination goals and lead to broader societal participation of entire communities in the public square. Given that public space may deny the existence of nonnormative identities, that participation may by itself be radical and politically transformative.”); see also Mary Anne Franks, Democratic Surveillance, 30 Harv. J.L. & Tech. 425, 430 (2017) (“A democratic conception of privacy, by emphasizing the experiences of those most vulnerable to its violation, offers the best chance of securing privacy for all.”).
[361]. See Andrew, supra note 83; see also Ira S. Rubinstein, Voter Privacy in the Age of Big Data, 2014 Wis. L. Rev. 861, 896–97 (claiming that a breach of political data results in harms such as a declining faith in publicly supervised political processes).
[362]. When arguing that New Jersey’s Daniel’s Law violated the First Amendment, the class of data brokers did not even bother to challenge whether the government had a compelling interest. See Plaintiffs’ Memorandum of Law in Opposition to Defendants’ Consolidated Motion to Dismiss Plaintiffs’ Complaint at 41–42, Atlas Data Priv. Corp. v. We Inform, LLC, 758 F. Supp. 3d 322 (D.N.J. 2024) (No. 24-4037), 2024 WL 4905924 [hereinafter Atlas Data Privacy Brief].
[363]. Cf. id. at 28, 31–36 (rebutting data brokers’ arguments that New Jersey’s Daniel’s Law is both overinclusive and underinclusive).
[364]. See id. at 32 (“[W]hether the statute contains a verification requirement does not alter the scope of its coverage or the amount of speech it restricts.”).
[365]. See supra Part III.B.1.
[366]. See Atlas Data Privacy Brief, supra note 362, at 33–36.
[367]. See Burson v. Freeman, 504 U.S. 191, 209 (1992) (plurality opinion); see also Bd. of Trs. of the State Univ. of N.Y. v. Fox, 492 U.S. 469, 480 (1989) (“What our decisions require is . . . a fit that is not necessarily perfect, but reasonable . . . .”); Matthew Passalacqua, Something’s Brewing Within the Commercial Speech Doctrine, 46 Valparaiso U. L. Rev. 607, 642 (2012).
[368]. Practical obscurity describes the functionally obscure state of scattered, uncollated, and noncomputerized information in contrast to streamlined, digital access to that very same information in an aggregated form. See Nancy S. Marder, From “Practical Obscurity” to Web Disclosure: A New Understanding of Public Information, 59 Syracuse L. Rev. 441, 441–43 (2009).
[369]. See 489 U.S. 749, 762–64 (1989).
[370]. Id. at 764.
[371]. Id.