California Law Review

Racializing Algorithms

There is widespread recognition that algorithms in criminal law’s administration can impose negative racial and social effects. Scholars tend to offer two ways to address this concern through law—tinkering around the tools or abolishing the tools through law and policy. This Article contends that these paradigmatic interventions, though they may center racial disparities, legitimate the way race functions to structure society through the intersection of technology and law. In adopting a theoretical lens centered on racism and the law, it reveals deeply embedded social assumptions about race that propel algorithms as criminal legal reform in response to mass incarceration. It further explains how these same assumptions normalize the socially and historically contingent process of producing race and racial hierarchy in society through law. Normatively, this Article rejects the notion that tinkering around or facilitating the abolition of algorithms present the only viable solutions in law. Rather, it calls upon legal scholars to consider directly how to use the law to challenge the production of racial hierarchy at the intersection of technology and society. This Article proposes shifting the legal discourse on algorithms as criminal legal reform to critically center racism as an important step in this larger project moving forward.

It is as if I had been looking at a fishbowl—the glide and flick of the golden scales, the green tip, the bolt of the white careening back from the gills; the castles at the bottom, surrounded by pebbles and tiny, intricate fronds of green; the barely disturbed water and the flecks of waste and food, the tranquil bubbles traveling to the surface—and suddenly I saw the bowl, the structure that transparently (and invisibly) permits the ordered life it contains to exist in the larger world.

—Toni Morrison[2]

    Introduction

    The institutionalization of algorithms throughout the criminal legal process is both popular and controversial. Algorithmic risk assessments standardize the prediction of an individual engaging in criminal behavior in the future based upon statistical analyses of large historical data on past offenders’ behavior.[3] Recently, these tools have emerged as a common legal practice intended to guide the exercise of discretion within criminal law’s administration.[4] Much of the law and policy debate around these tools arises from the potential impact that algorithms impose on marginalized Black and Brown people already disproportionately affected by criminal law enforcement in the United States.[5] This Article treats the advance of the tools as a foundation to examine historically contingent racial assumptions expressed in the law that produce whiteness and white supremacy in the digital age. In revealing the deep connection between racial assumptions, this technology, and the law, this Article provides a theoretical foundation to expand critical approaches to race and racial hierarchy in legal scholarship going forward.

    Existing legal scholarship considers the effect of algorithms in criminal law’s administration on racially marginalized people from diverging perspectives. On one hand, scholars contend that algorithms _can_ improve criminal law’s administration by increasing objectivity and transparency, reducing incarceration, increasing public safety, and reducing the threat of racial bias by individual actors in decision-making. These scholars explore the role of law and policy in facilitating or hindering the potential for algorithms to achieve these normative ends.[6] On the other hand, scholars and activists question whether algorithmic risk assessments will actually reduce incarceration and punitive surveillance, particularly for Black and Brown people disproportionately impacted by the criminal legal apparatus. These scholars propose a variety of legal interventions, including the abolition of algorithms as criminal legal practice.[7] Though there is no consensus on how to conceptualize mass incarceration,[8] both bodies of legal scholarship attend to the impact of algorithms on racial minorities.

    This scholarship helps illuminate how algorithms in criminal law’s administration shape the material conditions of Black people’s lives.[9] Yet, from a perspective oriented towards racial justice, the existing legal perspectives are insufficient. The question underlying existing legal scholarship tends to address the intersection of technology and criminal law in light of extant racial disparities in the criminal legal apparatus. But concern with racial disparities is not the same as critically questioning race and racial hierarchies in law.[10] The latter stance recognizes race is socially produced and powerfully shapes society in various ways. From this stance, the key underlying question is different: how can law be a tool to destabilize the production of racial hierarchies in society? This inquiry demands attention to the co-productive dimensions of race in law. Law creates subordinating material conditions for Black people; it also legitimates and creates historically contingent ways of thinking that normalize these conditions in society.[11] In the context of criminal legal reform, less attention is paid to the intellectual component of producing race and racial hierarchy in law. Failure to attend to transitory racial assumptions expressed in the law is perilous.[12] Following the Toni Morrison quotation above, the existing perspectives on algorithms as criminal legal practice can easily center on the fish—here, marginalized Black and Brown people—while making invisible the fishbowl, or the ways of thinking in society that produce race and sustain racial hierarchy through law.

    This Article examines the intersection of racism and law as an intellectual foundation that allows algorithms to expand in society. It illuminates and critiques deeply embedded racial assumptions in society expressed through the law around algorithms as criminal legal reform. Unpacking these assumptions demonstrates the contemporary, intersectional production of race and racial hierarchy in the United States. Socially constructed and historically contingent racial assumptions expressed in the law stabilize algorithms as criminal legal practice. These same racial assumptions normalize and legitimate the production of whiteness and white supremacy in society through law. Normatively, this Article shifts the grammar of racial justice at the intersection of law, technology, and society. It proposes a legal perspective to expand the normative horizons of legal critiques of algorithms going forward.

    This Article analyzes three embedded social assumptions about race that are expressed in the law around algorithms. First, I consider the conceptualization of the algorithm as an inevitable criminal legal reform.[13] In part, this claim relies upon the notion that the algorithm can resolve a pernicious threat that criminal legal actors cannot make individual decisions independent of permanent racial prejudices. This notion relies upon a belief about racism that constitutes whiteness as a race. The idea that racism is too difficult to overcome without nonhuman intervention legitimates race as a nonhuman, rather than social, phenomenon. It also depoliticizes algorithms as a legal response to mass incarceration. Second, I examine the conceptualization of the algorithm as imposing de minimis social costs in criminal law’s administration.[14] Yet algorithms pose significant material and epistemic costs facilitated by law. Assumptions about the morality of the algorithm as an antiracist intervention shape normative judgments about which costs are bearable, and which costs are politicized. Depoliticizing certain costs makes whiteness a basis to exit criminal law’s disciplinary function in society. Third, I consider assumptions about the nature of race itself.[15] Legal scholarship debates whether and how law can address racial disparities produced by algorithms used in the criminal legal process. These debates are race-constructing. In questioning whether and how law should permit consideration of race in the construction of algorithms, this discourse produces a specific racial assumption through law: the notion that race is fixed in society. This assumption gives race social and political meaning; it makes structural marginalization distributed along racial lines appear natural and beyond redress.

    In short, racial assumptions expressed in the law are fundamental to the expansion of algorithms as criminal legal practice while algorithms sustain ways of thinking in society that legitimate race and racial hierarchy in the information age. Recognizing these connections among race, this technology, and the law sets a foundation to expand and reframe the emerging legal scholarship around algorithms as criminal legal reform. Unlike existing scholarship, which tends to question whether and how algorithms can be applied rigorously under the law, this Article does not prescribe a “fix” to the algorithm or the law around it. To the contrary, this Article demonstrates that such an approach leaves untouched the transitory racial assumptions legitimated through the intersection of this technology and law. Yet, to interrupt the production of race and racial hierarchy, legal scholarship must destabilize contemporary racial assumptions, not just technology or law.[16] I propose a legal perspective on algorithms as criminal legal practice that decenters the social phenomenon of mass incarceration to instead critically center racial assumptions in the law.[17] While this approach may leave a reader seeking a legal fix to algorithms and criminal law’s administration unsatisfied, it expands the foundation for legal scholars to engage with racism through the law going forward. Whether urging regulation or abolition of algorithms, legal scholars can better structure their interventions to make visible social assumptions about race in the law. Such an approach reminds us that racism in society remains complex. Our responses to it in law should be as well.

    This Article enhances at least three burgeoning areas of legal scholarship. First, it illuminates mass incarceration as a social concept that facilitates legal transformation. Much scholarship urges specific legal transformations based on divergent conceptualizations of mass incarceration.[18] In contrast, this Article demonstrates that it is because mass incarceration is a racialized phenomenon that we choose to change the law in particular ways. Thus, our responses to mass incarceration reveal contemporary racial assumptions present in society. Second, this Article provides a theoretical foundation to expand critiques of algorithms on the basis of race. Existing critiques of algorithms emphasize the carceral supervision component of race production, while critical race interventions underscore the problematic connections between race, prediction, and criminality in relation to algorithms in criminal law.[19] This Article treats algorithms as a technical process that standardizes ways of thinking in criminal law’s administration.[20] Through this perspective on the technology, I bring to the fore a separate set of underlying contemporary racial assumptions expressed in the law that function to normalize racial hierarchy in society. Third, by centering racism as an intellectual foundation for algorithms, this Article begins to imagine a more expansive approach to scholarship at the intersection of law and technology. Though this Article examines the production of race and the expansion of algorithms in the context of criminal law, the methodological approach proposed here can be applied in legal scholarship across many spaces in society where algorithms and other technological interventions are deployed. Accordingly, this contribution expands the ways in which legal scholars can destabilize constitutive power embedded within, and represented by, technological practices going forward.

    The Article proceeds in four parts. Part I describes the state of legal scholarship around algorithmic risk assessments as criminal legal practice, situating dominant critiques within two diverging perspectives on mass incarceration. Part II demonstrates that, through both perspectives, legal scholarship on mass incarceration tends to treat race as a social fact that already exists in the social world upon which algorithms in criminal law act. Drawing insights from critical race scholars, I argue that this assumption must be challenged in law. Part III theorizes the production of race and racial hierarchy at the intersection of technology and law. I bring to the fore, and critique, specific racial assumptions that facilitate both the expansion of algorithms as a response to mass incarceration and the social production of whiteness and white supremacy through law. Part IV reflects on the implications of this theoretical critique for legal scholarship going forward.

    A note about terminology is important before moving forward. By “racism,” I refer to a mode of thought shaped by racial assumptions and a set of human practices influenced by that mode of thought. This Article foregrounds racial assumptions in the law around algorithms that facilitate the social production of racial hierarchy and race, together referred to as “racial difference.”[21] These assumptions combine to demonstrate how the law around algorithms contributes to the hierarchical production of human difference through assumptions about race expressed in law. I invoke the “deeper normative, critical thrust” of the term “racialization” in my prescription.[22] That is, this Article urges legal scholars to approach algorithms in a way that promotes critical reflections on the complex ways racism continues to structure society at the intersection of technology and law going forward.

    I. Algorithmic Risk Assessments and Criminal Legal Reform Perspectives

    Information technologies—including algorithmic risk assessments, check-in kiosks, electronic monitoring devices, and more—have expanded exponentially as criminal legal practices across the United States.[23] This development occurred as the historically significant rise in the U.S. incarcerated population and its disproportionate concentration among marginalized Black and Brown people reached public consciousness after the financial crisis of 2008.[24]

    Algorithmic risk assessments (or “algorithms” or “tools”) are perhaps the most popular, and controversial, of these expanding technologies. Algorithms rely upon statistical analyses of large historical datasets consisting of observations about the past behaviors of people involved in the criminal legal process.[25] The statistical models identify which factors correlate with preidentified types of future behavior that developers and policymakers determine to be of interest in the criminal legal process.[26] Tool developers then select the factors upon which they will construct an algorithm for use in criminal law’s administration.[27] These “predictive risk factors” can include criminal history, age, gender, and a number of other demographic and psychological factors.[28] Developers then select a statistical method that assigns weights to these risk factors so that the algorithm will rank individuals according to how much they share the characteristics of those who engaged in the specified behavior in the observed dataset.[29] This ranking is translated into a category, like low, medium, or high risk of recidivism. That category is then conveyed to criminal legal actors in their decision-making process.[30] Thus, the algorithm is meant to standardize the estimate of an individual’s future behavior which, in turn, can shape the decisions of individual criminal legal actors.[31]
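
    To make these mechanics concrete, the sketch below illustrates the general pattern just described: weighted risk factors are combined into a score, which is then binned into a category. It is a purely hypothetical illustration, not a reproduction of any deployed tool; every factor name, weight, and cutoff is invented, and real instruments derive their weights from statistical models fit to historical data.

```python
# Toy illustration of the process described above: weighted risk factors
# are combined into a score, which is then translated into a risk category.
# All factor names, weights, and cutoffs are hypothetical.

WEIGHTS = {
    "prior_arrests": 0.30,             # criminal history
    "age_under_25": 0.25,              # demographic factor
    "prior_failures_to_appear": 0.20,  # appearance records
}

# Hypothetical cutoffs translating the raw score into the low/medium/high
# categories conveyed to judges and other criminal legal actors.
CUTOFFS = [(0.50, "low"), (1.00, "medium"), (float("inf"), "high")]

def risk_category(factors: dict) -> str:
    """Combine weighted factors into a raw score, then bin it."""
    score = sum(WEIGHTS[name] * value for name, value in factors.items())
    for cutoff, label in CUTOFFS:
        if score <= cutoff:
            return label

# Example: two prior arrests, under 25, no prior failures to appear.
print(risk_category({
    "prior_arrests": 2,
    "age_under_25": 1,
    "prior_failures_to_appear": 0,
}))  # prints "medium" under these toy weights (score = 0.85)
```

    In a deployed instrument, the weighting step would be the output of the statistical modeling described above rather than a hand-written table, but the translation from score to category conveyed to a decision-maker follows this general shape.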

    Criminal legal actors began applying earlier iterations of this technology in myriad contexts in the criminal legal process as early as the 1930s.[32] In the past decade, this technology has emerged as a prominent practice.[33] Although the tools appear in all stages of the criminal legal process, this Article emphasizes their institutionalization in both the pretrial bail and post-conviction sentencing contexts. In these contexts, the algorithm is meant to provide judges and other criminal legal actors with standardized information about the defendant to inform decisions about whether to release the person pretrial, place the person under pretrial or post-conviction supervision, or subject the person to a term of incarceration.

    The following Sections situate the algorithm within two perspectives on mass incarceration. These prevailing perspectives are ways through which scholars tend to critique criminal legal reforms.[34] I describe how these two dominant perspectives shape legal critiques of algorithms within the criminal process. I do not claim that every scholar or commentator fits neatly into either frame. Rather, in what follows, I set out the ideological parameters of the legal scholarship around algorithms as criminal legal reform.

    A. Information Technologies as a Response to Incarceration

    In one prevailing perspective, legal scholars engage with information technologies oriented around whether and how such tools may reduce incarceration to an “acceptable” level.[35] Today, the United States leads the world in terms of incarceration, with approximately 1 percent of the U.S. adult population living behind bars.[36] Though adherents to this perspective do not often define the desired amount of incarceration, they agree that we have surpassed that quantitative amount. This pragmatic perspective “treats criminal justice problems as a matter of degree that can be remedied by recalibrating the way that the system sorts among defendants, categorizes conduct, and punishes wrongdoing.”[37] Thus, these scholars conceptualize mass incarceration as a problem of too much incarceration. To them, mass incarceration is the product of a series of ill-advised policy choices.[38]

    For these adherents, algorithmic risk assessments are front and center as criminal legal reform. If properly implemented, these tools could be a critical component of the larger effort to reduce the unnecessary costs of incarceration on society and individuals.[39] In theory, the right tool would help judges identify how to best surveil individual defendants who come before their courtroom—whether behind bars, via electronic monitoring, or through other mechanisms—through more efficient and cost-effective means.[40] It could even improve police efforts to surveil communities and maintain safety.[41] Within this frame, institutionalizing algorithms could be an important step toward right-sizing the prison population while offering additional societal benefits like transparency and increased public safety.

    Still, algorithmic risk assessments present many concerns for these legal scholars to debate. A significant concern pertains to race. Among scholars adhering to this conceptualization of mass incarceration, racial inequality is an important reason to implement criminal legal reform.[42] Myriad studies demonstrate that Black and Brown people are disproportionately represented in the U.S. prison population.[43] In light of this reality, whether algorithmic risk assessments reduce or exacerbate racial disparities is important.[44] Indicators that algorithmic tools produce biased results are concerning.[45] Thus, legal scholars adopting the incarceration perspective have queried whether and how law can address these concerns about algorithms in criminal law’s administration.[46]

    B. Information Technologies and the Carceral State

    Legal scholars also engage with information technologies through a lens sensitive to the “carceral state.” The carceral state captures the perspective that “criminal law now structures state-citizen interactions to such a substantial degree that it seems a defining characteristic of the polity, even for those not presently imprisoned.”[47] Scholars adhering to this perspective treat the historic rise in incarceration and its concentration among Black and Brown people as central to the transformation of government and broader structures of governance.[48] They situate and critique criminal legal practices in relation to broad regulatory structures in the United States.

    Seminal works in this tradition examine algorithmic risk assessments in criminal law’s administration.[49] Consistent with the thrust of this literature more broadly, legal scholars adhering to the carceral state perspective approach this practice with a great deal of skepticism. They use algorithms as a vehicle to identify conceptual changes about the role of the state, including the expansion of market-based rationales[50] and the notion that government is good for control rather than support.[51] They illuminate how algorithms legitimate and expand a mode of governance that prioritizes criminal enforcement to regulate the poor and dispossessed.[52]

    Legal scholars adopting this perspective demonstrate the broad-reaching, negative effects that algorithms impose on poor and Black people in the carceral state. They highlight how algorithms disproportionately impact historically marginalized people due to structural realities of the carceral state.[53] They demonstrate how these tools entrench the lack of power and agency among the already marginalized, characteristic of the carceral state.[54] They reveal the material consequences of expanding carceral technologies for already marginalized communities.[55] Such critiques have led to a number of policy proposals, including abolishing the algorithm in criminal law’s administration.[56]

    In summary, whether one adopts a perspective on mass incarceration oriented toward incarceration or the carceral state, there is much to say about race and algorithms as a criminal legal practice. Depending on which perspective one adopts about mass incarceration, the substance of what one says about race, the algorithm, and how to respond to it in law will differ. This includes whether to reform the algorithm as criminal legal practice or seek to abolish it outright.

    II. The Production of Race and Racial Hierarchy in Law

    Through these dominant perspectives, legal scholars elucidate the lived experiences of Black people at the intersection of technology and criminal law to varying degrees.[57] Yet, these perspectives tend to share a common assumption. The existing legal perspectives on mass incarceration, whether intentionally or not, take for granted the existence of race and racial hierarchy as transparent and relatively uncontroversial social facts. When mass incarceration is centered in legal scholarship on algorithms as criminal legal practice, the problem is that the algorithm exacerbates racial disparities or the criminal justice system exacerbates racial disparities, or some combination of the two.

    This Article takes a different approach. It begins to grapple with the production of racial difference at the intersection of technology and law. Drawing insight from critical race theory, Part II.A demonstrates that racial difference is co-produced through historically contingent intellectual assumptions and material conditions legitimated in law. Part II.B explains the importance of this insight for discussions about algorithms as criminal legal reform. It situates transitory racial assumptions in law as critical to engaging with the contemporary production of race and racial hierarchy through law. Thus, Part II sets a theoretical foundation to engage with algorithms as a legal practice that contemporaneously gives race meaning through law, the matter taken up in Part III.

    A. A Structural Account of Law’s Role in Producing Racial Hierarchy

    Race refers to a “vast group of people loosely bound together by historically contingent, socially significant elements of their morphology and/or ancestry.”[58] Race is a social construct, meaning a human creation. More deeply, following Professor Ian Haney López, it is a “sui generis social phenomenon” in which historically contingent social systems of meaning connect physical features, faces, and personal characteristics to what we commonly refer to as race.[59] These meanings are the product of law, ideology, and social relations.[60]

    The social meanings associated with race function to hierarchically structure society. The United States actively maintains a society structured around racial caste. This social structure, wherein marginalized Black and Brown people appear at the “bottom” and White persons appear at the “top,” is not natural. It is constructed. Law plays a significant role in producing and legitimating this social structure. As Professor Devon Carbado explained, “the law does not simply reflect ideas about race. The law constructs race[]” in myriad ways.[61]

    The mechanisms through which law fabricates race and positions racial minorities in a subordinate position are not fixed. The remainder of this Section draws upon critical race theory to explicate how and why law functions this way. For now, it is important to emphasize upfront that the production of race and racial hierarchy is historically contingent.[62] One can only engage with its production through law by adopting a critical lens sensitive to the specific cultural and historical context. Core concepts used by critical race scholars provide guidance to discern the contemporary ways the law produces race and racial hierarchy in society.

    1. Structural Dynamics, Not Individual/Attitudinal Behavior

    First, critical race scholars urge examination of _structural dynamics_. These scholars reject the idea that racial hierarchy is the product of individual actors with negative attitudes about racial minorities. Although it is entirely plausible that individual actors will behave in ways animated by racial animus, legal scholars in this tradition recognize that structural dynamics enacted through law do the work to produce racial hierarchy. These structural dynamics operate within and across multiple institutions and fields to sustain racial minorities within a subordinate social position.[63]

    Here, a critique of common sense in law is helpful. There is “an element of banality” to structural racism.[64] That is, the legal practices that sustain racial hierarchy may appear prosaic and ordinary in social context. This is because law is the product of our cultural imagination. It “derive[s] from structures of thought, the collective constructs of many minds.”[65] These structures of thought appear as common sense; the ideas seem natural, practical, simple, and accessible.[66]

    Thus, prevailing contemporary common-sense ideas of race pervade the law. Law produces structural dynamics that allocate benefits and burdens along racial lines through specific legal practices. Deeply embedded assumptions about race render the legal practices that allocate benefits and burdens along racial lines as natural, logical, and beyond critique in social, cultural, and historical contexts. These legal practices produce “the linkage between white privilege and the disadvantage of racial minorities that is a critical feature of how race structures social and economic relations.”[67]

    2. The Normalization of Whiteness

    Critical race scholars reject the notion that race is biological. Instead, they recognize that race is socially constructed. “Race” includes whiteness. Whiteness is, as Martha Mahoney contends, “historically located, malleable, contingent, and capable of being transformed.”[68] It includes a set of “linked dimensions,” including “a location of structural advantage and race privilege; a ‘standpoint’ from which White people look at... society; and a set of cultural practices that are usually unmarked and unnamed.”[69] These components of whiteness are “continuously constructed, reconstructed, and transformed for White people.”[70] Though whiteness as a privileged identity “requires reinforcement and maintenance,” the practices that produce this privilege go unmarked because racism renders invisible the mechanisms that socially reproduce and maintain this privilege.[71]

    Scholarship critical of racism in the law reveals the ways that whiteness—and the benefits that accrue to those considered White—is constructed through law. Scholars in this tradition illuminate how whiteness has “functioned as a normative baseline” in law against which everyone else is measured.[72] They emphasize how and why the structural disadvantages the law accords to marginalized Black and Brown people confer the structural advantages the law accords to whiteness. For example, Professor Cheryl Harris powerfully compared whiteness to property for the ways in which it has been continually protected in law.[73] As she emphasized, law “construct[s] whiteness as not merely race, but race plus privilege.”[74] “De facto white privilege” produces and is produced by norms wherein substantial inequality along racial lines is the base line.[75]

    The normalization of whiteness, like the production of race more broadly, is an active dynamic that produces power to shape social interactions through law.[76] Law makes possible the continued protection and preservation of whiteness as the normative baseline in society.[77] Through the normalization of whiteness, law disciplines non-White people to be more like White people. It also disciplines White people to strive toward a constructed notion of whiteness that can intersect with other axes of identity (e.g., White + male + heterosexual, so on and so forth) to preserve a hierarchical social structure.[78]

    3. The Co-Production of Racial Hierarchy in Law

    Critical race scholars reject the notion that race or racism is fixed. For example, Professor Ian Haney López defined race as a “social complex of meanings we continually replicate in our daily lives” and over time in historically contingent and culturally specific ways.[79] The racial meanings attached to morphological elements or ancestry (i.e., wooly hair + flat nose = Black) are the product of ongoing, contradictory, self-reinforcing processes enacted in and legitimated by law.[80] The social meanings attached to these racial characteristics (i.e., Black = lazy/dangerous; White = industrious/law-abiding) shift in social context and social interaction.[81] Law helps to create and legitimate these meanings, which combine to produce and normalize racial hierarchy.[82] Thus, law plays a significant role in the dual social processes that together make race and racial hierarchy salient in society. That is, law creates the material conditions of inequality along racial lines and law legitimates and produces social assumptions about race that make differing material conditions appear normal.

    Racial meanings that structure society through law constantly change. As Professor Khiara Bridges explained, “[c]hanged circumstances alter[] the techniques that a racist society use[s] to manage and maintain the racial hierarchy.”[83] These changes occur within and through law. For example, consider the eugenics movement. On the one hand, the eugenics movement is part of a long history of the law affirming biological conceptions of race in society.[84] That biological conception of race survives today through genomic developments.[85] On the other hand, the employment of forcible sterilization to ensure racial domination depends on different racial assumptions and legal foundations in the 1930s compared to the 1960s on. Whereas sterilization in the 1930s was justified by assumptions about the need to purify whiteness, sterilization in the 1960s onward has been justified by assumptions about Black people’s social deficiency.[86] The legal practice of sterilization may be the same, but the racial meanings that the legal practice enforces are different. Further, this example demonstrates that a legal practice can represent social assumptions about race that are expressed in other areas of the law not necessarily conceptualized as connected.

    In short, social context is critical to identifying the role of law in the racial formation process. Historically contingent and culturally specific assumptions about race shape the law. Thus, even if race is fundamentally a social construct, law is a site through which institutional privilege and subordination are produced, legitimated, and transformed.[87] Social assumptions about race gain power through law.

    B. Racial Assumptions and Algorithms as Criminal Legal Reform

    Returning to the legal scholarship on algorithms as a criminal legal reform, this Section explains the import of examining racial assumptions independent from critiques of mass incarceration. Without attending to transitory racial assumptions in the law, these legal perspectives can obscure and legitimate the contemporary production of race and racial hierarchy through the law.

    Both legal perspectives treat race as a thing that already exists in the social world. This assumption is troubling from a racial justice perspective. Race is discursive—it is the product of “systems of meaning” that “factor into human culture and regulate human conduct.”[88] Critical race legal scholars emphasize how race and racial hierarchy are produced through the intersection of racial assumptions and law. Approaching criminal legal reforms through a lens focused on racial assumptions can reveal certain ways of thinking about race expressed in law that make race and racial hierarchy salient in the present. This insight is critical at this important moment in criminal legal reform. Without understanding how race and racism function to structure society through the law, we cannot begin to fully imagine how to go about changing it in the law. Yet change is exactly what the existing legal scholarship on mass incarceration seeks and facilitates in society, in particular at its intersection with information technology. Without attending to contemporary racial assumptions, legal scholarship encouraging massive transformations in society can contemporaneously, though inadvertently, contribute to the production of race and racial hierarchy through law.

    The remainder of this Article responds to this concern in the context of algorithms as criminal legal reform. It proceeds from the understanding that racism is not stagnant, though it is endemic to U.S. society. It treats race as a fluid social construct necessary to reproduce racial hierarchy in society.[89] This approach contrasts with a tendency in scholarship on mass incarceration to conceptualize racism and race stagnantly in juxtaposition to expansive calls for transformation.[90] It instead treats mass incarceration as the social and cultural context within which specific racial assumptions are expressed in and around the law.

    The following analysis demonstrates that ideas about race that structure society sustain support for specific criminal legal reforms in response to mass incarceration. Accordingly, this Article joins the small but growing body of legal scholarship pointing to the connections between the expansion of algorithms in criminal law governance and white supremacy. To date, this scholarship largely emphasizes the problems with prediction in criminal law.[91] Such works challenge how “criminal laws and practices sustain prevailing beliefs of Black criminality.”[92] Part III complements and expands upon these emerging critiques. Rather than focus on what the algorithm is—a predictor—it underscores what the algorithm does: it standardizes ways of thinking in criminal law’s administration through technical means. Further, it identifies and critiques racial assumptions that produce whiteness at the intersection of the law and the algorithm as criminal legal practice. Such an analysis sets the foundation to broaden the bases for legal scholars to critique algorithms as legal practice in social and cultural context, the matter taken up directly in Part IV.

    III. Racial Assumptions in the Law Around Algorithms as Criminal Legal Reform

    This Section critically centers racism in the law around algorithms as criminal legal reform. It illuminates racial assumptions embedded in society that facilitate expansion of algorithms as criminal legal practice. These same assumptions normalize the production of racial difference in society through law.

    A. Race as a Nonhuman Phenomenon

    Scholars and policymakers often conceptualize algorithmic risk assessment instruments as necessary within the criminal legal process. Without them, so the argument goes, judges and other criminal legal actors may predict recidivism risk incorrectly on the basis of individual prejudices about race. This argument tends to take the form of a comparison: algorithmic risk tools are preferred to subjective risk assessments by humans.[93] This Section illuminates how this argument draws from racial assumptions that constitute whiteness in society.

    The assertion that individual decision-makers will predict risk incorrectly on the basis of individual racial bias is foundational to the legal scholarship urging adoption of algorithms as criminal legal practice. As an example, consider Professor Sandra Mayson’s recent article, Bias In, Bias Out.[94] Mayson examined the genesis of racial inequities produced by an algorithmic risk assessment. “The real source of the problem,” she suggested, “[is] the nature of prediction itself.”[95] Algorithms used in the criminal legal process “reveal[] the racial inequality inherent in all crime prediction in a racially unequal world.”[96] Here, “[t]weaking an algorithm or its input data, or even rejecting actuarial methods, will not redress the racial disparities in crime or arrest risk in a racially stratified world.”[97] Nevertheless, after illuminating the “impossibil[ity]” of equality-enhancing “algorithmic methodology,”[98] Mayson concluded that we must still use algorithms in criminal law’s administration.

    Mayson reached this conclusion through two analytical steps.[99] First, she situated the “default alternative” to algorithmic assessments as the “subjective risk assessment” by humans.[100] Second, she argued this alternative could harm Black defendants.[101] It leaves the possibility of a judge subjectively predicting risk on the basis of “irrational cognitive bias [that] can fuel racial inequality.”[102] Because “[i]ndividual criminal justice actors... may harbor animosity toward one racial group[,] [o]r the bias may be implicit,”[103] her alternative to adopting a tool—doing nothing—would be worse than adopting a tool that could burden minorities in particular.[104] She supported this point by referencing “experimental literature [that] has documented the effects of implicit bias in a range of criminal justice settings” and “ample and mounting evidence [that] has documented otherwise inexplicable racial disparities in policing, charging, pretrial detention, and sentencing.”[105]

    Such analysis employs racial assumptions when confronting the absolute contingency of solutions to mass incarceration. First, the studies referenced do not compel the interpretation she accords them. One could, like Mayson, conclude that these studies demonstrate massive, individual human bias. One could also conclude that these studies demonstrate racism is far more complex than existing studies in each respective field suggest.[106] Second, the state of our knowledge about race and racism contradicts her characterization of the dilemma. Race is socially constructed in political, cultural, and historical contexts.[107] If we accept that race is a social construct, it is incoherent to “conceptualize the causal effect of race by imagining [individual] decision-makers perceiv[ing] two [persons] as otherwise identical but for race.”[108] Yet this is exactly what legal scholars and policymakers tend to do when discussing the potential benefits of algorithms as criminal legal practice.[109] Looking to Mayson’s article as a paradigmatic example, she narrowly construed racial inequality as a product of unequal treatment by individual criminal legal actors.[110] A perception or belief about racism sustains this argument. Because algorithms standardize prediction across individual defendants, this perception makes algorithms appear responsive to the phenomenon of mass incarceration. Empirical research demonstrates that many Americans share this perception or belief.[111]

    Instituting algorithms as a response to the perception that individual prejudice produces racial inequality constitutes whiteness. It situates racial prejudice and bias toward those constructed as non-White as intractable and too deep to be eradicated. Underlying this notion is the assumption that whiteness, meaning the connection between being raced as white and privilege, is so deeply embedded in society that racial change, if possible, requires a nonhuman intervention. Algorithmic risk assessments, with their emphasis on big data sets, prediction, and constant surveillance, appear nonhuman.[112] The combination legitimates race. Assuming that racism cannot or should not be resolved absent a nonhuman intervention suggests that race (including whiteness) is a nonhuman phenomenon as well.

    These race-legitimating ideas are expressed in the law around algorithms as criminal legal practice in a variety of ways. For example, the assumption that individual racial bias produces racial inequality renders the constitutional limitations that courts place on technical mechanisms to constrain judicial decision making as matters of sociopolitical crisis.[113] Further, the assumption that nonhuman interventions respond to racism situates algorithms in a favored position over other, more structural, criminal legal reforms.[114] These assumptions produce a space of consensus around algorithms as criminal legal reform. That consensus threatens to foreclose other kinds of structural interventions, like investment in institutions of care, to the benefit of whiteness.[115]

    In summary, the algorithm relies upon social assumptions about the feasibility of addressing racism among human actors in the criminal legal process. These assumptions legitimate race as a nonhuman phenomenon in society while justifying algorithms as a necessary component to addressing mass incarceration.

    B. Technology as the Anti-Racist Intervention

    Adopting algorithms within the criminal legal process consumes resources and mental capacity that can undermine and overshadow other kinds of interventions. Yet many scholars and policymakers have suggested that algorithms pose de minimis costs. I contend that the tools appear relatively costless because of a racial assumption that this technical intervention is a morally right response to racial inequality. This assumption depoliticizes significant material and epistemic costs associated with the expansion of algorithms while legitimating the production of white privilege in criminal law’s administration.

    The notion that algorithmic risk assessments are costless is pervasive. For example, the American Law Institute has suggested that algorithmic risk assessments in the sentencing process provide an objective means to save limited financial resources within criminal law administration while avoiding victimization.[116] Legal scholars have urged adopting algorithmic risk assessments throughout criminal legal institutions because, as a data-driven intervention, the tools can manage the “tough on crime” politics that have distorted criminal legal policies in recent decades.[117] Alternately, some have argued that these data-driven tools can save money by avoiding the cost of incarceration, while rendering decision making processes more transparent.[118] Such calls for data-driven interventions focus on reforming the criminal legal process without necessarily changing the structural forces that sustain the expansion of the criminal legal apparatus. As such, the financial orientation complements policy arguments relying on evidence of cost-savings that compare incarceration to alternative forms of surveillance.[119]

    This orientation obscures a lot. It trains the analytical lens toward certain types of costs among criminal legal institutions while leaving other costs unquestioned.[120] However, algorithmic tools are both materially and epistemically costly.

    1. Material Costs of Algorithmic Risk Assessments

    Algorithmic tools are not materially costless. Tax dollars largely subsidize the information technology sector in criminal legal institutions.[121] Starting in the 1960s, the federal government encouraged the fifty states and private entities to develop “public-safety related programs of all types,” including those that promote “technical efficiency” throughout prisons, courts, policing, and more.[122] It achieved this end through the Law Enforcement Assistance Act of 1965,[123] and shortly thereafter the creation of the Law Enforcement Assistance Administration (LEAA) via the Omnibus Crime Control and Safe Streets Act of 1968.[124] While much of the LEAA’s billions of dollars in distributions went toward policing,[125] a substantial amount subsidized other forms of technical support at the federal, state, and local level.[126] This includes, for example, the expansion of technical sentencing guidelines[127] and the creation of interagency criminal record databases.[128] Though the Reagan Administration phased out the LEAA in 1982,[129] the LEAA’s legacy lives on through various federal agency offshoots.[130] These offshoots include the Office of Justice Programs,[131] the Bureau of Justice Statistics,[132] and the National Institute of Justice.[133] In recent years, the Office of Justice Programs and the National Institute of Justice, in particular, have financially supported states’ efforts to confront the economic pressures of mass incarceration through technical assistance offered via the Justice Reinvestment Initiative (JRI).[134] In turn, JRI—as a public/private initiative partially subsidized by the federal government—has been a key promoter of algorithmic risk assessments.[135] Thus, the federal government is a primary financial supporter of the costly endeavor to expand algorithms as criminal legal practice.

    To the extent tool adoption is considered to be materially cheap, that is because law makes it so. All current actuarial risk assessments rely on what Professor Ngozi Okidegbe calls “carceral knowledge sources” to predict future behavior.[136] For example, publicly and privately developed tools rely heavily on police data, like criminal arrest records, court records, appearance records, and conviction records.[137] These records, or data, are potentially valuable to the extent one can collect them for processing.[138] The federal government created a centralized office to guide states toward creating centralized repositories of records taken from local police and courts in the 1970s.[139] In 1995, Congress financially incentivized states and localities to automate their criminal record systems through the National Criminal History Improvement Program.[140] The 1996 Electronic Freedom of Information Act further encouraged government agencies to “use new technology to enhance public access to agency records and information.”[141] By 2002, federal and state courts throughout the country were pushing toward making online access to court records widely available.[142] In recent years, the federal government has shifted its energy toward incentivizing states to collect other types of information to assist in recidivism prediction as well.[143]

    Many critique this trend, as the government has effectively created a cheap commodity for use in the data-driven economy.[144] Eisha Jain, for example, has long critiqued the open access to criminal records through a race- and class-sensitive lens.[145] In the context of policing, she argues that these records should not be created or at least not easily accessible for non-government use.[146] My point, however, is not whether or how to regulate these records. Instead, I simply emphasize that if these tools are “cheap” to make with these records, the law has constructed this to be so.[147]

    2. Epistemic Costs of Algorithmic Risk Assessments

    The costs of algorithmic risk assessments are not just material; they are epistemic as well. The tools are meant to “economize the amount of brainpower that personnel expend on decision making” while allowing managerial predictions of internal operations.[148] Unsurprisingly, then, algorithms build from and conceptually fit within a larger transformation of key social concepts to accommodate the technical, programmatic language of risk in criminal law’s administration.[149] For example, rehabilitation has been conceptually reformatted to reflect risk assessment.[150] The conception of racial justice has been reframed to adhere to questions of what I elsewhere refer to as “technical formalism” at sentencing.[151] The very idea of dangerousness has mutated to concern with recidivism risk.[152] These mutated social concepts obscure or normalize structural transformations in society that produce recidivism risk.[153] For example, conceptualizing rehabilitation as efficiently managing individuals through carceral supervision depoliticizes the expansion of punitive surveillance in marginalized communities.[154] Simultaneously, expectations of judicial thinking at sentencing have been simplified to reflect a more surface-level, formalistic approach to decision making.[155] Quite simply, society increasingly conceptualizes what judges do as benefiting from technical assessments of risk. All these transformations are costs necessary for algorithms to make sense as criminal legal practice.[156]

    3. Racism as a Depoliticizing Force

    The deeper question is why law and policy discourse tends to conceptualize this pathway as relatively costless. Racism provides an easily obscured answer.[157] On one level, this investment appears costless because of the racial assumption that surveillance tools respond to individual racial bias in criminal law’s administration.[158] To the extent that people are more likely to perceive racism as a matter of individual prejudice, that perception provides a cognitive framework that affirms investment in algorithms.[159] From this perspective, investment in information processing mechanisms to shape individual decision making emerges as necessary and beyond critique. The costs do not matter because this kind of spending is morally right.

    On a deeper level, revealing the centrality of racial assumptions contextualizes the depoliticization of information technologies through criminal law reform. While other types of spending in criminal law’s administration—for example, the cost of incarceration—have recently been politicized, government costs related to algorithms tend not to be.[160] To be sure, the rationales for modernizing the criminal legal apparatus vary depending on political leanings.[161] Yet the commitment to information processing infrastructure is persistent. It produces a symmetrical logic: lawmakers invest in information technologies because of race, and race renders spending on information technologies invisible, or at least beyond critique. Thus, commitments to specific racial assumptions legitimate the material and epistemic transformations in criminal legal institutions necessary to make the information age acceptable.[162]

    Yet just as our understandings of race make various social costs associated with information technologies acceptable, our understandings of information technologies legitimate another cost: the social production of white supremacy through law. The algorithm as criminal legal reform demonstrates the point. Because algorithms seem to satisfy the demand to address racial inequality, when non-White people repeatedly fail to thrive on the metrics set out through those algorithms, a cultural assumption that non-White people are dangerous, prone to crime, or some other version of the non-law-abiding variety is reinforced.[163] Algorithms also reinforce a radical individualism that suggests only the deserving few should be released from the expanding criminal legal apparatus.[164] Those deserving few tend to be constructed as “White.”[165] Thus, in a society increasingly governed through the criminal legal apparatus, algorithms as criminal legal practice produce a new kind of white privilege. They normalize identifying White people as the small few who deserve exiting this punitive form of governance.[166]

    Through Part III.A and Part III.B, a particular structural dynamic in law begins to emerge. Part III.A illuminates how assumptions about whiteness, though unspoken and often unacknowledged, make the algorithm appealing as a criminal legal practice. At the same time, this Section demonstrates that assumptions about how to respond to racial issues depoliticize the production of white privilege through law.

    C. The Fixedness of Race

    How to address the potential discriminatory effects of algorithms on marginalized groups is a hot topic in legal scholarship.[167] A dominant strand of this scholarship debates whether and how race can be considered in the construction of algorithmic risk assessments.[168] This Section takes no stance on whether or how to construct an algorithm under law. Rather, by examining this legal debate in relation to algorithms used as criminal legal practice, this Section highlights dominant thought structures in the existing legal scholarship. This analysis reveals a pervasive assumption about the nature of race created through this debate: that race is fixed. In identifying this assumption, this Section further demonstrates how legal discourse creates racial meanings critical to the production of racial hierarchy in society.

    There exist essentially two prominent approaches to algorithm construction in the legal scholarship concerning algorithmic discrimination and criminal law’s administration. Recognizing that algorithms rely on racially inflected data,[169] one could limit the use of specific predictive factors in the algorithm’s design.[170] For ease of reference, let us refer to this as the “colorblind” approach.[171] While recognizing the same facts, one could alternatively urge consideration of race in the construction of the algorithm. For ease of reference, let us refer to this as the “race-conscious” approach. I briefly describe then analyze each in turn.

    Let us begin with the colorblind approach. Currently, none of the algorithmic risk assessment tools used in criminal law’s administration consider race as a predictive factor.[172] At the very least, some legal scholars suggest equal protection doctrine under the U.S. Constitution should prevent the use of race as a factor in the design of potentially more complex algorithms going forward.[173] This approach would facilitate the normatively desirable end of race neutrality in punishment decision-making. We might translate this position on race and algorithmic design into the following schematic map:

    Fig. 1. Schema A – The “Colorblind” Approach to Algorithmic Discrimination

    Moving to the alternative, a growing contingent of scholars argue that the “colorblind” approach is flawed. Based on empirical and computer science literature, some scholars suggest that race neutrality is impossible through “colorblind” tools.[174] Legal scholars adopting this approach contend that the best way to achieve a preferable algorithm for use in criminal law’s administration would be to use race as a factor in the construction of the algorithm.[175] Here, race consciousness in the construction of the algorithm leads to the normatively desirable end of race neutrality in punishment outcomes. We might translate this position on race and algorithmic design into the following schematic map:

    Fig. 2. Schema B – The Color-Conscious Approach to Algorithmic Discrimination
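
    To make the contrast concrete, the sketch below renders the two schemas as a design choice in code. It is a hypothetical illustration under stated assumptions, not a description of any deployed tool or of the cited proposals: schema A simply omits racial categories from the model’s inputs, while schema B consults a racial category at construction time, here through group-specific cutoffs of the kind proposed in parts of the algorithmic fairness literature. All names, weights, and numbers are invented.

```python
# Hypothetical contrast between the two schemas discussed above.
# Nothing here reflects a deployed tool; all names and numbers are invented.

FEATURES = ["prior_arrests", "age_under_25", "prior_failures_to_appear"]
WEIGHTS = {"prior_arrests": 0.30, "age_under_25": 0.25,
           "prior_failures_to_appear": 0.20}

def raw_score(defendant: dict) -> float:
    # Both schemas combine weighted factors into a score, as in Part I.
    return sum(WEIGHTS[f] * defendant[f] for f in FEATURES)

def schema_a_category(defendant: dict) -> str:
    """'Colorblind' construction: race never enters the model's inputs,
    though the underlying data may remain racially inflected."""
    return "high" if raw_score(defendant) > 1.0 else "low"

def schema_b_category(defendant: dict) -> str:
    """'Race-conscious' construction: designers consult a racial category,
    here via group-specific cutoffs intended to equalize error rates
    across groups (one technique proposed in the fairness literature)."""
    cutoffs = {"group_1": 0.9, "group_2": 1.1}  # hypothetical design choice
    threshold = cutoffs[defendant["racial_category"]]
    return "high" if raw_score(defendant) > threshold else "low"
```

    Either way, the design choice rests with the tool’s builders: schema A leaves the racially inflected inputs untouched, while schema B requires its builders to decide what the racial categories are and who falls within them, the point developed below.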

    Now, to be clear, neither schema A nor schema B, at least in the context of algorithms in criminal law’s administration, is race neutral. Algorithmic risk assessments are a social response compelled by a priori racial assumptions, some of which are set forth in Part III.A and B. But my aim here, contrary to the previous Sections, is to point out how law does more than reflect racial assumptions; it produces them. I contend that these seemingly opposite positions construct the same racial meaning in society through law: namely, that race is fixed. I unpack the genesis of this assumption in the seemingly opposite approaches one at a time before explaining why this assumption is critical to the contemporaneous production of racial hierarchy in society.

    First, recognize that race is discursive: it “does not exist outside of, but is instead the effect of, discourses.”[176] When a discourse takes for granted the existence of race, it makes race socially salient. In analyzing such a discourse, “the question is not whether we want race but what we want race to mean.”[177] Applying this insight to the scholarship on constructing algorithms for use in criminal law’s administration, both approaches delineate race as a preexisting fact. The question driving this scholarship is whether and how to act upon (or not) racial disparities in criminal law’s administration potentially exacerbated by predictive algorithms under law.[178] Each approach thus produces a particular vision of what race means in society through law.

    Consider schema A. Here the suggestion is that race is an objective fact that a person cannot change, and so it should not be considered in an algorithmic prediction. Particularly where the tool is used in the criminal legal process, such consideration would be odious: precisely because race is understood as objectively fixed, punishing someone based on it would mean punishing based on that which cannot be changed, a result antithetical to criminal law.[179] Through this approach, constitutional doctrine keeps law out of the process of labeling people, including within the algorithms used in criminal law.

    Schema B is also troubling. Algorithms rely on data to predict future outcomes. Humans produce the data for processing by annotating it.[180] That is, humans assign meaning to the data using labels. Typically, the humans who assign meaning to the data are not the people being observed through the data. In other words, professionals or marketers or third-party annotators engage in the “sense-making practice” of classifying data by labeling people.[181] Currently, data annotation processes tend to rely upon “common sense” racial categories generated by coders or the public and private actors procuring the technology.[182] Thus, should the law permit race as a factor in constructing algorithms used in criminal law’s administration, it would legitimate the social process of categorizing people according to preconceived racial categories. Such an approach to constitutional law would legitimate not just algorithms as criminal legal practice, but the allocation of power to the state or private actors involved in tool construction to see, then unsee, race for society.[183] Tool designers and data collectors would become the arbiters of who falls within what racial category.
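
    The annotation dynamic described above can be rendered schematically. The sketch below is again hypothetical: the category list and the annotator logic are invented for illustration and are not drawn from any actual annotation pipeline.

```python
# Hypothetical sketch of third-party data annotation; the category list
# and annotator logic are invented for illustration.

RACE_CATEGORIES = ("White", "Black", "Hispanic", "Other")  # fixed in advance

def annotate(record, annotator_label):
    """A third-party annotator, not the person observed, assigns the label.
    Anyone outside the preconceived schema is forced into a residual bin."""
    if annotator_label not in RACE_CATEGORIES:
        annotator_label = "Other"  # the schema, not the person, decides
    record["race"] = annotator_label
    return record

print(annotate({"prior_arrests": 3}, "multiracial"))
# {'prior_arrests': 3, 'race': 'Other'}
```

    The design choice worth noticing is that the category set is fixed before anyone is observed; the annotator’s “sense-making” consists entirely of sorting people into preconceived bins.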

    Not only is the result paradoxical; it is race-constructing. Under schema A, the law cannot intervene in the social production of racial inequality through the criminal legal apparatus. Under schema B, the law legitimates the social production of racial categories. Both approaches ascribe the same social meaning to race: the idea that race is fixed. Under schema A, it is the fixedness of race—the notion that race is a permanent social fact beyond an individual’s control—that makes it an odious factor to consider in the technical assessments used in the punishment process. Under schema B, it is the fixedness of race—the notion that race is an obvious and readily observable fact in the social world—that makes it a possible factor to consider in algorithmic assessment. Separate from how to resolve the debate, this legal discourse suggests that race is fixed, stable, and “absolute” in society.[184]

    Conceptualizing race as fixed inures to the benefit of whiteness in the United States. Race is, at its core, an epistemological and political system that governs people by sorting them into social groupings based on invented biological demarcations.[185] In the context of algorithms, unlike in the “hard sciences,” scholars are careful not to suggest that racial difference relates to innate, biological differences.[186] Yet, to the extent that scholars suggest the social forces that expose marginalized people to higher risk of contact with the criminal legal apparatus are beyond legal change, they risk reinforcing one of the most pernicious assumptions endemic to the biological definition of race. The assumption is that indicators of enduring racial hierarchy in society are natural, fixed, and beyond change.[187] This assumption emerges through discursive suggestions that, because society cannot change, treating race as an objective metric may be necessary to render certain algorithmic predictions normatively sound.

    The problem with this assumption is that the orientation of the discourse—assuming that racial disparities can be separated from social, cultural, and historical context—rejects a political conception of race. Refusal to see race as a political construct naturalizes the deep inequality in society reflected in algorithmic risk assessments. It resonates with the notion that people raced as White and non-White are innately different. This notion legitimates the production of racial hierarchy in this country. It makes unequal conditions appear natural and beyond redress. In turn, it makes the privileges and benefits accorded to whiteness—not just in the criminal legal process, but more broadly—seem normal. Through the intersection of technology and law, the assumption that race is fixed occludes how deeply race continues to structure society in fundamentally unequal ways.

    IV. Challenging Racial Assumptions in Criminal Legal Reform

    Part III’s analysis makes clear that the algorithm as criminal legal practice is suffused with racial meaning. This insight reveals an underappreciated takeaway about criminal legal reform in this moment. Quite simply, society cannot resist the dynamic of embracing this technical “solution” in criminal legal institutions for deeply entrenched societal problems unless we contend with racism and the way it shapes dominant cultural imagination and collective social action. Said differently, this Article makes clear what legal perspectives oriented toward mass incarceration do not: the intersection between racism and law requires direct attention in technological criminal legal reform discourse going forward.

    The remainder of this Article theorizes on the role of law and legal scholarship in facilitating a racial justice agenda at the intersection of technology and society. Theoretically, Part IV.A proposes a legal perspective that renders visible and challenges racial assumptions expressed through the law around algorithms as criminal legal practice. Practically, Part IV.B sets forth a methodological approach that recenters legal scholarship and expands the normative horizon of legal interventions on algorithms as criminal legal reform going forward.

    A. Theoretical Approach

    The algorithm presents a technological controversy at the intersection of criminal law and society. Common claims in support of these tools include their objectivity and transparency, their potential to reduce reliance on incarceration while increasing efficiency and cost-savings, and their promise to reduce the threat of racial disparities. Critics contend that algorithms may not achieve any of those ends, and even if they do, the practice maintains punitive surveillance over marginalized communities in particular. Adherents to either of the mass incarceration frames tend to underscore the harmful racial and social effects of algorithmic risk assessments as criminal legal reforms. These frames lead to alternate solutions to the technological controversy algorithms present. Those adhering to the incarceration frame tend to suggest race may be a reason to pursue algorithms as criminal legal practice subject to some changes in law or policy. Oppositely, adherents to the carceral state frame tend to position race as a central reason against algorithms as criminal legal practice.

    This Article illuminates that these explanations and interventions are incomplete. In so doing, it sets forth a different legal perspective on algorithms as criminal legal practice. Algorithms as criminal legal practice legitimate dominant social ideas about race. These ideas, expressed in and produced through law, substantiate society’s beliefs in the normative justifiability of race and racial hierarchy as social phenomena. Said differently, algorithms are racially stabilizing. This aspect of the algorithm as criminal legal practice is critical to understanding the contemporary social production of racial difference through law. The law around algorithms reinforces ways of thinking about race in society that normalize the current distribution of power and resources to the benefit of whiteness. Adopting a legal perspective that understands the “problem” of algorithms in this way sets a theoretical foundation to reframe the legal scholarship around this criminal legal practice going forward.

    The legal perspective proposed by this Article provides a new foundation to resist narrow and even far-reaching legal interventions aimed at “fixing” this criminal legal practice. For example, liberal and even left-leaning scholars in this tradition recognize that algorithms pose a significant threat of harm to marginalized Black people. As a solution, many explore how to “fix” the algorithm through law.[188] Yet fixing the algorithm will not necessarily destabilize the social ideas about race, expressed in law, that algorithms legitimate and produce. Changing how individual decision-makers respond to algorithms would not necessarily achieve that end either. As this Article emphasizes, those social assumptions are critical to the production of racial difference in society through law.[189] From a racial justice perspective, then, the legal response to algorithms must change.

    This legal perspective on race in the law around algorithms expands the field beyond the normative interventions sustained by carceral state critiques as well. This perspective illuminates the historically situated nuance of racism. During the era of mass incarceration’s rise, punitiveness and the production of racial difference traveled together with the expansion of criminal law governance. In contrast, antiracism and the impulse to decarcerate do not inherently travel together in contemporary society. While critiques of the carceral state are sensitive to disrupting the punitive aspect of governance, that lens can overlook the production of racial difference. To be sure, these critiques demonstrate a clear resistance to racism. As described above, these critiques recognize the structural implications that information technologies like algorithms may have on marginalized Black communities.[190] Yet, in seeking to disrupt carceral governance, such critiques can fall short in demonstrating racism with any specificity.

    To the contrary, the analysis in this Article demonstrates the importance of examining racism with specificity. Racism shapes the solutions we pursue when confronting mass incarceration and the society we create in its wake. Without critically engaging racism, advocates adhering to either frame can encourage interventions that appear both excessively narrow and overly broad at the intersection of law and algorithms as criminal legal practice. For example, carceral state critiques can lead to positions against prediction, incarceration, or criminal law. But when fighting against racism specifically, it is not enough to be against any one of these things exclusively. Case in point, one can urge the abolition of algorithms in criminal law’s administration without undermining the social assumptions regarding race that algorithms affirm in society. Alternatively, one can advocate an end to incarceration without undermining these social assumptions about race.

    This Article proposes that legal scholars adopt a different, intersectional perspective on algorithms as criminal legal practice going forward. This perspective centers racism at its juncture with law, technology, and criminal legal reform. Such an approach underscores a critical point often obscured at the intersection of race, technology, and criminal law. Criminal law is a human practice, and so it is “partly what we make it.”[191] The algorithm is a human practice that we make in and for criminal law’s administration.[192] Racism is a human practice that transforms, survives, and thrives in society, in part, through law. When viewed in this way, resisting algorithms as criminal legal practice is not a sufficient solution. Destabilizing racism means locating the problem not in technology or law, but in all of us, and in the way we think about and act through the intersection of law and technology with a race-colored lens.

    A legal perspective that critically centers racial assumptions at the intersection of law and technology situates the problem not in law or technology, but in us. More specifically, the problem is what we do with law, including criminal law. Professor Dorothy Roberts well illuminated the value of taking this approach to law in her recent article, Abolitionist Constitutionalism.[193] There, she recognized that abolitionist theory could illuminate how the U.S. Constitution has been interpreted to facilitate slavery-like systems.[194] She also proposed a constitutional paradigm that could support prison abolitionist goals, strategies, and visions.[195] Drawing upon Frederick Douglass as an early abolitionist, Professor Roberts encouraged both of these pathways forward in constitutional law scholarship.[196] In so doing, Professor Roberts grappled with the ways in which law could be a tool to intervene upon our assumptions about the social world.[197] Her work underscored that furthering racially transformative ends in society through law required working in the law on multiple fronts.[198]

    Consistent with that insight, this Article urges working in the law around algorithms as criminal legal reform on multiple fronts. We legitimate racial hierarchy and make race salient via historically contingent racial assumptions expressed in the law. Existing legal scholarship considers how law or technology might change to advance or resist algorithms as criminal legal practice. Yet being for or against algorithms in criminal law is too simple a response when racism at the intersection of criminal law’s administration and this information technology is complex. Centering mass incarceration facilitates an emerging, normative binary as the horizon of law’s role: tinker around the algorithm through law or abolish it outright. In either case, the legal perspectives on mass incarceration limit the possibility that law can destabilize racial assumptions currently shaping society. The legal perspective proposed here rejects that limitation. The challenge, from this perspective, is to center racism and race not as a problem to be solved, but as a practice to critically engage through the intersection of technology and law.[199] The following Section urges legal scholars to shift the center of legal discourse around algorithms to advance that larger racial justice agenda.

    B. Methodological Approach

    This Section sketches the contours of a legal perspective on algorithms as criminal legal practice that exists outside the confines of the dominant legal perspectives on mass incarceration. I propose altering the structure of legal scholarship on information technology and society to shift—and, more precisely, racialize—the legal discourse around this criminal legal practice in different ways going forward.

    A legal perspective that challenges racism at the intersection of algorithms and criminal law can adopt two different but complementary methodological approaches. One approach argues for the abolition of algorithms as criminal legal reform. Here, legal scholars must illuminate the contradictions of this technology as reform on its own terms. This thread requires bringing to the fore other possibilities that could achieve more meaningful, structural changes through different criminal legal practices. The other thread, of equal importance, illuminates alternate conditions of possibility within the law and technology discourse. This technology shifts the foundation of existing legal frameworks. Scholars considering the algorithm in criminal law’s administration can bring to the fore different regulatory interventions that connote different ways of understanding the social problems this technology presents.

    What makes these approaches methodologically different is the endpoint of the contribution. This shift, in turn, may upend accepted norms around the form of legal scholarship at the intersection of race, information technology, and criminal law altogether. In the case of abolition-oriented critiques, the endpoint is not simply to identify alternate reforms. Instead, the endpoint must address how and why these alternate reforms do not rely on or reinforce the racial assumptions illuminated by the dominant legal discourse around algorithms as criminal legal reform. Such scholarship must name and critique racial assumptions as a reason we do not accept those alternate paths. Separately, in the context of alternate conditions of possibility in law and technology discourse, the endpoint is not simply to illuminate alternate modes of utilizing or regulating the algorithm through law and policy. Rather, the endpoint must be to illuminate how racism prevents the embrace of these alternate responses. When such scholarship specifies how law can facilitate these alternate routes, it must also name the way technology combines with our racially inflected cultural imagination to foreclose those possibilities. The point of the legal scholarship, in this context, is not to center mass incarceration, even though it speaks to that cultural and historical reality. Instead, it critically centers race and racism. As such, the perennially trying “so what” section of the law review article would not concern incarceration or the carceral state. Rather, it would concern current social assumptions in relation to race. That structural change in the form of legal scholarship bucks the normative expectation that a law review article concludes with a pragmatic solution.[200]

    To make this proposed shift in legal perspective concrete, consider some of my own writing on algorithms as criminal legal practice. I have written about the expansion of technological interventions, in particular algorithms, in criminal law’s administration for almost ten years. Though conducted with a critical perspective on race, my scholarship reflects the confines of the dominant discourses around mass incarceration. In Against Neorehabilitation, written ten years ago, I examined the expansion of predictive risk assessments as part of states’ response to the economic and social pressures of mass incarceration.[201] The article defined and critiqued “neorehabilitation” as an emergent way of thinking about technical reforms, including algorithms, as a solution to mass incarceration. It illuminated how this category of criminal legal reforms could expand carceral surveillance, particularly among marginalized Black people, while distorting normative ideas about justice. Several years later, I authored another article, Constructing Recidivism Risk, that examined the process through which algorithms used in criminal law’s administration are produced.[202] That article emphasized the normative judgments that go into the construction of an algorithm and critiqued the tensions that arise between this process, sentencing law and policy, and efforts to reduce incarceration.

    Both works, intuitively, sought to challenge the legitimating force behind algorithms as criminal legal practice. Yet, perhaps in part because the works exist within the confines of dominant discourse on mass incarceration, I failed to engage racism in my interventions with any specificity. Against Neorehabilitation largely adhered to the carceral state frame. There, I challenged thinking about algorithms as rehabilitative. My normative intervention evoked a different social meaning of rehabilitation that existed prior to the 1970s. If I were to write that piece now, considering this call to shift the legal discourse, I would unpack the historical connection between the transformation in the idea of rehabilitation and racism. For example, I might explain how assumptions about race operate on the social idea of rehabilitation to sustain the advance of these technological reforms.[203] Constructing Recidivism Risk, on the other hand, largely adhered to the incarceration frame. That article proposed ways that law could intervene in the normative judgments that arise in the construction of algorithms used at sentencing. If I were to write that piece now, considering this call to shift the legal discourse, I would still suggest many of the legal interventions that I proposed there. However, I would conclude the article by explaining that to pursue those interventions meaningfully would mean contending with dominant racial assumptions. For example, it would require confronting the assumption that the algorithm is something other than a human practice, which in turn would require confronting the racial assumption that a nonhuman intervention is the only or preferred way to address racial inequality in society.[204]

    My point, in briefly rehashing these articles, is not to rewrite them. To the contrary, my point here is to elucidate what a legal perspective critical of the intersection between race, technology, and criminal legal reform might look like in legal scholarship around algorithms going forward. The question for legal scholars is not whether to tinker with or abolish this criminal legal practice. Rather, it is how to transform the legal discourse this criminal legal practice produces from one that is racially stabilizing to one that is destabilizing. An initial step lies in shifting the center of the discourse. Such legal scholarship can continue to question whether algorithms in criminal law facilitate an end to mass incarceration—however that concept is defined—through technological ends. But in centering race and racism, it can also raise awareness of the operation of racial assumptions in technology-oriented criminal legal reform discourse.

    To be sure, racializing the legal discourse around algorithms in criminal legal reform presents its own concerns. Each of the legal approaches proposed here poses different threats. For the abolitionist account, the concern is marginalization from mainstream legal discourse. Data-driven discourse has proved an ideological mechanism through which criminal legal reforms have been rendered possible in the United States.[205] Algorithms are central to that agenda, though not the exclusive component of that larger trend.[206] In this context, legal scholarship urging abolition of algorithms may appear utopian. It could be dismissed as insubstantial across legal spheres, thus lessening its potential impact.

    For the alternate conditions within the law and technology account, the threat is legitimation.[207] That is, such legal scholarship may facilitate “the various ways in which law in all its manifestations helps to generate ways of thinking that reinforce numerous aspects of social life that might otherwise be considered normatively undesirable.”[208] In this moment, social movements are building to demonstrate the possibilities of an alternate, more equitable world.[209] Activists and scholars in this tradition disrupt preservation of the status quo through legal transformation. They urge legal change using a language entirely different from the data-driven, cost-efficient discourse of information technology as criminal legal reform. Scholars and advocates focused on racial justice may seek to disrupt the social production of race through regulation of information technology rather than joining in that different language of contestation and resistance. In so doing, such scholars and advocates may lend force to a broader discourse they seek to interrupt.

    Yet, for Black people in particular, it is not enough to operate outside the dominant discourse. Black people gain through the production of “a series of ideological and political crises.”[210] Those crises occur within and outside of dominant discourse. In her seminal article, Race, Reform, and Retrenchment: Transformation and Legitimation in Antidiscrimination Law, Professor Kimberlé Crenshaw noted a similar dilemma in the context of rights discourse. There, she situated rights discourse not as a truism that was good or bad, but as a thing that could be used to advance or hinder “the efforts of Black people to transform their world.”[211] The same point applies here, in a very different context. Centering racial assumptions in the vibrant legal discourse around algorithms as criminal legal reform provides a means to demonstrate political and ideological crises to the benefit of Black people.

    Though hardly offering a single “solution” to the dilemmas that algorithms illuminate, this methodological approach to legal scholarship can “engender radical scholarly praxis.”[212] We can only begin to imagine different, more equitable futures if we begin to see the work that racism does to structure our understanding of the present. In searching for novel ways to use law to intervene on this criminal legal practice, legal scholars can remind us all that the pursuit of racial justice is not static, linear, or chronological.[213] It must be as fluid and capacious in approach as racism is in producing racial difference through law.[214]

    Conclusion

    This Article proposes a major shift in conceptualizing the problems algorithms present in society at the intersection of race and law. It reveals deep-seated racial assumptions embedded in society that propel the expansion of algorithms as criminal legal reform. These same racial assumptions function to normalize race as a salient social category, while justifying the distribution of resources in society along racial lines through law. Though existing legal scholarship recognizes algorithms as a controversial criminal legal practice, it fails to consider how racism structures legal discourse. Said differently and following the Toni Morrison quotation at the opening of this Article, existing legal scholarship captures the fish; it may even encompass the pebbles and the castles. However, it does not begin to conceptualize the fishbowl. Yet, to begin imagining fully a different, more equitable world, this Article reminds us that we must see racism as a fishbowl that “transparently (and invisibly)” structures our lives through law.

    Normatively, this Article urges a structural change in legal scholarship to help render visible otherwise-invisible structures of thought functioning at the intersection of technology, society, and law. The theoretical analysis set forth in this Article reveals how law operates as a tool to express racial assumptions through collective social action concerning algorithms as criminal legal reform. But while law may facilitate this social process, it can also destabilize it. Currently, legal scholarship tends to destabilize technology, the law, or both. It leaves untouched racial assumptions expressed at the intersection of technology, the law, and society. This Article proposes a legal perspective that critically centers race and racism at its intersection with technology and criminal legal reform going forward. One need not imagine that such a discursive shift will immediately end the production of racial difference through law. It is enough that racializing the legal discourse differently can challenge the complacency with which we all live socially and historically contingent racisms. In so doing, legal scholars can contribute to the important project of searching for “the words to say it.”[215]

    Copyright © 2023 Jessica M. Eaglin. Professor of Law, Indiana University Maurer School of Law. The author thanks Chaz Arnett, Monica Bell, Guy-Uriel Charles, Jessica Clarke, Gina-Gail Fletcher, Fanna Gamal, Jasmine Harris, Margaret Hu, Aziz Huq, Eisha Jain, Irene Joe, Amy Kimpel, Benjamin Levin, Tracey Meares, Ngozi Okidegbe, Dan Richman, Andrew Selbst, Jocelyn Simonson, Sonja Starr, India Thusi, participants in the ABA Criminal Justice Section Academy for Justice Workshop, Cornell Law Faculty Workshop, Decarceration Works-in-Progress Workshop, George Washington Law Faculty Workshop, Law and Technology Workshop, Minnesota Law Faculty Workshop, Penn State Law Faculty Workshop, University of Chicago Constitutional Law Workshop, and the Vanderbilt Law Faculty Workshop for meaningful engagement with earlier iterations of this work. Many thanks to the student editors of the California Law Review for their insightful editorial assistance. All errors are my own. DOI available at https://doi.org/10.15779/Z38MW28G1S

    1. Toni Morrison, Playing in the Dark: Whiteness and the Literary Imagination 17 (1992).
    2. An algorithmic risk assessment is “an automatic rule that uses numerical inputs to produce some result, in this case a prediction relevant to the criminal justice system.” Angèle Christin, Alex Rosenblat & Danah Boyd, Courts and Predictive Algorithms 1 (2015), https://datasociety.net/wp-content/uploads/2015/10/Courts_and_Predictive_Algorithms.pdf [https://perma.cc/79FA-9HAW]. For more on the design of algorithmic tools used in criminal sentencing, see Jessica M. Eaglin, Constructing Recidivism Risk, 67 Emory L.J. 59, 67–88 (2017) [hereinafter Eaglin, Constructing].
    3. Numerous jurisdictions employ algorithms throughout the criminal legal process as a matter of law, see, e.g., First Step Act, Pub. L. No. 115-391 (2018) (requiring the Attorney General to develop and release a risk and needs assessment system for the Bureau of Prisons); N.J. Stat. § 2A:162-16(b) (2017) (“The court shall consider the Pretrial Services Program’s risk assessment and recommendations on conditions of release before making any pretrial release decision for the eligible defendant.”); Ohio Rev. Code Ann. § 5120.114(A) (2019) (requiring the Department of Rehabilitation and Corrections to select a single validated risk assessment tool for mandated use by courts for sentencing, departments of probation, correctional institutions, parole boards, and other criminal legal institutions); and policy, see, e.g., 725 Ill. Comp. Stat. 5/110-5 (2023) (“The Court may use a regularly validated risk assessment tool to aid its determination of appropriate conditions of [pretrial] release”); La. Rev. Stat. § 15:327(A)-(B) (2019) (“The presentence investigation validated risk and needs assessment tool and evaluation report may be utilized by the sentencing court prior to determining an appropriate sentence... [and] to determine eligibility or suitability of the defendant for any available specialty court”). This Article refers to the use of an algorithm as part of criminal law’s administration as a “criminal legal practice.” For recent accounts of algorithms’ prevalence in criminal legal institutions, see Bureau of Justice Assistance, Risk Assessment Landscape: Public Safety Risk Assessment Clearinghouse, U.S. Dep’t of Justice, https://bja.ojp.gov/program/psrac/selection/risk-assessment-landscape [https://perma.cc/Z9W8-8XPS] (providing an interactive data visualization of what risk assessments are used across various decision points across the United States); Electronic Privacy Information Ctr., AI & Human Rights: Criminal Justice System, https://epic.org/ai/criminal-justice/index.html [https://perma.cc/GC8H-T2J5] (last visited Aug. 16, 2021) (contending that “Risk Assessment Tools are used in almost every state in the U.S. – and many use them pretrial, though they exist at sentencing, in prison management, and for parole determinations” and representing their use alongside different surveillance tools in the criminal justice system).
    4. For examples of public policy debate around algorithms in criminal law’s administration, see Carrie Johnson, Flaws Plague a Tool Meant to Help Low-Risk Federal Prisoners Win Early Release, Nat’l Public Radio (Jan. 26, 2022), https://www.npr.org/2022/01/26/1075509175/justice-department-algorithm-first-step-act [https://perma.cc/U6HC-EGAS] (characterizing “persistent racial disparities that put Black and brown people at a disadvantage” as “[t]he biggest flaw” in the federal risk assessment tool implemented in the federal Bureau of Prisons); Tom Simonite, Algorithms Should’ve Made Courts More Fair. What Went Wrong?, Wired (Sept. 5, 2019), https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/ [https://perma.cc/N55W-ETM9] (noting that “[j]ournalists and academics have shown that risk-scoring algorithms can be unfair or racially biased” and describing studies that suggest algorithms can exacerbate judicial biases in decision-making along racial lines); Sam Corbett-Davies, Sharad Goel & Sandra González-Bailón, Even Imperfect Algorithms Can Improve the Criminal Justice System, N.Y. Times Upshot (Dec. 20, 2017), https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html [https://perma.cc/TA8J-62HA] (recognizing that “some people fear that algorithms simply amplify the biases of those who develop them and the biases buried deep in the data on which they are built,” but urging adoption of algorithms in criminal law’s administration because the tools can “mitigate pernicious problems with unaided human decisions”). For examples of debates about the impact of such algorithms on racially marginalized groups in legal scholarship, see infra Part I.
    5. See infra Part I.A.
    6. See infra Part I.B.
    7. Benjamin Levin, The Consensus Myth in Criminal Justice Reform, 117 Mich. L. Rev. 259, 262–63 (2018) (identifying two divergent yet pervasive ways of conceptualizing “mass incarceration” in legal scholarship and public discourse).
    8. The author recognizes that algorithms impact many intersecting marginalized groups along a variety of axes. In this Article, I choose to focus on the experience of Black people. However, my methodological approach could be used to analyze and critique the production of whiteness through its intersection with law, technology, and a variety of different marginalized identities.
    9. See Kimberlé Crenshaw, From Private Violence to Mass Incarceration: Thinking Intersectionally About Women, Race, and Social Control, 59 UCLA L. Rev. 1418, 1468–69 (2012) (critiquing the limits of “crisis discourses” for imagining racial justice).
    10. See infra Part II.A.
    11. See infra Part II.B.
    12. See infra Part III.A.
    13. See infra Part III.B.
    14. See infra Part III.C.
    15. See infra Part IV.A.
    16. See infra Part IV.B.
    17. Levin, supra note 7, at 309–15 (illuminating how divergent perspectives on mass incarceration shape criminal legal reforms); Jessica M. Eaglin, Technologically Distorted Conceptions of Punishment, 97 Wash. U. L. Rev. 483, 538 (2019) (situating algorithms as a criminal legal reform responsive to only one conceptualization of mass incarceration) [hereinafter Eaglin, Distorted].
    18. See, e.g., Dorothy E. Roberts, Digitizing the Carceral State, 132 Harv. L. Rev. 1695, 1712 (2019) [hereinafter Roberts, Digitizing] (“Algorithms that predict future conduct reinforce the state’s control over marginalized populations by legitimizing punishment without the need to prove individual culpability... Prediction is also fundamental to white supremacy because it both helps to obscure structural racism and is essential to the very concept of race.”); Sean Alan Hill II, Bail Reform and the (False) Racial Promise of Algorithmic Risk Assessments, 68 UCLA L. Rev. 910, 928–37 (2021) (demonstrating how the shift toward dangerousness predictions in pretrial bail determinations connects to “beliefs about the inherent criminality of Black and Latinx communities” expressed in the law).
    19. For examples of my earlier work articulating this perspective on technologies in criminal sentencing, see Jessica M. Eaglin, The Perils of “Old” and “New” in Sentencing Reform, 76 Ann. Surv. Am. L. 355, 361–63 (2021) (emphasizing connections between technical sentencing guidelines and algorithms); Jessica M. Eaglin, Population-Based Sentencing, 106 Cornell L. Rev. 353, 364–68 (2021) (same) [hereinafter Eaglin, Population-Based Sentencing].
    20. I use the term “difference” to connote “how ideas and knowledges of difference organize human practices between individuals.” Stuart Hall, Race, The Floating Signifier: What More Is There to Say About “Race”?, in Selected Writings on Race and Difference 359, 364 (Paul Gilroy & Ruth Wilson Gilmore eds., 2021).
    21. David Theo Goldberg, The Threat of Race: Reflections on Racial Neoliberalism 67 (2009) (distinguishing between the “merely descriptive” and normative usage of “racialization”). This Article extends Simone Browne’s concept of “racializing surveillance” by locating the production of race and racial hierarchy at the intersection of law and algorithms in criminal legal reform. See Simone Browne, Dark Matters: On the Surveillance of Blackness 16–17 (2015) (“Racializing surveillance is a technology of social control where surveillance practices, policies, and performance concern the production of norms pertaining to race and exercise a ‘power to define what is in and out of place.’”). However, I use the term “racializing” to connote a methodological approach that destabilizes unspoken social assumptions through legal critique and invention.
    22. On the prevalence of technologies in criminal law’s administration, see Chaz Arnett, From Decarceration to E-Carceration, 41 Cardozo L. Rev. 641, 651 (2019) (describing the expansion of an “electronic surveillance pipeline” via the promotion of “smart decarceration” reform efforts).
    23. See, e.g., Adam Liptak, Inmate Count in U.S. Dwarfs Other Nations’, N.Y. Times (Apr. 23, 2008), https://www.nytimes.com/2008/04/23/us/23prison.html [https://perma.cc/8YVC-LDLW]; David Remnick, Ten Years After “The New Jim Crow,” New Yorker (Jan. 17, 2020), https://www.newyorker.com/news/the-new-yorker-interview/ten-years-after-the-new-jim-crow [https://perma.cc/WHY4-YF4R] (highlighting the impact of Michelle Alexander’s landmark book, The New Jim Crow: Mass Incarceration in the Age of Colorblindness, on public discourse about criminal justice).
    24. See Eaglin, Constructing, supra note 2, at 73–75.
    25. Id. at 75–78.
    26. Id. at 79–80.
    27. Id. at 83.
    28. Id. at 85.
    29. Eaglin, Constructing, supra note 2, at 86–87; Brandon L. Garrett & John Monahan, Judging Risk, 108 Calif. L. Rev. 439, 476 (2020) (reflecting on how to best convey risk predictions to judges).
    30. Christopher Slobogin, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk 30–31 (2021) (urging required adherence to the quantified results of well-validated risk assessment instruments in criminal law’s administration to reduce the “human urge to incapacitate”).
    31. Bernard E. Harcourt, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age 77 (2007) [hereinafter Harcourt, Against Prediction].
    32. See supra note 3.
    33. This insight builds from Benjamin Levin’s astute article, The Consensus Myth in Criminal Justice Reform, 117 Mich. L. Rev. 259, 269–73 (2018), wherein he identifies two frameworks, “over” and “mass,” within which legal scholars tend to critique the criminal justice system more broadly.
    34. One prevalent way of understanding mass incarceration (what Levin refers to as the “over” critique) assumes that “there is an optimal (or acceptable) rate of punishment.” Id. at 285. From this perspective, technologies appear as tools that may help to “right the ship” toward that optimal amount. Cf. id. at 310.
    35. Approximately 2.2 million people are incarcerated in prisons and jails across the country. Wendy Sawyer & Peter Wagner, Mass Incarceration: The Whole Pie 2020 (2020). The adult population in the United States is approximately 255 million people. National Population by Characteristics: 2010-2019, U.S. Census Bureau, https://www.census.gov/data/tables/time-series/demo/popest/2010s-national-detail.html [https://perma.cc/99W5-CPYM]. “Not only does the U.S. have the highest incarceration rate in the world; every single U.S. state incarcerates more people per capita than virtually any independent democracy on earth.” Emily Widra & Tiana Herring, States of Incarceration: The Global Context 2021, Prison Policy Initiative (Sept. 2021), https://www.prisonpolicy.org/global/2021.html [https://perma.cc/ZA54-D4SY].
    36. Levin, supra note 7, at 270.
    37. Id.
    38. See, e.g., Slobogin, supra note 30, at viii (“If developed and used properly, [risk assessment instruments] could become a major tool of [criminal justice] reform. Most importantly, they can help reduce the use of pretrial detention and prison, as well as the length of prison sentences, without appreciably increasing the peril to the public.”); Model Penal Code: Sentencing § 6B.09 cmt. d (Proposed Final Draft Am. L. Inst. 2017) (“If used as a tool to encourage sentencing judges to divert low-risk offenders from prisons to community sanctions, risk assessments conserve scarce prison resources for the most serious offenders, reduce overall costs of the corrections system, and avoid the human costs of unneeded confinement to offenders, offenders’ families, and communities.”) [hereinafter MPC]; Megan T. Stevenson & Jennifer Doleac, Algorithmic Risk Assessments in the Hands of Humans (Apr. 21, 2021), https://deliverypdf.ssrn.com/delivery.php?ID=672090112009085082098116088016083076109025046003043075006116076007099123064110126095098106127035013015098005071071126103027107051055086041049117123068027081012095113036087084029001123105114098001107071118118124124003011070025005064108094098104028005119&EXT=pdf&INDEX=TRUE [https://perma.cc/HE5W-T8WQ] (reciting the claim that algorithms in pre-trial determinations can reduce incarceration without increasing crime).
    39. See, e.g., Brandon L. Garrett & John Monahan, Judging Risk, 108 Calif. L. Rev. 439, 481–82 (2020); Crystal S. Yang, Toward an Optimal Bail System, 92 N.Y.U. L. Rev. 1399, 1409–10 (2017) (observing that existing risk assessment tools may recommend pretrial detention for high-risk defendants who are most likely to be harmed by detention and urging development of a “net-benefit” assessment tool that advances a more robust cost-benefit analysis in judicial decision making going forward); Pew-MacArthur Results First Initiative, How States Engage in Evidence-Based Policymaking: A National Assessment (2017).
    40. Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (2017) (critiquing the trend toward using algorithms to surveil individuals but urging use of algorithms to monitor the police).
    41. See I. India Thusi, The Pathological Whiteness of Prosecution, 110 Calif. L. Rev. 795, 800 (2022) (noting that criminal law scholarship increasingly references that “there is a racial component to its administration”); Levin, supra note 7, at 289 (“race receives significant attention in many over critiques and is often used as a way to frame what makes overpunishment so objectionable”).
    42. See, e.g., E. Ann Carson, Bureau of Justice Statistics, Prisoners in 2019 10 (2020) (“The imprisonment rate of black adults at year-end 2019 was more than five times that of [W]hite adults... and almost twice the rate of Hispanic adults.”); Jessica M. Eaglin & Danyelle Solomon, Brennan Center for Justice, Reducing Racial and Ethnic Disparities in Local Jails 18–22 (2015) (collecting sources on racially disparate treatment in sentencing and correctional supervision).
    43. See, e.g., John Monahan & Jennifer L. Skeem, Risk Assessment in Criminal Sentencing, 12 Annu. Rev. Clin. Psychol. 489, 506 (2016) (“Whether risk assessment affects sentencing disparities is an important empirical question.... [That question is] anchored on the baseline sentencing context, i.e., risk assessment compared to what?”).
    44. For highly publicized indicators that algorithms may produce racially biased results at sentencing, see Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/VZ6H-6DTY] (empirical study indicating racial bias in a commercial algorithm used at sentencing); Eric Holder, U.S. Attorney General, Remarks at the National Association of Criminal Defense Lawyers 57th Annual Meeting (Aug. 1, 2014), https://www.justice.gov/opa/speech/attorney-general-eric-holder-speaks-national-association-criminal-defense-lawyers-57th [https://perma.cc/2SDF-HZS6] (“By basing sentencing decisions on static factors and immutable characteristics—like the defendant’s education level, socioeconomic background, or neighborhood—[risk assessments at sentencing] may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”). For a seminal study demonstrating the threat of racial bias in algorithms outside the criminal legal context, see Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 673–76 (2016).
    45. See, e.g., Sandra G. Mayson, Bias In, Bias Out, 128 Yale L.J. 2218, 2228–31 (2019) (providing an overview of the scholarly and policy-oriented concerns regarding criminal justice risk assessment’s potential racial impact).
    46. Alice Ristroph, An Intellectual History of Mass Incarceration, 60 B.C. L. Rev. 1949, 1992 n.168 (2019) (attributing the genesis of the term “carceral state” to political theorist Marie Gottschalk and noting its ascendance in the 2000s).
    47. See Levin, supra note 7, at 272 (“The mass critique asks why criminal law has replaced other regulatory models, and what the consequences of criminal regulation are (e.g., arrest, conviction, and collateral consequences of both).”).
    48. See, e.g., David Garland, The Culture of Control: Crime and Social Order in Contemporary Society 18–19, 188–90 (2001) (highlighting the expansion of actuarial risk assessments as one of many managerial crime control techniques emergent with larger shifts away from a welfare state); Malcolm Feeley & Jonathan Simon, The New Penology, 30 Criminology 449, 452–58 (1992) (noting the shift toward actuarial thinking in criminal legal practices and the expansion of “new technologies to identify and classify risk”).
    49. See Erin R. Collins, Abolishing the Evidence-Based Paradigm, 48 BYU L. Rev. 403, 409–10 (2022) (“[T]he contemporary bipartisan enthusiasm for evidence-based reforms is being marshaled in favor of a specific project: the creation of a more fiscally-conservative, efficient criminal legal system.”).
    50. See, e.g., Roberts, Digitizing, supra note 18, at 1712–16 (arguing that predictive technology in the criminal legal process furthers a form of carceral governance that disproportionately harms low-income communities of color across public institutions); Eaglin, Population-Based Sentencing, supra note 19, at 398 (arguing that predictive algorithms “threaten[] to further entrench mass incarceration as a particular mode of governance that operates to manage and control marginalized populations through the carceral state rather than offer support and resources outside it”).
    51. See, e.g., id.; Erin Collins, Punishing Risk, 107 Geo. L.J. 57, 108 (2018) (emphasizing that algorithmic-risk-assessment-informed sentencing “benefit[s] [those who] come from a background of relative privilege and were afforded access to educational and employment opportunities, a low-crime zip code, and perhaps even the privilege of committing low-level, quality of life violations that were brought to the attention of law enforcement authorities”); Feeley & Simon, supra note 48, at 468 (“The concept of the underclass, with its connotation of a permanent marginality for whole portions of the population... laid the groundwork for a strategic field that emphasizes low-cost management of a permanent offender population.”).
    52. See, e.g., Eaglin, Distorted, supra note 17, at 527–29; Collins, Punishing Risk, supra note 51, at 105–06; Bernard E. Harcourt, Risk as a Proxy for Race, 27 Fed. Sent’g Rep. 237, 240 (2015) [hereinafter Harcourt, Proxy]; Harcourt, Against Prediction, supra note 31, at 161–64.
    53. See, e.g., Ngozi Okidegbe, The Democratizing Potential of Algorithms, 53 Conn. L. Rev. 739, 743–44 (2022) (revealing the racialized harm of algorithmic pretrial governance).
    54. See, e.g., Hill, supra note 18, at 959–60 (revealing the racialized harms of “pretrial algorithmic governance”); cf. Amna Akbar, Toward a Radical Imagination of Law, 93 N.Y.U. L. Rev. 405, 465–66 (2018) (critiquing investment in police-worn body cameras as police reform).
    55. See, e.g., Hill, supra note 18, at 919 (providing an antisubordination framework to “supply insights into why, regardless of how [pretrial risk assessment instruments] are constructed, social movements remain opposed to the instruments”); Harcourt, Proxy, supra note 52, at 240 (urging criminal legal reforms that do not rely upon prediction in response to mass incarceration). But see Aziz Z. Huq, Racial Equity in Algorithmic Criminal Justice, 68 Duke L.J. 1043, 1045, 1101–04 (2019) (recognizing the role of criminal law in the “dynamic (re)production of iniquitous social stratification,” proposing an algorithmic design approach that accounts for this impact, and observing the approaching obsolescence of equal protection doctrine).
    56. For example, legal scholarship adopting the incarceration frame may demonstrate the ways in which algorithms, combined with incarceration, or even the threat of incarceration, can be an engine of racial inequality for Black people. See, e.g., Crystal Yang & William Dobbie, Equal Protection Under Algorithms, 119 Mich. L. Rev. 291, 299–300, 376–81 (2020) (illustrating the value of alternative algorithmic methods to predict recidivism risk, which could reduce the number of Black defendants detained under algorithmic pretrial governance); Mayson, supra note 45, at 2284 (“[S]ome risk-assessment-tool developers have found that past arrests and misdemeanor convictions ‘mean less’ about future risk for black people than for other demographic groups. In places where this is true, it suggests that black communities have been disproportionately subject to past arrest and misdemeanor prosecution relative to rates of offense.”); Megan Stevenson, Assessing Risk Assessments, 103 Minn. L. Rev. 303, 362–68 (2018) (examining whether predictive risk assessments increase racial disparities in pretrial release in Kentucky). Conversely, legal scholarship adopting the carceral state frame provides a broader, and thus more exacting, perspective on how algorithms as a criminal legal practice impact the lived experiences of Black people. See, e.g., Huq, supra note 55, at 1111–13 (encouraging a legal standard that considers whether algorithms may concentrate the allocation of coercion within the criminal justice system among Black people); Roberts, Digitizing, supra note 18, at 1707 (“Politics shapes the carceral state’s use of computerized tools in two main ways: (1) unequal political structures are built into the data collected and the algorithms that interpret that data; and (2) state agencies then use the results according to a predetermined philosophy to punishment instead of support marginalized communities.”); Harcourt, Proxy, supra note 52, at 240 (recognizing the threat that predictive instruments employed to address mass incarceration will aggravate racial disproportionality in prisons and impose “significant detrimental consequences on the employment, educational, familial, and social outcomes of the profiled populations” including the effects of “the notion of black criminality”).
    57. Ian Haney López, The Social Construction of Race, 29 Harv. C.R.-C.L. L. Rev. 1, 7 (1994) [hereinafter Haney López, Social Construction].
    58. Id.
    59. Devon Carbado, Critical What What?, 43 Conn. L. Rev. 1593, 1610 (2011) [hereinafter Carbado, Critical What What?].
    60. Id.
    61. See, e.g., Michael Omi & Howard Winant, Racial Formation in the United States 13 (3d ed. 2015) (explaining how race is socially constructed and historically fluid, continually being made and remade in everyday life).
    62. See Khiara M. Bridges, Critical Race Theory: A Primer 149 (1st ed. 2019); john a. powell, Structural Racism: Building Upon the Insights of John Calmore, 86 N.C. L. Rev. 791, 795 (2008).
    63. Bridges, supra note 62, at 148.
    64. Robert W. Gordon, Critical Legal Histories, 36 Stan. L. Rev. 57, 112 (1984).
    65. Drawing on the work of anthropologist Clifford Geertz, Professor Terry Maroney describes common sense as “shar[ing] the characteristics of being natural, practical, thin, immethodical, and accessible.” Terry A. Maroney, Emotional Common Sense as Constitutional Law, 62 Vand. L. Rev. 851, 860–61 (2009). Though legal scholars debate whether “common sense [should] permeate[] law,” the reality is that it does. See id. at 867 (emphasis added).
    66. Devon W. Carbado & Cheryl I. Harris, The New Racial Preference, 96 Calif. L. Rev. 1139, 1168 (2008).
    67. Martha R. Mahoney, Segregation, Whiteness, and Transformation, 143 U. Pa. L. Rev. 1659, 1660 (1995).
    68. Id. at 1664.
    69. Id.
    70. Id. at 1665.
    71. Carbado, Critical What What?, supra note 59, at 1611 (“We are all defined with whiteness in mind. We are the same as, or different, from whites.”); see also Kimberlé Williams Crenshaw, Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory, and Antiracist Politics, 1989 U. Chi. Legal F. 139, 151 (“Because the scope of antidiscrimination law is so limited, sex and race discrimination have come to be defined in terms of the experiences of those who are privileged but for their racial or sexual characteristics.”).
    72. See Cheryl I. Harris, Whiteness as Property, 106 Harv. L. Rev. 1707, 1709, 1738 (1993).
    73. Id. at 1738.
    74. Id. at 1753.
    75. Conceptualizing the constitution of racial subjects requires “conceiving of categories not as distinct but as always permeated by other categories, fluid and changing, always in the process of creating and being created by dynamics of power.” Sumi Cho, Kimberlé Williams Crenshaw & Leslie McCall, Toward a Field of Intersectionality Studies: Theory, Applications, and Praxis, 38 Signs 786, 795 (2013). Here, “[p]ower begins... where force ends.” Lewis R. Gordon, African-American Philosophy, Race, and the Geography of Reason, in Not Only the Master’s Tools: African American Studies in Theory and Practice 42 (Lewis R. Gordon & Jane Anna Gordon eds., 2006).
    76. Harris, supra note 72, at 1761 (“Whiteness is an aspect of racial identity surely, but it is much more; it remains a concept based on relations of power, a social construct predicated on white dominance and Black subordination.”).
    77. See Khiara M. Bridges, White Privilege and White Disadvantage, 105 Va. L. Rev. 449, 468 (2019) (emphasizing that whiteness can function as a “double-edged sword”—it allows inclusion into the group of people considered superior to all others, but it also makes white people subject to “quality control”) [hereinafter Bridges, White Privilege].
    78. Haney López, Social Construction, supra note 57, at 6.
    79. See id. at 1–2, 6–7.
    80. See Carbado, Critical What What?, supra note 59, at 1609.
    1. See Kimberlé Williams Crenshaw, Race, Reform, and Retrenchment: Transformation and Legitimation in Antidiscrimination Law, 101 Harv. L. Rev. 1331, 1373 (1988) [hereinafter Crenshaw, Retrenchment].
    1. Bridges, White Privilege, supra note 77, at 474.
    1. Dorothy E. Roberts, Fatal Invention: How Science, Politics, and Business Recreate Race in the Twenty-First Century 36–43 (2011) [hereinafter Roberts, Fatal Invention].
    1. See id. at 104–07 (connecting advances in biomedical research that focus on race as a gene with legal policies on federal funding).
    1. See Bridges, White Privilege, supra note 77, at 472–73 (examining the social context around Buck v. Bell, 274 U.S. 200 (1927), to emphasize that eugenics-era sterilizations were justified by assumptions about the White race while coercive sterilizations of non-White women in the post-civil rights era were justified by racial assumptions about non-White people’s social deficiency).
    1. See Harris, supra note 72, at 1756–57.
    1. Hall, supra note 20, at 364; Devon W. Carbado, (E)racing the Fourth Amendment, 100 Mich. L. Rev. 946, 978 (2002) (“race does not exist outside of, but is instead the effect of, discourses”) [hereinafter Carbado, (E)racing]; supra notes 57–60 and accompanying text.
    1. See Katherine McKittrick, Dear Science and Other Stories 38–39 (2021) (encouraging scholars to read racial struggle differently in order to “observe how our present system of knowledge... is a self-referential system that profits from recursive normalization”); Hall, supra note 20, at 364 (“[T]he interplay between the representation of racial difference, the writing of power, and the production of knowledge, is crucial to the way in which they are generated, and the way in which they function.”).
    1. For example, criminological studies tend to conceptualize racism as an animus or emotion perpetuated by an individual actor. See generally Naomi Murakawa & Katherine Beckett, The Penology of Racial Innocence: The Erasure of Racism in the Study and Practice of Punishment, 44 L. & Soc’y Rev. 695, 696–97 (2010) (critiquing the tendency in “criminal justice research” to adopt “the narrow standards of contemporary antidiscrimination law,” including the focus on the intent of individual actors). But see Derrick Bell, Faces at the Bottom of the Well: The Permanence of Racism 55 (1992) (“Racism is more than a group of [W]hite folks whose discriminatory predilections can be controlled by well-formed laws, vigorously enforced.”). Sociological studies tend to conceptualize race as a biological fact or, alternately, a social construct imprinted on the body. See McKittrick, supra note 88, at 38 (“Studying identity so often involves demonstrating that biology is socially constructed, not displacing biology, but rather empowering biology—the flesh—as the primary way to study identity.”).
    1. See, e.g., Roberts, Digitizing, supra note 18, at 1712–21; Hill, supra note 18, at 943–56; Vincent M. Southerland, The Intersection of Race and Algorithmic Tools in the Criminal Legal System, 80 Md. L. Rev. 487, 493 (2021) (emphasizing the shortcomings in algorithmic tools aimed at “forecasting the behavior of those who are ensnared by the carceral state”).
    1. Hill, supra note 18, at 944.
    1. See, e.g., MPC, supra note 38, at cmt. a (“Responsible actors in every system—from prosecutors to judges to parole officials—make daily judgments about the treatment needs of offenders, and the risks of recidivism posed by offenders... They often derive from the intuitions and abilities of individual decisionmakers, who typically lack professional training in the sciences of human behavior. In some instances, judgments about offenders’ future conduct may be influenced by biases—conscious or unconscious—of official decisionmakers.”); id. at reporters’ note a (emphasizing “the superiority of actuarial over clinical predictions of risk”).
    1. Mayson, supra note 45.
    1. Id. at 2224.
    1. Id. at 2225.
    1. Id.
    1. Id. at 2251; see generally id. at 2255–58 (asking whether underlying offense rates vary by race due to differential offending or enforcement, then concluding that using any data where the underlying offense rates vary by race in an algorithm will lead to unavoidable racial disparities in prediction). A toy numerical sketch of this base-rate point appears after these notes.
    1. Mayson makes a third analytical argument regarding transparency and accountability. See id. at 2279–80. This point is outside the scope of this Article.
    1. Id. at 2277.
    1. Id. at 2281 (“[G]iven the state of practice and the state of our knowledge, there is every reason to expect that subjective risk assessment produces greater racial disparity than algorithmic risk assessment.”).
    1. Id. at 2278.
    1. Id.
    1. Id. at 2281. As a solution, Mayson urged transforming the meaning of risk into something that should be cared for through the criminal legal apparatus, rather than something that should be punished. See id. at 2284, 2286–87.
    1. Id. at 2279.
    1. Mayson cited a variety of studies. See id. at 2279 and accompanying footnotes. I focus on the references to social science research, rather than the complementary law review articles drawing on social science research. Regarding the “experimental literature” on implicit bias in criminal law’s administration, see, e.g., Jennifer L. Eberhardt, Paul G. Davies, Valerie J. Purdie-Vaughns & Sheri Lynn Johnson, Looking Deathworthy: Perceived Stereotypicality of Black Defendants Predicts Capital-Sentencing Outcomes, 17 Psychol. Sci. 383, 383 (2006) (“We argue that only in death-eligible cases involving White victims—cases in which race is most salient—will Black defendants’ physical traits function as a significant determinant of deathworthiness.”). Regarding the evidence of “otherwise inexplicable racial disparities in policing, charging, pretrial detention, and sentencing,” Mayson pointed to a 2014 study finding that Black defendants in the federal system were 1.75 times more likely to face a mandatory minimum charge than similarly situated White defendants. See Mayson, supra note 45, at 2279 n.214 (citing M. Marit Rehavi & Sonja B. Starr, Racial Disparity in Federal Criminal Sentences, 122 J. Pol. Econ. 1320, 1323 (2014)). Rehavi and Starr’s remarkable study documents “substantial racial disparity in federal criminal sentences, across the sentence distribution and across a wide variety of samples and specifications.” Rehavi & Starr, supra, at 1346. It innovated by assessing the often-overlooked processes of charging, plea bargaining, and sentencing fact-finding. Id. at 1321. Notably, Rehavi and Starr express skepticism about interpreting their study as a demonstration of massive implicit bias on the part of individual criminal legal actors. See id. at 1347 (“While other unobserved differences cannot be ruled out, there remains the possibility that the observed disparities are driven by discrimination, which... might well be implicit biases such as racial disparities in empathy that drive selective leniency rather than animus [or]... statistical discrimination based on expectations concerning criminal recidivism. One might, however, expect the effect of beliefs about these nonrace factors and their correlation with race to vary across the conditional distribution rather than produce the surprisingly stable race parameters we document.”).
    1. See supra notes 61–86 and accompanying text; Beckett & Murakawa, supra note 89, at 710 (“[F]raming the question of racism in criminal justice as one of individual bias ignores how racism shapes the very process of identifying disorder and defining criminality.”); Ian F. Haney López, Post-Racial Racism: Racial Stratification and Mass Incarceration in the Age of Obama, 98 Calif. L. Rev. 1023, 1059 (2010) (“Because stratification occurs cumulatively, racism cannot be fully measured when treated as a residual, as the correlation left over once individual variation has been eliminated. On the contrary, individual differences such as education, income, or prior criminal record are not exogenous, nonracial attributes; they are, in aggregate, themselves partly the products of structural racism.”).
    1. Issa Kohler-Hausmann, Eddie Murphy and the Dangers of Counterfactual Causal Thinking About Detecting Racial Discrimination, 113 Nw. U. L. Rev. 1163, 1207 (2019).
    1. See, e.g., Slobogin, supra note 30, at 30 (“racial and other types of bias in decision-making about pretrial and post-conviction release can be significantly reduced [with enforced algorithms]... In contrast, a regime that is based on intuitive or clinical judgments about who is dangerous is too easily manipulated and prone to overly conservative outcomes influenced by conscious or unconscious prejudices.”); supra note 92.
    1. See Mayson, supra note 45, at 2278–79.
    1. Julian M. Rucker & Jennifer A. Richeson, Toward an Understanding of Structural Racism: Implications for Criminal Justice, 374 Science 286, 287 (2021) (“[A]s recently as 2016, most [laypersons] believed that racism in contemporary society is primarily an interpersonal problem rather than a structural one.”).
    1. Cf. Steve Woolgar, Reconstructing Man and Machine: A Note on Sociological Critiques of Cognitivism, in The Social Construction of Technological Systems 311, 313 (2012) (“The work of [artificial intelligence]... attempts to develop a technology that emulates actions and performances previously accredited to unique human intellectual abilities. Consequently, the advent of computers, and of AI in particular, has raised questions about the uniqueness of man in a slightly different form.”). Most algorithms currently used in the administration of criminal law would not qualify as “artificial intelligence” or “machine learning.” Huq, supra note 55, at 1067–68; Eaglin, Constructing, supra note 2, at 68 n.41. Nevertheless, references to the capacity of algorithms to divine insights beyond the capacity of a human are characteristic of scholarship on this technology. See, e.g., Huq, supra note 55, at 1066–67 (juxtaposing human judgment, checklists and simple algorithms on the one hand and “machine-learning algorithms” on the other); Andrew Guthrie Ferguson, Big Data and Predictive Reasonable Suspicion, 163 U. Pa. L. Rev. 327, 354 (2015) (discussing how electronic data “reveals information about individuals that simply was not knowable in previous generations”). Such assertions betray a conceptualization of the algorithm as something other than human. See also Pauline T. Kim, Race-Aware Algorithms, 110 Calif. L. Rev. 1539, 1544 (2022) (“[T]he tendency to assume that algorithms have fixed form, rather than recognizing them as malleable and contingent on the choices made by their creators. In popular and legal discourse, the algorithm is imagined as an objective thing, as if a correct solution exists to every prediction problem and consideration of group fairness somehow represents a deviation from the ‘true’ solution.”).
    1. In the context of criminal sentencing, algorithms have been proposed as a solution in the wake of the U.S. Supreme Court decisions undermining the then-mandatory nature of sentencing guidelines in the states and the federal system. Blakely v. Washington, 542 U.S. 296, 301 (2004) (applying the jury trial right to strike down Washington State’s presumptive guideline sentencing structure); United States v. Booker, 543 U.S. 220, 222, 245 (2005) (extending Blakely to the federal system and rendering the federal sentencing guidelines “effectively advisory”). For popular discourse framing these decisions as problematic from a racial perspective, see Carrie Johnson, GOP Seeks Big Changes in Federal Prison Sentences, Nat’l Pub. Radio (Jan. 31, 2012), https://www.kcur.org/2012-01-31/gop-seeks-big-changes-in-federal-prison-sentences [https://perma.cc/8C2M-EHUM] (suggesting that Booker “threw a wrench into the system [that makes sentencing fair and predictable] by ruling that the guidelines that judges use to figure out a prison sentence are only suggestions”). For legal scholarship encouraging a shift toward risk assessments in sentencing after the U.S. Supreme Court’s decisions in Blakely and Booker, see J.C. Oleson, Risk in Sentencing: Constitutionally Suspect Variables and Evidence-Based Sentencing, 64 SMU L. Rev. 1329, 1395–98 (2011). For a policy-driven shift toward algorithms after a state supreme court embraced Booker, see Michigan Department of Corrections, Administration and Use of COMPAS in the Presentence Investigation Report 22 (2017), https://www.michbar.org/file/news/releases/archives17/COMPAS-at-PSI-Manual-2-27-17-Combined.pdf [https://perma.cc/PM9V-2Y7P]; People v. Lockridge, 870 N.W.2d 502, 506 (Mich. 2015) (rendering the previously mandatory state sentencing guidelines advisory by applying the U.S. Supreme Court’s Sixth Amendment jurisprudence). In the context of pretrial bail, law and policymakers position algorithms as a sociopolitical response to recent legal efforts to eliminate cash bail. See, e.g., Lauryn P. Gouldin, Defining Flight Risk, 85 U. Chi. L. Rev. 677, 680–81 (2018) (situating pretrial bail algorithms as part of the “active third generation of bail reform efforts” in the United States) (internal quotation marks omitted). The momentum around this reform increased with the California Supreme Court’s decision in In re Humphrey, 482 P.3d 1008 (Cal. 2021). In that decision, the court held that detaining a person pretrial solely because they cannot afford bail violates the due process and equal protection guarantees of the U.S. Constitution. Id.; see Maura Dolan, California’s Top Court Ends Cash Bail for Some Defendants Who Can’t Afford It, L.A. Times (Mar. 25, 2021), https://www.latimes.com/california/story/2021-03-25/california-supreme-court-nixes-cash-bail-some-defendants [https://perma.cc/952L-7KE3] (urging adoption of algorithms in pretrial bail).
    1. For instance, social activists have proposed pretrial bail reforms that do not include algorithmic risk assessments. See, e.g., Jocelyn Simonson, Bail Nullification, 115 Mich. L. Rev. 585, 599–606 (2017) (describing the rise of community bail funds). Yet, as Professor Sean Hill observed, algorithmic risk assessments have emerged not just as one possible reform among many, but as the legal practice positioned against other pretrial legal interventions. Hill, supra note 18, at 917–19.
    1. One could characterize this impulse as an instance of “technological benevolence.” It complements, but is distinct from, the concept advanced by Dr. Ruha Benjamin. See Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code 95–96 (2021) (critiquing electronic monitoring as a technological solution to “mass incarceration and prison overcrowding” because it is a “racial fix” that “gives rise to innovative new forms of injustice”). Benjamin focuses on the ways in which understanding technical fixes as “so desirable, even magical” masks the ways in which the technology can legitimate or produce negative effects for marginalized people in specific contexts like employment and healthcare. See id. at 97. In contrast, I am referring to the way assumptions about the responsiveness of algorithms to the racialized phenomenon of mass incarceration legitimates specific distributions of resources across different legal institutions through law. For an example of the way assumptions about criminal legal reform shape the distribution of resources across different legal institutions, see Jessica M. Eaglin, To “Defund” the Police, 73 Stan. L. Rev. Online 121, 124–34 (2021) (demonstrating how different meanings of “defund the police” lead to different legal reforms, only some of which address the structural marginalization of Black people in the United States). That distribution, in turn, can benefit whiteness to the extent that the privilege associated with whiteness is constructed through laws that facilitate various benefits in society like, for example, access to wealth. William Darity Jr., Darrick Hamilton, Mark Paul, Alan Aja, Anne Price, Antonio Moore & Caterina Chiopris, What We Get Wrong About Closing the Racial Wealth Gap 3 (2018), https://socialequity.duke.edu/wp-content/uploads/2020/01/what-we-get-wrong.pdf [https://perma.cc/P56D-2N3F] (detailing the vast wealth gap between White and non-White people in the United States); Dorothy A. Brown, The Whiteness of Wealth: How the Tax System Impoverishes Black Americans and How We Can Fix It 15–16 (2021) (detailing how the existing federal tax system intersects with New Deal-era federal efforts to “fund the creation of a robust middle class, one that was almost exclusively white” in ways that reproduce racial inequality); Ira Katznelson, When Affirmative Action Was White: An Untold History of Racial Inequality in Twentieth Century America 17 (2005) (highlighting how “the wide array of significant and far-reaching public policies that were shaped and administered during the New Deal and Fair Deal era of the 1930s and 1940s were crafted and administered in a deeply discriminatory manner” and pointing to “the contours of Social Security, key labor legislation, the GI Bill, and other landmark laws that helped create a modern white middle class”). But see Bridges, White Privilege, supra note 77, at 482 (“With respect to whiteness specifically, privilege and subordination are unstable because white privilege opens lots of doors—even the ones to unprivileged conditions.”).
    1. See MPC, supra note 38, at cmt. d (“If used as a tool to encourage sentencing judges to divert low-risk offenders from prisons to community sanctions, risk assessments conserve scarce prison resources for the most dangerous offenders, reduce the overall costs of the corrections system, and avoid the human costs of unneeded confinement to offenders, offenders’ families, and communities. The use of validated actuarial tools produces lower probabilities of future victimizations in society than prison-diversion decisions based on professional or clinical judgments.”).
    1. John Pfaff, Locked In: The True Causes of Mass Incarceration and How to Achieve Real Reform 198–200 (2017) (urging adoption of actuarial risk assessments to manage the politics of crime); cf. Rachel E. Barkow, Prisoner of Politics: Breaking the Endless Cycle of Mass Incarceration 175–77 (2019) (urging criminal justice agencies to set policies for a range of issues relevant to criminal law based on empirical evidence that maximizes public safety, including extensive knowledge about future risk, in lieu of “let[ting] emotions take charge”).
    1. See, e.g., Kevin R. Reitz, “Risk Discretion” at Sentencing, 30 Fed. Sent’g Rep. 68, 69–71 (2017); Gouldin, supra note 112, at 729–37 (emphasizing how properly designed risk assessments in the pretrial bail context could identify less intrusive and lower-cost interventions like court-date reminders as opposed to providing a basis for pretrial detention to ensure court appearances).
    1. See, e.g., U.S. Dep’t of Justice, Smart on Crime: Reforming the Criminal Justice System for the 21st Century 4 (2013) (discussing the need to reduce spending in the criminal justice system by finding cheaper alternatives to incarceration).
    1. Eisha Jain, Capitalizing on Criminal Justice, 67 Duke L.J. 1381, 1424–25 (2018) (critiquing the shortcomings of the cost-benefit framework beyond the scope of prison incarceration for felony convictions) [hereinafter Jain, Capitalizing]; see also Jessica M. Eaglin, The Drug Court Paradigm, 53 Am. Crim. L. Rev. 595, 631–33 (2016) (critiquing the net-widening threat of the cost-effective framework advanced through criminal law reforms influenced by drug courts) [hereinafter Eaglin, Paradigm].
    1. Brian Jefferson, Digitize and Punish: Racial Criminalization in the Digital Age 4 (2020) (“[T]he prototyping of criminal justice technology has been largely subsidized by tax dollars. IT firms have siphoned billions in public grants through the Community Oriented Policing Services Office and the Office of Justice Programs to research and design products for the Wars on Crime and Drugs.”). Consistent with Professor Trevor Gardner’s observations, one might consider this reality as falling within an overlooked subfield of federal policing power. See Trevor Gardner, Immigrant Sanctuary as the “Old Normal,” 119 Colum. L. Rev. 1, 17–18 (2019).
    1. Micol Seigel, Violence Work: State Power and the Limits of Police 39 (2018).
    1. Pub. L. No. 89-197, 79 Stat. 828 (repealed in 1968).
    1. Pub. L. No. 90-351, 82 Stat. 197 (codified as amended in scattered sections of 47 U.S.C.).
    1. The LEAA maintained a $60 million budget in 1968–69, increasing to $268 million in 1970 and growing dramatically thereafter. At its high point, the agency’s budget reached over $1 billion in a single year. Seigel, supra note 121, at 41. Much legal and policy research on the LEAA focuses on its police-related distributions. See, e.g., id. at 47–48 (framing expansion of technology via LEAA as central to the transformation of police work in the United States); Malcolm Feeley & Austin Sarat, The Policy Dilemma: Federal Crime Policy and the Law Enforcement Assistance Administration, 1968–1978, at 36 (1980) (centering discussion on the distribution of grants under the Office of Law Enforcement Assistance, which distributed most of its grants to policing initiatives); Naomi Murakawa, The First Civil Right: How Liberals Built Prison America 73 (2014) (situating the “flow of federal funds to state and local police departments” between 1961–1968, including creation of the LEAA, as a bipartisan agenda).
    1. See Elizabeth Hinton, From the War on Poverty to the War on Crime: The Making of Mass Incarceration in America 143 (2016).
    1. Id. at 163–68; Eaglin, Distorted, supra note 17, at 508–10.
    1. Hinton, supra note 125, at 156; Jefferson, supra note 120, at 70–71.
    1. See 47 Fed. Reg. 16,695 (Apr. 19, 1982) (closing out the operation of the LEAA and transferring remaining functions to the Office of Justice Assistance).
    1. See The Justice System Improvement Act of 1979, P.L. 96-157, 93 Stat. 1167 (creating an Office of Justice Assistance Research and Statistics, the National Institute of Justice, and the Bureau of Justice Statistics, and (re)creating the LEAA as separate from these entities).
    1. The Office of Justice Programs emerged as the lead federal agency to manage “federal leadership, grants, training, technical assistance, and other resources to improve the nation’s capacity to prevent and reduce crime, assist victims and enhance the rule of law by strengthening the criminal and juvenile justice systems.” Office of Justice Programs, About Us, U.S. Dep’t of Justice, https://www.ojp.gov/about [https://perma.cc/82QT-973V]. Several programs that once lived within the LEAA now exist under the umbrella of the OJP. Notably, the Bureau of Justice Assistance, as distinct from the Bureau of Justice Statistics (BJS), provides technical support to state and local jurisdictions coping with the pressures of mass incarceration.
    1. BJS was once a subordinate operation within the LEAA organization. See William F. Powers, The Law Enforcement Assistance Administration: An Administrative History 81 (June 1982) (M.P.A. thesis, City University of New York), https://www.ojp.gov/pdffiles1/Photocopy/153696NCJRS.pdf [https://perma.cc/VG5K-ZA2J]. The BJS now serves two functions: (1) it provides statistical information on “crime, criminal offenders, victims of crime, and the operation of justice systems at all levels of government;” and (2) it “provides financial and technical support to state, local, and tribal governments to improve both their statistical capabilities and the quality and utility of their criminal history records.” Bureau of Justice Statistics, About BJS, U.S. Dep’t of Justice, https://bjs.ojp.gov/about [https://perma.cc/H9BF-G5A5].
    1. The NIJ was once a subordinate operation within the LEAA organization. See Powers, supra note 150, at 115. It is now the “research, development and evaluation agency of the U.S. Department of Justice.” See Nat’l Inst. of Justice, About the National Institute of Justice, U.S. Dep’t of Justice (May 2, 2022), https://nij.ojp.gov/about-nij [https://perma.cc/HTD8-FR4Z]. It facilitates public and private research on crime victims, communities, and criminal justice professionals, particularly at the state and local levels.
    1. JRI is a “data-driven process to improve public safety by helping jurisdictions make more effective and efficient use of criminal justice resources to address the complex factors that drive crime and recidivism.” Bureau of Justice Assistance, Justice Reinvestment Initiative (JRI): Overview, U.S. Dep’t of Justice, https://bja.ojp.gov/program/justice-reinvestment-initiative/overview [https://perma.cc/JEX8-XZ2B]. In partnership with the Pew Charitable Trusts, it is housed within the Bureau of Justice Assistance. Id.
    1. For more on the connections between the Justice Reinvestment Initiative and algorithmic risk assessments, see Cecelia Klingele, The Promises and Perils of Evidence-Based Corrections, 91 Notre Dame L. Rev. 537, 563–66 (2015); Eaglin, Paradigm, supra note 119, at 619.
    1. Ngozi Okidegbe, Discredited Data, 107 Cornell L. Rev. 2007, 2012 (2022) [hereinafter Okidegbe, Discredited].
    1. See id. (examining “carceral knowledge sources” in all algorithms used by criminal legal institutions); see also Rashida Richardson, Jason M. Schultz & Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 94 N.Y.U. L. Rev. Online 15, 20–21 (2019) (identifying “dirty data” generated in policing and used in predictive policing algorithms); Eaglin, Constructing, supra note 2, at 73 (describing data collected and used for algorithmic risk assessments at sentencing); Wayne Logan & Andrew Ferguson, Policing Criminal Justice Data, 101 Minn. L. Rev. 541, 543–45 (2016) (critiquing the lack of oversight in the “individual-level, discrete data points” of criminal justice related data, which “provide[] the building blocks for all data-driven systems” in the criminal justice system).
    1. Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism 48–49 (2019) (arguing that the “biopolitical public domain” is the product of “enabling legal construct[s]”); cf. Jefferson, supra note 120, at 47 (describing the origin and significance of relational databases for criminal legal institutions, particularly the police).
    1. Through the LEAA, Congress funded the creation of SEARCH in 1969. Nat’l Consortium of Justice Information & Statistics, An Introduction to SEARCH 1 (Jan. 2021), https://www.search.org/files/pdf/An_Introduction_to_SEARCH.pdf [https://perma.cc/HZC4-KUB8]. This private, nonprofit organization is composed of governor-appointed members from all states. Id. SEARCH develops guidance on the management and exchange of criminal justice information. Id. at 1–3; see also James B. Jacobs, The Eternal Criminal Record 40–41 (2015) (discussing the role of SEARCH in creating a national, integrated criminal records system).
    1. Bureau of Justice Statistics, National Criminal History Improvement Program 1–2 (2021) (noting that every state and territory received funding through the program by FY 2002).
    1. Electronic Freedom of Information Act Amendments, Pub. L. No. 104-231, 110 Stat. 3048.
    1. See Sarah Esther Lageson, Digital Punishment: Privacy, Stigma, and the Harms of Data-Driven Criminal Justice 22 (2020) (attributing expansion of online access to state court records to federal legislation inspiring the shift).
    1. Sara Friedman, State Data Officers Offer Feedback on Federal Data Strategy, GCN (July 31, 2018), https://gcn.com/state-local/2018/07/state-data-officers-offer-feedback-on-federal-data-strategy/299854/ [https://perma.cc/E8H2-QRFJ] (noting that the federal government will actively engage with state and local government programs responsible for data collection by, among other things, incentivizing the collection of data for measuring recidivism in “housing, substance abuse, Medicaid, employment, education, and transportation”).
    1. See also Jain, Capitalizing, supra note 119, at 1402 (critiquing the institutional dynamic through which “[c]riminal records, in effect, become commodities”); Alessandro Corda & Sarah Lageson, Disordered Punishment: Workaround Technologies of Criminal Records Disclosure and the Rise of a New Penal Entrepreneurialism, 60 Brit. J. Criminol. 245, 246 (2020) (“The current state of affairs of criminal record systems resembles more a disorganized, consumer-based digital web of haphazard effects that do not neatly map onto information nor punishment functions.”).
    1. Eisha Jain, Arrests as Regulation, 67 Stan. L. Rev. 809, 826 (2015); Eisha Jain, The Mark of Policing: Race and Criminal Records, 73 Stan. L. Rev. Online 162, 165 (2021) [hereinafter Jain, Mark].
    1. Jain, Mark, supra note 144, at 170.
    1. Id. (critiquing the governmental decisions to invest time and resources to create, disseminate, and use criminal records); see also Cohen, supra note 137, at 61.
    1. Jefferson, supra note 120, at 42.
    1. Eaglin, Distorted, supra note 17, at 487; see Harcourt, Against Prediction, supra note 31, at 186–92.
    1. Eaglin, Distorted, supra note 17, at 517–23; Garland, supra note 48, at 176.
    1. Eaglin, Distorted, supra note 17, at 528–29 (“‘[T]echnical formalism’ refers to two things. First, the broader notion that ‘recidivism risk’ is objective rather than socially constructed, and that factors used to construct it are objective and neutral as well. Second, the notion that by achieving empirical accuracy regardless of tool construction, tools are legitimate at sentencing just as much as they are in other contexts.”).
    1. Id. at 529–33; Harcourt, Against Prediction, supra note 31, at 189–91.
    1. Eaglin, Distorted, supra note 17, at 487.
    1. Id. at 520–23; Jessica M. Eaglin, Against Neorehabilitation, 66 SMU L. Rev. 189, 211 (2013) [hereinafter Eaglin, Neorehabilitation].
    1. See Albert Alschuler, A Plea for Less Aggregation, 58 U. Chi. L. Rev. 901, 941–45 (1991) (urging descriptive sentencing guidelines in contrast to the technical, aggregation-oriented trend in sentencing reform); see also Jelani Jefferson-Exum, Purpose-Based Sentencing, 18 Lewis & Clark L. Rev. 95, 97–98 (2014) (proposing Particular Purpose Sentencing in contrast to the trend toward formalist critiques of racial inequality in sentencing); Eaglin, Population-Based Sentencing, supra note 19, at 364–68 (situating algorithms within the larger shift toward formalized judicial decision making at sentencing). For examples of critical analysis on this point outside the legal context, see Jefferson, supra note 120, at 55 (noting that the technical turn produces a barrier against critical thought because technical language warps the social phenomena they represent); Katja Franko Aas, Sentencing in the Age of Information: From Faust to Macintosh 5 (2005) (“[T]he new forms of decision-making introduce and privilege a certain mode of thinking which is based on working on the surface, rather than on in-depth understanding [at sentencing].”).
    1. See Eaglin, Population-Based Sentencing, supra note 19, at 367–68 (summarizing conceptual transformations at sentencing in the late-twentieth century necessary to support the policy-driven expansion of algorithms).
    1. There are other explanations as well. See supra note 48. Examining racism gives new insight into this particular path in criminal legal reform.
    1. See supra notes 105–110 (revealing the conceptualization of racial inequality in criminal law’s administration as the product of widespread individual biases).
    1. This insight complements work at the intersection of race, law, and social psychology referenced above. On social psychology and intergroup perceptions of race, see Rucker & Richeson, supra note 110, at 286–87; Russell K. Robinson, Perceptual Segregation, 108 Colum. L. Rev. 1093, 1126 (2008) (noting that White people are more likely to see racial discrimination from a colorblind perspective, and so they presume that racial discrimination is unusual and requires overt evidence of racial hostility, while Black people are more likely to be “unapologetically race-conscious” and see racism as common and structural).
    1. On the politicization of imprisonment costs, see Peter Wagner & Bernadette Rabuy, Following the Money of Mass Incarceration, Prison Policy Initiative (Jan. 25, 2017), https://www.prisonpolicy.org/reports/money.html [https://perma.cc/YQ3B-8PHG] (“The cost of imprisonment—including who benefits and who pays—is a major part of the national discussion around criminal justice policy.”); The Editorial Board, Cutting Prison Sentences, and Costs, N.Y. Times (Dec. 24, 2016), https://www.nytimes.com/2016/12/24/opinion/sunday/cutting-prison-sentences-and-costs.html [https://perma.cc/KH74-ULUE]; Smart on Crime, supra note 118. By comparison, plenty of public debate considers algorithms in criminal law’s administration. While issues of bias are prominent in critiques of algorithms, see supra note 4, the costs of algorithms are not.
    1. See Murakawa, supra note 124, at 70–71 (liberal and conservative lawmakers committed to addressing race through carceral modernization in the 1970s for diverging but convergent reasons); Hinton, supra note 125, at 15 (locating the genesis of this convergent dynamic in the 1960s).
    1. Cf. Cohen, supra note 137, at 2 (“We are witnessing the emergence of legal institutions adapted to the information age.”).
    1. On the centrality of oppositional categories to racist ideology, see Crenshaw, Retrenchment, supra note 81, at 1372–73. On historically contextualized shifts in the meaning of risk and dangerousness, see Eaglin, Distorted, supra note 17, at 529–33.
    1. Eaglin, Neorehabilitation, supra note 153, at 221.
    1. See id. at 217–18.
    1. Roberts, Digitizing, supra note 18, at 1707–09.
    1. See, e.g., Kim, supra note 111, at 1548 (“Th[e] growing body of evidence of the risks of algorithmic discrimination has shifted the conversation from whether algorithms can discriminate to what to do about it.”) (emphasis in original). Legal scholarship addresses this issue under a variety of terms, including algorithmic discrimination, algorithmic fairness, and algorithmic bias. See, e.g., id. (discrimination); Deborah Hellman, Measuring Algorithmic Fairness, 106 Va. L. Rev. 811, 814 (2020) (fairness); Daniel E. Ho & Alice Xiang, Affirmative Algorithms: The Legal Grounds for Fairness as Awareness, 2020 U. Chi. L. Rev. Online 134, 135–36 (equating algorithmic fairness with unbiased algorithms).
    1. While this legal scholarship often expands beyond the scope of just algorithms used in criminal law’s administration, I focus on the subset of this scholarship that centers algorithms for this particular context.
    1. “It is well established that [criminal legal institutions] produce data that are infected with racial and socioeconomic bias.” Okidegbe, Discredited, supra note 135, at 2012. See, e.g., id. at 2026–32 (providing a descriptive overview of potential biases in data produced by police, pretrial service agencies, and court systems that are used in the construction of algorithms used in the pretrial bail context); Richardson et al., supra note 136, at 26–27 (providing a case study on biased police data used to construct algorithms employed to assist policing).
    1. For a brief overview of the tool construction process, see supra notes 24–29; see generally Eaglin, Constructing, supra note 2, at 72–88 (describing the normative judgments involved in the construction of risk assessments used in the sentencing process). Note that more advanced algorithms, often referred to as “machine learning” algorithms, would remove the step in the construction process where developers select the predictive factors used to estimate a prespecified outcome. See Huq, supra note 55, at 1063 (describing machine learning and deep learning methods to algorithms). A code sketch contrasting hand-selected factors with data-driven selection appears after these notes.
    1. This shorthand characterization is concerning because the term “colorblind” does conceptual work in relation to algorithms as a response to mass incarceration. This issue requires more careful consideration and explanation than I have space to offer here. I look forward to exploring this concern in future work.
    1. See Yang & Dobbie, supra note 56, at 330–31.
    1. Dawinder S. Sidhu, Moneyball Sentencing, 56 B.C. L. Rev. 671, 694–99 (2015). Professor Sonja Starr takes a more nuanced but related view. She contends that consensus about constitutional law has prevented the adoption of algorithms that consider race in their design. See Sonja B. Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stan. L. Rev. 803, 812 (2014) (“There appears to be a general consensus that using race [in actuarial risk assessment instruments at sentencing] would be unconstitutional.”). She further contends that the use of protected identity characteristics, including race, should not survive heightened constitutional scrutiny, given empirical evidence of their limited accuracy and effectiveness. Id. at 820 (“[I]f the instruments don’t work well, their use in sentencing is almost surely unconstitutional as well as terribly unwise.”); see also id. at 842. Her analysis does not center upon race as a predictive factor; rather, it encapsulates race as one of several protected characteristics under the U.S. Constitution that demand heightened scrutiny. Because characteristics that require less stringent scrutiny (like gender and poverty) should not be considered in the algorithm as a constitutional matter under her analysis, neither should race.
    1. See Yang & Dobbie, supra note 56, at 299–300 (finding that “all commonly used algorithmic inputs are correlated with race in the New York City data [upon which they study pretrial algorithms]... [and] the overly formalistic exclusion of race actually generates unwarranted racial disparities, undermining the objective of equal treatment [under algorithms]”); Mayson, supra note 45, at 2266 (observing that “to achieve any specific form of output equality, it may be necessary to treat race as an input [in the algorithm]” and concluding that “colorblindness is not a meaningful measure of equality”). A sketch of how a facially race-neutral input can carry racial signal appears after these notes.
    1. How to consider race and whether such a method would be constitutionally permissible remain open questions explored in the context of criminal law and outside it. For diverging suggestions on how race could be considered in the design of an algorithm used in criminal law’s administration, see Huq, supra note 55, at 1128–33 (proposing a “multiple threshold regime” for differently stratified racial groups and recognizing that such a regime would be in “serious constitutional jeopardy” under existing equality doctrine); Yang & Dobbie, supra note 56, at 352–55 (proposing the use of race in the “estimation step but not the prediction step” to avoid anticlassification concerns under equal protection and antidiscrimination law). For insight into various ways in which race could be considered across the various spaces where algorithms are employed, see Kim, supra note 111, at 1574–83 (emphasizing that only some of these methods of consideration would be legally suspect). A rough sketch of the estimation-versus-prediction idea appears after these notes.
    1. Carbado, (E)racing, supra note 87, at 978.
    1. Id.; see also supra notes 78–82 and accompanying text (emphasizing that historically contingent racial meanings sustain racial hierarchy).
    1. Yang & Dobbie, supra note 56, at 377 (demonstrating that their proposed algorithm would further reduce racial disparities in pretrial detention “relative to the typical algorithm used in practice today”); Mayson, supra note 45, at 2225 (arguing that algorithms “reveal[] the racial inequality inherent in all crime prediction in a racially unequal world” and so urging “a more fundamental rethinking of the way in which the criminal justice system understands and responds to risk”); Huq, supra note 55, at 1104–05 (characterizing racial equity in the criminal justice context as avoiding the “(re)production of iniquitous social stratification”); see also Starr, supra note 172, at 806 (worrying that algorithms “can be expected to contribute to the concentration of the criminal justice system’s punitive impact among those who already disproportionately bear its brunt, including people of color”).
    1. See Buck v. Davis, 137 S. Ct. 759, 775 (2017) (declaring that overt consideration of race as a risk factor to predict future behavior is unconstitutional); Sidhu, supra note 172, at 696–97 (concluding that existing Supreme Court jurisprudence “leaves no room for race-conscious risk assessment tools” and drawing on an earlier iteration of the Buck case as an example). Cf. Starr, supra note 172, at 822 (“Incarceration... profoundly interferes with virtually every right the Supreme Court has deemed fundamental, and [actuarial risk assessments at sentencing] makes these rights interferences turn on identity rather than criminal conduct.”); John Monahan, A Jurisprudence of Risk Assessment: Forecasting Harm Among Prisoners, Predators, and Patients, 92 Va. L. Rev. 391, 428 (2006) (“Blame attaches to what a person has done. Past criminal behavior is the only scientifically valid risk factor for violence that unambiguously implicates blameworthiness, and therefore the only one that should enter the jurisprudential calculus in criminal sentencing.”).
    1. Milagros Miceli, Martin Schuessler & Tianling Yang, Between Subjectivity and Imposition: Power Dynamics in Data Annotation for Computer Vision, Proc. ACM Hum.-Comput. Interact. 1, 2–3, 115 (2020), https://doi.org/10.1145/3415186 [https://perma.cc/9NK8-ETA6] (“[D]ata annotation [is] a sense-making process where actors classify data by assigning meaning to its content through the use of labels.... [D]ata is created through human intervention.”).
    1. See id. at 4–5 (describing data annotation processes); Morgan Klaus Scheuerman, Kandrea Wade, Caitlin Lustig & Jed R. Brubaker, How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis, Proc. ACM Hum.-Comput. Interact. 1, Article 115, § 1–2 (2020), https://doi.org/10.1145/3392866 [https://perma.cc/TW6X-NQSP] (finding that 96 percent of all databases used in facial recognition research do not contain demographic descriptions). While Scheuerman, Wade, Lustig & Brubaker “generally assumed that the database authors gathered this [racial] information directly from subjects or determined subject race/gender themselves,” Miceli, Schuessler & Yang demonstrate in a follow-up study that collection from individual subjects is unlikely. Id.; Miceli, Schuessler & Yang, supra note 179, at § 1 (“hierarchical structures broadly inform the interpretation of data”). See also Eaglin, Constructing, supra note 2, at 73–80 (noting that most sentencing algorithms are constructed from limited databases and coded by government actors or third parties). A toy example of schema-driven annotation appears after these notes.
    1. See Scheuerman, Wade, Lustig & Brubaker, supra note 180, at § 7 (finding that image databases rely on “common” categories like Caucasian/Moroccan or White/Asian/Black/Indian for annotation); Miceli, Schuessler & Yang, supra note 179, at § 4.4.3 (demonstrating that such categories represent “the top-down ascription of meanings to data through multi-layered structures”).
    1. Cf. Fanna Gamal, The Private Life of Education, Stan. L. Rev. (forthcoming 2023) (critiquing the “power to document” in the production of school records by school officials over Black girls in particular) (draft on file with author).
    1. Cf. Hall, supra note 20, at 366–67 (explaining how science functions in racial discourse to “fix and secure what else otherwise cannot be fixed or secured,” namely race).
    1. Roberts, Fatal Invention, supra note 83, at 4.
    1. On the rise of statistical race and genetic ancestry and its connection to biological race in the genomic sciences, see id. at 58–68; Trina Jones & Jessica L. Roberts, Genetic Race? DNA Ancestry Tests, Racial Identity, and the Law, 120 Colum. L. Rev. 1929, 1969–70 (2020) (noting that “biological conceptions of race continue to abound” in society, and warning that genetic race threatens to reinforce the political work this concept does to naturalize inequality in the United States).
    1. See id. at 1970 (characterizing biological race as the notion that “if nature created racial hierarchy and resulting inequities, then policy interventions cannot—and presumably should not attempt to—disrupt them.”); Osagie K. Obasogie, The Return of Biological Race? Regulating Innovation in Race and Genetics Through Administrative Agency Race Impact Assessments, 22 S. Cal. Interdisc. L.J. 1, 12–13 (2012) (asserting that biological race “not only justified the status quo, it gave moral impetus to the belief that to try to change these status relationships would be contrary to evolutionary progress and, thus, society itself”).
    1. See supra notes 43–45 and accompanying text; see infra Part III.C.
    1. See supra notes 79–89 and accompanying text.
    1. See supra notes 52–55 and accompanying text.
    1. Alice Ristroph, The Wages of Criminal Law Exceptionalism, Crim. L. & Phil. § 4 (2021).
    1. Eaglin, Constructing, supra note 2, at 97–98 (emphasizing that “the humans developing the tools are decisionmakers”).
    1. Recognizing that “[a]bolition is both a practical and intellectual endeavor,” Professor Roberts examined “[t]he tension between recognizing the relentless antiblack violence of constitutional doctrine, on the one hand, and demanding the legal recognition of black people’s freedom and equal citizenship, on the other.” Dorothy E. Roberts, Foreword: Abolition Constitutionalism, 133 Harv. L. Rev. 1, 10 & n.42 (2019).
    1. Id. at 9.
    1. Id.
    1. Professor Roberts set forth two paths forward in constitutional law: one “resigned to the futility of employing U.S. constitutional law to dismantle the prison industrial complex and other aspects of the carceral state” and one that “finds utility in applying the abolitionist history and logic of the Reconstruction Amendments to today’s political conditions in the service of prison abolition.” Id. at 9–10 (emphasis in original); see also id. at 58–62, 72–73 (drawing on Frederick Douglass for inspiration in this dual approach). She asserted, “I believe both approaches are worthy of consideration, and considering both is essential to developing a theoretically and pragmatically useful legal framework to advance prison abolition.” Id. at 10.
    1. “[C]ourt-made doctrines that maintain[] white supremacy [are] not constitutionally mandated and could be overturned by a counter-constitutionalism that affirm[s] freedom and democracy.” Id. at 72–73.
    1. Id. at 10; see also Monica C. Bell, Anti-Segregation Policing, 95 N.Y.U. L. Rev. 650, 764–65 (2020) (calling for “people committed to racial justice” to “operate on many fronts”).
    1. See Kimberlé Williams Crenshaw, Luke Charles Harris, Daniel Martinez HoSang & George Lipsitz, Introduction, in Seeing Race Again: Countering Colorblindness across the Disciplines 1, 14 (2019) (“The task of countering colorblindness is thus not merely to see race again, but to reenvision how disciplinary tools, conventions, and knowledge-producing practices that erase the social dynamics that produce race can be critically engaged and selectively repurposed toward emancipatory ends.”).
    1. Cf. Levin, supra note 7, at 310–11 (noting that in the context of criminal law, “the law review article as a general matter includes a final policy proposal”).
    1. See generally Eaglin, Neorehabilitation, supra note 153, at 193 (“This Article... identifies and critiques the expansion of the neorehabilitative model of sentencing reform as states attempt to manage their prison populations.”).
    1. See generally Eaglin, Constructing, supra note 2, at 63 (“This Article... exposes how external incentives intersect with law and policy in the construction for risk tools at sentencing.”).
    1. See Eaglin, Distorted, supra note 17, at 505–08.
    1. See supra notes 106–114 and accompanying text.
    1. Eaglin, Neorehabilitation, supra note 153, at 201–02 (discussing “the contribution of evidence-based programs and risk assessment tools to inform penal and sentencing policies” under neorehabilitation); Eaglin, Paradigm, supra note 119, at 606 (“The use of scientifically driven data on the drug courts’ effectiveness at reducing both recidivism and costs appeals to all politicians, regardless of political leanings, because it provides a depoliticized platform to address drug crimes.”).
    1. Collins, supra note 49, at 421–25. See Cohen, supra note 137, at 144–46 (characterizing dispute resolution, including within criminal legal institutions, as engaging in the process of institutional reinvention around “neoliberal managerialism”).
    1. Following Professors Carol Steiker and Jordan Steiker, and in line with Max Weber and Antonio Gramsci, I use the term “legitimation” here to connote how legal scholarship could “induc[e] a false or exaggerated belief in the normative justifiability of something in the social world.” Carol S. Steiker & Jordan M. Steiker, Sober Second Thoughts: Reflections on Two Decades of Constitutional Regulation of Capital Punishment, 109 Harv. L. Rev. 355, 429 (1995).
    1. Id. at 430.
    1. See, e.g., Akbar et al., supra note 54, at 412–13 (discussing how radical social movements reimagine the social problems with which significant bodies of legal scholarship engage).
    1. Crenshaw, Retrenchment, supra note 81, at 1382.
    1. Id.
    1. McKittrick, supra note 88, at 35 (examining “methodology as an act of disobedience and rebellion”).
    1. Crenshaw, Retrenchment, supra note 81, at 1385 (“In the quest for racial justice, winning and losing have been part of the same experience.”).
    1. See Jessica M. Eaglin, When Critical Race Theory Enters the Law and Technology Frame, 25 Mich. J. Race & L. 151, 168 (2021) (“[W]e built this world through legal constructions around social ideas of race; we must find novel ways to confront the social production of race through law if we are going to change it.”); Monica C. Bell, Safety, Friendship, and Dreams, 54 Harv. C.R.-C.L. L. Rev. 703, 715 (2019) (through discussion of safety, friendship, and dreams at the intersection of legal and qualitative social science methodologies, “inviting conversation about ways to imagine the goal of civil rights advocacy in a more capacious manner”); Murakawa & Beckett, supra note 89, at 721 (“In the post-civil rights era, both racism and criminal justice operate in systemic, interactive, and serpentine ways; epistemologies and methods for investigating racial power should be equally systemic and capacious.”).
    1. Morrison, supra note 1, at xiii.
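The notes above flag several technical claims that are easier to see with toy examples. The sketches that follow are illustrative only: every dataset, rate, and parameter in them is invented for exposition, and none reproduces any cited author’s actual specification.

First, the base-rate point from the discussion of Mayson’s analysis: if the measured offense rate differs across groups in historical data, whether from differential offending or differential enforcement, even a perfectly calibrated predictor that never sees race reproduces the disparity in its outputs. A minimal sketch, assuming invented rearrest rates and an invented detention threshold:

```python
# A minimal sketch, with invented numbers, of why prediction from racially
# skewed data yields racially skewed predictions.

# Hypothetical rearrest rates recorded in historical data; the gap is assumed,
# for illustration, to reflect heavier policing of group A.
measured_rate = {"group_a": 0.30, "group_b": 0.15}

def predicted_risk(group: str) -> float:
    """A perfectly calibrated predictor simply returns the measured rate."""
    return measured_rate[group]

def recommend_detention(group: str, threshold: float = 0.20) -> bool:
    """A decision rule layered on the prediction (threshold is invented)."""
    return predicted_risk(group) >= threshold

for group in measured_rate:
    print(group, predicted_risk(group), recommend_detention(group))
# group_a 0.3 True
# group_b 0.15 False
```

No adjustment to the model removes the gap; only changing the data, the outcome definition, or the decision rule can.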
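Second, the construction point from the note distinguishing conventional tools from machine-learning pipelines: developers of conventional tools hand-select predictive factors, while a penalized estimator folds that selection into fitting. A sketch using scikit-learn on synthetic data; the factors, labels, and penalty strength are all assumptions:

```python
# Sketch: developer-selected factors vs. data-driven selection (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 6))            # six candidate predictive factors
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n) > 0).astype(int)

# Conventional construction: developers decide ex ante which factors "count."
chosen = [0, 1]                        # a normative choice, not a finding
tool = LogisticRegression().fit(X[:, chosen], y)

# Machine-learning variant: an L1 penalty shrinks weak factors toward zero,
# absorbing the explicit selection step. Human judgment persists in the
# candidate pool, the outcome label, and the penalty strength.
ml_tool = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print(ml_tool.coef_.round(2))          # near-zero weights act as deselection
```

Either way, a person chose the outcome to predict and the data to learn it from; the machine-learning variant relocates, rather than removes, those judgments.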
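Third, the proxy point from the notes on “colorblind” inputs: excluding race from a model does not produce race-neutral outputs when a remaining input correlates with race. The sketch below simulates an invented “prior arrests” feature inflated by differential enforcement; all rates are assumptions:

```python
# Sketch: a facially race-neutral score still diverges by group when its
# inputs encode differential enforcement (all numbers invented).
import random

random.seed(0)

def make_person(group: str) -> dict:
    # Assumption: group A is policed twice as heavily, inflating recorded priors.
    enforcement = 2.0 if group == "group_a" else 1.0
    priors = sum(random.random() < 0.1 * enforcement for _ in range(10))
    return {"priors": priors}

def risk_score(person: dict) -> float:
    # Race never enters the formula; only recorded prior arrests do.
    return min(1.0, 0.1 * person["priors"])

for group in ("group_a", "group_b"):
    people = [make_person(group) for _ in range(10_000)]
    print(group, round(sum(risk_score(p) for p in people) / len(people), 3))
# Average scores diverge by group: the proxy reconstructs the excluded input.
```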
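Fourth, a rough rendering of Yang and Dobbie’s “estimation step but not the prediction step” idea, using a linear probability model on synthetic data. This is one way to operationalize the idea, not the authors’ actual specification:

```python
# Sketch: include race when fitting, so other coefficients are not distorted
# by omitted-variable bias, then neutralize the race term when scoring.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
race = rng.integers(0, 2, n).astype(float)   # invented 0/1 group indicator
priors = rng.poisson(1 + race)               # proxy correlated with group
y = (0.4 * priors + 0.8 * race + rng.normal(size=n) > 1.2).astype(float)

# Estimation step: race enters the regression alongside the other factor.
X = np.column_stack([np.ones(n), priors, race])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Prediction step: the race column is set to its mean for everyone, so no
# two individuals receive different scores because of race.
X_neutral = X.copy()
X_neutral[:, 2] = race.mean()
scores = X_neutral @ beta
print(beta.round(2), scores[:5].round(2))
```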
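Finally, the annotation studies cited above describe racial labels imposed “top-down” through a fixed schema. The toy example below shows the structural point: the schema’s designer, not the person depicted, determines which categories can exist in the data. The label list echoes categories reported in those studies; the function and file names are invented:

```python
# Toy example: a fixed annotation schema means meaning is assigned, not asked.
RACE_LABELS = ("White", "Asian", "Black", "Indian")   # designer-chosen schema

def annotate(image_id: str, annotator_judgment: str) -> dict:
    """Record an annotator's racial classification of a face image."""
    if annotator_judgment not in RACE_LABELS:
        # Identities outside the schema simply cannot be recorded.
        raise ValueError(f"{annotator_judgment!r} is not a permitted label")
    return {"image": image_id, "race": annotator_judgment}

record = annotate("img_001.jpg", "Black")   # the subject is never consulted
print(record)
```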