The necessity of judgment


Essay by Jeff Malpas in AI and Society: “In 2016, the Australian Government launched an automated debt recovery system through Centrelink—its Department of Human Services. The system, which came to be known as ‘Robodebt’, matched the tax records of welfare recipients with their declared incomes as held by the Department and then sent out debt notices to recipients demanding payment. The entire system was computerized, and many of those receiving debt notices complained that the demands for repayment they received were false or inaccurate as well as unreasonable—all the more so given that those being targeted were, almost by definition, those in already vulnerable circumstances. The system provoked enormous public outrage, was subjected to successful legal challenge, and after being declared unlawful, the Government paid back all of the payments that had been received, and eventually, after much prompting, issued an apology.
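The excerpt does not spell out the matching logic, but the flaw widely reported in the Robodebt case was income averaging: an annual tax-office income figure divided evenly across fortnights and compared against the income a recipient actually declared each fortnight. The minimal sketch below is illustrative only; the figures and function names are assumptions, not drawn from the essay.

```python
from typing import List

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_taxable_income: float) -> float:
    """The flawed simplification: smear a yearly income figure evenly
    across all 26 fortnights, as if earnings were constant."""
    return annual_taxable_income / FORTNIGHTS_PER_YEAR

def flag_debt(declared_fortnightly: List[float],
              annual_taxable_income: float) -> bool:
    """Flag a 'debt' whenever the averaged figure exceeds what the
    recipient actually declared in any fortnight."""
    avg = averaged_fortnightly_income(annual_taxable_income)
    return any(avg > declared for declared in declared_fortnightly)

# A casual worker who earned $1,000 per fortnight for half the year and
# nothing (while on benefits) for the other half is wrongly flagged:
# the $500 average overstates income in the fortnights they earned nothing.
declared = [1000.0] * 13 + [0.0] * 13   # irregular earnings
wrongly_flagged = flag_debt(declared, annual_taxable_income=13000.0)
```

The sketch shows why irregular earners, precisely the people most likely to rely on welfare, were the ones most likely to receive false debt notices: a steady earner with the same annual income would not be flagged.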

The Robodebt affair is characteristic of a more general tendency to shift to systems of automated decision-making across both the public and the private sector and to do so even when those systems are flawed and known to be so. On the face of it, this shift is driven by the belief that automated systems have the capacity to deliver greater efficiencies and economies—in the Robodebt case, to reduce costs by recouping and reducing social welfare payments. In fact, the shift is characteristic of a particular alliance between digital technology and a certain form of contemporary bureaucratised capitalism. In the case of the automated systems we see in governmental and corporate contexts—and in many large organisations—automation is a result both of the desire on the part of software, IT, and consultancy firms to increase their customer base as well as expand the scope of their products and sales, and of the desire on the part of governments and organisations to increase control at the same time as they reduce their reliance on human judgment and capacity. The fact is, such systems seldom deliver the efficiencies or economies they are assumed to bring, and they also give rise to significant additional costs in terms of their broader impact and consequences, but the imperatives of sales and seemingly increased control (as well as an irrational belief in the benefits of technological solutions) override any other consideration. The turn towards automated systems like Robodebt is, as is now widely recognised, a common feature of contemporary society. To look to a completely different domain, new military technologies are being developed to provide drone weapon systems with the capacity to identify potential threats and defend themselves against them. The development is spawning a whole new field of military ethics based entirely around the putative ‘right to self-defence’ of automated weapon systems.

In both cases, the drone weapon system and Robodebt, we have instances of the development of automated systems that seem to allow for a form of ‘judgment’ that appears to operate independently of human judgment—hence the emphasis on these systems as autonomous. One might argue—and typically it is so argued—that any flaws that such systems currently present can be overcome either through the provision of more accurate information or through the development of more complex forms of artificial intelligence…(More)”.

How to Use the Bureaucracy to Govern Well


Good Governance Paper by Rebecca Ingber: “…Below I offer four concrete recommendations for deploying Intentional Bureaucratic Architecture within the executive branch. But first, I will establish three key background considerations that provide context for these recommendations.  The focus of this piece is primarily executive branch legal decisionmaking, but many of these recommendations apply equally to other areas of policymaking.

First, make room for the views and expertise of career officials. As a political appointee entering a new office, ask those career officials: What are the big issues on the horizon on which we will need to take policy or legal views?  What are the problems with the positions I am inheriting?  What is and is not working?  Where are the points of conflict with our allies abroad or with Congress?  Career officials are the institutional memory of the government and often the only real experts in the specific work of their agency.  They will know about the skeletons in the closet and where the bodies are buried and all the other metaphors for knowing things that other people do not. Turn to them early. Value them. They will have views informed by experience rather than partisan politics. But all bureaucratic actors, including civil servants, also bring to the table their own biases, and they may overvalue the priorities of their own office over others. Valuing their role does not mean handing the reins over to the civil service—good governance requires exercising judgment and balancing the benefits of experience and expertise with fresh eyes and leadership. A savvy bureaucratic actor might know how to “get around” the bureaucratic roadblocks, but the wise bureaucratic player also knows how much the career bureaucracy has to offer and exercises judgment based in clear values about when to defer and when to overrule.

Second, get ahead of decisions: choose vehicles for action carefully and early. The reality of government life is that much of the big decisionmaking happens in the face of a fire drill. As I’ve written elsewhere, the trigger or “interpretation catalyst” that compels the government to consider and assert a position—in other words, the cause of that fire drill—shapes the whole process of decisionmaking and the resulting decision. When an issue arises in defensive litigation, a litigation-driven process controls.  That means that career line attorneys shape the government’s legal posture, drawing from longstanding positions and often using language from old briefs. DOJ calls the shots in a context biased toward zealous defense of past action. That looks very different from a decisionmaking process that results from the president issuing an executive order or presidential memorandum, a White House official deciding to make a speech, the State Department filing a report with a treaty body, or DOD considering whether to engage in an operation involving force. Each of these interpretation catalysts triggers a different process for decisionmaking that will shape the resulting outcome.  But because of the stickiness of government decisions—and the urgent need to move on to the next fire drill—these positions become entrenched once taken. That means that the process and outcome are driven by the hazards of external events, unless officials find ways to take the reins and get ahead of them.

And finally, an incoming administration must put real effort into Intentional Bureaucratic Architecture by deliberately and deliberatively creating and managing the bureaucratic processes in which decisionmaking happens. Novel issues arise and fire drills will inevitably happen in even the best-prepared administrations.  The bureaucratic architecture will dictate how decisionmaking happens from the novel crises to the bread and butter of daily agency work. There are countless varieties of decisionmaking models inside the executive branch, which I have classified in other work. These include a unitary decider model, of which DOJ’s Office of Legal Counsel (OLC) is a prime example, an agency decider model, and a group lawyering model. All of these models will continue to co-exist. Most modern national security decisionmaking engages the interests and operations of multiple agencies. Therefore, in a functional government, most of these decisions will involve group lawyering in some format—from agency lawyers picking up the phone to coordinate with counterparts in other agencies to ad hoc meetings to formal regularized working groups with clear hierarchies all the way up to the cabinet. Often these processes evolve organically, as issues arise. Some are created from the top down by presidential administrations that want to impose order on the process. But all of these group lawyering dynamics often lack a well-defined process for determining the outcome in cases of conflict or deciding how to establish a clear output. This requires rule setting and organizing the process from the top down…(More)”.

Tracking COVID-19: U.S. Public Health Surveillance and Data


CRS Report: “Public health surveillance, or ongoing data collection, is an essential part of public health practice. Particularly during a pandemic, timely data are important to understanding the epidemiology of a disease in order to craft policy and guide response decision making. Many aspects of public health surveillance—such as which data are collected and how—are often governed by law and policy at the state and sub-federal level, though informed by programs and expertise at the Centers for Disease Control and Prevention (CDC). The Coronavirus Disease 2019 (COVID-19) pandemic has exposed limitations and challenges with U.S. public health surveillance, including those related to the timeliness, completeness, and accuracy of data.

This report provides an overview of U.S. public health surveillance, current COVID-19 surveillance and data collection, and selected policy issues that have been highlighted by the pandemic. Appendix B includes a compilation of selected COVID-19 data resources…(More)”.

AI’s Wide Open: A.I. Technology and Public Policy


Paper by Lauren Rhue and Anne L. Washington: “Artificial intelligence promises predictions and data analysis to support efficient solutions for emerging problems. Yet, quickly deploying AI comes with a set of risks. Premature artificial intelligence may pass internal tests but has little resilience under normal operating conditions. This Article will argue that regulation of early and emerging artificial intelligence systems must address the management choices that lead to releasing the system into production. First, we present examples of premature systems in the Boeing 737 Max, the 2020 coronavirus pandemic public health response, and autonomous vehicle technology. Second, the analysis highlights relevant management practices found in our examples of premature AI. Our analysis suggests that redundancy is critical to protecting the public interest. Third, we offer three points of context for premature AI to better assess the role of management practices.

AI in the public interest should: 1) include many sensors and signals; 2) emerge from a broad range of sources; and 3) be legible to the last person in the chain. Finally, this Article will close with a series of policy suggestions based on this analysis. As we develop regulation for artificial intelligence, we need to cast a wide net to identify how problems develop within the technologies and through organizational structures….(More)”.

Trace Labs


“Trace Labs is a nonprofit organization whose mission is to accelerate the family reunification of missing persons while training members in the tradecraft of open source intelligence (OSINT)….We crowdsource open source intelligence through both the Trace Labs OSINT Search Party CTFs and Ongoing Operations with our global community. Our highly skilled intelligence analysts then triage the data collected to produce actionable intelligence reports on each missing persons subject. These intelligence reports give the law enforcement agencies that we work with the ability to quickly see any new details required to reopen a cold case and/or take immediate action on a missing subject…(More)”.

The Potential Role Of Open Data In Mitigating The COVID-19 Pandemic: Challenges And Opportunities


Essay by Sunyoung Pyo, Luigi Reggi and Erika G. Martin: “…There is one tool for the COVID-19 response that was not as robust in past pandemics: open data. For about 15 years, a “quiet open data revolution” has led to the widespread availability of governmental data that are publicly accessible, available in multiple formats, free of charge, and with unlimited use and distribution rights. The underlying logic of open data’s value is that diverse users including researchers, practitioners, journalists, application developers, entrepreneurs, and other stakeholders will synthesize the data in novel ways to develop new insights and applications. Specific products have included providing the public with information about their providers and health care facilities, spotlighting issues such as high variation in the cost of medical procedures between facilities, and integrating food safety inspection reports into Yelp to help the public make informed decisions about where to dine. It is believed that these activities will in turn empower health care consumers and improve population health.

Here, we describe several use cases whereby open data have already been used globally in the COVID-19 response. We highlight major challenges to using these data and provide recommendations on how to foster a robust open data ecosystem to ensure that open data can be leveraged in both this pandemic and future public health emergencies…(More)”. See also the Repository of Open Data for Covid19 (OECD/TheGovLab).

Harnessing the wisdom of crowds can improve guideline compliance of antibiotic prescribers and support antimicrobial stewardship


Paper by Eva M. Krockow et al: “Antibiotic overprescribing is a global challenge contributing to rising levels of antibiotic resistance and mortality. We test a novel approach to antibiotic stewardship. Capitalising on the concept of “wisdom of crowds”, which states that a group’s collective judgement often outperforms the average individual, we test whether pooling treatment durations recommended by different prescribers can improve antibiotic prescribing. Using international survey data from 787 expert antibiotic prescribers, we run computer simulations to test the performance of the wisdom of crowds by comparing three data aggregation rules across different clinical cases and group sizes. We also identify patterns of prescribing bias in recommendations about antibiotic treatment durations to quantify current levels of overprescribing. Our results suggest that pooling the treatment recommendations (using the median) could improve guideline compliance in groups of three or more prescribers. Implications for antibiotic stewardship and the general improvement of medical decision making are discussed. Clinical applicability is likely to be greatest in the context of hospital ward rounds and larger, multidisciplinary team meetings, where complex patient cases are discussed and existing guidelines provide limited guidance…(More)”.
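The median-pooling rule the paper tests is easy to simulate. The sketch below is not the authors’ simulation (theirs draws on survey data from 787 prescribers); it assumes normally distributed recommendations with a systematic overprescribing bias, and shows how taking the median across a small group shrinks the average error. All parameter values are illustrative assumptions.

```python
import random
import statistics

def simulate_pooling(true_duration, group_size,
                     bias=2.0, noise=3.0, trials=2000, seed=42):
    """Compare the mean absolute error of a lone prescriber's
    recommendation against the median of a pooled group (one
    'wisdom of crowds' aggregation rule)."""
    rng = random.Random(seed)
    indiv_err, pooled_err = [], []
    for _ in range(trials):
        # Each prescriber recommends the true duration plus a systematic
        # overprescribing bias and idiosyncratic noise (assumed Gaussian).
        recs = [true_duration + bias + rng.gauss(0, noise)
                for _ in range(group_size)]
        indiv_err.append(abs(recs[0] - true_duration))               # lone prescriber
        pooled_err.append(abs(statistics.median(recs) - true_duration))  # pooled group
    return statistics.mean(indiv_err), statistics.mean(pooled_err)

indiv, pooled = simulate_pooling(true_duration=7, group_size=5)
```

With these assumed parameters, the median of a five-person group typically lands closer to the true duration than a lone prescriber. Note that pooling averages out idiosyncratic noise but cannot remove a bias shared by every prescriber, which is consistent with the paper’s separate interest in quantifying systematic overprescribing.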

The Reasonable Robot: Artificial Intelligence and the Law


Book by Ryan Abbott: “AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law…(More)”.

Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities


Paper by Lydia X. Z. Brown, Michelle Richardson, Ridhi Shetty, and Andrew Crawford: “Governments are increasingly turning to algorithms to determine whether and to what extent people should receive crucial benefits for programs like Medicaid, Medicare, unemployment, and Social Security Disability. Billed as a way to increase efficiency and root out fraud, these algorithm-driven decision-making tools are often implemented without much public debate and are incredibly difficult to understand once underway. Reports from people on the ground confirm that the tools are frequently reducing and denying benefits, often with unfair and inhumane results.

Benefits recipients are challenging these tools in court, arguing that flaws in the programs’ design or execution violate their due process rights, among other claims. These cases are some of the few active courtroom challenges to algorithm-driven decision-making, producing important precedent about people’s right to notice, explanation, and other procedural due process safeguards when algorithm-driven decisions are made about them. As the legal and policy world continues to recognize the outsized impact of algorithm-driven decision-making in various aspects of our lives, public benefits cases provide important insights into how such tools can operate; the risks of errors in design and execution; and the devastating human toll when tools are adopted without effective notice, input, oversight, and accountability. 

This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court. It draws on direct interviews with attorneys who have litigated these cases and plaintiffs who sought to vindicate their rights in court – in some instances suing not only for themselves, but on behalf of similarly situated people. The attorneys work in legal aid offices, civil rights litigation shops, law school clinics, and disability protection and advocacy offices. The cases cover a range of benefits issues and have netted mixed results.

People with disabilities experience disproportionate and particular harm because of unjust algorithm-driven decision-making, and we have attempted to center disabled people’s stories and cases in this paper. As disabled people fight for rights inside and outside the courtroom on a wide range of issues, we focus on litigation and highlight the major legal theories for challenging improper algorithm-driven benefit denials in the U.S. 

The good news is that in some cases, plaintiffs are successfully challenging improper adverse benefits decisions with Constitutional, statutory, and administrative claims. But like other forms of civil rights and impact litigation, the bad news is that relief can be temporary and is almost always delayed. Litigation must therefore work in tandem with the development of new processes driven by people who require access to public assistance and whose needs are centered in these processes. We hope this contribution informs not only the development of effective litigation, but a broader public conversation about the thoughtful design, use, and oversight of algorithm-driven decision-making systems….(More)”.

Learning like a State: Statecraft in the Digital Age


Essay by Marion Fourcade and Jeff Gordon: “…Recent books have argued that we live in an age of “informational” or “surveillance” capitalism, a new form of market governance marked by the accumulation and assetization of information, and by the dominance of platforms as sites of value extraction. Over the last decade-plus, both actual and idealized governance have been transformed by a combination of neoliberal ideology, new technologies for tracking and ranking populations, and the normative model of the platform behemoths, which carry the banner of technological modernity. In concluding a review of Julie Cohen’s and Shoshana Zuboff’s books, Amy Kapczynski asks how we might build public power sufficient to govern the new private power. Answering that question, we believe, requires an honest reckoning with how public power has been warped by the same ideological, technological, and legal forces that brought about informational capitalism.

In our contribution to the inaugural JLPE issue, we argue that governments and their agents are starting to conceive of their role differently than in previous techno-social moments. Our jumping-off point is the observation that what may first appear as mere shifts in the state’s use of technology—from the “open data” movement to the NSA’s massive surveillance operation—actually herald a deeper transformation in the nature of statecraft itself. By “statecraft,” we mean the state’s mode of learning about society and intervening in it. We contrast what we call the “dataist” state with its high modernist predecessor, as portrayed memorably by the anthropologist James C. Scott, and with neoliberal governmentality, described by, among others, Michel Foucault and Wendy Brown.

The high modernist state expanded the scope of sovereignty by imposing borders, taking censuses, and coercing those on the outskirts of society into legibility through broad categorical lenses. It deployed its power to support large public projects, such as the reorganization of urban infrastructure. As the ideological zeitgeist evolved toward neoliberalism in the 1970s, however, the priority shifted to shoring up markets, and the imperative of legibility trickled down to the individual level. The poor and working class were left to fend for their rights and benefits in the name of market fitness and responsibility, while large corporations and the wealthy benefited handsomely.

As a political rationality, dataism builds on both of these threads by pursuing a project of total measurement in a neoliberal fashion—that is, by allocating rights and benefits to citizens and organizations according to (questionable) estimates of moral desert, and by re-assembling a legible society from the bottom up. Weakened by decades of anti-government ideology and concomitantly eroded capacity, privatization, and symbolic degradation, Western states have determined to manage social problems as they bubble up into crises rather than affirmatively seeking to intervene in their causes. The dataist state sets its sights on an expanse of emergent opportunities and threats. Its focus is not on control or competition, but on “readiness.” Its object is neither the population nor a putative homo economicus, but (as Gilles Deleuze put it) “dividuals,” that is, discrete slices of people and things (e.g. hospital visits, police stops, commuting trips). Under dataism, a well-governed society is one where events (not persons) are aligned to the state’s models and predictions, no matter how disorderly in high modernist terms or how irrational in neoliberal terms….(More)”.