Tracking COVID-19: U.S. Public Health Surveillance and Data


CRS Report: “Public health surveillance, or ongoing data collection, is an essential part of public health practice. Particularly during a pandemic, timely data are important to understanding the epidemiology of a disease in order to craft policy and guide response decision making. Many aspects of public health surveillance—such as which data are collected and how—are often governed by law and policy at the state and sub-federal level, though informed by programs and expertise at the Centers for Disease Control and Prevention (CDC). The Coronavirus Disease 2019 (COVID-19) pandemic has exposed limitations and challenges with U.S. public health surveillance, including those related to the timeliness, completeness, and accuracy of data.

This report provides an overview of U.S. public health surveillance, current COVID-19 surveillance and data collection, and selected policy issues that have been highlighted by the pandemic. Appendix B includes a compilation of selected COVID-19 data resources….(More)”.

AI’s Wide Open: A.I. Technology and Public Policy


Paper by Lauren Rhue and Anne L. Washington: “Artificial intelligence promises predictions and data analysis to support efficient solutions for emerging problems. Yet, quickly deploying AI comes with a set of risks. Premature artificial intelligence may pass internal tests but has little resilience under normal operating conditions. This Article will argue that regulation of early and emerging artificial intelligence systems must address the management choices that lead to releasing the system into production. First, we present examples of premature systems in the Boeing 737 Max, the 2020 coronavirus pandemic public health response, and autonomous vehicle technology. Second, the analysis highlights relevant management practices found in our examples of premature AI. Our analysis suggests that redundancy is critical to protecting the public interest. Third, we offer three points of context for premature AI to better assess the role of management practices.

AI in the public interest should: 1) include many sensors and signals; 2) emerge from a broad range of sources; and 3) be legible to the last person in the chain. Finally, this Article will close with a series of policy suggestions based on this analysis. As we develop regulation for artificial intelligence, we need to cast a wide net to identify how problems develop within the technologies and through organizational structures….(More)”.

Trace Labs


Trace Labs is a nonprofit organization whose mission is to accelerate the family reunification of missing persons while training members in the tradecraft of open source intelligence (OSINT)….We crowdsource open source intelligence through both the Trace Labs OSINT Search Party CTFs and Ongoing Operations with our global community. Our highly skilled intelligence analysts then triage the data collected to produce actionable intelligence reports on each missing persons subject. These intelligence reports give the law enforcement agencies we work with the ability to quickly see any new details required to reopen a cold case and/or take immediate action on a missing subject….(More)”.

The Potential Role Of Open Data In Mitigating The COVID-19 Pandemic: Challenges And Opportunities


Essay by Sunyoung Pyo, Luigi Reggi and Erika G. Martin: “…There is one tool for the COVID-19 response that was not as robust in past pandemics: open data. For about 15 years, a “quiet open data revolution” has led to the widespread availability of governmental data that are publicly accessible, available in multiple formats, free of charge, and with unlimited use and distribution rights. The underlying logic of open data’s value is that diverse users including researchers, practitioners, journalists, application developers, entrepreneurs, and other stakeholders will synthesize the data in novel ways to develop new insights and applications. Specific products have included providing the public with information about their providers and health care facilities, spotlighting issues such as high variation in the cost of medical procedures between facilities, and integrating food safety inspection reports into Yelp to help the public make informed decisions about where to dine. It is believed that these activities will in turn empower health care consumers and improve population health.

Here, we describe several use cases whereby open data have already been used globally in the COVID-19 response. We highlight major challenges to using these data and provide recommendations on how to foster a robust open data ecosystem to ensure that open data can be leveraged in both this pandemic and future public health emergencies…(More)”. See also the Repository of Open Data for Covid19 (OECD/TheGovLab).

Harnessing the wisdom of crowds can improve guideline compliance of antibiotic prescribers and support antimicrobial stewardship


Paper by Eva M. Krockow et al.: “Antibiotic overprescribing is a global challenge contributing to rising levels of antibiotic resistance and mortality. We test a novel approach to antibiotic stewardship. Capitalising on the concept of “wisdom of crowds”, which states that a group’s collective judgement often outperforms the average individual, we test whether pooling treatment durations recommended by different prescribers can improve antibiotic prescribing. Using international survey data from 787 expert antibiotic prescribers, we run computer simulations to test the performance of the wisdom of crowds by comparing three data aggregation rules across different clinical cases and group sizes. We also identify patterns of prescribing bias in recommendations about antibiotic treatment durations to quantify current levels of overprescribing. Our results suggest that pooling the treatment recommendations (using the median) could improve guideline compliance in groups of three or more prescribers. Implications for antibiotic stewardship and the general improvement of medical decision making are discussed. Clinical applicability is likely to be greatest in the context of hospital ward rounds and larger, multidisciplinary team meetings, where complex patient cases are discussed and existing guidelines provide limited guidance….(More)”.
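The pooling rule itself is simple enough to illustrate. The Python sketch below is a minimal simulation of the aggregation idea only: it draws recommendations from a skewed, invented "crowd" of prescribers and compares mean- versus median-pooling across group sizes. The guideline duration, the prescriber model, and the error metric are all assumptions made for this example; they are not the paper's survey data or its exact three aggregation rules.

```python
import random
import statistics

random.seed(42)

GUIDELINE_DAYS = 7.0  # assumed guideline-compliant duration for one clinical case


def recommend() -> float:
    """One simulated prescriber's recommended duration (days).

    Assumption for illustration: most prescribers cluster near the
    guideline, while a minority overprescribe heavily, giving the
    crowd a skewed, upward-biased distribution.
    """
    if random.random() < 0.7:
        return max(1.0, random.gauss(GUIDELINE_DAYS, 1.0))
    return max(1.0, random.gauss(GUIDELINE_DAYS + 5.0, 3.0))


def mean_abs_error(rule, group_size: int, trials: int = 5000) -> float:
    """Average distance from the guideline when `group_size`
    independent recommendations are pooled with `rule`."""
    total = 0.0
    for _ in range(trials):
        group = [recommend() for _ in range(group_size)]
        total += abs(rule(group) - GUIDELINE_DAYS)
    return total / trials


rules = {"mean": statistics.mean, "median": statistics.median}

for size in (1, 3, 5, 9):
    errors = {name: round(mean_abs_error(rule, size), 2)
              for name, rule in rules.items()}
    print(f"group size {size}: {errors}")
```

Under a skewed distribution like this one, the median discounts the minority of extreme overprescribers while the mean is dragged upward by them, which is consistent with the paper's finding that median-pooling in groups of three or more prescribers can bring recommendations closer to guidelines.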

The Reasonable Robot: Artificial Intelligence and the Law


Book by Ryan Abbott: “AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law….(More)”.

Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities


Paper by Lydia X. Z. Brown, Michelle Richardson, Ridhi Shetty, and Andrew Crawford: “Governments are increasingly turning to algorithms to determine whether and to what extent people should receive crucial benefits for programs like Medicaid, Medicare, unemployment, and Social Security Disability. Billed as a way to increase efficiency and root out fraud, these algorithm-driven decision-making tools are often implemented without much public debate and are incredibly difficult to understand once underway. Reports from people on the ground confirm that the tools are frequently reducing and denying benefits, often with unfair and inhumane results.

Benefits recipients are challenging these tools in court, arguing that flaws in the programs’ design or execution violate their due process rights, among other claims. These cases are some of the few active courtroom challenges to algorithm-driven decision-making, producing important precedent about people’s right to notice, explanation, and other procedural due process safeguards when algorithm-driven decisions are made about them. As the legal and policy world continues to recognize the outsized impact of algorithm-driven decision-making in various aspects of our lives, public benefits cases provide important insights into how such tools can operate; the risks of errors in design and execution; and the devastating human toll when tools are adopted without effective notice, input, oversight, and accountability. 

This report analyzes lawsuits that have been filed within the past 10 years arising from the use of algorithm-driven systems to assess people’s eligibility for, or the distribution of, public benefits. It identifies key insights from the various cases into what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court. It draws on direct interviews with attorneys who have litigated these cases and plaintiffs who sought to vindicate their rights in court – in some instances suing not only for themselves, but on behalf of similarly situated people. The attorneys work in legal aid offices, civil rights litigation shops, law school clinics, and disability protection and advocacy offices. The cases cover a range of benefits issues and have netted mixed results.

People with disabilities experience disproportionate and particular harm because of unjust algorithm-driven decision-making, and we have attempted to center disabled people’s stories and cases in this paper. As disabled people fight for rights inside and outside the courtroom on a wide range of issues, we focus on litigation and highlight the major legal theories for challenging improper algorithm-driven benefit denials in the U.S. 

The good news is that in some cases, plaintiffs are successfully challenging improper adverse benefits decisions with Constitutional, statutory, and administrative claims. But like other forms of civil rights and impact litigation, the bad news is that relief can be temporary and is almost always delayed. Litigation must therefore work in tandem with the development of new processes driven by people who require access to public assistance and whose needs are centered in these processes. We hope this contribution informs not only the development of effective litigation, but a broader public conversation about the thoughtful design, use, and oversight of algorithm-driven decision-making systems….(More)”.

Learning like a State: Statecraft in the Digital Age


Essay by Marion Fourcade and Jeff Gordon: “…Recent books have argued that we live in an age of “informational” or “surveillance” capitalism, a new form of market governance marked by the accumulation and assetization of information, and by the dominance of platforms as sites of value extraction. Over the last decade-plus, both actual and idealized governance have been transformed by a combination of neoliberal ideology, new technologies for tracking and ranking populations, and the normative model of the platform behemoths, which carry the banner of technological modernity. In concluding a review of Julie Cohen’s and Shoshana Zuboff’s books, Amy Kapczynski asks how we might build public power sufficient to govern the new private power. Answering that question, we believe, requires an honest reckoning with how public power has been warped by the same ideological, technological, and legal forces that brought about informational capitalism.

In our contribution to the inaugural JLPE issue, we argue that governments and their agents are starting to conceive of their role differently than in previous techno-social moments. Our jumping-off point is the observation that what may first appear as mere shifts in the state’s use of technology—from the “open data” movement to the NSA’s massive surveillance operation—actually herald a deeper transformation in the nature of statecraft itself. By “statecraft,” we mean the state’s mode of learning about society and intervening in it. We contrast what we call the “dataist” state with its high modernist predecessor, as portrayed memorably by the anthropologist James C. Scott, and with neoliberal governmentality, described by, among others, Michel Foucault and Wendy Brown.

The high modernist state expanded the scope of sovereignty by imposing borders, taking censuses, and coercing those on the outskirts of society into legibility through broad categorical lenses. It deployed its power to support large public projects, such as the reorganization of urban infrastructure. As the ideological zeitgeist evolved toward neoliberalism in the 1970s, however, the priority shifted to shoring up markets, and the imperative of legibility trickled down to the individual level. The poor and working class were left to fend for their rights and benefits in the name of market fitness and responsibility, while large corporations and the wealthy benefited handsomely.

As a political rationality, dataism builds on both of these threads by pursuing a project of total measurement in a neoliberal fashion—that is, by allocating rights and benefits to citizens and organizations according to (questionable) estimates of moral desert, and by re-assembling a legible society from the bottom up. Weakened by decades of anti-government ideology and concomitantly eroded capacity, privatization, and symbolic degradation, Western states have determined to manage social problems as they bubble up into crises rather than affirmatively seeking to intervene in their causes. The dataist state sets its sights on an expanse of emergent opportunities and threats. Its focus is not on control or competition, but on “readiness.” Its object is neither the population nor a putative homo economicus, but (as Gilles Deleuze put it) “dividuals,” that is, discrete slices of people and things (e.g. hospital visits, police stops, commuting trips). Under dataism, a well-governed society is one where events (not persons) are aligned to the state’s models and predictions, no matter how disorderly in high modernist terms or how irrational in neoliberal terms….(More)”.

Taming Complexity


Martin Reeves, Simon Levin, Thomas Fink, and Ania Levina at Harvard Business Review: “….“Complexity” is one of the most frequently used terms in business but also one of the most ambiguous. Even in the sciences it has numerous definitions. For our purposes, we’ll define it as a large number of different elements (such as specific technologies, raw materials, products, people, and organizational units) that have many different connections to one another. Both qualities can be a source of advantage or disadvantage, depending on how they’re managed.

Let’s look at their strengths. To begin with, having many different elements increases the resilience of a system. A company that relies on just a few technologies, products, and processes—or that is staffed with people who have very similar backgrounds and perspectives—doesn’t have many ways to react to unforeseen opportunities and threats. What’s more, the redundancy and duplication that also characterize complex systems typically give them more buffering capacity and fallback options.

Ecosystems with a diversity of elements benefit from adaptability. In biology, genetic diversity is the grist for natural selection, nature’s learning mechanism. In business, as environments shift, sustained performance requires new offerings and capabilities—which can be created by recombining existing elements in fresh ways. For example, the fashion retailer Zara introduces styles (combinations of components) in excess of immediate needs, allowing it to identify the most popular products, create a tailored selection from them, and adapt to fast-changing fashion as a result.

Another advantage that complexity can confer on natural ecosystems is better coordination. That’s because the elements are often highly interconnected. Flocks of birds or herds of animals, for instance, share behavioral protocols that connect the members to one another and enable them to move and act as a group rather than as an uncoordinated collection of individuals. Thus they realize benefits such as collective security and more-effective foraging.

Finally, complexity can confer inimitability. Whereas individual elements may be easily copied, the interrelationships among multiple elements are hard to replicate. A case in point is Apple’s attempt in 2012 to compete with Google Maps. Apple underestimated the complexity of Google’s offering, leading to embarrassing glitches in the initial versions of its map app, which consequently struggled to gain acceptance with consumers. The same is true of a company’s strategy: If its complexity makes it hard to understand, rivals will struggle to imitate it, and the company will benefit….(More)”.

Surveillance in South Africa: From Skin Branding to Digital Colonialism


Paper by Michael Kwet: “South Africa’s long legacy of racism and colonial exploitation continues to echo throughout post-apartheid society. For centuries, European conquerors marshaled surveillance as a means to control the black population. This began with the requirements for passes to track and control the movements, settlements, and labor of Africans. Over time, surveillance technologies evolved alongside complex shifts in power, culture, and the political economy.

This Chapter explores the evolution of surveillance regimes in South Africa. The first surveillance system in South Africa used paper passes to police slave movements and enforce labor contracts. To make the system more robust, various white authorities marked the skin of workers and livestock with symbols registered in paper databases. At the beginning of the twentieth century, fingerprinting was introduced in some areas to simplify and improve the passes. Under apartheid, the National Party aimed to streamline a national, all-seeing surveillance system. They imported computers to impose a regime of fixed race classification and keep detailed records about the African population. The legal apparatus of race-based surveillance was finally abolished during the transition to democracy. However, today a regime of Big Data, artificial intelligence, and centralized cloud computing has ushered in a new era of mass surveillance in South Africa.

South Africa’s surveillance regimes were always devised in collaboration with foreign colonizers, imperialists, intellectuals, and profit-seeking capitalists. In each era, the United States increased its participation. During the period of settler conquest, the US had a modest presence in Southern Africa. With the onset of the minerals revolution, US power expanded, and American capitalists and engineers with business interests in the mines pushed for an improved pass system to police African workers. Under apartheid, US corporations supplied the computer technology essential to apartheid governance and business enterprise. Finally, during the latter years of post-apartheid, Silicon Valley corporations, together with US surveillance agencies, began imposing surveillance capitalism on South African society. A new form of domination, digital colonialism, has emerged, vesting the United States with unprecedented control over South African affairs. To counter the force of digital colonialism, a new movement may emerge to push to redesign the digital ecosystem as a socialist commons based on open technology, socialist legal solutions, bottom-up democracy, and Internet decentralization….(More)”.