Paper by Lin William Cong and Simon Mayer: “We model platform competition with endogenous data generation, collection, and sharing, thereby providing a unifying framework to evaluate data-related regulation and antitrust policies. Data are jointly produced from users’ economic activities and platforms’ investments in data infrastructure. Data improve service quality, creating a feedback loop that tends to concentrate market power. Dispersed users do not internalize the impact of their data contribution on (i) service quality for other users, (ii) market concentration, and (iii) platforms’ incentives to invest in data infrastructure, causing inefficient over- or under-collection of data. Data sharing proposals, user privacy protections, platform commitments, and markets for data cannot fully address these inefficiencies. We propose and analyze a user union, which represents and coordinates users, as an effective solution for antitrust and consumer protection in the digital era…(More)”.
Charting an Equity-Centered Public Health Data System
Introduction to Special Issue by Alonzo L. Plough: “…The articles in this special issue were written with that vision in mind; several of them even informed the commission’s deliberations. Each article addresses an issue essential to the challenge of building an equity-focused public health data system:
- Why Equity Matters in Public Health Data. Authors Anita Chandra, Laurie T. Martin, Joie D. Acosta, Christopher Nelson, Douglas Yeung, Nabeel Qureshi, and Tara Blagg explore where and how equity has been lacking in public health data and the implications of considering equity to the tech and data sectors.
- What is Public Health Data? As authors Joie D. Acosta, Anita Chandra, Douglas Yeung, Christopher Nelson, Nabeel Qureshi, Tara Blagg, and Laurie T. Martin explain, good public health data are more than just health data. We need to reimagine the types of data we collect and from where, as well as data precision, granularity, timeliness, and more.
- Public Health Data and Special Populations. People of color, women, people with disabilities, and people who are lesbian, gay, bisexual, transgender, or queer are among the populations that have been inconsistently represented in public health data over time. This article by authors Tina J. Kauh and Maryam Khojasteh reviews findings for each population, as well as commonalities across populations.
- Public Health Data Interoperability and Connectedness. What are challenges to connecting public health data swiftly yet accurately? What gaps need to be filled? How can the data and tech sector help address these issues? These are some of the questions explored in this article by authors Laurie T. Martin, Christopher Nelson, Douglas Yeung, Joie D. Acosta, Nabeel Qureshi, Tara Blagg, and Anita Chandra.
- Integrating Tech and Data Expertise into the Public Health Workforce. This article by authors Laurie T. Martin, Anita Chandra, Christopher Nelson, Douglas Yeung, Joie D. Acosta, Nabeel Qureshi, and Tara Blagg envisions what a tech-savvy public health workforce will look like and how it can be achieved through new workforce models, opportunities to expand capacity, and training….(More)”.
Wicked Problems Might Inspire Greater Data Sharing
Paper by Susan Ariel Aaronson: “In 2021, the United Nations Development Program issued a plea in its Digital Economy Report: “Global data-sharing can help address major global development challenges such as poverty, health, hunger and climate change. …Without global cooperation on data and information, research to develop the vaccine and actions to tackle the impact of the pandemic would have been a much more difficult task. Thus, in the same way as some data can be public goods, there is a case for some data to be considered as global public goods, which need to be addressed and provided through global governance.” (UNDP: 2021, 178). Global public goods are goods and services with benefits and costs that potentially extend to all countries, people, and generations. Global data sharing can also help solve what scholars call wicked problems: problems so complex that no one knows how to solve them without creating further problems, and that therefore require innovative, cost-effective, and global mitigating strategies. Hence, policymakers must find ways to encourage greater data sharing among entities that hold large troves of various types of data, while protecting that data from theft, manipulation, etc. Many factors impede global data sharing for public good purposes; this analysis focuses on two.
First, policymakers generally do not think about data as a global public good; they view data as a commercial asset that they should nurture and control. While they may understand that data can serve the public interest, they are more concerned with using data to serve their country’s economic interest. Second, many leaders of civil society and business see the data they have collected as proprietary. So far, many leaders of private entities with troves of data are not convinced that their organizations will benefit from such sharing. At the same time, companies do voluntarily share some data for social good purposes.
However, data cannot meet its public good purpose if it is not shared among societal entities. Moreover, if policymakers treat data as a sovereign asset, they are unlikely to encourage data sharing across borders oriented towards addressing shared problems. Consequently, society will be less able to use data both as a commercial asset and as a resource to enhance human welfare. As the Bennett Institute and ODI have argued, “value comes from data being brought together, and that requires organizations to let others use the data they hold.” But that also means the entities that collected the data may not accrue all of the benefits from that data (Bennett Institute and ODI: 2020a: 4). In short, private entities are not sufficiently incentivized to share data for the global public good…(More)”.
Addressing ethical gaps in ‘Technology for Good’: Foregrounding care and capabilities
Paper by Alison B. Powell et al: “This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts. Its main contribution is to model an integrative approach using multiple ethical frameworks to analyse and understand the everyday nature of ethical practice, including in professional practice among ‘technology for good’ start-ups. The paper identifies inherent paradoxes in the ‘technology for good’ sector as well as ethical gaps related to (1) the sometimes-misplaced assignment of virtuousness to an individual; (2) difficulties in understanding social constraints on ethical action; and (3) the often unaccounted-for mismatch between ethical intentions and outcomes in everyday practice, including in professional work associated with an ‘ethical turn’ in technology. These gaps persist even in contexts where ethics are foregrounded as matters of concern. To address the gaps, the paper suggests systemic, rather than individualized, considerations of care and capability applied to innovation settings, in combination with considerations of virtue and consequence. This paper advocates for addressing these challenges holistically in order to generate renewed capacity for change at a systemic level…(More)”.
Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference”
Paper by Eleanor Drage & Kerry Mackereth: “In this paper, we analyze two key claims offered by recruitment AI companies in relation to the development and deployment of AI-powered HR tools: (1) recruitment AI can objectively assess candidates by removing gender and race from their systems, and (2) this removal of gender and race will make recruitment fairer, help customers attain their DEI goals, and lay the foundations for a truly meritocratic culture to thrive within an organization. We argue that these claims are misleading for four reasons: First, attempts to “strip” gender and race from AI systems often misunderstand what gender and race are, casting them as isolatable attributes rather than broader systems of power. Second, the attempted outsourcing of “diversity work” to AI-powered hiring tools may unintentionally entrench cultures of inequality and discrimination by failing to address the systemic problems within organizations. Third, AI hiring tools’ supposedly neutral assessment of candidates’ traits belies the power relationship between the observer and the observed. Specifically, the racialized history of character analysis and its associated processes of classification and categorization play into longer histories of taxonomical sorting and reflect the current demands and desires of the job market, even when not explicitly conducted along the lines of gender and race. Fourth, recruitment AI tools help produce the “ideal candidate” that they supposedly identify by constructing associations between words and people’s bodies. From these four conclusions, we offer three key recommendations to AI HR firms, their customers, and policy makers going forward…(More)”.
Nudging the Nudger: A Field Experiment on the Effect of Performance Feedback to Service Agents on Increasing Organ Donor Registrations
Paper by Julian House, Nicola Lacetera, Mario Macis & Nina Mazar: “We conducted a randomized controlled trial involving nearly 700 customer-service representatives (CSRs) in a Canadian government service agency to study whether providing CSRs with performance feedback with or without peer comparison affected their subsequent organ donor registration rates. Despite having no tie to remuneration or promotion, the provision of individual performance feedback three times over one year resulted in a 25% increase in daily signups, compared to otherwise similar encouragement and reminders. Adding benchmark information that compared CSRs’ performance to average and top peer performance did not further enhance this effect. Registrations increased more among CSRs whose performance was already above average, and there was no negative effect on lower-performing CSRs. A post-intervention survey showed that CSRs found the information included in the treatments helpful and encouraging. However, performance feedback without benchmark information increased perceived pressure to perform…(More)”.
Global healthcare fairness: We should be sharing more, not less, data
Paper by Kenneth P. Seastedt et al: “The availability of large, deidentified health datasets has enabled significant innovation in using machine learning (ML) to better understand patients and their diseases. However, questions remain regarding the true privacy of this data, patient control over their data, and how we regulate data sharing in a way that does not encumber progress or further potentiate biases for underrepresented populations. After reviewing the literature on potential reidentifications of patients in publicly available datasets, we argue that the cost—measured in terms of access to future medical innovations and clinical software—of slowing ML progress is too great to limit sharing data through large publicly available databases for concerns of imperfect data anonymization. This cost is especially great for developing countries where the barriers preventing inclusion in such databases will continue to rise, further excluding these populations and increasing existing biases that favor high-income countries. Preventing artificial intelligence’s progress towards precision medicine and sliding back to clinical practice dogma may pose a larger threat than concerns of potential patient reidentification within publicly available datasets. While the risk to patient privacy should be minimized, we believe this risk will never be zero, and society has to determine an acceptable risk threshold below which data sharing can occur—for the benefit of a global medical knowledge system….(More)”.
Legal Dynamism
Paper by Sandy Pentland and Robert Mahari: “Shortly after the start of the French Revolution, Thomas Jefferson wrote a now famous letter to James Madison. He argued that no society could make a perpetual constitution, or indeed a perpetual law, that binds future generations. Every law ought to expire after nineteen years. Jefferson’s argument rested on the view that it is fundamentally unjust for people in the present to create laws for those in the future, but his argument is also appealing from a purely pragmatic perspective. As the state of the world changes, laws become outdated, and forcing future generations to abide by outdated laws is unjust and inefficient.
Today, the law appears to be on the cusp of its own revolution. It has resisted technological transformation longer than most other disciplines. Increasingly, however, computational approaches are finding their way into the creation and implementation of law, and the field of computational law is rapidly expanding. One of the most exciting promises of computational law is the idea of legal dynamism: the concept that a law, by means of computational tools, can be expressed not as a static rule statement but rather as a dynamic object that includes system performance goals, metrics for success, and the ability to adapt the law in response to its performance…
The image of laws as algorithms goes back to at least the 1980s when the application of expert systems to legal reasoning was first explored. Whether applied by a machine learning system or a human, legal algorithms rely on inputs from society and produce outputs that affect social behavior and that are intended to produce social outcomes. As such, it appears that legal algorithms are akin to other human-machine systems and so the law may benefit from insights from the general study of these systems. Various design frameworks for human-machine systems have been proposed, many of which focus on the importance of measuring system performance and iterative redesign. In our view, these frameworks can also be applied to the design of legal systems.
A basic design framework consists of five components…(More)”.
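The dynamic-law idea above can be made concrete with a small sketch. Everything in it is illustrative, not from the paper: the class name, the metric (a mean outcome rate), and the adaptation rule are hypothetical stand-ins for the paper's "performance goals, metrics for success, and the ability to adapt":

```python
from dataclasses import dataclass, field

@dataclass
class DynamicLaw:
    """A law expressed as a rule plus a performance goal, a success
    metric, and an adaptation step. All names and numbers here are
    hypothetical; the paper describes the concept, not a concrete API."""
    rule: float                 # e.g., a speed limit in km/h
    goal: float                 # target outcome, e.g., accidents per 10k trips
    observations: list = field(default_factory=list)

    def record(self, outcome: float) -> None:
        """Collect a performance measurement from the world."""
        self.observations.append(outcome)

    def performance(self) -> float:
        """Metric for success: mean observed outcome."""
        return sum(self.observations) / len(self.observations)

    def adapt(self, step: float = 5.0) -> None:
        """Tighten the rule when the goal is missed, relax it otherwise."""
        if self.performance() > self.goal:
            self.rule -= step   # outcomes too bad: tighten
        else:
            self.rule += step   # goal met: relax

law = DynamicLaw(rule=100.0, goal=2.0)
for outcome in (3.1, 2.8, 2.6):   # hypothetical measured accident rates
    law.record(outcome)
law.adapt()
print(law.rule)  # 95.0: mean outcome 2.83 exceeds the goal, so the rule tightens
```

The point of the sketch is the shape, not the numbers: the "law" carries its own success criterion and a rule for revising itself, which is what distinguishes a dynamic legal object from a static rule statement.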
Public procurement of artificial intelligence systems: new risks and future proofing
Paper by Merve Hickok: “Public entities around the world are increasingly deploying artificial intelligence (AI) and algorithmic decision-making systems to provide public services or to use their enforcement powers. The rationale for the public sector to use these systems is similar to that of the private sector: increase the efficiency and speed of transactions and lower costs. However, public entities are first and foremost established to meet the needs of the members of society and protect the safety, fundamental rights, and wellbeing of those they serve. Currently, AI systems are deployed by the public sector at various administrative levels without robust due diligence, monitoring, or transparency. This paper critically maps out the challenges in procurement of AI systems by public entities and the long-term implications necessitating AI-specific procurement guidelines and processes. This dual-prong exploration includes the new complexities and risks introduced by AI systems, and the institutional capabilities impacting the decision-making process. AI-specific public procurement guidelines are urgently needed to protect fundamental rights and due process…(More)”.
Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans
Paper by John Nay: “Artificial Intelligence (AI) capabilities are rapidly advancing. Highly capable AI could cause radically different futures depending on how it is developed and deployed. We are unable to specify human goals and societal values in a way that reliably directs AI behavior. Specifying the desirability (value) of an AI system taking a particular action in a particular state of the world is unwieldy beyond a very limited set of value-action-states. The purpose of machine learning is to train on a subset of states and have the resulting agent generalize an ability to choose high value actions in unencountered circumstances. But the function ascribing values to an agent’s actions during training is inevitably an incredibly incomplete encapsulation of human values, and the training process is a sparse exploration of states pertinent to all possible futures. Therefore, after training, AI is deployed with a coarse map of human preferred territory and will often choose actions unaligned with our preferred paths.
Law-making and legal interpretation form a computational engine that converts opaque human intentions and values into legible directives. Law Informs Code is the research agenda capturing these complex computational legal processes and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential “if-then” contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify “if-then” rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations, i.e., to generalize expectations regarding actions taken to unspecified states of the world. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), it is law leveraged as an expression of how humans communicate their goals, and of what society values, that informs code.
We describe how data generated by legal processes and the practices of law (methods of law-making, statutory interpretation, contract drafting, applications of standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment, harnessing public law as an up-to-date knowledge base of democratically endorsed values ascribed to state-action pairs. Although law is partly a reflection of historically contingent political power – and thus not a perfect aggregation of citizen preferences – if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. Other data sources suggested for AI alignment – surveys of preferences, humans labeling “ethical” situations, or (most commonly) the implicit beliefs of the AI system designers – lack an authoritative source of synthesized preference aggregation. Law is grounded in a verifiable resolution: ultimately obtained from a court opinion, but short of that, elicited from legal experts. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning…(More)”.
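The abstract's framing of public law as "values ascribed to state-action pairs" can be sketched as a toy lookup table. The states, actions, and scores below are invented for illustration; the paper proposes distilling such values from legal data (court opinions, expert elicitation), not hard-coding them:

```python
# Toy illustration of "values ascribed to state-action pairs".
# All entries are hypothetical; a real system would derive them from
# legal processes rather than enumerate them by hand.
legal_values: dict[tuple[str, str], float] = {
    ("pedestrian_present", "brake"): 1.0,        # legally endorsed
    ("pedestrian_present", "accelerate"): -1.0,  # legally sanctioned
    ("road_clear", "accelerate"): 0.5,
}

def desirability(state: str, action: str) -> float:
    """Look up the value of an action in a state, if one was specified."""
    try:
        return legal_values[(state, action)]
    except KeyError:
        # The specification problem the abstract describes: any such table
        # is inevitably incomplete, so the agent must generalize beyond
        # the encoded value-action-states rather than default to neutrality.
        return 0.0

print(desirability("pedestrian_present", "brake"))  # 1.0 (specified)
print(desirability("fog", "accelerate"))            # 0.0 (unspecified pair)
```

The unspecified `("fog", "accelerate")` case is the crux: the lookup table mirrors the "coarse map of human preferred territory" from the abstract, and the paper's argument is that legal standards and interpretation supply the generalization that a sparse table cannot.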