Who rules the deliberative party? Examining the Agora case in Belgium


Paper by Nino Junius and Joke Matthieu: “In recent years, pessimism about plebiscitary intra-party democracy has been challenged by assembly-based models of intra-party democracy. However, research has yet to explore the emergence of new power dynamics in parties originating from the implementation of deliberative practices in their intra-party democracy. We investigate how deliberative democratization reshuffles power relations within political parties through a case study of Agora, an internally deliberative movement party in Belgium. Employing a process-tracing approach using original interview and participant observation data, we argue that while plebiscitary intra-party democracy shifts power towards passive members prone to elite domination, our case suggests that deliberative intra-party democracy shifts power towards active members that are more likely to be critical of elites…(More)”

Can Social Media Rhetoric Incite Hate Incidents? Evidence from Trump’s “Chinese Virus” Tweets


Paper by Andy Cao, Jason M. Lindo & Jiee Zhong: “We investigate whether Donald Trump’s “Chinese Virus” tweets contributed to the rise of anti-Asian incidents. We find that the number of incidents spiked following Trump’s initial “Chinese Virus” tweets and the subsequent dramatic rise in internet search activity for the phrase. Difference-in-differences and event-study analyses leveraging spatial variation indicate that this spike in anti-Asian incidents was significantly more pronounced in counties that supported Donald Trump in the 2016 presidential election relative to those that supported Hillary Clinton. We estimate that anti-Asian incidents spiked by 4,000 percent in Trump-supporting counties, over and above the spike observed in Clinton-supporting counties…(More)”.
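The difference-in-differences logic the authors describe can be illustrated with a minimal 2x2 sketch. All of the numbers below are made up for illustration; none come from the paper's data:

```python
# Minimal 2x2 difference-in-differences sketch: compare the pre/post change in
# mean incident counts in "treated" (Trump-supporting) counties against the
# pre/post change in "control" (Clinton-supporting) counties.
# The counts below are hypothetical, chosen only to show the arithmetic.

incidents = {
    # (county_group, period): mean incidents per county per week (illustrative)
    ("trump",   "pre"):  0.05, ("trump",   "post"): 2.10,
    ("clinton", "pre"):  0.20, ("clinton", "post"): 1.00,
}

def did(data):
    """DiD estimate: (treated post - treated pre) - (control post - control pre)."""
    treated_change = data[("trump", "post")] - data[("trump", "pre")]
    control_change = data[("clinton", "post")] - data[("clinton", "pre")]
    return treated_change - control_change

print(round(did(incidents), 2))  # prints 1.25
```

The point of the subtraction is that any nationwide post-tweet spike common to both groups cancels out, leaving only the excess increase in treated counties; the paper's event-study version traces this excess period by period rather than collapsing it into a single pre/post contrast.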

Antitrust, Regulation, and User Union in the Era of Digital Platforms and Big Data


Paper by Lin William Cong and Simon Mayer: “We model platform competition with endogenous data generation, collection, and sharing, thereby providing a unifying framework to evaluate data-related regulation and antitrust policies. Data are jointly produced from users’ economic activities and platforms’ investments in data infrastructure. Data improve service quality, causing a feedback loop that tends to concentrate market power. Dispersed users do not internalize the impact of their data contribution on (i) service quality for other users, (ii) market concentration, and (iii) platforms’ incentives to invest in data infrastructure, causing inefficient over- or under-collection of data. Data sharing proposals, user privacy protections, platform commitments, and markets for data cannot fully address these inefficiencies. We propose and analyze a user union, which represents and coordinates users, as an effective solution for antitrust and consumer protection in the digital era…(More)”.

Charting an Equity-Centered Public Health Data System


Introduction to Special Issue by Alonzo L. Plough: “…The articles in this special issue were written with that vision in mind; several of them even informed the commission’s deliberations. Each article addresses an issue essential to the challenge of building an equity-focused public health data system:

  • Why Equity Matters in Public Health Data. Authors Anita Chandra, Laurie T. Martin, Joie D. Acosta, Christopher Nelson, Douglas Yeung, Nabeel Qureshi, and Tara Blagg explore where and how equity has been lacking in public health data and the implications of considering equity to the tech and data sectors.
  • What is Public Health Data? As authors Joie D. Acosta, Anita Chandra, Douglas Yeung, Christopher Nelson, Nabeel Qureshi, Tara Blagg, and Laurie T. Martin explain, good public health data are more than just health data. We need to reimagine the types of data we collect and from where, as well as data precision, granularity, timeliness, and more.
  • Public Health Data and Special Populations. People of color, women, people with disabilities, and people who are lesbian, gay, bisexual, transgender, and queer are among the populations that have been inconsistently represented in public health data over time. This article by authors Tina J. Kauh and Maryam Khojasteh reviews findings for each population, as well as commonalities across populations.
  • Public Health Data Interoperability and Connectedness. What are the challenges of connecting public health data swiftly yet accurately? What gaps need to be filled? How can the data and tech sector help address these issues? These are some of the questions explored in this article by authors Laurie T. Martin, Christopher Nelson, Douglas Yeung, Joie D. Acosta, Nabeel Qureshi, Tara Blagg, and Anita Chandra.
  • Integrating Tech and Data Expertise into the Public Health Workforce. This article by authors Laurie T. Martin, Anita Chandra, Christopher Nelson, Douglas Yeung, Joie D. Acosta, Nabeel Qureshi, and Tara Blagg envisions what a tech-savvy public health workforce will look like and how it can be achieved through new workforce models, opportunities to expand capacity, and training….(More)”.

Wicked Problems Might Inspire Greater Data Sharing


Paper by Susan Ariel Aaronson: “In 2021, the United Nations Development Program issued a plea in its Digital Economy Report: “Global data-sharing can help address major global development challenges such as poverty, health, hunger and climate change. …Without global cooperation on data and information, research to develop the vaccine and actions to tackle the impact of the pandemic would have been a much more difficult task. Thus, in the same way as some data can be public goods, there is a case for some data to be considered as global public goods, which need to be addressed and provided through global governance.” (UNDP: 2021, 178). Global public goods are goods and services with benefits and costs that potentially extend to all countries, people, and generations. Global data sharing can also help solve what scholars call wicked problems—problems so complex that they require innovative, cost-effective and global mitigating strategies. Wicked problems are problems that no one knows how to solve without creating further problems. Hence, policymakers must find ways to encourage greater data sharing among entities that hold large troves of various types of data, while protecting that data from theft, manipulation, etc. Many factors impede global data sharing for public good purposes; this analysis focuses on two.
First, policymakers generally don’t think about data as a global public good; they view data as a commercial asset that they should nurture and control. While they may understand that data can serve the public interest, they are more concerned with using data to serve their country’s economic interest. Second, many leaders of civil society and business see the data they have collected as proprietary. So far, many leaders of private entities with troves of data are not convinced that their organizations will benefit from such sharing. At the same time, companies voluntarily share some data for social good purposes.

However, data cannot meet its public good purpose if it is not shared among societal entities. Moreover, if policymakers treat data as a sovereign asset, they are unlikely to encourage data sharing across borders oriented towards addressing shared problems. Consequently, society will be less able to use data as both a commercial asset and as a resource to enhance human welfare. As the Bennett Institute and ODI have argued, “value comes from data being brought together, and that requires organizations to let others use the data they hold.” But that also means the entities that collected the data may not accrue all of the benefits from that data (Bennett Institute and ODI: 2020a: 4). In short, private entities are not sufficiently incentivized to share data for the global public good…(More)”.

Addressing ethical gaps in ‘Technology for Good’: Foregrounding care and capabilities


Paper by Alison B. Powell et al: “This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts. Its main contribution is to model an integrative approach using multiple ethical frameworks to analyse and understand the everyday nature of ethical practice, including in professional practice among ‘technology for good’ start-ups. The paper identifies inherent paradoxes in the ‘technology for good’ sector as well as ethical gaps related to (1) the sometimes-misplaced assignment of virtuousness to an individual; (2) difficulties in understanding social constraints on ethical action; and (3) the often unaccounted for mismatch between ethical intentions and outcomes in everyday practice, including in professional work associated with an ‘ethical turn’ in technology. These gaps persist even in contexts where ethics are foregrounded as matters of concern. To address the gaps, the paper suggests systemic, rather than individualized, considerations of care and capability applied to innovation settings, in combination with considerations of virtue and consequence. This paper advocates for addressing these challenges holistically in order to generate renewed capacity for change at a systemic level…(More)”.

Does AI Debias Recruitment? Race, Gender, and AI’s “Eradication of Difference”


Paper by Eleanor Drage & Kerry Mackereth: “In this paper, we analyze two key claims offered by recruitment AI companies in relation to the development and deployment of AI-powered HR tools: (1) recruitment AI can objectively assess candidates by removing gender and race from their systems, and (2) this removal of gender and race will make recruitment fairer, help customers attain their DEI goals, and lay the foundations for a truly meritocratic culture to thrive within an organization. We argue that these claims are misleading for four reasons: First, attempts to “strip” gender and race from AI systems often misunderstand what gender and race are, casting them as isolatable attributes rather than broader systems of power. Second, the attempted outsourcing of “diversity work” to AI-powered hiring tools may unintentionally entrench cultures of inequality and discrimination by failing to address the systemic problems within organizations. Third, AI hiring tools’ supposedly neutral assessment of candidates’ traits belies the power relationship between the observer and the observed. Specifically, the racialized history of character analysis and its associated processes of classification and categorization play into longer histories of taxonomical sorting and reflect the current demands and desires of the job market, even when not explicitly conducted along the lines of gender and race. Fourth, recruitment AI tools help produce the “ideal candidate” that they supposedly identify by constructing associations between words and people’s bodies. Building on these four conclusions, we offer three key recommendations to AI HR firms, their customers, and policy makers going forward…(More)”.

Nudging the Nudger: A Field Experiment on the Effect of Performance Feedback to Service Agents on Increasing Organ Donor Registrations


Paper by Julian House, Nicola Lacetera, Mario Macis & Nina Mazar: “We conducted a randomized controlled trial involving nearly 700 customer-service representatives (CSRs) in a Canadian government service agency to study whether providing CSRs with performance feedback with or without peer comparison affected their subsequent organ donor registration rates. Despite having no tie to remuneration or promotion, the provision of individual performance feedback three times over one year resulted in a 25% increase in daily signups, compared to otherwise similar encouragement and reminders. Adding benchmark information that compared CSRs’ performance to average and top peer performance did not further enhance this effect. Registrations increased more among CSRs whose performance was already above average, and there was no negative effect on lower-performing CSRs. A post-intervention survey showed that CSRs found the information included in the treatments helpful and encouraging. However, performance feedback without benchmark information increased perceived pressure to perform…(More)”.

Global healthcare fairness: We should be sharing more, not less, data


Paper by Kenneth P. Seastedt et al: “The availability of large, deidentified health datasets has enabled significant innovation in using machine learning (ML) to better understand patients and their diseases. However, questions remain regarding the true privacy of this data, patient control over their data, and how we regulate data sharing in a way that does not encumber progress or further potentiate biases for underrepresented populations. After reviewing the literature on potential reidentifications of patients in publicly available datasets, we argue that the cost—measured in terms of access to future medical innovations and clinical software—of slowing ML progress is too great to limit sharing data through large publicly available databases for concerns of imperfect data anonymization. This cost is especially great for developing countries where the barriers preventing inclusion in such databases will continue to rise, further excluding these populations and increasing existing biases that favor high-income countries. Preventing artificial intelligence’s progress towards precision medicine and sliding back to clinical practice dogma may pose a larger threat than concerns of potential patient reidentification within publicly available datasets. While the risk to patient privacy should be minimized, we believe this risk will never be zero, and society has to determine an acceptable risk threshold below which data sharing can occur—for the benefit of a global medical knowledge system….(More)”.

Legal Dynamism


Paper by Sandy Pentland and Robert Mahari: “Shortly after the start of the French Revolution, Thomas Jefferson wrote a now famous letter to James Madison. He argued that no society could make a perpetual constitution, or indeed a perpetual law, that binds future generations. Every law ought to expire after nineteen years. Jefferson’s argument rested on the view that it is fundamentally unjust for people in the present to create laws for those in the future, but his argument is also appealing from a purely pragmatic perspective. As the state of the world changes, laws become outdated, and forcing future generations to abide by outdated laws is unjust and inefficient.

Today, the law appears to be at the cusp of its own revolution. It has resisted technical transformation longer than most other disciplines. Increasingly, however, computational approaches are finding their way into the creation and implementation of law, and the field of computational law is rapidly expanding. One of the most exciting promises of computational law is the idea of legal dynamism: the concept that a law, by means of computational tools, can be expressed not as a static rule statement but rather as a dynamic object that includes system performance goals, metrics for success, and the ability to adapt the law in response to its performance…
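The notion of a law as a dynamic object rather than a static rule statement can be sketched in code. The class, field names, and thresholds below are purely illustrative assumptions, not anything proposed in the paper:

```python
# Illustrative sketch of "legal dynamism": a rule that carries its own
# performance goal, a success metric, and an adaptation step, so the rule's
# parameter can be tuned against measured outcomes. All names and numbers
# here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DynamicRule:
    name: str
    threshold: float               # the adjustable parameter of the rule
    goal: float                    # target value an outcome should reach
    history: list = field(default_factory=list)  # log of (score, threshold)

    def metric(self, outcomes):
        # Success metric: share of observed outcomes meeting the goal.
        return sum(1 for o in outcomes if o >= self.goal) / len(outcomes)

    def adapt(self, outcomes, step=0.1):
        # Iterative redesign: tighten or loosen the threshold based on
        # measured performance, logging each change for accountability.
        score = self.metric(outcomes)
        self.threshold += step if score < 0.5 else -step
        self.history.append((score, self.threshold))
        return self.threshold

rule = DynamicRule(name="illustrative_cap", threshold=1.0, goal=0.8)
rule.adapt([0.9, 0.7, 0.5])  # only 1 of 3 outcomes meets the goal, so tighten
```

The design choice worth noting is the `history` log: because the rule adjusts itself, an auditable record of every measured score and resulting change is what keeps the adaptation accountable, mirroring the paper's emphasis on measuring system performance and iterative redesign.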

The image of laws as algorithms goes back to at least the 1980s when the application of expert systems to legal reasoning was first explored. Whether applied by a machine learning system or a human, legal algorithms rely on inputs from society and produce outputs that affect social behavior and that are intended to produce social outcomes. As such, it appears that legal algorithms are akin to other human-machine systems and so the law may benefit from insights from the general study of these systems. Various design frameworks for human-machine systems have been proposed, many of which focus on the importance of measuring system performance and iterative redesign. In our view, these frameworks can also be applied to the design of legal systems.

A basic design framework consists of five components…(More)”.