Book by Yves Sintomer: “Electoral democracies are struggling. Sintomer, in this instructive book, argues for democratic innovations. One such innovation is using random selection to create citizen bodies with advisory or decisional political power. ‘Sortition’ has a long political history. Coupled with elections, it has represented an important yet often neglected dimension of republican and democratic government, and has been reintroduced in the Global North, China and Mexico. The Government of Chance explores why sortition is returning, how it is coupled with deliberation, and why randomly selected ‘minipublics’ and citizens’ assemblies are flourishing. Relying on a growing international and interdisciplinary literature, Sintomer provides the first systematic and theoretical reconstruction of the government of chance from Athens to the present. Under what conditions can it be rational? What lessons can be drawn from history? The Government of Chance therefore clarifies the democratic imaginaries at stake: deliberative, antipolitical, and radical, making a plea for the latter….(More)”.
Reclaiming Participatory Governance
Book edited by Adrian Bua and Sonia Bussu: “…offers empirical and theoretical perspectives on how the relationship between social movements and state institutions is emerging and developing through new modes of participatory governance.
One of the most interesting political developments of the past decade has been the adoption by social movements of strategies seeking to change political institutions through participatory governance. These strategies have flourished in a variety of contexts, from anti-austerity and pro-social justice protests in Spain, to movements demanding climate transition and race equality in the UK and the USA, to constitutional reforms in Belgium and Iceland. The chief ambition and challenge of these new forms of participatory governance is to institutionalise the prefigurative politics and social justice values that inspired them in the first place, by mobilising the bureaucracy to respond to their claims for reforms and rights. The authors of this volume assess how participatory governance is being transformed and explore the impact of such changes, providing timely critical reflections on: the constraints imposed by cultural, economic and political power relations on these new empowered participatory spaces; the potential of this new “wave” of participatory democracy to reimagine the relationship between citizens and traditional institutions towards more radical democratic renewal; where and how these new democratisation efforts sit within the representative state; and how tensions between the different demands of lay citizens, organised civil society and public officials are being managed….(More)”.
Your Data Is Diminishing Your Freedom
Interview by David Marchese: “It’s no secret — even if it hasn’t yet been clearly or widely articulated — that our lives and our data are increasingly intertwined, almost indistinguishable. To be able to function in modern society is to submit to demands for ID numbers, for financial information, for filling out digital fields and drop-down boxes with our demographic details. Such submission, in all senses of the word, can push our lives in very particular and often troubling directions. It’s only recently, though, that I’ve seen someone try to work through the deeper implications of what happens when our data — and the formats it’s required to fit — become an inextricable part of our existence, like a new limb or organ to which we must adapt. ‘‘I don’t want to claim we are only data and nothing but data,’’ says Colin Koopman, chairman of the philosophy department at the University of Oregon and the author of ‘‘How We Became Our Data.’’ ‘‘My claim is you are your data, too.’’ Which at the very least means we should be thinking about this transformation beyond the most obvious data-security concerns. ‘‘We’re strikingly lackadaisical,’’ says Koopman, who is working on a follow-up book, tentatively titled ‘‘Data Equals,’’ ‘‘about how much attention we give to: What are these data showing? What assumptions are built into configuring data in a given way? What inequalities are baked into these data systems? We need to be doing more work on this.’’
Can you explain more what it means to say that we have become our data? Because a natural reaction to that might be, well, no, I’m my mind, I’m my body, I’m not numbers in a database — even if I understand that those numbers in that database have real bearing on my life. The claim that we are data can also be taken as a claim that we live our lives through our data in addition to living our lives through our bodies, through our minds, through whatever else. I like to take a historical perspective on this. If you wind the clock back a couple hundred years or go to certain communities, the pushback wouldn’t be, ‘‘I’m my body,’’ the pushback would be, ‘‘I’m my soul.’’ We have these evolving perceptions of our self. I don’t want to deny anybody that, yeah, you are your soul. My claim is that your data has become something that is increasingly inescapable and certainly inescapable in the sense of being obligatory for your average person living out their life. There’s so much of our lives that are woven through or made possible by various data points that we accumulate around ourselves — and that’s interesting and concerning. It now becomes possible to say: ‘‘These data points are essential to who I am. I need to tend to them, and I feel overwhelmed by them. I feel like it’s being manipulated beyond my control.’’ A lot of people have that relationship to their credit score, for example. It’s both very important to them and very mysterious…(More)”.
The Law of AI for Good
Paper by Orly Lobel: “Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.
A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design…(More)”
Wanted: Data Stewards — Drafting the Job Specs for A Re-imagined Data Stewardship Role

Blog by Stefaan Verhulst: “With the rapid datafication of our world and the ever-growing need to access data for re-use in the public interest, it’s no surprise that the need for data stewards is becoming more important every day. Organizations across sectors and geographies, from the United Nations Statistics Division to the Government of New Zealand, are all moving towards defining the roles and responsibilities of a data steward within their own unique contexts and use cases.
At The GovLab, we have long advocated for the professionalization of data stewardship through our research into the role of data stewards in fostering data collaboration, as well as our executive education courses at the Data Stewards Academy. The recent launch of The Data Tank, a non-profit dedicated to addressing the challenges and opportunities of datafication, which I co-founded, is another step in the right direction, creating a platform to explore data stewardship in practice and providing additional educational resources.
While these resources are no doubt valuable, we are still often faced with the question: What are the required competencies of a data steward? If I want to hire or train a data steward, what should the job specifications be?
With that in mind, we are initiating a process of crafting a job description for data stewards, outlining the responsibilities, skills, and behaviors of a data steward below. Such a job description may not only help organizations create formal data steward roles internally and recruit externally, but it will also help aspiring data stewards seek out the relevant training and opportunities for them to strengthen their skillset.
The job description below captures our initial thoughts on the role of a data steward, and we would welcome your insights on the roles and skills required to be an effective data steward. It is based on previous presentations shared publicly….(More)”.
Data Collaborative Case Study: NYC Recovery Data Partnership
Report by the Open Data Policy Lab (The GovLab): “In July 2020, following severe economic and social losses due to the COVID-19 pandemic, the administration of New York City Mayor Bill de Blasio announced the NYC Recovery Data Partnership. This data collaborative asked private and civic organizations with assets relevant to New York City to provide their data to the city. Senior city leaders from the First Deputy Mayor’s Office, the Mayor’s Office of Operations, the Mayor’s Office of Information Privacy, and the Mayor’s Office of Data Analytics formed an internal coalition which served as trusted intermediaries, assessing requests from city agencies to use the data provided and allocating access accordingly. The data informed internal research conducted by various city agencies, including New York City Emergency Management’s Recovery Team and the NYC…(More)”
Principles for effective beneficial ownership disclosure
Open Ownership: “The Open Ownership Principles (OO Principles) are a framework for considering the elements that influence whether the implementation of reforms to improve the transparency of the beneficial ownership of corporate vehicles will lead to effective beneficial ownership disclosure, that is, disclosure that generates high-quality, reliable data and maximises usability for users.
The OO Principles are intended to support governments implementing effective beneficial ownership transparency reforms and guide international institutions, civil society, and private sector actors in understanding and supporting reforms. They are a tool to identify and separate issues affecting implementation, and they provide a framework for assessing and improving existing disclosure regimes. If implemented together, the OO Principles enable disclosure systems to generate actionable and usable data across the widest range of policy applications of beneficial ownership data.
The nine principles are interdependent, but can be broadly grouped by the three main ways they improve data. The Definition, Coverage, and Detail principles enable data disclosure and collection. The Central register, Access, and Structured data principles facilitate data storage and auditability. Finally, the Verification, Up-to-date and historical records, and Sanctions and enforcement principles improve data quality and reliability….Download January 2023 version (translated versions are forthcoming)”
Machine Learning as a Tool for Hypothesis Generation
Paper by Jens Ludwig & Sendhil Mullainathan: “While hypothesis testing is a highly formalized activity, hypothesis generation remains largely informal. We propose a systematic procedure to generate novel hypotheses about human behavior, which uses the capacity of machine learning algorithms to notice patterns people might not. We illustrate the procedure with a concrete application: judge decisions about who to jail. We begin with a striking fact: The defendant’s face alone matters greatly for the judge’s jailing decision. In fact, an algorithm given only the pixels in the defendant’s mugshot accounts for up to half of the predictable variation. We develop a procedure that allows human subjects to interact with this black-box algorithm to produce hypotheses about what in the face influences judge decisions. The procedure generates hypotheses that are both interpretable and novel: They are not explained by demographics (e.g. race) or existing psychology research; nor are they already known (even if tacitly) to people or even experts. Though these results are specific, our procedure is general. It provides a way to produce novel, interpretable hypotheses from any high-dimensional dataset (e.g. cell phones, satellites, online behavior, news headlines, corporate filings, and high-frequency time series). A central tenet of our paper is that hypothesis generation is in and of itself a valuable activity, and we hope this encourages future work in this largely “pre-scientific” stage of science…(More)”.
Haste: The Slow Politics of Climate Urgency
Book edited by Håvard Haarstad, Jakob Grandin, Kristin Kjærås, and Eleanor Johnson: “It’s understandable that we tend to present climate change as something urgently requiring action. Every day we fail to act, the potential for catastrophe grows. But is that framing itself a problem? When we hurry, we make more mistakes. We overlook things. We get tunnel vision.
In Haste, a group of distinguished contributors makes the case for a slow politics of urgency. Rather than rushing and speeding up, they argue, the sustainable future is better served by challenging the dominant framings through which we understand time and change in society. While recognizing the need for certain types of urgency in climate politics, Haste directs attention to the different and alternative temporalities at play in climate and sustainability politics. Divided into short and accessible chapters, written by both established and emerging scholars from different disciplines, Haste tackles a major problem in contemporary climate change research and offers creative perspectives on pathways out of the climate emergency…(More)”
Authoritarian Privacy
Paper by Mark Jia: “Privacy laws are traditionally associated with democracy. Yet autocracies increasingly have them. Why do governments that repress their citizens also protect their privacy? This Article answers this question through a study of China. China is a leading autocracy and the architect of a massive surveillance state. But China is also a major player in data protection, having enacted and enforced a number of laws on information privacy. To explain how this came to be, the Article first turns to several top-down objectives often said to motivate China’s privacy laws: advancing its digital economy, expanding its global influence, and protecting its national security. Although each has been a factor in China’s turn to privacy law, even together they tell only a partial story.
More fundamental to China’s privacy turn is the party-state’s use of privacy law to shore up its legitimacy against a backdrop of digital abuse. China’s whiplashed transition into the digital age has given rise to significant vulnerabilities and dependencies for ordinary citizens. Through privacy law, China’s leaders have sought to interpose themselves as benevolent guardians of privacy rights against other intrusive actors—individuals, firms, even state agencies and local governments. So framed, privacy law can enhance perceptions of state performance and potentially soften criticism of the center’s own intrusions. China did not enact privacy law in spite of its surveillance state; it embraced privacy law in order to maintain it. The Article adds to our understanding of privacy law, complicates the conceptual relationship between privacy and democracy, and points towards a general theory of authoritarian privacy…(More)”.