Citizens’ Assemblies Are Upgrading Democracy: Fair Algorithms Are Part of the Program


Article by Ariel Procaccia: “…Taken together, these assemblies have demonstrated an impressive capacity to uncover the will of the people and build consensus.

The effectiveness of citizens’ assemblies isn’t surprising. Have you ever noticed how politicians grow a spine the moment they decide not to run for reelection? Well, a citizens’ assembly is a bit like a legislature whose members make a pact barring them from seeking another term in office. The randomly selected members are not beholden to party machinations or outside interests; they are free to speak their mind and vote their conscience.

What’s more, unlike elected bodies, these assemblies are chosen to mirror the population, a property that political theorists refer to as descriptive representation. For example, a typical citizens’ assembly has a roughly equal number of men and women (some also ensure nonbinary participation), whereas the average proportion of seats held by women in national parliaments worldwide was 26 percent in 2021—a marked increase from 12 percent in 1997 but still far from gender balance. Descriptive representation, in turn, lends legitimacy to the assembly: citizens seem to find decisions more acceptable when they are made by people like themselves.

As attractive as descriptive representation is, there are practical obstacles to realizing it while adhering to the principle of random selection. Overcoming these hurdles has been a passion of mine for the past few years. Using tools from mathematics and computer science, my collaborators and I developed an algorithm for the selection of citizens’ assemblies that many practitioners around the world are using. Its story provides a glimpse into the future of democracy—and it begins a long time ago…(More)”.
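The excerpt doesn't spell the algorithm out, but the core difficulty it tackles – drawing a panel at random while hitting demographic quotas – can be illustrated with a deliberately naive rejection-sampling sketch in Python. Everything here (the pool, the attributes, the quota numbers) is hypothetical, and the authors' actual algorithm goes further, notably by making volunteers' selection probabilities as equal as possible:

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical pool of volunteers; attributes and quotas are illustrative.
pool = [
    {"id": i,
     "gender": random.choice(["woman", "man"]),
     "age_group": random.choice(["18-34", "35-54", "55+"])}
    for i in range(1000)
]

def meets_quotas(panel, quotas):
    """True iff every (attribute, value) count lies within its [lo, hi] quota."""
    for (attr, value), (lo, hi) in quotas.items():
        count = sum(1 for person in panel if person[attr] == value)
        if not lo <= count <= hi:
            return False
    return True

def select_panel(pool, size, quotas, max_tries=100_000):
    """Rejection sampling: draw uniformly random panels until one fits the quotas."""
    for _ in range(max_tries):
        panel = random.sample(pool, size)
        if meets_quotas(panel, quotas):
            return panel
    raise RuntimeError("no quota-satisfying panel found")

# Roughly gender-balanced 12-seat panel (quota bounds are made up).
quotas = {
    ("gender", "woman"): (5, 7),
    ("gender", "man"): (5, 7),
}
panel = select_panel(pool, 12, quotas)
print(len(panel))  # 12
```

A real-world shortcoming of this naive approach, and part of what motivated the fairness work described in the article, is that rejection sampling can give some volunteers a much higher chance of being selected than others.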

How Americans View Data Privacy


Pew Research: “…Americans – particularly Republicans – have grown more concerned about how the government uses their data. The share who say they are worried about government use of people’s data has increased from 64% in 2019 to 71% today. That reflects rising concern among Republicans (from 63% to 77%), while Democrats’ concern has held steady. (Each group includes those who lean toward the respective party.)

The public increasingly says they don’t understand what companies are doing with their data. Some 67% say they understand little to nothing about what companies are doing with their personal data, up from 59% in 2019.

Most believe they have little to no control over what companies or the government do with their data. While these shares have ticked down compared with 2019, vast majorities feel this way about data collected by companies (73%) and the government (79%).

We’ve studied Americans’ views on data privacy for years. The topic remains in the national spotlight today, and it’s particularly relevant given the policy debates ranging from regulating AI to protecting kids on social media. But these are far from abstract concepts. They play out in the day-to-day lives of Americans in the passwords they choose, the privacy policies they agree to and the tactics they take – or not – to secure their personal information. We surveyed 5,101 U.S. adults using Pew Research Center’s American Trends Panel to give voice to people’s views and experiences on these topics.

In addition to the key findings covered on this page, the three chapters of this report provide more detail on:

What if We Could All Control A.I.?


Kevin Roose at The New York Times: “One of the fiercest debates in Silicon Valley right now is about who should control A.I., and who should make the rules that powerful artificial intelligence systems must follow.

Should A.I. be governed by a handful of companies that try their best to make their systems as safe and harmless as possible? Should regulators and politicians step in and build their own guardrails? Or should A.I. models be made open-source and given away freely, so users and developers can choose their own rules?

A new experiment by Anthropic, the maker of the chatbot Claude, offers a quirky middle path: What if an A.I. company let a group of ordinary citizens write some rules, and trained a chatbot to follow them?

The experiment, known as “Collective Constitutional A.I.,” builds on Anthropic’s earlier work on Constitutional A.I., a way of training large language models that relies on a written set of principles. It is meant to give a chatbot clear instructions for how to handle sensitive requests, what topics are off-limits and how to act in line with human values.

If Collective Constitutional A.I. works — and Anthropic’s researchers believe there are signs that it might — it could inspire other experiments in A.I. governance, and give A.I. companies more ideas for how to invite outsiders to take part in their rule-making processes.

That would be a good thing. Right now, the rules for powerful A.I. systems are set by a tiny group of industry insiders, who decide how their models should behave based on some combination of their personal ethics, commercial incentives and external pressure. There are no checks on that power, and there is no way for ordinary users to weigh in.

Opening up A.I. governance could increase society’s comfort with these tools, and give regulators more confidence that they’re being skillfully steered. It could also prevent some of the problems of the social media boom of the 2010s, when a handful of Silicon Valley titans ended up controlling vast swaths of online speech.

In a nutshell, Constitutional A.I. works by using a written set of rules (a “constitution”) to police the behavior of an A.I. model. The first version of Claude’s constitution borrowed rules from other authoritative documents, including the United Nations’ Universal Declaration of Human Rights and Apple’s terms of service…(More)”.
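As a rough illustration of that idea, the critique-and-revision loop at the heart of constitution-guided training can be sketched as follows. The `generate` function below is a mocked stand-in for a language-model call, and the principles are paraphrased for illustration; none of this is Anthropic's actual implementation:

```python
# Schematic sketch of constitution-guided revision; not Anthropic's code.
CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that most respects human rights.",
]

def generate(prompt: str) -> str:
    """Mocked language model so the sketch runs; a real system would call an LLM."""
    if "Critique" in prompt:
        return "The draft could be more careful about safety."
    if "Rewrite" in prompt:
        return "Here is a revised, more careful answer."
    return "Here is a draft answer."

def constitutional_revision(user_request: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(user_request)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}")
        draft = generate(
            f"Rewrite the response to address this critique:\n{critique}\n{draft}")
    return draft

print(constitutional_revision("How do I pick a strong password?"))
```

In the collective variant described above, the contents of `CONSTITUTION` would come from principles drafted by a representative group of citizens rather than from company staff.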

Evidence-Based Government Is Alive and Well


Article by Zina Hutton: “A desire to discipline the whimsical rule of despots.” That, according to Gary Banks, a former chairman of Australia’s Productivity Commission, is what gave birth to evidence-based policy back in the 14th century, as he put it in a 2009 speech. Evidence-based policymaking isn’t a new style of government, but it is one with well-known roadblocks that elected officials have been working around in order to implement it more widely.

Evidence-based policymaking relies on evidence — facts, data, expert analysis — to shape aspects of long- and short-term policy decisions. It’s not just about collecting data, but also applying it and experts’ analysis to shape future policy. Whether it’s using school enrollment numbers to justify building a new park in a neighborhood or scientists collaborating on analysis of wastewater to try to “catch” illness spread in a community before it becomes unmanageable, evidence-based policy uses facts to help elected and appointed officials decide what funds and other resources to allocate in their communities.

Problems with evidence-based governing have been around for years. They range from a lack of communication between the people designing the policy and its related programs and the people implementing them, to the way that local government struggles to recruit and maintain employees. Resource allocation also shapes the decisions some cities make when it comes to seeking out and using data. This can be seen in the way larger cities, with access to proportionately larger budgets, research from state universities within city limits and a larger workforce, have had more success with evidence-based policymaking.

“The largest cities have more personnel, more expertise, more capacity, whether that’s for collecting administrative data and monitoring it, whether that’s doing open data portals, or dashboards, or whether that’s doing things like policy analysis or program evaluation,” says Karen Mossberger, the Frank and June Sackton Professor in the School of Public Affairs at Arizona State University. “It takes expert personnel, it takes people within government with the skills and the capacity, it takes time.”

Roadblocks aside, state and local governments are finding innovative ways to collaborate with one another on data-focused projects and policy, seeking ways to make up for the problems that impacted early efforts at evidence-based governance. More state and local governments now recruit data experts at every level to collect, analyze and explain the data generated by residents, aided by advances in technology and increased access to researchers…(More)”.

Democratic self-government and the algocratic shortcut: the democratic harms in algorithmic governance of society


Paper by Nardine Alnemr: “Algorithms are used to calculate and govern varying aspects of public life for efficient use of the vast data available about citizens. On the assumption that algorithms are neutral and efficient at data-based decision making, they are used in areas such as criminal justice and welfare. This has ramifications for the ideal of democratic self-government, as algorithmic decisions are made without democratic deliberation, scrutiny or justification. In the book Democracy without Shortcuts, Cristina Lafont argued against “shortcutting” democratic self-government. Lafont’s critique of shortcuts problematises taken-for-granted practices in democracies that bypass citizen inclusion and equality in authoring decisions governing public life. In this article, I extend Lafont’s argument to another shortcut: the algocratic shortcut. The democratic harms attributable to the algocratic shortcut include diminishing the role of voice in politics and reducing opportunities for civic engagement. In this article, I define the algocratic shortcut and discuss the democratic harms of this shortcut, its relation to other shortcuts to democracy and the limitations of using shortcuts to remedy algocratic harms. Finally, I reflect on remedy through “aspirational deliberation”…(More)”.

When is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis


Paper by Francesca Palmiotto: “This paper addresses the pressing issues surrounding the use of automated systems in public decision-making, with a specific focus on the field of migration, asylum, and mobility. Drawing on empirical research conducted for the AFAR project, the paper examines the potential and limitations of the General Data Protection Regulation and the proposed Artificial Intelligence Act in effectively addressing the challenges posed by automated decision making (ADM). The paper argues that the current legal definitions and categorizations of ADM fail to capture the complexity and diversity of real-life applications, where automated systems assist human decision-makers rather than replace them entirely. This discrepancy between the legal framework and practical implementation highlights the need for a fundamental rights approach to legal protection in the automation age. To bridge the gap between ADM in law and practice, the paper proposes a taxonomy that provides theoretical clarity and enables a comprehensive understanding of ADM in public decision-making. This taxonomy not only enhances our understanding of ADM but also identifies the fundamental rights at stake for individuals and the sector-specific legislation applicable to ADM. The paper finally calls for empirical observations and input from experts in other areas of public law to enrich and refine the proposed taxonomy, thus ensuring clearer conceptual frameworks to safeguard individuals in our increasingly algorithmic society…(More)”.

Deliberation is no silver bullet for the ‘problem’ of populism


Article by Kristof Jacobs: “Populists are not satisfied with the way democracy works nowadays. They do not reject liberal democracy outright, but want it to change. Indeed, they feel the political elite is unresponsive. Not surprisingly, then, populist parties thrive in settings where there is widespread feeling that politicians do not listen to the people.

What if… decision-makers gave citizens a voice in the decision-making process? In fact, this is happening across the globe. Democratic innovations – that is, decision-making processes that aim to deepen citizens’ participation and engagement in political decision-making – are ever more popular. They come in many shapes and forms, such as referendums, deliberative mini-publics or participatory budgeting. Deliberative democratic innovations in particular are popular, as the many national-level citizens’ assemblies on climate change attest. We have seen such assemblies not only in France, but also in the UK, Germany, Ireland, Luxembourg, Denmark, Spain and Austria.

Scholars of deliberation are optimistic about the potential of such deliberative events. In one often-cited piece in Science, several prominent scholars of deliberation contend that ‘[d]eliberation promotes considered judgment and counteracts populism’.

But is that optimism warranted? What does the available empirical research tell us? To examine this, one must distinguish between populist citizens and populist parties…(More)”.

Towards a Considered Use of AI Technologies in Government 


Report by the Institute on Governance and Think Digital: “… undertook a case study-based research project, where 24 examples of AI technology projects and governance frameworks across a dozen jurisdictions were scanned. The purpose of this report is to provide policymakers and practitioners in government with an overview of controversial deployments of Artificial Intelligence (AI) technologies in the public sector, and to highlight some of the approaches being taken to govern the responsible use of these technologies in government. 

Two environmental scans make up the majority of the report. The first scan presents relevant use cases of public sector applications of AI technologies and automation, with special attention given to controversial projects and program/policy failures. The second scan surveys existing governance frameworks employed by international organizations and governments around the world. Each scan is then analyzed to determine common themes across use cases and governance frameworks respectively. The final section of the report provides risk considerations related to the use of AI by public sector institutions across use cases…(More)”.

FickleFormulas: The Political Economy of Macroeconomic Measurement


About: “Statistics about economic activities are critical to governance. Measurements of growth, unemployment and inflation rates, public debts – they all tell us ‘how our economies are doing’ and inform policy. Citizens punish politicians who fail to deliver on them.

FickleFormulas has integrated two research projects at the University of Amsterdam that ran from 2014 to 2020. Its researchers have studied the origins of the formulas behind these indicators: why do we measure our economies the way we do? After all, it is far from self-evident how to define and measure economic indicators. Our choices have deeply distributional consequences, producing winners and losers, and they shape our future, for example when GDP figures hide the cost of environmental destruction.

Criticisms of particular measures are hardly new. GDP in particular has been denounced as a deeply deficient measure of production at best and a fundamentally misleading guidepost for human development at worst. But measures of inflation, balances of payments and trade, unemployment figures, productivity and public debt also hide unsolved and maybe insoluble problems. In FickleFormulas we have asked: which social, political and economic factors shape the formulas used to calculate macroeconomic indicators?

In our quest for answers we have mobilized scholarship and expertise scattered across academic disciplines – a wealth of knowledge brought together for example here. We have reconstructed expert-deliberations of past decades, but mostly we wanted to learn from those who actually design macroeconomic indicators: statisticians at national statistical offices or organizations such as the OECD, the UN, the IMF, or the World Bank. For us, understanding macroeconomic indicators has been impossible without talking to the people who live and breathe them….(More)”.

The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition


Paper by Ludovico Giacomo Conti & Peter Seele: “The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation – criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on top-down, expert-centric governance. To fill this gap, we propose the use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards’ members and combines them with the advantages of a stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model makes it possible to increase the public legitimacy of, and public participation in, the decision-making process and its deliverables, to curb the industry’s over-influence and lobbying, and to diminish the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound base for both public and private organisations in smart societies for constructing a decentralised, bottom-up, participative digital democracy…(More)”.
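The excerpt doesn't detail the mechanics of the two-step model, but a minimal reading of a "qualified informed lottery" – first filter candidates through a qualification bar, then fill seats by random draw, stratified by stakeholder group – can be sketched as follows. All field names, group names and thresholds below are hypothetical, not taken from the paper:

```python
import random

# Illustrative two-step lottery: (1) keep only candidates who meet a
# qualification bar, (2) draw the board at random from the qualified pool,
# stratified by stakeholder group. All fields and thresholds are hypothetical.
random.seed(1)

candidates = [
    {"name": f"c{i}",
     "group": random.choice(["industry", "academia", "civil_society"]),
     "completed_training": random.random() > 0.3}
    for i in range(300)
]

def qualified_informed_lottery(candidates, seats_per_group):
    # Step 1: qualification filter (e.g. completed an AI-ethics briefing).
    qualified = [c for c in candidates if c["completed_training"]]
    # Step 2: stratified random draw, one lottery per stakeholder group.
    board = []
    for group, seats in seats_per_group.items():
        group_pool = [c for c in qualified if c["group"] == group]
        board.extend(random.sample(group_pool, seats))
    return board

board = qualified_informed_lottery(
    candidates, {"industry": 2, "academia": 2, "civil_society": 2})
print(len(board))  # 6
```

The stratified draw is what curbs any single constituency's over-influence: each stakeholder group's seats are fixed in advance, so no amount of lobbying changes a group's share of the board.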