Paper by Simone Chambers & Mark E. Warren: “The field of deliberative democracy now generally recognizes the co-dependence of deliberation and voting. The field tends to emphasize what deliberation accomplishes for vote-based decisions. In this paper, we reverse this now common view to ask: In what ways does voting benefit deliberation? We discuss seven ways voting can complement and sometimes enhance deliberation. First, voting furnishes deliberation with a feasible and fair closure mechanism. Second, the power to vote implies equal recognition and status, both morally and strategically, which is a condition of democratic deliberation. Third, voting politicizes deliberation by injecting the strategic features of politics into deliberation—effectively internalizing conflict into deliberative processes, without which they can become detached from their political environments. Fourth, anticipation of voting may induce authenticity by revealing preferences, as what one says will count. Fifth, voting preserves expressions of dissent, helping to push back against socially induced pressures for consensus. Sixth, voting defines the issues, such that deliberation is focused, and thus more likely to be effective. And, seventh, within contexts where votes are public—as in representative contexts—voting can induce accountability, particularly for one’s claims. We then use these points to discuss four general types of institutions—general elections, legislatures, minipublics, and minipublics embedded in referendum processes—that combine talking and voting, with the aim of identifying designs that do a better or worse job of capitalizing upon the strengths of each…(More)”.
Setting Democratic Ground Rules for AI: Civil Society Strategies
Report by Beth Kerley: “…analyzes priorities, challenges, and promising civil society strategies for advancing democratic approaches to governing artificial intelligence (AI). The report is based on conversations at a private Forum workshop in Buenos Aires, Argentina, that brought together Latin American and global researchers and civil society practitioners.
With recent leaps in the development of AI, we are experiencing a seismic shift in the balance of power between people and governments, posing new challenges to democratic principles such as privacy, transparency, and non-discrimination. We know that AI will shape the political world we inhabit—but how can we ensure that democratic norms and institutions shape the trajectory of AI?
Drawing on global civil society perspectives, this report surveys what stakeholders need to know about AI systems and the human relationships behind them. It delves into the obstacles—from misleading narratives to government opacity to gaps in technical expertise—that hinder democratic engagement on AI governance, and explores how new thinking, new institutions, and new collaborations can better equip societies to set democratic ground rules for AI technologies…(More)”.
Europe wants to get better at planning for the worst
Article by Sarah Anne Aarup: “The European Union is beset by doom and gloom — from wars on its doorstep to inflation and the climate crisis — not to mention political instability in the U.S. and rivalry with China.
All too often, the EU has been overtaken by events, which makes the task of getting better at planning for the worst all the more pressing.
As European leaders fought political fires at their informal summit last week in Granada, unaware that Palestinian militants would launch their devastating raid on Israel a day later, they quietly started a debate on strategic foresight.
At this stage still very much a thought experiment, the concept of “open strategic autonomy” is being championed by host Spain, the current president of the Council of the EU. The idea reflects a shift in priorities to navigate an increasingly uncertain world, and a departure from the green and digital transitions that have dominated the agenda in recent years.
To the uninitiated, the concept of open strategic autonomy sounds like an oxymoron — that’s because it is.
After the hyper-globalized early 2000s, trust in liberalism started to erode. Then the Trump-era trade wars, the COVID-19 pandemic and Russia’s invasion of Ukraine exposed Europe’s economic reliance on powerful nations that are either latent — or overt — strategic rivals.
“The United States and China are becoming more self-reliant, and some voices were saying that this is what we have to do,” an official with the Spanish presidency told POLITICO. “But that’s not a good idea for Europe.”
Instead, open strategic autonomy is about shielding the EU just enough to protect its economic security while remaining an international player. In other words, it means “cooperating multilaterally wherever we can, acting autonomously wherever we must.”
It’s a grudging acceptance that great power politics now dominate economics…
The open strategic autonomy push is meant to counter an inward turn focused narrowly on cutting dependencies, such as the EU’s reliance on Russian energy, after President Vladimir Putin ordered the invasion of Ukraine.
“[We’re] missing a more balanced and forward-looking strategy” following the Versailles Declaration, the Spanish official said, referring to a first response by EU leaders to the Russian attack of February 24, 2022.
Spain delivered its contribution to the debate in the form of a thick paper drafted by its foresight office, in coordination with over 80 ministries across the EU…(More)”.
AI-tocracy
Article by Peter Dizikes: “It’s often believed that authoritarian governments resist technical innovation in a way that ultimately weakens them both politically and economically. But a more complicated story emerges from a new study on how China has embraced AI-driven facial recognition as a tool of repression.
“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI,” says coauthor Martin Beraja, an MIT economist. Not only has use of the technology apparently worked to suppress dissent, but it has spurred software development. The scholars call this mutually reinforcing situation an “AI-tocracy.”
In fact, they found, firms that were granted a government contract for facial-recognition technologies produce about 49% more software products in the two years after gaining the contract than before. “We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.
Adding it all up, the case of China indicates how autocratic governments can potentially find their political power enhanced, rather than upended, when they harness technological advances—and even generate more economic growth than they would have otherwise…(More)”.
Citizens’ Assemblies Are Upgrading Democracy: Fair Algorithms Are Part of the Program
Article by Ariel Procaccia: “…Taken together, these assemblies have demonstrated an impressive capacity to uncover the will of the people and build consensus.
The effectiveness of citizens’ assemblies isn’t surprising. Have you ever noticed how politicians grow a spine the moment they decide not to run for reelection? Well, a citizens’ assembly is a bit like a legislature whose members make a pact barring them from seeking another term in office. The randomly selected members are not beholden to party machinations or outside interests; they are free to speak their mind and vote their conscience.
What’s more, unlike elected bodies, these assemblies are chosen to mirror the population, a property that political theorists refer to as descriptive representation. For example, a typical citizens’ assembly has a roughly equal number of men and women (some also ensure nonbinary participation), whereas the average proportion of seats held by women in national parliaments worldwide was 26 percent in 2021—a marked increase from 12 percent in 1997 but still far from gender balance. Descriptive representation, in turn, lends legitimacy to the assembly: citizens seem to find decisions more acceptable when they are made by people like themselves.
As attractive as descriptive representation is, there are practical obstacles to realizing it while adhering to the principle of random selection. Overcoming these hurdles has been a passion of mine for the past few years. Using tools from mathematics and computer science, my collaborators and I developed an algorithm for the selection of citizens’ assemblies that many practitioners around the world are using. Its story provides a glimpse into the future of democracy—and it begins a long time ago…(More)”.
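The excerpt doesn’t reprint Procaccia’s selection algorithm, but the underlying problem can be sketched with a much simpler baseline (illustrative only, and not the algorithm the article describes): rejection sampling, i.e., drawing random panels until one satisfies every demographic quota. The pool, feature names, and quota numbers below are all hypothetical.

```python
import random

def sample_panel(pool, quotas, k, rng, max_tries=10000):
    """Rejection-sample a k-person panel that meets every quota.

    `pool` maps candidate id -> {feature: value};
    `quotas` maps (feature, value) -> exact number of panel seats.
    (Real assemblies typically use quota ranges, not exact counts.)
    """
    ids = list(pool)
    for _ in range(max_tries):
        panel = rng.sample(ids, k)  # uniform random k-subset of the pool
        counts = {}
        for pid in panel:
            for feat, val in pool[pid].items():
                counts[(feat, val)] = counts.get((feat, val), 0) + 1
        if all(counts.get(fv, 0) == n for fv, n in quotas.items()):
            return panel
    raise RuntimeError("no quota-satisfying panel found")

# Hypothetical pool of 20 volunteers, half coded "F" and half "M".
rng = random.Random(0)
pool = {i: {"gender": "F" if i % 2 else "M"} for i in range(20)}
panel = sample_panel(pool, {("gender", "F"): 5, ("gender", "M"): 5}, 10, rng)
```

The catch, and the motivation for the fairer algorithms the article alludes to, is that quota-constrained selection over a self-selected volunteer pool can leave individual volunteers with wildly unequal chances of serving; the approach Procaccia and collaborators developed instead chooses among quota-satisfying panels so as to make individual selection probabilities as equal as possible.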
How Americans View Data Privacy
Pew Research: “…Americans – particularly Republicans – have grown more concerned about how the government uses their data. The share who say they are worried about government use of people’s data has increased from 64% in 2019 to 71% today. That reflects rising concern among Republicans (from 63% to 77%), while Democrats’ concern has held steady. (Each group includes those who lean toward the respective party.)
The public increasingly says they don’t understand what companies are doing with their data. Some 67% say they understand little to nothing about what companies are doing with their personal data, up from 59%.
Most believe they have little to no control over what companies or the government do with their data. While these shares have ticked down compared with 2019, vast majorities feel this way about data collected by companies (73%) and the government (79%).
We’ve studied Americans’ views on data privacy for years. The topic remains in the national spotlight today, and it’s particularly relevant given the policy debates ranging from regulating AI to protecting kids on social media. But these are far from abstract concepts. They play out in the day-to-day lives of Americans in the passwords they choose, the privacy policies they agree to and the tactics they take – or not – to secure their personal information. We surveyed 5,101 U.S. adults using Pew Research Center’s American Trends Panel to give voice to people’s views and experiences on these topics.
In addition to the key findings covered on this page, the three chapters of this report provide more detail on:
- Views of data privacy risks, personal data and digital privacy laws (Chapter 1). Concerns, feelings and trust, plus children’s online privacy, social media companies and views of law enforcement.
- How Americans protect their online data (Chapter 2). Data breaches and hacks, passwords, cybersecurity and privacy policies.
- A deep dive into online privacy choices (Chapter 3). How knowledge, confidence and concern relate to online privacy choices…(More)”.
What if We Could All Control A.I.?
Kevin Roose at The New York Times: “One of the fiercest debates in Silicon Valley right now is about who should control A.I., and who should make the rules that powerful artificial intelligence systems must follow.
Should A.I. be governed by a handful of companies that try their best to make their systems as safe and harmless as possible? Should regulators and politicians step in and build their own guardrails? Or should A.I. models be made open-source and given away freely, so users and developers can choose their own rules?
A new experiment by Anthropic, the maker of the chatbot Claude, offers a quirky middle path: What if an A.I. company let a group of ordinary citizens write some rules, and trained a chatbot to follow them?
The experiment, known as “Collective Constitutional A.I.,” builds on Anthropic’s earlier work on Constitutional A.I., a way of training large language models that relies on a written set of principles. It is meant to give a chatbot clear instructions for how to handle sensitive requests, what topics are off-limits and how to act in line with human values.
If Collective Constitutional A.I. works — and Anthropic’s researchers believe there are signs that it might — it could inspire other experiments in A.I. governance, and give A.I. companies more ideas for how to invite outsiders to take part in their rule-making processes.
That would be a good thing. Right now, the rules for powerful A.I. systems are set by a tiny group of industry insiders, who decide how their models should behave based on some combination of their personal ethics, commercial incentives and external pressure. There are no checks on that power, and there is no way for ordinary users to weigh in.
Opening up A.I. governance could increase society’s comfort with these tools, and give regulators more confidence that they’re being skillfully steered. It could also prevent some of the problems of the social media boom of the 2010s, when a handful of Silicon Valley titans ended up controlling vast swaths of online speech.
In a nutshell, Constitutional A.I. works by using a written set of rules (a “constitution”) to police the behavior of an A.I. model. The first version of Claude’s constitution borrowed rules from other authoritative documents, including the United Nations’ Universal Declaration of Human Rights and Apple’s terms of service…(More)”.
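The critique-and-revise idea behind Constitutional A.I. can be illustrated schematically. This is a sketch, not Anthropic’s actual pipeline: `query_model` is a stub standing in for a real language-model call, and the two principles are invented placeholders rather than entries from Claude’s constitution.

```python
# Illustrative principles only; a real constitution is far longer.
CONSTITUTION = [
    "Choose the response that is least likely to encourage illegal activity.",
    "Choose the response that most respects privacy and human rights.",
]

def query_model(prompt: str) -> str:
    # Stub: a real system would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = query_model(user_prompt)
    for principle in CONSTITUTION:
        critique = query_model(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        draft = query_model(
            f"Rewrite the response to address the critique:\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft  # revised drafts become training data for fine-tuning

answer = constitutional_revision("Explain how ranked-choice voting works.")
```

In the Collective Constitutional A.I. experiment, what changes is not this loop but where the principle list comes from: instead of being written by company insiders, it is sourced from a deliberating public.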
Evidence-Based Government Is Alive and Well
Article by Zina Hutton: “A desire to discipline the whimsical rule of despots.” That is how Gary Banks, a former chairman of Australia’s Productivity Commission, described the 14th-century origins of evidence-based policy in a 2009 speech. Evidence-based policymaking isn’t a new style of government, but it is one with well-known roadblocks that elected officials have been working around in order to implement it more widely.
Evidence-based policymaking relies on evidence — facts, data, expert analysis — to shape long- and short-term policy decisions. It’s not just about collecting data but about applying it, together with expert analysis, to shape future policy. Whether it’s using school enrollment numbers to justify building a new park in a neighborhood or scientists collaborating on analysis of wastewater to try to “catch” illness spreading in a community before it becomes unmanageable, evidence-based policy uses facts to help elected and appointed officials decide what funds and other resources to allocate in their communities.
Problems with evidence-based governing have been around for years. They range from a lack of communication between the people designing policies and programs and the people implementing them, to the difficulty local governments have recruiting and retaining employees. Resource allocation also shapes the decisions some cities make when it comes to seeking out and using data. This can be seen in the way larger cities — with access to proportionately larger budgets, research from state universities within city limits and a larger workforce — have had more success with evidence-based policymaking.
“The largest cities have more personnel, more expertise, more capacity, whether that’s for collecting administrative data and monitoring it, whether that’s doing open data portals, or dashboards, or whether that’s doing things like policy analysis or program evaluation,” says Karen Mossberger, the Frank and June Sackton Professor in the School of Public Affairs at Arizona State University. “It takes expert personnel, it takes people within government with the skills and the capacity, it takes time.”
Roadblocks aside, state and local governments are finding innovative ways to collaborate with one another on data-focused projects and policy, seeking ways to make up for the problems that impacted early efforts at evidence-based governance. More state and local governments now recruit data experts at every level to collect, analyze and explain the data generated by residents, aided by advances in technology and increased access to researchers…(More)”.
Democratic self-government and the algocratic shortcut: the democratic harms in algorithmic governance of society
Paper by Nardine Alnemr: “Algorithms are used to calculate and govern varying aspects of public life, making efficient use of the vast data available about citizens. On the assumption that algorithms are neutral and efficient at data-based decision making, they are used in areas such as criminal justice and welfare. This has ramifications for the ideal of democratic self-government, as algorithmic decisions are made without democratic deliberation, scrutiny or justification. In the book Democracy without Shortcuts, Cristina Lafont argued against “shortcutting” democratic self-government. Lafont’s critique of shortcuts turns to problematise taken-for-granted practices in democracies that bypass citizen inclusion and equality in authoring decisions governing public life. In this article, I extend Lafont’s argument to another shortcut: the algocratic shortcut. The democratic harms attributable to the algocratic shortcut include diminishing the role of voice in politics and reducing opportunities for civic engagement. In this article, I define the algocratic shortcut and discuss the democratic harms of this shortcut, its relation to other shortcuts to democracy and the limitations of using shortcuts to remedy algocratic harms. Finally, I reflect on remedy through “aspirational deliberation”…(More)”.
When is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis
Paper by Francesca Palmiotto: “This paper addresses the pressing issues surrounding the use of automated systems in public decision-making, with a specific focus on the field of migration, asylum, and mobility. Drawing on empirical research conducted for the AFAR project, the paper examines the potential and limitations of the General Data Protection Regulation and the proposed Artificial Intelligence Act in effectively addressing the challenges posed by automated decision making (ADM). The paper argues that the current legal definitions and categorizations of ADM fail to capture the complexity and diversity of real-life applications, where automated systems assist human decision-makers rather than replace them entirely. This discrepancy between the legal framework and practical implementation highlights the need for a fundamental rights approach to legal protection in the automation age. To bridge the gap between ADM in law and practice, the paper proposes a taxonomy that provides theoretical clarity and enables a comprehensive understanding of ADM in public decision-making. This taxonomy not only enhances our understanding of ADM but also identifies the fundamental rights at stake for individuals and the sector-specific legislation applicable to ADM. The paper finally calls for empirical observations and input from experts in other areas of public law to enrich and refine the proposed taxonomy, thus ensuring clearer conceptual frameworks to safeguard individuals in our increasingly algorithmic society…(More)”.