Five dimensions of scaling democratic deliberation: With and beyond AI


Paper by Sammy McKinney and Claudia Chwalisz: “In the study and practice of deliberative democracy, academics and practitioners are increasingly exploring the role that Artificial Intelligence (AI) can play in scaling democratic deliberation. From claims by leading deliberative democracy scholars that AI can bring deliberation to the ‘mass’, or ‘global’, scale, to cutting-edge innovations from technologists aiming to support scalability in practice, AI’s role in scaling deliberation is capturing the energy and imagination of many leading thinkers and practitioners.

There are many reasons why people may be interested in ‘scaling deliberation’. One is the evidence that deliberation has numerous benefits for those involved – strengthening their individual and collective agency, political efficacy, and trust in one another and in institutions. Another is that the decisions and actions that result are arguably higher-quality and more legitimate. Because these benefits are so great, there is significant interest in how they could be scaled to as many people and decisions as possible.

Another motivation stems from the view that one weakness of small-scale deliberative processes is precisely their limited size. For some, increasing the sheer number of people involved is itself a source of legitimacy. Others argue that increasing the numbers will also improve the quality of the outputs and outcomes.

Finally, deliberative processes that are empowered and/or institutionalised are able to shift political power. Many therefore want to replicate the small-scale model of deliberation in more places, with an emphasis on redistributing power and influencing decision-making.

When we consider how to leverage technology for deliberation, we emphasise that we should not lose sight of the first-order goals of strengthening collective agency. Today there are deep geo-political shifts; in many places, there is a movement towards authoritarian measures, a weakening of civil society, and attacks on basic rights and freedoms. We see the debate about how to ‘scale deliberation’ through this political lens, where our goals are focused on how we can enable a citizenry that is resilient to the forces of autocracy – one that feels and is more powerful and connected, where people feel heard and empathise with others, where citizens have stronger interpersonal and societal trust, and where public decisions have greater legitimacy and better alignment with collective values…(More)”

Fixing the US statistical infrastructure


Article by Nancy Potok and Erica L. Groshen: “Official government statistics are critical infrastructure for the information age. Reliable, relevant statistical information helps businesses to invest and flourish; governments at the local, state, and national levels to make critical decisions on policy and public services; and individuals and families to invest in their futures. Yet surrounded by all manner of digitized data, one can still feel inadequately informed. A major driver of this disconnect in the US context is the delayed modernization of the federal statistical system. The disconnect will likely worsen in coming months as the administration shrinks statistical agencies’ staffing, terminates programs (notably for health and education statistics), and eliminates unpaid external advisory groups. Amid this upheaval, might the administration’s appetite for disruption be harnessed to modernize federal statistics?

Federal statistics, one of the United States’ premier public goods, differ from privately provided data because they are privacy protected, aggregated to address relevant questions for decision-makers, constructed transparently, and widely available without a subscription. The private sector cannot be expected to adequately supply such statistical infrastructure. Yes, some companies collect and aggregate some economic data, such as credit card purchases and payroll information. But without strong underpinnings of a modern, federal information infrastructure, there would be large gaps in nationally consistent, transparent, trustworthy data. Furthermore, most private providers rely on public statistics for their internal analytics, to improve their products. They are among the many data users asking for more from statistical agencies…(More)”.

The Loyalty Trap


Book by Jaime Lee Kucinskas: “…explores how civil servants navigated competing pressures and duties amid the chaos of the Trump administration, drawing on in-depth interviews with senior officials in the most contested agencies over the course of a tumultuous term. Jaime Lee Kucinskas argues that the professional culture and ethical obligations of the civil service stabilize the state in normal times but insufficiently prepare bureaucrats to cope with a president like Trump. Instead, federal employees became ensnared in intractable ethical traps, caught between their commitment to nonpartisan public service and the expectation of compliance with political directives. Kucinskas shares their quandaries, recounting attempts to preserve the integrity of government agencies, covert resistance, and a few bold acts of moral courage in the face of organizational decline and politicized leadership. A nuanced sociological account of the lessons of the Trump administration for democratic governance, The Loyalty Trap offers a timely and bracing portrait of the fragility of the American state…(More)”.

Manipulation: What It Is, Why It’s Bad, What to Do About It


Book by Cass Sunstein: “New technologies are offering companies, politicians, and others unprecedented opportunity to manipulate us. Sometimes we are given the illusion of power – of freedom – through choice, yet the game is rigged, pushing us in specific directions that lead to less wealth, worse health, and weaker democracy. In Manipulation, nudge theory pioneer and New York Times bestselling author Cass Sunstein offers a new definition of manipulation for the digital age, explains why it is wrong, and shows what we can do about it. He reveals how manipulation compromises freedom and personal agency, while threatening to reduce our well-being; he explains the difference between manipulation and unobjectionable forms of influence, including ‘nudges’; and he lifts the lid on online manipulation and manipulation by artificial intelligence, algorithms, and generative AI, as well as threats posed by deepfakes, social media, and ‘dark patterns,’ which can trick people into giving up time and money. Drawing on decades of groundbreaking research in behavioral science, this landmark book outlines steps we can take to counteract manipulation in our daily lives and offers guidance to protect consumers, investors, and workers…(More)”.

Blueprint on Prosocial Tech Design Governance


Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.

The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.

Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices—such as algorithms and interfaces—impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.

Spaces for democracy with generative artificial intelligence: public architecture at stake


Paper by Ingrid Campo-Ruiz: “Urban space is an important infrastructure for democracy and fosters democratic engagement, such as meetings, discussions, and protests. Artificial Intelligence (AI) systems could affect democracy through urban space, for example, by breaching data privacy, hindering political equality and engagement, or manipulating information about places. This research explores the urban places that promote democratic engagement according to the outputs generated with ChatGPT-4o. It moves beyond the dominant framing of AI and democracy as a question of misinformation and fake news. Instead, it provides an innovative framework, combining architectural space as an infrastructure for democracy with the way in which generative AI tools provide a nuanced view of democracy that could potentially influence millions of people. This article presents a new conceptual framework for understanding AI for democracy from the perspective of architecture. For the first case study, in Stockholm, Sweden, AI outputs were combined with GIS maps and a theoretical framework. The research then analyzes the results obtained for Madrid, Spain, and Brussels, Belgium. This analysis provides deeper insights into the outputs obtained with AI, the places that facilitate democratic engagement and those that are overlooked, and the ensuing consequences.

Results show that the urban space for democratic engagement obtained with ChatGPT-4o for Stockholm is mainly composed of governmental institutions and non-governmental organizations for representative or deliberative democracy and the education of individuals, located in public buildings in the city centre. The outputs barely reflect public open spaces, parks, or routes. They also prioritize organized over spontaneous engagement and do not reflect unstructured events, such as demonstrations, or powerful actors, such as political parties and workers’ unions. The places listed by ChatGPT-4o for Madrid and Brussels give major prominence to private spaces, such as offices that house organizations with political activities. While cities offer a broad and complex array of places for democratic engagement, outputs obtained with AI can narrow users’ perspectives on their real opportunities, while entrenching powerful agents by not making them sufficiently visible to be held accountable for their actions.

In conclusion, urban space is a fundamental infrastructure for democracy, and AI outputs could be a valid starting point for understanding this plethora of interactions. These outputs should be complemented with other forms of knowledge to produce a more comprehensive framework, better adjusted to reality, for developing AI in a democratic context. Urban space should be protected as a shared space and as an asset for societies to fully develop democracy in its multiple forms. Democracy and urban spaces influence each other and are subject to pressures from different actors, including AI. AI systems should, therefore, be monitored to enhance democratic values through urban space…(More)”.
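
The workflow the paper describes, eliciting place lists from ChatGPT-4o and overlaying them on GIS layers, can be illustrated in miniature. The sketch below is a rough approximation under stated assumptions, not the author's actual method: the prompt wording, the model name, and the use of the openai, geopy, and geopandas libraries are all choices made here for illustration.

```python
# Illustrative sketch only; not the author's actual pipeline. Assumes the
# `openai`, `geopy`, and `geopandas` packages and an OPENAI_API_KEY variable.
import json

import geopandas as gpd
from geopy.geocoders import Nominatim
from openai import OpenAI

client = OpenAI()

# 1. Ask the model which places it associates with democratic engagement.
prompt = (
    "List ten places in Stockholm that promote democratic engagement. "
    'Respond with a JSON object: {"places": [{"name": ..., "type": ...}]}'
)
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
)
places = json.loads(reply.choices[0].message.content)["places"]

# 2. Geocode the named places so they can be overlaid on GIS layers.
geocoder = Nominatim(user_agent="deliberation-mapping-sketch")
records = []
for place in places:
    hit = geocoder.geocode(f"{place['name']}, Stockholm, Sweden")
    if hit:
        records.append({**place, "lon": hit.longitude, "lat": hit.latitude})

# 3. Build a point layer; a GIS analysis would join it against layers of
#    parks, squares, and public buildings to see what the model leaves out.
gdf = gpd.GeoDataFrame(
    records,
    geometry=gpd.points_from_xy([r["lon"] for r in records],
                                [r["lat"] for r in records]),
    crs="EPSG:4326",
)
print(gdf[["name", "type", "geometry"]])
```

Joining such a point layer against official layers of parks, squares, and civic buildings is what makes visible the kinds of omissions the study reports, such as the near-absence of public open spaces.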

AI-Ready Federal Statistical Data: An Extension of Communicating Data Quality


Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess the fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine-understandable and able to support models that measure data quality and provide contextual information.
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework, including, but not limited to, accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
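
The article's call for machine-understandable statistics can be made concrete with structured metadata published alongside each dataset. The sketch below is a hypothetical illustration rather than an FCSM recommendation: it emits a schema.org-style JSON-LD record in which quality dimensions are stated explicitly, so that retrieval pipelines feeding generative AI systems can carry provenance and fitness-for-use information with the data. The field names, the dataQuality block, and the example URL are assumptions.

```python
# A minimal, hypothetical sketch (not an FCSM specification) of publishing
# machine-readable metadata that carries data-quality statements alongside a
# dataset, so AI retrieval pipelines can surface provenance and fitness for use.
# All field names, values, and the URL below are illustrative assumptions.
import json

dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example monthly labor force estimates",
    "publisher": {"@type": "Organization", "name": "Example Statistical Agency"},
    "dateModified": "2025-06-01",
    "distribution": [
        {
            "@type": "DataDownload",
            "encodingFormat": "text/csv",
            "contentUrl": "https://example.gov/api/labor-force.csv",
        }
    ],
    # Hypothetical extension block mapping Framework-style quality dimensions
    # to explicit, machine-readable statements.
    "dataQuality": {
        "accuracy": "Estimates are subject to sampling error; see methodology notes.",
        "timeliness": "Released approximately three weeks after the reference month.",
        "accessibility": "Available via a public API without subscription.",
        "credibility": "Produced under the agency's published statistical standards.",
    },
}

# Serialize as JSON-LD so crawlers and LLM retrieval pipelines can ingest it.
print(json.dumps(dataset_metadata, indent=2))
```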

Making Civic Trust Less Abstract: A Framework for Measuring Trust Within Cities


Report by Stefaan Verhulst, Andrew J. Zahuranec, and Oscar Romero: “Trust is foundational to effective governance, yet its inherently abstract nature has made it difficult to measure and operationalize, especially in urban contexts. This report proposes a practical framework for city officials to diagnose and strengthen civic trust through observable indicators and actionable interventions.

Rather than attempting to quantify trust as an abstract concept, the framework distinguishes between the drivers of trust—direct experiences and institutional interventions—and its manifestations, both emotional and behavioral. Drawing on literature reviews, expert workshops, and field engagement with the New York City Civic Engagement Commission (CEC), we present a three-phase approach: (1) baseline assessment of trust indicators, (2) analysis of causal drivers, and (3) design and continuous evaluation of targeted interventions. The report illustrates the framework’s applicability through a hypothetical case involving the NYC Parks Department and a real-world case study of the citywide participatory budgeting initiative, The People’s Money. By providing a structured, context-sensitive, and iterative model for measuring civic trust, this report seeks to equip public institutions and city officials with a framework for meaningful measurement of civic trust…(More)”.

Hamburg Declaration on Responsible AI


Declaration by the United Nations Development Programme (UNDP), in partnership with the German Federal Ministry for Economic Cooperation and Development (BMZ): “We are at a crossroads. Despite the progress made in recent years, we need renewed commitment and engagement to advance toward and achieve the Sustainable Development Goals (SDGs). Digital technologies, such as Artificial Intelligence (AI), can play a significant role in this regard. AI presents opportunities and risks in a world of rapid social, political, economic, ecological, and technological shifts. If developed and deployed responsibly, AI can drive sustainable development and benefit society, the economy, and the planet. Yet, without safeguards throughout the AI value chain, it may widen inequalities within and between countries and contribute to direct harm through inappropriate, illegal, or deliberate misuse. It can also contribute to human rights violations, fuel disinformation, homogenize creative and cultural expression, and harm the environment. These risks are likely to disproportionately affect low-income countries, vulnerable groups, and future generations. Geopolitical competition and market dependencies further amplify these risks…(More)”.

Silicon Valley Is at an Inflection Point


Article by Karen Hao: “…In the decade that I have observed Silicon Valley — first as an engineer, then as a journalist — I’ve watched the industry shift to a new paradigm. Tech companies have long reaped the benefits of a friendly U.S. government, but the Trump administration has made clear that it will now grant new firepower to the industry’s ambitions. The Stargate announcement was just one signal. Another was the Republican tax bill that the House passed last week, which would prohibit states from regulating A.I. for the next 10 years.

The leading A.I. giants are no longer merely multinational corporations; they are growing into modern-day empires. With the full support of the federal government, soon they will be able to reshape most spheres of society as they please, from the political to the economic to the production of science…(More)”.