Five dimensions of scaling democratic deliberation: With and beyond AI


Paper by Sammy McKinney and Claudia Chwalisz: “In the study and practice of deliberative democracy, academics and practitioners are increasingly exploring the role that Artificial Intelligence (AI) can play in scaling democratic deliberation. From claims by leading deliberative democracy scholars that AI can bring deliberation to the ‘mass’, or ‘global’, scale, to cutting-edge innovations from technologists aiming to support scalability in practice, AI’s role in scaling deliberation is capturing the energy and imagination of many leading thinkers and practitioners.

There are many reasons why people may be interested in ‘scaling deliberation’. One is the evidence that deliberation has numerous benefits for the people involved – strengthening their individual and collective agency, political efficacy, and trust in one another and in institutions. Another is that the resulting decisions and actions are arguably higher-quality and more legitimate. Because these benefits are so great, there is significant interest in how they could be scaled to as many people and decisions as possible.

Another motivation stems from the view that one weakness of small-scale deliberative processes is precisely their limited size. For some, increasing the sheer numbers involved is perceived as a source of legitimacy; others argue that larger numbers will also improve the quality of the outputs and outcomes.

Finally, deliberative processes that are empowered and/or institutionalised are able to shift political power. Many therefore want to replicate the small-scale model of deliberation in more places, with an emphasis on redistributing power and influencing decision-making.

When we consider how to leverage technology for deliberation, we emphasise that we should not lose sight of the first-order goal of strengthening collective agency. Today there are deep geopolitical shifts; in many places, there is a movement towards authoritarian measures, a weakening of civil society, and attacks on basic rights and freedoms. We see the debate about how to ‘scale deliberation’ through this political lens: our goal is to enable a citizenry that is resilient to the forces of autocracy – one that feels and is more powerful and connected, where people feel heard and empathise with others, where citizens have stronger interpersonal and societal trust, and where public decisions have greater legitimacy and better alignment with collective values…(More)”

Fixing the US statistical infrastructure


Article by Nancy Potok and Erica L. Groshen: “Official government statistics are critical infrastructure for the information age. Reliable, relevant statistical information helps businesses to invest and flourish; governments at the local, state, and national levels to make critical decisions on policy and public services; and individuals and families to invest in their futures. Yet surrounded by all manner of digitized data, one can still feel inadequately informed. A major driver of this disconnect in the US context is delayed modernization of the federal statistical system. The disconnect will likely worsen in coming months as the administration shrinks statistical agencies’ staffing, terminates programs (notably for health and education statistics), and eliminates unpaid external advisory groups. Amid this upheaval, might the administration’s appetite for disruption be harnessed to modernize federal statistics?

Federal statistics, one of the United States’ premier public goods, differ from privately provided data because they are privacy protected, aggregated to address relevant questions for decision-makers, constructed transparently, and widely available without a subscription. The private sector cannot be expected to adequately supply such statistical infrastructure. Yes, some companies collect and aggregate some economic data, such as credit card purchases and payroll information. But without strong underpinnings of a modern, federal information infrastructure, there would be large gaps in nationally consistent, transparent, trustworthy data. Furthermore, most private providers rely on public statistics for their internal analytics, to improve their products. They are among the many data users asking for more from statistical agencies…(More)”.

The Loyalty Trap


Book by Jaime Lee Kucinskas: “…explores how civil servants navigated competing pressures and duties amid the chaos of the Trump administration, drawing on in-depth interviews with senior officials in the most contested agencies over the course of a tumultuous term. Jaime Lee Kucinskas argues that the professional culture and ethical obligations of the civil service stabilize the state in normal times but insufficiently prepare bureaucrats to cope with a president like Trump. Instead, federal employees became ensnared in intractable ethical traps, caught between their commitment to nonpartisan public service and the expectation of compliance with political directives. Kucinskas shares their quandaries, recounting attempts to preserve the integrity of government agencies, covert resistance, and a few bold acts of moral courage in the face of organizational decline and politicized leadership. A nuanced sociological account of the lessons of the Trump administration for democratic governance, The Loyalty Trap offers a timely and bracing portrait of the fragility of the American state…(More)”.

5 Ways AI is Boosting Citizen Engagement in Africa’s Democracies


Article by Peter Agbesi Adivor: “Artificial Intelligence (AI) is increasingly influencing democratic participation across Africa. From campaigning to voter education, AI is transforming electoral processes across the continent. While concerns about misinformation and government overreach persist, AI also offers promising avenues to enhance citizen engagement. This article explores five key ways AI is fostering more inclusive and participatory democracies in Africa.

1. AI-Powered Voter Education and Campaigns

AI-driven platforms are revolutionizing voter education by providing accessible, real-time information. These platforms deliver standardized electoral information to citizens’ digital devices regardless of geographical location, significantly reducing costs for political actors and for the state and non-state actors focused on voter education. They also let those who can navigate these tools access the information they need on their own, allowing authorities to concentrate limited resources on citizens on the other side of the digital divide.

In Nigeria, ChatVE developed CitiBot, an AI-powered chatbot deployed during the 2024 Edo State elections to educate citizens on their civic rights and responsibilities via WhatsApp and Telegram. The bot offered information on voting procedures, eligibility, and the importance of participation.

Similarly, in South Africa, the Rivonia Circle introduced Thoko the Bot, an AI chatbot designed to answer voters’ questions about the electoral process, including where and how to vote, and the significance of participating in elections.

These AI tools enhance voter understanding and engagement by providing personalized, easily accessible information, thereby encouraging greater participation in democratic processes…(More)”.
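The pattern behind bots like CitiBot and Thoko the Bot can be illustrated with a minimal sketch: match an incoming voter question against a curated FAQ and reply over a messaging channel. Everything here (the FAQ entries, the send() hook, the keyword matching) is a hypothetical stand-in; real deployments plug into the WhatsApp or Telegram bot APIs and a vetted electoral knowledge base.

```python
import re

# Hypothetical FAQ: keyword tuples mapped to vetted answers.
FAQ = {
    ("register", "registration", "eligibility"): "To check your eligibility and register, visit your local electoral office or the commission's website.",
    ("where", "polling", "station", "vote"): "Your polling station is listed on your voter card; you can also look it up with your voter ID.",
    ("when", "date", "day"): "Election dates are announced by the electoral commission; check its official channels.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose keywords overlap most with the question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best = max(FAQ, key=lambda keys: len(words & set(keys)))
    return FAQ[best] if words & set(best) else "Please contact the electoral commission directly."

def send(user_id: str, text: str) -> None:
    """Placeholder for a messaging-platform call (e.g., a WhatsApp or Telegram bot API)."""
    print(f"-> {user_id}: {text}")

send("voter-123", answer("Where is my polling station?"))
```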

Blueprint on Prosocial Tech Design Governance


Blueprint by Lisa Schirch: “… lays out actionable recommendations for governments, civil society, researchers, and industry to design digital platforms that reduce harm and increase benefit to society.

The Blueprint on Prosocial Tech Design Governance responds to the crisis in the scale and impact of digital platform harms. Digital platforms are fueling a systemic crisis by amplifying misinformation, harming mental health, eroding privacy, promoting polarization, exploiting children, and concentrating unaccountable power through manipulative design.

Prosocial tech design governance is a framework for regulating digital platforms based on how their design choices—such as algorithms and interfaces—impact society. It shifts focus “upstream” to address the root causes of digital harms and the structural incentives influencing platform design…(More)”.

5 Ways AI Supports City Adaptation to Extreme Heat


Article by Urban AI: “Cities stand at the frontline of climate change, confronting some of its most immediate and intense consequences. Among these, extreme heat has emerged as one of the most pressing and rapidly escalating threats. As we enter June 2025, Europe is already experiencing its first major and long-lasting heatwave of the summer season with temperatures surpassing 40°C in parts of Spain, France, and Portugal — and projections indicate that this extreme event could persist well into mid-June.

This climate event is not an isolated incident. By 2050, the number of cities exposed to dangerous levels of heat is expected to triple, with peak temperatures of 48°C (118°F) potentially becoming the new normal in some regions. Such intensifying conditions place unprecedented stress on urban infrastructure, public health systems, and the overall livability of cities — especially for vulnerable communities.

In this context, Artificial Intelligence (AI) is emerging as a vital tool in the urban climate adaptation toolbox. Urban AI — defined as the application of AI technologies to urban systems and decision-making — can help cities anticipate, manage, and mitigate the effects of extreme heat in more targeted and effective ways.

Cooling the Metro with AI-Driven Ventilation in Barcelona

With over 130 stations and a century-old metro network, the city of Barcelona faces increasing pressure to ensure passenger comfort and safety — especially underground, where heat and air quality are harder to manage. In response, Transports Metropolitans de Barcelona (TMB), in partnership with SENER Engineering, developed and implemented the RESPIRA® system, an AI-powered ventilation control platform. First introduced in 2020 on Line 1, RESPIRA® demonstrated its effectiveness by lowering ambient temperatures, improving air circulation during the COVID-19 pandemic, and achieving a notable 25.1% reduction in energy consumption along with a 10.7% increase in passenger satisfaction…(More)”
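As a purely illustrative sketch (not TMB or SENER’s actual RESPIRA® control logic), the idea behind AI-driven ventilation can be reduced to: predict platform temperature under each candidate fan speed with a model trained on station sensor data, then run the slowest fans that still meet a comfort target. The linear model and all coefficients below are invented for illustration.

```python
# Illustrative only: predict platform temperature at each candidate fan
# speed, then run the slowest (least energy-hungry) fans that still meet
# the comfort target.
def predict_temp_c(outside_c: float, occupancy: int, fan_speed: float) -> float:
    """Stand-in for a model trained on the station's sensor history."""
    return outside_c + 0.002 * occupancy - 8.0 * fan_speed

def choose_fan_speed(outside_c: float, occupancy: int, target_c: float = 27.0) -> float:
    """Pick the lowest fan speed in [0, 1] whose predicted temperature meets the target."""
    for speed in (0.0, 0.25, 0.5, 0.75, 1.0):  # lower speed = lower energy use
        if predict_temp_c(outside_c, occupancy, speed) <= target_c:
            return speed
    return 1.0  # extreme day: full ventilation (and possibly alert operators)

print(choose_fan_speed(outside_c=31.0, occupancy=800))  # -> 0.75 in this toy model
```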

Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations


Report by the National Academies of Sciences, Engineering, and Medicine: “Planning and operating transportation systems involves the exchange of large volumes of data that must be shared between partnering transportation agencies, private-sector interests, travelers, and intelligent devices such as traffic signals, ramp meters, and connected vehicles.

NCHRP Research Report 1121: Data Integration, Sharing, and Management for Transportation Planning and Traffic Operations, from TRB’s National Cooperative Highway Research Program, presents tools, methods, and guidelines for improving data integration, sharing, and management practices through case studies, proof-of-concept product developments, and deployment assistance…(More)”.

Can AI Agents Be Trusted?


Article by Blair Levin and Larry Downes: “Agentic AI has quickly become one of the most active areas of artificial intelligence development. AI agents are a level of programming on top of large language models (LLMs) that allow them to work towards specific goals. This extra layer of software can collect data, make decisions, take action, and adapt its behavior based on results. Agents can interact with other systems, apply reasoning, and work according to priorities and rules set by you as the principal.
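A minimal sketch of that extra layer, following the loop the authors describe (collect data, decide, act, adapt): an LLM call proposes the next action given the goal, the principal’s rules, and results so far. The llm() stub and the tools here are placeholders, not any vendor’s actual agent API.

```python
# Minimal sketch of an agent loop on top of an LLM. llm() and the tools are
# placeholder stand-ins; a real agent would call a model API and real services.
from typing import Callable

def llm(prompt: str) -> str:
    """Stand-in for a large language model that proposes the next action.
    A real model would eventually answer 'done' or ask to escalate."""
    return "search: cheapest NYC-Boston train ticket"

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(search results for '{query}')",
    "book": lambda item: f"(booked '{item}')",
}

def run_agent(goal: str, rules: list[str], max_steps: int = 3) -> list[str]:
    """Work toward `goal` within the principal's `rules`, adapting each step."""
    history: list[str] = []
    for _ in range(max_steps):
        # Decide: ask the model for the next action, conditioned on results so far.
        decision = llm(f"Goal: {goal}\nRules: {rules}\nResults so far: {history}")
        action, _, arg = decision.partition(": ")
        if action not in TOOLS:  # model signals completion or needs a human
            break
        # Act, then adapt: the observation feeds the next decision.
        history.append(TOOLS[action](arg))
    return history

print(run_agent("buy the cheapest NYC-Boston train ticket",
                rules=["spend under $80", "confirm with me before any purchase"]))
```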

Companies such as Salesforce, for example, have already deployed agents that can independently handle customer queries in a wide range of industries and applications and recognize when human intervention is required.

But perhaps the most exciting future for agentic AI will come in the form of personal agents, which can take self-directed action on your behalf. These agents will act as your personal assistant: handling calendar management; performing directed research and analysis; finding, negotiating for, and purchasing goods and services; curating content and taking over basic communications; and learning and optimizing themselves along the way.

The idea of personal AI agents goes back decades, but the technology finally appears ready for prime time. Already, leading companies are offering prototype personal AI agents to their customers, suppliers, and other stakeholders, raising challenging business and technical questions. Most pointedly: Can AI agents be trusted to act in our best interests? Will they work exclusively for us, or will their loyalty be split between users, developers, advertisers, and service providers? And how will we know?

The answers to these questions will determine whether and how quickly users embrace personal AI agents, and if their widespread deployment will enhance or damage business relationships and brand value…(More)”.

AI-Ready Federal Statistical Data: An Extension of Communicating Data Quality


Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine-understandable and able to support models that measure data quality and provide contextual information.
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework, including, but not limited to, accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
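What “machine understandable” could look like in practice: a statistical series published with structured metadata that exposes the Framework’s quality dimensions directly, so an AI system can assess fitness for use without parsing documentation. The record below is a hypothetical example loosely following DCAT-style conventions; the actual profile and field names would come from the proposed federal effort, not from this sketch.

```python
# Hypothetical machine-readable record for a federal statistical series.
# Field choices loosely follow DCAT-style conventions; the URL is a placeholder.
import json

dataset_record = {
    "@type": "dcat:Dataset",
    "title": "Monthly Unemployment Rate, Seasonally Adjusted",
    "description": "Example entry for a federal statistical series.",
    "publisher": "Hypothetical Statistical Agency",
    "temporalCoverage": "2015-01/2025-05",
    "accrualPeriodicity": "monthly",
    # Data-quality dimensions from the FCSM Framework, exposed as structured
    # fields rather than buried in PDF documentation:
    "quality": {
        "accuracy": "Standard error published per estimate",
        "timeliness": "Released ~3 weeks after reference month",
        "accessibility": "CSV and JSON via public API, no subscription",
        "credibility": "Produced under agency statistical standards",
    },
    "distribution": [{
        "format": "application/json",
        "accessURL": "https://api.example.gov/series/unemployment",  # placeholder
    }],
}

print(json.dumps(dataset_record, indent=2))
```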

Children’s Voice Privacy: First Steps And Emerging Challenges


Paper by Ajinkya Kulkarni et al.: “Children are one of the most under-represented groups in speech technologies, as well as one of the most vulnerable in terms of privacy. Despite this, anonymization techniques targeting this population have received little attention. In this study, we seek to bridge this gap, and establish a baseline for the use of voice anonymization techniques designed for adult speech when applied to children’s voices. Such an evaluation is essential, as children’s speech presents a distinct set of challenges when compared to that of adults. This study comprises three children’s datasets, six anonymization methods, and objective and subjective utility metrics for evaluation. Our results show that existing systems for adults are still able to protect children’s voice privacy, but suffer from much higher utility degradation. In addition, our subjective study reveals the challenges of automatic evaluation methods for speech quality in children’s speech, highlighting the need for further research…(More)”. See also: Responsible Data for Children.
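To make the evaluation concrete: voice-anonymization studies of this kind typically pair a privacy metric (how often a speaker-verification attacker can still link anonymized voices, often summarized as an equal error rate) with a utility metric (how usable the anonymized speech remains, e.g., word error rate for speech recognition). The sketch below computes both on toy data; it is a generic illustration of these metrics, not the paper’s exact protocol.

```python
# Toy illustration of the privacy/utility trade-off in voice anonymization.
# Scores and transcripts are synthetic; a real study would use verifier
# scores on anonymized trials and ASR output on anonymized audio.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """Privacy metric: EER of a speaker verifier attacking anonymized speech.
    Higher EER means the attacker can no longer link voices (better privacy)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    i = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[i] + fnr[i]) / 2)

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Utility metric: edit-distance WER of ASR on anonymized speech (lower is better)."""
    r, h = reference.split(), hypothesis.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(len(r) + 1), np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, sub)
    return d[len(r), len(h)] / max(len(r), 1)

# 1 = same-speaker trial; scores from a hypothetical verifier on anonymized audio.
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.6, 0.35, 0.4, 0.3, 0.2])
print(f"EER: {equal_error_rate(labels, scores):.2f}")
print(f"WER: {word_error_rate('the quick brown fox', 'the quick brown box'):.2f}")
```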