Supporting Scientific Citizens


Article by Lisa Margonelli: “What do nuclear fusion power plants, artificial intelligence, hydrogen infrastructure, and drinking water recycled from human waste have in common? Aside from being featured in this edition of Issues, they all require intense public engagement to choose among technological tradeoffs, safety profiles, and economic configurations. Reaching these understandings requires researchers, engineers, and decisionmakers who are adept at working with the public. It also requires citizens who want to engage with such questions and can articulate what they want from science and technology.

This issue offers a glimpse into what these future collaborations might look like. To train engineers with the “deep appreciation of the social, cultural, and ethical priorities and implications of the technological solutions engineers are tasked with designing and deploying,” University of Michigan nuclear engineer Aditi Verma and coauthors Katie Snyder and Shanna Daly asked their first-year engineering students to codesign nuclear power plants in collaboration with local community members. Although traditional nuclear engineering classes avoid “getting messy,” Verma and colleagues wanted students to engage honestly with the uncertainties of the profession. In the process of working with communities, the students’ vocabulary changed; they spoke of trust, respect, and “love” for community—even when considering deep geological waste repositories…(More)”.

Citizens should be asked to do more


Article by Martin Wolf: “In an excellent “Citizens’ White Paper”, in partnership with participation charity Involve, Demos describes the needed revolution as follows, “We don’t just need new policies for these challenging times. We need new ways to tackle the policy challenges we face — from national missions to everyday policymaking. We need new ways to understand and negotiate what the public will tolerate. We need new ways to build back trust in politicians”. In sum, it states, “if government wants to be trusted by the people, it must itself start to trust the people.”

[Bar chart: agreement that the public should be involved in decision making on these issues (%), showing the public has clear ideas about where it should be most involved]

The fundamental aim is to change the perception of government from something that politicians and bureaucrats do to us into an activity that involves not everyone, which is impossible, but ordinary people selected by lot. This, as I have noted, would be the principle of the jury imported into public life.

How might this work? The idea is to bring representative groups of ordinary people affected by policies into official discussions of problems and solutions. This could be at the level of central, devolved or local government. The participants would not just be asked for opinions, but be actively engaged in considering issues and shaping (though not making) decisions upon them. The paper details a number of different approaches — panels, assemblies, juries, workshops and wider community conversations. Which would be appropriate would depend on the task…(More)”.

The Risks of Empowering “Citizen Data Scientists”


Article by Reid Blackman and Tamara Sipes: “Until recently, the prevailing understanding of artificial intelligence (AI) and its subset machine learning (ML) was that expert data scientists and AI engineers were the only people who could push AI strategy and implementation forward. That was a reasonable view. After all, data science generally, and AI in particular, is a technical field that demands, among other things, expertise gained through many years of education and training.

Fast forward to today, however, and the conventional wisdom is rapidly changing. The advent of “auto-ML” — software that provides methods and processes for creating machine learning code — has led to calls to “democratize” data science and AI. The idea is that these tools enable organizations to invite and leverage non-data scientists — say, domain data experts, team members very familiar with the business processes, or heads of various business units — to propel their AI efforts.
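At its core, the model-selection loop that auto-ML tools automate can be sketched in a few lines. The snippet below is an illustrative toy only — the function names and the two candidate model families (`fit_constant`, `fit_linear`) are hypothetical stand-ins, and real auto-ML platforms search far larger spaces of models and hyperparameters:

```python
def fit_constant(xs, ys):
    """Baseline: always predict the mean of the training targets."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_linear(xs, ys):
    """Least-squares line y = a*x + b via the closed-form formulas."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def auto_select(xs, ys, candidates):
    """Fit each candidate model family on a training split and keep the
    one with the lowest mean squared error on a held-out validation split."""
    split = int(0.7 * len(xs))
    train_x, val_x = xs[:split], xs[split:]
    train_y, val_y = ys[:split], ys[split:]
    best_name, best_model, best_err = None, None, float("inf")
    for name, fit in candidates.items():
        model = fit(train_x, train_y)
        err = sum((model(x) - y) ** 2 for x, y in zip(val_x, val_y)) / len(val_x)
        if err < best_err:
            best_name, best_model, best_err = name, model, err
    return best_name, best_model

# A domain expert supplies data and candidate families; the loop picks.
xs = list(range(20))
ys = [2 * x + 1 for x in xs]  # perfectly linear data
name, model = auto_select(xs, ys, {"constant": fit_constant, "linear": fit_linear})
```

The point of the sketch is the division of labor the article describes: the domain expert brings the data and the problem framing, while the automated search handles the technical step of choosing among models.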

In theory, making data science and AI more accessible to non-data scientists (including technologists who are not data scientists) can make a lot of business sense. Centralized and siloed data science units can fail to appreciate the vast array of data the organization has and the business problems that it can solve, particularly in multinational organizations with hundreds or thousands of business units distributed across several continents. Moreover, those in the weeds of business units know the data they have, the problems they’re trying to solve, and can, with training, see how that data can be leveraged to solve those problems. The opportunities are significant.

In short, with great business insight, augmented with auto-ML, can come great analytic responsibility. At the same time, we cannot forget that data science and AI are, in fact, very difficult, and there’s a very long journey from having data to solving a problem. In this article, we’ll lay out the pros and cons of integrating citizen data scientists into your AI strategy and suggest methods for optimizing success and minimizing risks…(More)”.

Governance of deliberative mini-publics: emerging consensus and divergent views


Paper by Lucy J. Parry, Nicole Curato, and John S. Dryzek: “Deliberative mini-publics are forums for citizen deliberation composed of randomly selected citizens convened to yield policy recommendations. These forums have proliferated in recent years but there are no generally accepted standards to govern their practice. Should there be? We answer this question by bringing the scholarly literature on citizen deliberation into dialogue with the lived experience of the people who study, design and implement mini-publics. We use Q methodology to locate five distinct perspectives on the integrity of mini-publics, and map the structure of agreement and dispute across them. We find that, across the five viewpoints, there is emerging consensus as well as divergence on integrity issues, with disagreement over what might be gained or lost by adopting common standards of practice, and possible sources of integrity risks. This article provides an empirical foundation for further discussion on integrity standards in the future…(More)”.

Integrating Artificial Intelligence into Citizens’ Assemblies: Benefits, Concerns and Future Pathways


Paper by Sammy McKinney: “Interest in how Artificial Intelligence (AI) could be used within citizens’ assemblies (CAs) is emerging amongst scholars and practitioners alike. In this paper, I make four contributions at the intersection of these burgeoning fields. First, I propose an analytical framework to guide evaluations of the benefits and limitations of AI applications in CAs. Second, I map out eleven ways that AI, especially large language models (LLMs), could be used across a CA’s full lifecycle. This introduces novel ideas for AI integration into the literature and synthesises existing proposals to provide the most detailed analytical breakdown of AI applications in CAs to date. Third, drawing on relevant literature, four key informant interviews, and the Global Assembly on the Ecological and Climate crisis as a case study, I apply my analytical framework to assess the desirability of each application. This provides insight into how AI could be deployed to address existing challenges facing CAs today as well as the concerns that arise with AI integration. Fourth, bringing my analyses together, I argue that AI integration into CAs brings the potential to enhance their democratic quality and institutional capacity, but realising this requires the deliberative community to proceed cautiously, effectively navigate challenging trade-offs, and mitigate important concerns that arise with AI integration. Ultimately, this paper provides a foundation that can guide future research concerning AI integration into CAs and other forms of democratic innovation…(More)”.

Drivers of Trust in Public Institutions


Press Release: “In an increasingly challenging environment – marked by successive economic shocks, rising protectionism, the war in Europe and ongoing conflicts in the Middle East, as well as structural challenges and disruptions caused by rapid technological developments, climate change and population aging – 44% of respondents now have low or no trust in their national government, surpassing the 39% of respondents who express high or moderately high trust in national government, according to a new OECD report.  

The report, OECD Survey on Drivers of Trust in Public Institutions – 2024 Results, presents findings from the second OECD Trust Survey, conducted in October and November 2023 across 30 Member countries. The biennial report offers a comprehensive analysis of current trust levels and their drivers across countries and public institutions. 

This edition of the Trust Survey confirms the previous finding that socio-economic and demographic factors, as well as a sense of having a say in decision making, affect trust. For example, 36% of women reported high or moderately high trust in government, compared to 43% of men. The most significant drop in trust since 2021 is seen among women and those with lower levels of education. The trust gap is largest between those who feel they have a say and those who feel they do not have a say in what the government does. Among those who report they have a say, 69% report high or moderately high trust in their national government, whereas among those who feel they do not, only 22% do…(More)”.

Big Tech-driven deliberative projects


Report by Canning Malkin and Nardine Alnemr: “Google, Meta, OpenAI and Anthropic have commissioned projects based on deliberative democracy. What was the purpose of each project? How was deliberation designed and implemented, and what were the outcomes? In this Technical Paper, Malkin and Alnemr describe the commissioning context, the purpose and remit, and the outcomes of these deliberative projects. Finally, they offer insights on contextualising projects within the broader aspirations of deliberative democracy…(More)”.

Bringing Communities In, Achieving AI for All


Article by Shobita Parthasarathy and Jared Katzman: “…To this end, public and philanthropic research funders, universities, and the tech industry should be seeking out partnerships with struggling communities, to learn what they need from AI and build it. Regulators, too, should have their ears to the ground, not just the C-suite. Typical members of a marginalized community—or, indeed, any nonexpert community—may not know the technical details of AI, but they understand better than anyone else the power imbalances at the root of concerns surrounding AI bias and discrimination. And so it is from communities marginalized by AI, and from scholars and organizations focused on understanding and ameliorating social disadvantage, that AI designers and regulators most need to hear.

Progress toward AI equity begins at the agenda-setting stage, when funders, engineers, and corporate leaders make decisions about research and development priorities. This is usually seen as a technical or management task, to be carried out by experts who understand the state of scientific play and the unmet needs of the market… A heartening example comes from Carnegie Mellon University, where computer scientists worked with residents in the institution’s home city of Pittsburgh to build a technology that monitored and visualized local air quality. The collaboration began when researchers attended community meetings where they heard from residents who were suffering the effects of air pollution from a nearby factory. The residents had struggled to get the attention of local and national officials because they were unable to provide the sort of data that would motivate interest in their case. The researchers got to work on prototype systems that could produce the needed data and refined their technology in response to community input. Eventually their system brought together heterogeneous information, including crowdsourced smell reports, video footage of factory smokestacks, and air-quality and wind data, which the residents then submitted to government entities. After reviewing the data, administrators at the Environmental Protection Agency agreed to review the factory’s compliance, and within a year the factory’s parent company announced that the facility would close…(More)”.

How to build a Collective Mind that speaks for humanity in real-time


Blog by Louis Rosenberg: “This raises the question — could large human groups deliberate in real-time with the efficiency of fish schools and quickly reach optimized decisions?

For years this goal seemed impossible. That’s because conversational deliberations have been shown to be most productive in small groups of 4 to 7 people and quickly degrade as groups grow larger. This is because the “airtime per person” gets progressively squeezed and the wait-time to respond to others steadily increases. By 12 to 15 people, the conversational dynamics change from thoughtful debate to a series of monologues that become increasingly disjointed. By 20 people, the dialog ceases to be a conversation at all. This problem seemed impenetrable until recent advances in Generative AI opened up new solutions.

The resulting technology is called Conversational Swarm Intelligence and it promises to allow groups of almost any size (200, 2000, or even 2 million people) to discuss complex problems in real-time and quickly converge on solutions with significantly amplified intelligence. The first step is to divide the population into small subgroups, each sized for thoughtful dialog. For example, a 1000-person group could be divided into 200 subgroups of 5, each routed into their own chat room or video conferencing session. Of course, this does not create a single unified conversation — it creates 200 parallel conversations…(More)”.
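The first step described above — splitting a large population into conversation-sized rooms — can be sketched in a few lines. This is an illustrative partition only (the function name and parameters are hypothetical); the harder part of Conversational Swarm Intelligence, propagating insights between the parallel rooms so that 200 conversations behave like one, is not shown here:

```python
import random

def partition_into_subgroups(participants, target_size=5, seed=None):
    """Randomly shuffle participants and deal them into subgroups of
    roughly `target_size`, one subgroup per parallel chat room."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    n_groups = max(1, round(len(shuffled) / target_size))
    rooms = [[] for _ in range(n_groups)]
    for i, person in enumerate(shuffled):
        rooms[i % n_groups].append(person)  # round-robin keeps sizes even
    return rooms

# 1000 participants -> 200 rooms of 5, as in the article's example
rooms = partition_into_subgroups([f"p{i}" for i in range(1000)],
                                 target_size=5, seed=42)
```

Random assignment (rather than self-selection) is the natural choice here, since it keeps each small room roughly representative of the whole group.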

Citizen engagement


European Union Report: “…considers how to approach citizen engagement for the EU missions. Engagement and social dialogue should aim to ensure that innovation is human-centred and that missions maintain wide public legitimacy. But citizen engagement is complex; it significantly changes the traditional responsibilities of the research and innovation community and calls for new capabilities. This report provides insights to build these capabilities and explores effective ways to help citizens understand their role within the EU missions, showing how to engage them throughout the various stages of implementation. The report considers both the challenges and administrative burdens of citizen engagement and sets out how to overcome them, and demonstrates the wider opportunity of “double additionality”, where citizen engagement methods serve to fundamentally transform an entire research and innovation portfolio…(More)”.