Design Thinking as a Strategic Approach to E-Participation


Book by Ilaria Mariani et al: “This open access book examines how the adoption of Design Thinking (DT) can support public organisations in overcoming some of the current barriers in e-participation. Scholars have discussed the adoption of technology to strengthen public engagement through e-participation, streamline and enhance the relationship between government and society, and improve accessibility and effectiveness. However, barriers persist, necessitating further research in this area. By analysing e-participation barriers emerging from the literature and aligning them with notions in the DT literature, this book identifies five core DT practices to enhance e-participation: (i) Meaning creation and sense-making, (ii) Publics formation, (iii) Co-production, (iv) Experimentation and prototyping, and (v) Changing organisational culture. As a result, this book provides insights into enhancing tech-aided public engagement and promoting inclusivity for translating citizen input into tangible service implementations. The book triangulates qualitative analysis of relevant literature in the fields of e-participation and DT with knowledge from European projects experimenting with public participation activities that involve digital tools. This research aims to bridge the gap between theoretical frameworks and practical application, ultimately contributing to more effective e-participation and digital public services…(More)”.

When combinations of humans and AI are useful: A systematic review and meta-analysis


Paper by Michelle Vaccaro, Abdullah Almaatouq & Thomas Malone: “Inspired by the increasing use of artificial intelligence (AI) to augment humans, researchers have studied human–AI systems involving different tasks, systems and populations. Despite such a large body of work, we lack a broad conceptual understanding of when combinations of humans and AI are better than either alone. Here we addressed this question by conducting a preregistered systematic review and meta-analysis of 106 experimental studies reporting 370 effect sizes. We searched an interdisciplinary set of databases (the Association for Computing Machinery Digital Library, the Web of Science and the Association for Information Systems eLibrary) for studies published between 1 January 2020 and 30 June 2023. Each study was required to include an original human-participants experiment that evaluated the performance of humans alone, AI alone and human–AI combinations. First, we found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone (Hedges’ g = −0.23; 95% confidence interval, −0.39 to −0.07). Second, we found performance losses in tasks that involved making decisions and significantly greater gains in tasks that involved creating content. Finally, when humans outperformed AI alone, we found performance gains in the combination, but when AI outperformed humans alone, we found losses. Limitations of the evidence assessed here include possible publication bias and variations in the study designs analysed. Overall, these findings highlight the heterogeneity of the effects of human–AI collaboration and point to promising avenues for improving human–AI systems…(More)”.
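
To make the headline effect size concrete, here is a minimal sketch of how a standardized mean difference with Hedges’ small-sample correction can be computed from two groups’ summary statistics (illustrative Python with made-up numbers, not the authors’ analysis code):

```python
import math

def hedges_g(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    d = (mean_a - mean_b) / pooled_sd          # Cohen's d
    j = 1 - 3 / (4 * (n_a + n_b) - 9)          # small-sample correction factor J
    return d * j

# Hypothetical accuracies: human-AI combination vs. the better of human or AI alone
print(round(hedges_g(0.72, 0.15, 40, 0.76, 0.14, 40), 2))  # negative -> combination underperforms
```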

Make it make sense: the challenge of data analysis in global deliberation


Blog by Iñaki Goñi: “From climate change to emerging technologies to economic justice to space, global and transnational deliberation is on the rise. Global deliberative processes aim to bring citizen-centred governance to issues that no single nation can resolve alone. Running deliberative processes at this scale poses a unique set of challenges. How to select participants, and how to make the forums accountable, impactful, fairly designed, and aware of power imbalances, are all crucial and open questions…

Massifying participation will be key to invigorating global deliberation. Assemblies will have a better chance of being seen as legitimate, fair, and publicly supported if they involve thousands or even millions of diverse participants. This raises an operational challenge: how to systematise political ideas from many people across the globe.

In a centralised global assembly, anything from 50 to 500 citizens from various countries engage in a single deliberation and produce recommendations or political actions by crossing languages and cultures. In a distributed assembly, multiple gatherings are convened locally that share a common but flexible methodology, allowing participants to discuss a common issue applied both to local and global contexts. Either way, a global deliberation process demands the organisation and synthesis of possibly thousands of ideas from diverse languages and cultures around the world.

How could we ever make sense of all that data to systematise citizens’ ideas and recommendations? Most people turn to computational methods to help reduce complexity and identify patterns. First up, one technique for analysing text amounts to little more than simple counting, through which we can produce something like a frequency table or a wordcloud…(More)”.
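
As a rough illustration of that simplest technique, a word-frequency table over participant contributions takes only a few lines (a sketch with invented inputs, not the tooling of any particular assembly):

```python
from collections import Counter
import re

# Hypothetical participant contributions, already translated into one language
contributions = [
    "We need stricter limits on emissions and support for public transport.",
    "Public transport should be affordable and emissions limits must be enforced.",
    "Support local communities in adapting to climate impacts.",
]

stopwords = {"we", "need", "on", "and", "for", "should", "be", "must", "in", "to"}

tokens = []
for text in contributions:
    tokens += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]

# The frequency table is also the raw material for a wordcloud
for word, count in Counter(tokens).most_common(8):
    print(f"{word:<12}{count}")
```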

AI can help humans find common ground in democratic deliberation


Paper by Michael Henry Tessler et al: “We asked whether an AI system based on large language models (LLMs) could successfully capture the underlying shared perspectives of a group of human discussants by writing a “group statement” that the discussants would collectively endorse. Inspired by Jürgen Habermas’s theory of communicative action, we designed the “Habermas Machine” to iteratively generate group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings. Through successive rounds of human data collection, we used supervised fine-tuning and reward modeling to progressively enhance the Habermas Machine’s ability to capture shared perspectives. To evaluate the efficacy of AI-mediated deliberation, we conducted a series of experiments with over 5000 participants from the United Kingdom. These experiments investigated the impact of AI mediation on finding common ground, how the views of discussants changed across the process, the balance between minority and majority perspectives in group statements, and potential biases present in those statements. Lastly, we used the Habermas Machine for a virtual citizens’ assembly, assessing its ability to support deliberation on controversial issues within a demographically representative sample of UK residents…(More)”.
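
A schematic of the iterative mediation loop described above might look like the following (a conceptual sketch; `generate_statements` and `score_approval` are hypothetical stand-ins for the fine-tuned generative model and the learned reward model used in the paper):

```python
def mediate(opinions, generate_statements, score_approval, rounds=2, n_candidates=4):
    """Iteratively draft a group statement and refine it against participants' critiques."""
    critiques, best = [], None
    for _ in range(rounds):
        # Draft several candidate group statements from opinions (and any critiques so far)
        drafts = generate_statements(opinions, critiques, n=n_candidates)
        # Keep the candidate with the highest predicted group approval
        best = max(drafts, key=lambda statement: score_approval(statement, opinions))
        # In the real process participants critique the chosen statement;
        # here we only mark where those critiques would be collected
        critiques = [f"critique of candidate by participant {i}" for i, _ in enumerate(opinions)]
    return best
```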

Exploring New Frontiers of Citizen Participation in the Policy Cycle


OECD Discussion Paper: “… starts from the premise that democracies are endowed with valuable assets and that putting citizens at the heart of policy making offers an opportunity to strengthen democratic resilience. It draws on data, evidence and insights generated through a wide range of work underway at the OECD to identify systemic challenges and propose lines of action for the future. It calls for greater attention to, and investments in, citizen participation in policy making as one of the core functions of the state and the ‘life force’ of democratic governance. In keeping with the OECD’s strong commitment to providing a platform for diverse perspectives on challenging policy issues, it also offers a collection of thought-provoking opinion pieces by leading practitioners whose position as elected officials, academics and civil society leaders provides them with a unique vantage point from which to scan the horizon. As a contribution to an evolving field, this Discussion Paper offers neither a prescriptive framework nor a roadmap for governments but represents a step towards reaching a shared understanding of the very real challenges that lie ahead. It is also a timely invitation to all interested actors to join forces and take concerted action to embed meaningful citizen participation in policy making…(More)”.

The illusion of information adequacy


Paper by Hunter Gehlbach, Carly D. Robinson & Angus Fletcher: “How individuals navigate perspectives and attitudes that diverge from their own affects an array of interpersonal outcomes from the health of marriages to the unfolding of international conflicts. The finesse with which people negotiate these differing perceptions depends critically upon their tacit assumptions—e.g., in the bias of naïve realism people assume that their subjective construal of a situation represents objective truth. The present study adds an important assumption to this list of biases: the illusion of information adequacy. Specifically, because individuals rarely pause to consider what information they may be missing, they assume that the cross-section of relevant information to which they are privy is sufficient to adequately understand the situation. Participants in our preregistered study (N = 1261) responded to a hypothetical scenario in which control participants received full information and treatment participants received approximately half of that same information. We found that treatment participants assumed that they possessed comparably adequate information and presumed that they were just as competent to make thoughtful decisions based on that information. Participants’ decisions were heavily influenced by which cross-section of information they received. Finally, participants believed that most other people would make a similar decision to the one they made. We discuss the implications in the context of naïve realism and other biases that implicate how people navigate differences of perspective…(More)”.

Ensuring citizens’ assemblies land


Article by Graham Smith: “…the evidence shows that while the recommendations of assemblies are well considered and could help shape more robust policy, too often they fail to land. Why is this?

The simple answer is that so much time, resources and energy are spent on organising the assembly itself – ensuring the best possible experience for citizens – that the relationship with the local authority and its decision-making processes is neglected.

First, the question asked of the assembly does not always relate to a specific set of decisions about to be made by an authority. Is the relevant policy process open and ready for input? On a number of occasions assemblies have taken place just after a new policy or strategy has been agreed. Disastrous timing.

This does not mean assemblies should only be run when they are tied to a particular decision-making process. Sometimes it is important to open up a policy area with a broad question. And sometimes it makes sense to empower citizens to set the agenda and focus on the issues they find most compelling.

The second element is the failure of authorities to prepare to receive recommendations from citizens.

In one case, the first a public official knew about an assembly was when its recommendations landed on their desk. They were not received in the best spirit.

Too often assemblies are commissioned by enthusiastic politicians and public officials who have not done the necessary work to ensure their colleagues are willing to give a considered response to the citizens’ recommendations. Too often an assembly will be organised by a department or ministry where the results require others in the authority to respond – but those other politicians and officials feel no connection to the process.

And too often, an assembly ends, and it is not clear who within the public authority has the responsibility to take the recommendations forward to ensure they are given a fair hearing across the authority.

Making citizens’ assemblies effective requires political and administrative work well beyond just organising the assembly. If this is not done, it is not only a waste of resources, but it can do serious damage to democracy and trust, as the citizens who have invested their time and energy in the process become disillusioned.

Those authorities where citizens’ assemblies have had meaningful impacts are those that have invested not only in the assembly, but also in preparing the authority to receive the recommendations. Often this has meant continuing support and resourcing for assembly members after the process. They are the best advocates for their work…(More)”


Orphan Articles: The Dark Matter of Wikipedia


Paper by Akhil Arora, Robert West, Martin Gerlach: “With 60M articles in more than 300 language versions, Wikipedia is the largest platform for open and freely accessible knowledge. While the available content has been growing continuously at a rate of around 200K new articles each month, very little attention has been paid to the accessibility of the content. One crucial aspect of accessibility is the integration of hyperlinks into the network so the articles are visible to readers navigating Wikipedia. In order to understand this phenomenon, we conduct the first systematic study of orphan articles, which are articles without any incoming links from other Wikipedia articles, across 319 different language versions of Wikipedia. We find that a surprisingly large extent of content, roughly 15% (8.8M) of all articles, is de facto invisible to readers navigating Wikipedia, and thus, rightfully term orphan articles as the dark matter of Wikipedia. We also provide causal evidence through a quasi-experiment that adding new incoming links to orphans (de-orphanization) leads to a statistically significant increase of their visibility in terms of the number of pageviews. We further highlight the challenges faced by editors for de-orphanizing articles, demonstrate the need to support them in addressing this issue, and provide potential solutions for developing automated tools based on cross-lingual approaches. Overall, our work not only unravels a key limitation in the link structure of Wikipedia and quantitatively assesses its impact, but also provides a new perspective on the challenges of maintenance associated with content creation at scale in Wikipedia…(More)”.
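
To see what “orphan” means operationally, here is a minimal sketch of finding articles with no incoming links in a toy link graph (illustrative only; the study works over full link snapshots of 319 Wikipedia language editions):

```python
# Toy directed link graph: article -> articles it links to
links = {
    "Graph theory": ["Leonhard Euler", "Network science"],
    "Network science": ["Graph theory"],
    "Leonhard Euler": [],
    "Obscure topic": ["Graph theory"],  # links out, but nothing links to it
}

articles = set(links)
has_incoming = {t for targets in links.values() for t in targets if t in articles}

# Orphans are invisible to readers navigating via links from other articles
orphans = articles - has_incoming
print(orphans)  # {'Obscure topic'}
```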

Citizen scientists will be needed to meet global water quality goals


University College London: “Sustainable development goals for water quality will not be met without the involvement of citizen scientists, argues an international team led by a UCL researcher, in a new policy brief.

The policy brief and attached technical brief are published by Earthwatch Europe on behalf of the United Nations Environment Programme (UNEP)-coordinated World Water Quality Alliance, which has supported citizen science projects in Kenya, Tanzania and Sierra Leone. The reports detail how policymakers can learn from examples where citizen scientists (non-professionals engaged in the scientific process, such as by collecting data) are already making valuable contributions.

The report authors focus on how to meet one of the UN’s Sustainable Development Goals around improving water quality, which the UN states is necessary for the health and prosperity of people and the planet…

“Locals who know the water and use the water are both a motivated and knowledgeable resource, so citizen science networks can enable them to provide large amounts of data and act as stewards of their local water bodies and sources. Citizen science has the potential to revolutionize the way we manage water resources to improve water quality.”…

The report authors argue that improving water quality data will require governments and organizations to work collaboratively with locals who collect their own data, particularly where government monitoring is scarce, but also where there is government support for citizen science schemes. Water quality improvement has a particularly high potential for citizen scientists to make an impact, as professionally collected data is often limited by a shortage of funding and infrastructure, while there are effective citizen science monitoring methods that can provide reliable data.

The authors write that the value of citizen science goes beyond the data collected, as there are other benefits pertaining to education of volunteers, increased community involvement, and greater potential for rapid response to water quality issues…(More)”.

AI-enhanced collective intelligence


Paper by Hao Cui and Taha Yasseri: “Current societal challenges exceed the capacity of humans operating either alone or collectively. As AI evolves, its role within human collectives will vary from an assistive tool to a participatory member. Humans and AI possess complementary capabilities that, together, can surpass the collective intelligence of either humans or AI in isolation. However, the interactions in human–AI systems are inherently complex, involving intricate processes and interdependencies. This review incorporates perspectives from complex network science to conceptualize a multilayer representation of human–AI collective intelligence, comprising cognition, physical, and information layers. Within this multilayer network, humans and AI agents exhibit varying characteristics; humans differ in diversity from surface-level to deep-level attributes, while AI agents range in degrees of functionality and anthropomorphism. We explore how agents’ diversity and interactions influence the system’s collective intelligence and analyze real-world instances of AI-enhanced collective intelligence. We conclude by considering potential challenges and future developments in this field…(More)” See also: Where and When AI and CI Meet: Exploring the Intersection of Artificial and Collective Intelligence
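
One way to picture the multilayer representation described in the review is to tag connections between human and AI agents with the layer they belong to (a loose illustration using networkx; the attribute names and values are assumptions, not the paper’s formalism):

```python
import networkx as nx

G = nx.Graph()

# Agents: humans (differing in surface- and deep-level attributes) and AI agents
# (differing in functionality and anthropomorphism) -- attribute values are illustrative
G.add_node("human_1", kind="human", expertise="policy")
G.add_node("human_2", kind="human", expertise="engineering")
G.add_node("ai_1", kind="ai", functionality="summarisation", anthropomorphism="low")

# Edges tagged by layer: cognition (shared understanding), physical (co-presence),
# information (data and message flows)
G.add_edge("human_1", "human_2", layer="cognition")
G.add_edge("human_1", "ai_1", layer="information")
G.add_edge("human_2", "ai_1", layer="information")

# Inspect the information layer only
info_layer = [(u, v) for u, v, d in G.edges(data=True) if d["layer"] == "information"]
print(info_layer)
```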