2025 Ratings for Digital Participation Tools


People Powered Report: The latest edition of our Digital Participation Tool Ratings evaluates 30 comprehensive tools that have been used to support digital participation all over the world. This year’s ratings offer more information and insights on each tool to help you select a suitable tool for your context and needs. We also researched how AI tools and features fit into the current digital participation landscape.

For the last four years, People Powered has been committed to providing governments and organizations with digital participation guidance and tool ratings, enabling the people leading participatory programs and citizen engagement efforts to select and use digital participation tools effectively. These ratings are the latest edition of the evaluations first launched in 2022. Further guidance on how to use these tools is available from our Guide to Digital Participation Platforms and Online Training on Digital Participation…(More)”.

Designing New Institutions and Renewing Existing Ones – A Playbook


UNDP Report: “The world has long depended on public institutions to solve problems and meet needs — from running schools to building roads, from taking care of public health to providing for defense. Today, global challenges like climate change, election security, forced migration, and AI-induced unemployment demand new institutional responses, especially in the Global South.

The bad news? Many institutions now struggle with public distrust, being seen as too wasteful and inefficient, unresponsive and ineffective, and sometimes corrupt and outdated.

The good news? Fresh methods and models, inspired by innovations in government, business, and civil society, are now available that can help us rethink institutions — making them more oriented toward public results, agile, transparent, fit for purpose, and ready for the future…(More)”.

Global population data is in crisis – here’s why that matters


Article by Andrew J Tatem and Jessica Espey: “Every day, decisions that affect our lives depend on knowing how many people live where. For example: how many vaccines are needed in a community, where polling stations should be placed for elections, or who might be in danger as a hurricane approaches. The answers rely on population data.

But counting people is getting harder.

For centuries, censuses and household surveys have been the backbone of population knowledge. But we’ve just returned from the UN Statistical Commission meetings in New York, where experts reported that something alarming is happening to population data systems globally.

Census response rates are declining in many countries, resulting in large margins of error. The 2020 US census undercounted America’s Latino population by more than three times the rate of the 2010 census. In Paraguay, the latest census revealed a population one-fifth smaller than previously thought.

South Africa’s 2022 census post-enumeration survey revealed a likely undercount of more than 30%. According to the UN Economic Commission for Africa, undercounts and census delays due to COVID-19, conflict or financial limitations have resulted in an estimated one in three Africans not being counted in the 2020 census round.

When people vanish from data, they vanish from policy. When certain groups are systematically undercounted – often minorities, rural communities or poorer people – they become invisible to policymakers. This translates directly into political underrepresentation and inadequate resource allocation…(More)”.

How social media and online communities influence climate change beliefs


Article by James Rice: “Psychological, social, and political forces all shape beliefs about climate change. Climate scientists bear a responsibility — not only as researchers and educators, but as public communicators — to guard against climate misinformation. This responsibility should be foundational, supported by economists, sociologists, and industry leaders.

While fake news manifests in various forms, not all forms of misinformation are created with the intent to deceive. Regardless of intent, climate misinformation threatens policy integrity. Strengthening environmental communication is thus crucial to counteract ideological divides that distort scientific discourse and weaken public trust.

Political polarisation, misinformation, and the erosion of scientific authority pose challenges that demand rigorous scholarship and proactive public engagement. Climate scientists, policymakers, and climate justice advocates must ensure scientific integrity while recognising that climate science operates in a politically charged landscape. Meeting climate misinformation with agnosticism or resignation, rather than resistance, is as dangerous as outright denial of climate science. Combating it extends beyond scientific accuracy: it requires strategic communication, engagement with advocacy groups, and the reinforcement of public trust in environmental expertise…(More)”.

Political Responsibility and Tech Governance


Book by Jude Browne: “Not a day goes by without a new story on the perils of technology: from increasingly clever machines that surpass human capability and comprehension to genetic technologies capable of altering the human genome in ways we cannot predict. How can we respond? What should we do politically? Focusing on the rise of robotics and artificial intelligence (AI), and the impact of new reproductive and genetic technologies (Repro-tech), Jude Browne questions who has political responsibility for the structural impacts of these technologies and how we might go about preparing for the far-reaching societal changes they may bring. This thought-provoking book tackles some of the most pressing issues of our time and offers a compelling vision for how we can respond to these challenges in a way that is both politically feasible and socially responsible…(More)”.

Trump Admin Plans to Cut Team Responsible for Critical Atomic Measurement Data


Article by Louise Matsakis and Will Knight: “The US National Institute of Standards and Technology (NIST) is discussing plans to eliminate an entire team responsible for publishing and maintaining critical atomic measurement data in the coming weeks, as the Trump administration continues its efforts to reduce the US federal workforce, according to a March 18 email sent to dozens of outside scientists. The data in question underpins advanced scientific research around the world in areas like semiconductor manufacturing and nuclear fusion…(More)”.

Web 3.0 Requires Data Integrity


Article by Bruce Schneier and Davi Ottenheimer: “If you’ve ever taken a computer security class, you’ve probably learned about the three legs of computer security—confidentiality, integrity, and availability—known as the CIA triad. When we talk about a system being secure, that’s what we’re referring to. All are important, but to different degrees in different contexts. In a world populated by artificial intelligence (AI) systems and artificially intelligent agents, integrity will be paramount.

What is data integrity? It’s ensuring that no one can modify data—that’s the security angle—but it’s much more than that. It encompasses accuracy, completeness, and quality of data—all over both time and space. It’s preventing accidental data loss; the “undo” button is a primitive integrity measure. It’s also making sure that data is accurate when it’s collected—that it comes from a trustworthy source, that nothing important is missing, and that it doesn’t change as it moves from format to format. The ability to restart your computer is another integrity measure.
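The security angle of integrity described here, detecting whether data has been modified, is commonly enforced with cryptographic hashes. As a minimal illustrative sketch (not taken from the article), the Python snippet below records a SHA-256 digest when a file is collected and re-checks it later; the file name and workflow are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: Path, expected_digest: str) -> bool:
    """True only if the file still matches the digest recorded at collection time."""
    return sha256_digest(path) == expected_digest

# Hypothetical usage:
# recorded = sha256_digest(Path("dataset.csv"))   # store alongside the data
# ...later, before using the data...
# assert verify_integrity(Path("dataset.csv"), recorded), "data was modified"
```

A digest like this only covers the "no one can modify data" angle; the broader notion of integrity the authors describe (accuracy at collection, completeness, trustworthy provenance) requires processes as well as cryptography.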

The CIA triad has evolved with the Internet. The first iteration of the Web—Web 1.0 of the 1990s and early 2000s—prioritized availability. This era saw organizations and individuals rush to digitize their content, creating what has become an unprecedented repository of human knowledge. Organizations worldwide established their digital presence, leading to massive digitization projects where quantity took precedence over quality. The emphasis on making information available overshadowed other concerns.

As Web technologies matured, the focus shifted to protecting the vast amounts of data flowing through online systems. This is Web 2.0: the Internet of today. Interactive features and user-generated content transformed the Web from a read-only medium to a participatory platform. The increase in personal data, and the emergence of interactive platforms for e-commerce, social media, and online everything demanded both data protection and user privacy. Confidentiality became paramount.

We stand at the threshold of a new Web paradigm: Web 3.0. This is a distributed, decentralized, intelligent Web. Peer-to-peer social-networking systems promise to break the tech monopolies’ control on how we interact with each other. Tim Berners-Lee’s open W3C protocol, Solid, represents a fundamental shift in how we think about data ownership and control. A future filled with AI agents requires verifiable, trustworthy personal data and computation. In this world, data integrity takes center stage…(More)”.

Cloze Encounters: The Impact of Pirated Data Access on LLM Performance


Paper by Stella Jia & Abhishek Nagaraj: “Large Language Models (LLMs) have demonstrated remarkable capabilities in text generation, but their performance may be influenced by the datasets on which they are trained, including potentially unauthorized or pirated content. We investigate the extent to which data access through pirated books influences LLM responses. We test the performance of leading foundation models (GPT, Claude, Llama, and Gemini) on a set of books that were and were not included in the Books3 dataset, which contains full-text pirated books and could be used for LLM training. We assess book-level performance using the “name cloze” word-prediction task. To examine the causal effect of Books3 inclusion, we employ an instrumental variables strategy that exploits the pattern of book publication years in the Books3 dataset. In our sample of 12,916 books, we find significant improvements in LLM name cloze accuracy on books available within the Books3 dataset compared to those not present in these data. These effects are more pronounced for less popular books than for more popular books and vary across leading models. These findings have crucial implications for the economics of digitization, copyright policy, and the design and training of AI systems…(More)”.
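The “name cloze” task the authors use is straightforward to sketch: a proper name is masked in a short passage from a book, the model is asked to supply the missing name, and accuracy is the fraction of names recovered. The Python sketch below is an illustrative reconstruction under that description, not the authors’ code; the prompt wording and the `query_model` callable are placeholder assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ClozeExample:
    passage_with_blank: str   # book passage with one proper name replaced by [MASK]
    answer: str               # the masked proper name

def name_cloze_prompt(example: ClozeExample) -> str:
    # Hypothetical prompt format; the paper's exact wording may differ.
    return (
        "Fill in [MASK] with the single proper name that belongs there.\n\n"
        f"{example.passage_with_blank}\n\nAnswer:"
    )

def book_accuracy(examples: List[ClozeExample],
                  query_model: Callable[[str], str]) -> float:
    """Fraction of masked names the model recovers exactly (case-insensitive)."""
    if not examples:
        return 0.0
    correct = sum(
        query_model(name_cloze_prompt(ex)).strip().lower() == ex.answer.lower()
        for ex in examples
    )
    return correct / len(examples)
```

Comparing this per-book accuracy for titles inside versus outside Books3, with the paper’s instrumental-variables correction, is what identifies the effect of pirated data access.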

Getting the Public on Side: How to Make Reforms Acceptable by Design


OECD Report: “Public acceptability is a crucial condition for the successful implementation of reforms. The challenges raised by the green, digital and demographic transitions call for urgent and ambitious policy action. Despite this, governments often struggle to build sufficiently broad public support for the reforms needed to promote change. Better information and effective public communication have a key role to play. But policymakers cannot get the public on the side of reform without a proper understanding of people’s views and of how those views can help strengthen the policy process.

Perceptual and behavioural data provide an important source of insights on the perceptions, attitudes and preferences that constitute the “demand-side” of reform. The interdisciplinary OECD Expert Group on New Measures of the Public Acceptability of Reforms was set up in 2021 to take stock of these insights and explore their potential for improving policy. This report reflects the outcomes of the Expert Group’s work. It looks at and assesses (i) the available data and what they can tell policymakers about people’s views; (ii) the analytical frameworks through which these data are interpreted; and (iii) the policy tools through which considerations of public acceptability are integrated into the reform process…(More)”.

Should AGI-preppers embrace DOGE?


Blog by Henry Farrell: “…AGI-prepping is reshaping our politics. Wildly ambitious claims for AGI have not only shaped America’s grand strategy but are also plausibly among the justifying reasons for DOGE.

After the announcement of DOGE, but before it properly got going, I talked to someone who was not formally affiliated, but was very definitely DOGE-adjacent. I put it to this individual that tearing out the decision-making capacities of government would not be good for America’s ability to do things in the world. Their response (paraphrased slightly) was: so what? We’ll have AGI by late 2026. And indeed, one of DOGE’s major ambitions, as described in a new article in WIRED, appears to have been to pull as much government information as possible into a large model that could then provide useful information across the totality of government.

The point – which I don’t think is understood nearly widely enough – is that radical institutional revolutions such as DOGE follow naturally from the AGI-prepper framework. If AGI is right around the corner, we don’t need a massive federal government apparatus organizing funding for science via the National Science Foundation and the National Institutes of Health. After all, in Amodei and Pottinger’s prediction:

By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering. … It will be able to … complete complex tasks that would take people months or years, such as designing new weapons or curing diseases.

Who needs expensive and cumbersome bureaucratic institutions for organizing funding for scientists in a near future where a “country of geniuses [will be] contained in a data center,” ready to solve whatever problems we ask them to? Indeed, if these bottled geniuses are cognitively superior to humans across most or all tasks, why do we need human expertise at all, beyond describing and explaining human wants? From this perspective, most human-based institutions are obsolescing assets that need to be ripped out, and DOGE is only the barest of beginnings…(More)”.