Emerging Practices in Participatory AI Design in Public Sector Innovation


Paper by Devansh Saxena et al.: “Local and federal agencies are rapidly adopting AI systems to augment or automate critical decisions, efficiently use resources, and improve public service delivery. AI systems are being used to support tasks associated with urban planning, security, surveillance, energy and critical infrastructure, and to support decisions that directly affect citizens and their ability to access essential services. Local governments act as the governance tier closest to citizens and must play a critical role in upholding democratic values and building community trust, especially as it relates to smart city initiatives that seek to transform public services through the adoption of AI. Community-centered and participatory approaches have been central for ensuring the appropriate adoption of technology; however, AI innovation introduces new challenges in this context because participatory AI design methods require more robust formulation and face higher standards for implementation in the public sector compared to the private sector. This requires us to reassess traditional methods used in this space as well as develop new resources and methods. This workshop will explore emerging practices in participatory algorithm design – or the use of public participation and community engagement – in the scoping, design, adoption, and implementation of public sector algorithms…(More)”.

Data equity and official statistics in the age of private sector data proliferation


Paper by Pietro Gennari: “Over the last few years, the private sector has become a primary generator of data due to widespread digitisation of the economy and society, the use of social media platforms, and advancements of technologies like the Internet of Things and AI. Unlike traditional sources, these new data streams often offer real-time information and unique insights into people’s behaviour, social dynamics, and economic trends. However, the proprietary nature of most private sector data presents challenges for public access, transparency, and governance that have led to fragmented, often conflicting, data governance arrangements worldwide. This lack of coherence can exacerbate inequalities, limit data access, and restrict data’s utility as a global asset.

Within this context, data equity has emerged as one of the key principles underpinning any proposal for a new data governance framework. The term “data equity” refers to the fair and inclusive access, use, and distribution of data so that it benefits all sections of society, regardless of socioeconomic status, race, or geographic location. It involves making sure that the collection, processing, and use of data do not disproportionately benefit or harm any particular group, and it seeks to address disparities in data access and quality that can perpetuate social and economic inequalities. This is important because data systems significantly influence access to resources and opportunities in society. In this sense, data equity aims to correct imbalances that have historically affected various groups and to ensure that decision-making based on data does not perpetuate these inequities…(More)”.

The Data Innovation Toolkit


Toolkit by Maria Claudia Bodino, Nathan da Silva Carvalho, Marcelo Cogo, Arianna Dafne Fini Storchi, and Stefaan Verhulst: “Despite the abundance of data, the excitement around AI, and the potential for transformative insights, many public administrations struggle to translate data into actionable strategies and innovations. 

Public servants working on data-related initiatives need practical, easy-to-use resources designed to enhance the management of data innovation initiatives. 

To address these needs, the iLab of DG DIGIT at the European Commission is developing an initial set of practical tools designed to facilitate and enhance the implementation of data-driven initiatives. The main building blocks of the first version of the toolkit include: 

  1. A repository of educational materials and resources on the latest data innovation approaches from the public sector, academia, NGOs, and think tanks 
  2. An initial set of practical resources, for example: 
     - Workshop Templates to offer structured formats for conducting productive workshops that foster collaboration, ideation, and problem-solving. 
     - Checklists to ensure that all data journey aspects and steps are properly assessed. 
     - Interactive Exercises to engage team members in hands-on activities that build skills and facilitate understanding of key concepts and methodologies. 
     - Canvas Models to provide visual frameworks for planning and brainstorming….(More)”.

In Online Democracy, Fun Is Imperative


Essay by Joe Mathews: “Governments around the world, especially those at the subnational and local levels, find themselves stuck in a vise. Planetary problems like climate change, disease, and technological disruption are not being addressed adequately by national governments. Everyday people, whose lives have been disrupted by those planetary problems, press the governments closer to them to step up and protect them. But those governments lack the technical capacity and popular trust to act effectively against bigger problems.

To build trust and capacity, many governments are moving governance into the digital world and asking their residents to do more of the work of government themselves. Some cities, provinces, and political institutions have tried to build digital platforms and robust digital environments where residents can improve service delivery and make government policy themselves.

However, most of these experiments have been failures. The trouble is that most of these platforms cannot keep the attention of the people who are supposed to use them. Too few of the platforms are designed to make online engagement compelling. So, figuring out how to make online engagement in government fun is actually a serious question for governments seeking to work more closely with their people.

What does fun look like in this sphere? I first witnessed a truly fun and engaging digital tool for citizen governance in Rome in 2018. While I was running a democracy conference with Mayor Virginia Raggi, she and her team were always on their phones, and not just to answer emails or texts. They were constantly on a digital environment called Rousseau.

Rousseau was named after Jean-Jacques Rousseau, the eighteenth-century philosopher and author of The Social Contract. In that 1762 book, Rousseau argued that city-states (like his hometown of Geneva) were more naturally suited to democracy than nation-states (especially big nations like France). He also wrote that the people themselves, not elected representatives, were the best rulers through what we today call direct democracy…(More)”.

Introduction to the Foundations and Regulation of Generative AI


Chapter by Philipp Hacker, Andreas Engel, Sarah Hammer and Brent Mittelstadt: “… introduces The Oxford Handbook of the Foundations and Regulation of Generative AI, outlining the key themes and questions surrounding the technical development, regulatory governance, and societal implications of generative AI. It highlights the historical context of generative AI, distinguishes it from traditional AI, and explores its diverse applications across multiple domains, including text, images, music, and scientific discovery. The discussion critically assesses whether generative AI represents a paradigm shift or a temporary hype. Furthermore, the chapter extensively surveys both emerging and established regulatory frameworks, including the EU AI Act, the GDPR, privacy and personality rights, and copyright, as well as global legal responses. We conclude that, for now, the “Old Guard” of legal frameworks regulates generative AI more tightly and effectively than the “Newcomers,” but that may change as the new laws fully kick in. The chapter concludes by mapping the structure of the Handbook…(More)”.

Reimagining the Policy Cycle in the Age of Artificial Intelligence


Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial Intelligence offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications beyond generative AI – such as natural language processing (NLP), decision trees, and basic machine learning algorithms – that continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision support.

This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.

Gather, Share, Build


Article by Nithya Ramanathan & Jim Fruchterman: “Recent milestones in generative AI have sent nonprofits, social enterprises, and funders alike scrambling to understand how these innovations can be harnessed for global good. Along with this enthusiasm, there is also warranted concern that AI will greatly increase the digital divide and fail to improve the lives of 90 percent of the people on our planet. The current focus on funding AI intelligently and strategically in the social sector is critical, and it will help ensure that money has the largest impact.

So how can the social sector meet the current moment?

AI is already good at a lot of things. Plenty of social impact organizations are using AI right now, with positive results. Great resources exist for developing a useful understanding of the current landscape and how existing AI tech can serve your mission, including this report from Stanford HAI and Project Evident and this AI Treasure Map for Nonprofits from Tech Matters.

While some tech-for-good companies are creating AI and thriving—Digital Green, Khan Academy, and Jacaranda Health, among many—most social sector companies are not ready to build AI solutions. But even organizations that don’t have AI on their radar need to be thinking about how to address one of the biggest challenges to harnessing AI to solve social sector problems: insufficient data…(More)”.

Policymaking assessment framework


Guide by the Susan McKinnon Foundation: “This assessment tool supports the measurement of the quality of policymaking processes – both existing and planned – across sectors. It provides a flexible framework for rating public policy processes using information available in the public domain. The framework’s objective is to simplify the path towards best practice, evidence-informed policy.

It is intended to accommodate the complexity of policymaking processes and reflect the realities and context within which policymaking is undertaken. The criteria can be tailored for different policy problems and policy types and applied across sectors and levels of government.

The framework is structured around five key domains:

  1. understanding the problem
  2. engagement with stakeholders and partners
  3. outcomes focus
  4. evidence for the solution, and
  5. design and communication…(More)”.

On conspiracy theories of ignorance


Essay: “In “On the Sources of Knowledge and Ignorance”, Karl Popper identifies a kind of “epistemological optimism”—an optimism about “man’s power to discern truth and to acquire knowledge”—that has played a significant role in the history of philosophy. At the heart of this optimistic view, Popper argues, is the “doctrine that truth is manifest”:

“Truth may perhaps be veiled, and removing the veil may not be easy. But once the naked truth stands revealed before our eyes, we have the power to see it, to distinguish it from falsehood, and to know that it is truth.”

According to Popper, this doctrine inspired the birth of modern science, technology, and liberalism. If the truth is manifest, there is “no need for any man to appeal to authority in matters of truth because each man carried the sources of knowledge in himself”:

“Man can know: thus he can be free. This is the formula which explains the link between epistemological optimism and the ideas of liberalism.”

Although a liberal himself, Popper argues that the doctrine of manifest truth is false. “The simple truth,” he writes, “is that truth is often hard to come by, and that once found it may easily be lost again.” Moreover, he argues that the doctrine is pernicious. If we think the truth is manifest, we create “the need to explain falsehood”:

“Knowledge, the possession of truth, need not be explained. But how can we ever fall into error if truth is manifest? The answer is: through our own sinful refusal to see the manifest truth; or because our minds harbour prejudices inculcated by education and tradition, or other evil influences which have perverted our originally pure and innocent minds.”

In this way, the doctrine of manifest truth inevitably gives rise to “the conspiracy theory of ignorance”…

In previous work, I have criticised how the concept of “misinformation” is applied by researchers and policy-makers. Roughly, I think that narrow applications of the term (e.g., defined in terms of fake news) are legitimate but focus on content that is relatively rare and largely symptomatic of other problems, at least in Western democracies. In contrast, broad definitions inevitably get applied in biased and subjective ways, transforming misinformation research and policy-making into “partisan combat by another name”…(More)”.

Conflicts over access to Americans’ personal data emerging across federal government


Article by Caitlin Andrews: “The Trump administration’s fast-moving efforts to limit the size of the U.S. federal bureaucracy, primarily through the recently minted Department of Government Efficiency, are raising privacy and data security concerns among current and former officials across the government, particularly as the administration scales back positions charged with privacy oversight. Efforts to limit the independence of a host of federal agencies through a new executive order — including the independence of the Federal Trade Commission and Securities and Exchange Commission — are also ringing alarm bells among civil society and some legal experts.

According to CNN, several staff within the Office of Personnel Management’s privacy and records-keeping department were fired last week. Staff who handle communications and respond to Freedom of Information Act requests were also let go. Though the entire privacy team was not fired, according to the OPM, details about what kind of oversight will remain within the department were limited. The report also states the staff’s termination date is 15 April.

It is one of several moves the Trump administration has made in recent days reshaping how entities access government agencies’ information and how oversight of that access is provided.

The New York Times reports on a wide range of incidents within the government where DOGE’s efforts to limit fraudulent government spending by accessing sensitive agency databases have run up against staffers who are concerned about the privacy of Americans’ personal information. In one incident, Social Security Administration acting Commissioner Michelle King was fired after resisting a request from DOGE to access the agency’s database. “The episode at the Social Security Administration … has played out repeatedly across the federal government,” the Times reported…(More)”.