Reimagining the Policy Cycle in the Age of Artificial Intelligence


Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial Intelligence offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications (beyond generative AI)—such as natural language processing (NLP), decision trees, and basic machine learning algorithms—that continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision support.

This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.
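
As a purely illustrative aside (not drawn from the paper), the kind of transparent, resource-efficient model the authors point to can be very small. The sketch below fits a shallow decision tree to a hypothetical table of district-level indicators and prints the learned rules; the file name, column names, and prediction task are assumptions made for illustration, not anything specified by Marcucci and Verhulst.

```python
# Illustrative sketch only: a small, interpretable decision tree of the kind the
# paper cites as a transparent, resource-efficient alternative to generative AI.
# The dataset, file name, and column names below are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical district-level indicators, e.g. assembled from administrative data.
df = pd.read_csv("district_indicators.csv")  # assumed file, not from the paper
features = ["unemployment_rate", "clinic_density", "avg_income", "flood_risk_index"]
target = "service_gap_flag"  # 1 = district flagged for follow-up, 0 = not flagged

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=0
)

# A shallow tree keeps the decision rules legible to policy analysts.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
# export_text prints the learned if/then rules, which is where the transparency
# argument comes from: analysts can read and contest each split.
print(export_text(model, feature_names=features))
```

A model this small trades predictive power for rules that analysts and stakeholders can inspect, which is the trade-off the abstract gestures at when it contrasts these methods with generative AI.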

Gather, Share, Build


Article by Nithya Ramanathan & Jim Fruchterman: “Recent milestones in generative AI have sent nonprofits, social enterprises, and funders alike scrambling to understand how these innovations can be harnessed for global good. Along with this enthusiasm, there is also warranted concern that AI will greatly increase the digital divide and fail to improve the lives of 90 percent of the people on our planet. The current focus on funding AI intelligently and strategically in the social sector is critical, and it will help ensure that money has the largest impact.

So how can the social sector meet the current moment?

AI is already good at a lot of things. Plenty of social impact organizations are using AI right now, with positive results. Great resources exist for developing a useful understanding of the current landscape and how existing AI tech can serve your mission, including this report from Stanford HAI and Project Evident and this AI Treasure Map for Nonprofits from Tech Matters.

While some tech-for-good companies are creating AI and thriving—Digital Green, Khan Academy, and Jacaranda Health, among many others—most social sector organizations are not ready to build AI solutions. But even organizations that don’t have AI on their radar need to be thinking about how to address one of the biggest challenges to harnessing AI to solve social sector problems: insufficient data…(More)”.

Policymaking assessment framework


Guide by the Susan McKinnon Foundation: “This assessment tool supports the measurement of the quality of policymaking processes – both existing and planned – across sectors. It provides a flexible framework for rating public policy processes using information available in the public domain. The framework’s objective is to simplify the path towards best practice, evidence-informed policy.

It is intended to accommodate the complexity of policymaking processes and reflect the realities and context within which policymaking is undertaken. The criteria can be tailored for different policy problems and policy types and applied across sectors and levels of government.

The framework is structured around five key domains:

  1. understanding the problem
  2. engagement with stakeholders and partners
  3. outcomes focus
  4. evidence for the solution, and
  5. design and communication…(More)”.

On conspiracy theories of ignorance


Essay: “In ‘On the Sources of Knowledge and Ignorance’, Karl Popper identifies a kind of “epistemological optimism”—an optimism about “man’s power to discern truth and to acquire knowledge”—that has played a significant role in the history of philosophy. At the heart of this optimistic view, Popper argues, is the “doctrine that truth is manifest”:

“Truth may perhaps be veiled, and removing the veil may not be easy. But once the naked truth stands revealed before our eyes, we have the power to see it, to distinguish it from falsehood, and to know that it is truth.”

According to Popper, this doctrine inspired the birth of modern science, technology, and liberalism. If the truth is manifest, there is “no need for any man to appeal to authority in matters of truth because each man carried the sources of knowledge in himself”:

“Man can know: thus he can be free. This is the formula which explains the link between epistemological optimism and the ideas of liberalism.”

Although a liberal himself, Popper argues that the doctrine of manifest truth is false. “The simple truth,” he writes, “is that truth is often hard to come by, and that once found it may easily be lost again.” Moreover, he argues that the doctrine is pernicious. If we think the truth is manifest, we create “the need to explain falsehood”:

“Knowledge, the possession of truth, need not be explained. But how can we ever fall into error if truth is manifest? The answer is: through our own sinful refusal to see the manifest truth; or because our minds harbour prejudices inculcated by education and tradition, or other evil influences which have perverted our originally pure and innocent minds.”

In this way, the doctrine of manifest truth inevitably gives rise to “the conspiracy theory of ignorance”…

In previous work, I have criticised how the concept of “misinformation” is applied by researchers and policy-makers. Roughly, I think that narrow applications of the term (e.g., defined in terms of fake news) are legitimate but focus on content that is relatively rare and largely symptomatic of other problems, at least in Western democracies. In contrast, broad definitions inevitably get applied in biased and subjective ways, transforming misinformation research and policy-making into “partisan combat by another name”…(More)”

Conflicts over access to Americans’ personal data emerging across federal government


Article by Caitlin Andrews: “The Trump administration’s fast-moving efforts to limit the size of the U.S. federal bureaucracy, primarily through the recently minted Department of Government Efficiency, are raising privacy and data security concerns among current and former officials across the government, particularly as the administration scales back positions charged with privacy oversight. Efforts to limit the independence of a host of federal agencies through a new executive order — including the independence of the Federal Trade Commission and Securities and Exchange Commission — are also ringing alarm bells among civil society and some legal experts.

According to CNN, several staff within the Office of Personnel Management’s privacy and records-keeping department were fired last week. Staff who handle communications and respond to Freedom of Information Act requests were also let go. Though the entire privacy team was not fired, according to the OPM, details about what kind of oversight will remain within the department were limited. The report also states the staff’s termination date is 15 April.

It is one of several moves the Trump administration has made in recent days reshaping how entities access, and provide oversight of, government agencies’ information.

The New York Times reports on a wide range of incidents within the government where DOGE’s efforts to limit fraudulent government spending by accessing sensitive agency databases have run up against staffers who are concerned about the privacy of Americans’ personal information. In one incident, Social Security Administration acting Commissioner Michelle King was fired after resisting a request from DOGE to access the agency’s database. “The episode at the Social Security Administration … has played out repeatedly across the federal government,” the Times reported…(More)”.

Tab the lab: A typology of public sector innovation labs


Paper by Aline Stoll and Kevin C. Andermatt: “Many public sector organizations set up innovation laboratories in response to the pressure to tackle societal problems and the high expectations placed on them to innovate public services. Our understanding of the public sector innovation laboratories’ role in enhancing the innovation capacity of administrations is still limited. It is challenging to assess or compare the impact of innovation laboratories because of how they operate and what they do. This paper closes this research gap by offering a typology that organizes the diverse nature of innovation labs and makes it possible to compare various lab settings. The proposed typology identifies factors that may be relevant to increasing the innovation capacity of public organizations. The findings are based on a literature review of primarily explorative papers and case studies, which made it possible to identify the relevant criteria. The proposed typology covers three dimensions: (1) value (intended innovation impact of the labs); (2) governance (role of government and financing model); and (3) network (stakeholders in the collaborative arrangements). Comparing European countries and regions with regard to the distribution of labs shows that Nordic and British countries tend to have labs with a broader scope than continental European countries…(More)”.

On Privacy and Technology


Book by Daniel J. Solove: “With the rapid rise of new digital technologies and artificial intelligence, is privacy dead? Can anything be done to save us from a dystopian world without privacy?

In this short and accessible book, internationally renowned privacy expert Daniel J. Solove draws from a range of fields, from law to philosophy to the humanities, to illustrate the profound changes technology is wreaking upon our privacy, why they matter, and what can be done about them. Solove provides incisive examinations of key concepts in the digital sphere, including control, manipulation, harm, automation, reputation, consent, prediction, inference, and many others.

Compelling and passionate, On Privacy and Technology teems with powerful insights that will transform the way you think about privacy and technology…(More)”.

Handbook on Governance and Data Science


Handbook edited by Sarah Giest, Bram Klievink, Alex Ingrams, and Matthew M. Young: “This book is based on the idea that there are quite a few overlaps and connections between the fields of governance studies and data science. Data science, with its focus on extracting insights from large datasets through sophisticated algorithms and analytics (Provost and Fawcett 2013), provides government with tools to potentially make more informed decisions, enhance service delivery, and foster transparency and accountability. Governance studies, concerned with the processes and structures through which public policy and services are formulated and delivered (Osborne 2006), increasingly rely on data-driven insights to address complex societal challenges, optimize resource allocation, and engage citizens more effectively (Meijer and Bolívar 2016). However, research insights in journals or at conferences remain quite separate, and thus there are limited spaces for interconnected conversations. In addition, unprecedented societal challenges demand not only innovative solutions but also new approaches to problem-solving.

In this context, data science techniques emerge as a crucial element in crafting a modern governance paradigm, offering predictive insights, revealing hidden patterns, and enabling real-time monitoring of public sentiment and service effectiveness, which are invaluable for public administrators (Kitchin 2014). However, the integration of data science into public governance also raises important considerations regarding data privacy, ethical use of data, and the need for transparency in algorithmic decision-making processes (Zuiderwijk and Janssen 2014). In short, this book is a space where governance studies and data science intersect, highlighting relevant opportunities and challenges at the intersection of the two fields. Contributors to this book discuss the types of data science techniques applied in a governance context and the implications these have for government decisions and services. This also includes questions around the types of data that are used in government and how certain processes and challenges are measured…(More)”.
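
As a hedged illustration (not taken from the handbook), the “real-time monitoring of public sentiment” mentioned above can start from something as lightweight as a lexicon-based scorer. The sketch below runs NLTK’s VADER analyzer over a few invented feedback messages; the messages are made up for this example and the ±0.05 thresholds are the conventional VADER defaults, not anything prescribed by the editors.

```python
# Illustrative sketch only: a minimal "public sentiment monitoring" step of the
# kind the handbook describes. The feedback messages below are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon used by the VADER scorer
analyzer = SentimentIntensityAnalyzer()

# Hypothetical citizen feedback, e.g. pulled from a service desk or consultation portal.
feedback = [
    "The new permit portal is much faster than the old process.",
    "I waited three weeks and still have no response from the housing office.",
    "Clear instructions, but the upload step kept failing.",
]

for message in feedback:
    # The compound score runs from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(message)["compound"]
    label = "positive" if score > 0.05 else "negative" if score < -0.05 else "neutral"
    print(f"{label:>8}  {score:+.2f}  {message}")
```

In practice the interesting governance questions begin after this step: how such signals are validated, who gets to act on them, and how the privacy and transparency concerns flagged above are handled.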

Public participation in policymaking: exploring and understanding impact


Report by the Scottish Government: “This research builds on that framework and seeks to explore how Scottish Government might better understand the impact of public participation on policy decision-making. As detailed above, there are many potential and anticipated benefits that may arise from increased citizen participation in policy decision-making, as well as a great deal of participatory activity already taking place across the organisation. Now is an opportune time to consider impact, to support the design and delivery of participatory engagements that are impactful and that are more likely to realise the benefits of public participation. Through a review of academic and grey literature along with stakeholder engagement, this study aims to answer the following questions:

  1. How is impact conceptualised in literature related to public participation, and what are some practice examples?
  2. How is impact conceptualised by stakeholders and what do they perceive as the current blockers, challenges or facilitators in a Scottish Government setting?
  3. What evaluation tools or frameworks are used to evaluate the impact of public participation processes, and which ones might be applicable/usable in a Scottish Government setting?…(More)”.

Policy design labs and uncertainty: can they innovate, and retain and circulate learning?


Paper by Jenny Lewis: “Around the world in recent times, numerous policy design labs have been established, reflecting a rising focus on the need for public sector innovation. These labs are a response to the challenging nature of many societal problems and often aim to navigate uncertainty. They do this by “labbing” ill-structured problems: moving them into an experimental environment, outside of traditional government structures, and using a design-for-policy approach. Labs can, therefore, be considered as a particular type of procedural policy tool, used in attempts to change how policy is formulated and implemented to address uncertainty. This paper considers the role of policy design labs in learning and explores the broader governance context they are embedded within. It examines whether labs have the capacity to innovate and also retain and circulate learning to other policy actors. It argues that labs have considerable potential to change the spaces of policymaking at the micro level and innovate, but for learning to be kept rather than lost, innovation needs to be institutionalized in governing structures at higher levels…(More)”.