Reimagining the Policy Cycle in the Age of Artificial Intelligence


Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial intelligence (AI) offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications beyond generative AI—such as natural language processing (NLP), decision trees, and basic machine learning algorithms—that continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision support.

This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.
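
As an illustrative aside, not drawn from the paper, the short sketch below shows the kind of small, transparent model the authors contrast with generative AI: a shallow decision tree used for decision support. Everything in it is assumed for demonstration, including the indicator names, the invented figures, and the use of pandas and scikit-learn.

```python
# Minimal, hypothetical sketch of a transparent decision-support model:
# a shallow decision tree over invented district-level policy indicators.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented data: two indicators plus whether a past intervention improved outcomes
data = pd.DataFrame({
    "unemployment_rate":   [4.2, 9.8, 6.5, 12.1, 3.9, 8.7],
    "median_income_k":     [52, 31, 44, 27, 61, 35],
    "intervention_worked": [0, 1, 0, 1, 0, 1],
})

X = data[["unemployment_rate", "median_income_k"]]
y = data["intervention_worked"]

# A depth-2 tree keeps the decision rules short enough for policy staff to read
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(model, feature_names=list(X.columns)))

# Score a new district; the output informs, rather than replaces, a human judgment
new_district = pd.DataFrame({"unemployment_rate": [7.5], "median_income_k": [38]})
print(model.predict(new_district))
```

The point is not the toy model itself but the property the abstract highlights: the full decision logic can be printed and audited, which is far harder to achieve with large generative systems.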

Policymaking assessment framework


Guide by the Susan McKinnon Foundation: “This assessment tool supports the measurement of the quality of policymaking processes – both existing and planned – across sectors. It provides a flexible framework for rating public policy processes using information available in the public domain. The framework’s objective is to simplify the path towards best practice, evidence-informed policy.

It is intended to accommodate the complexity of policymaking processes and reflect the realities and context within which policymaking is undertaken. The criteria can be tailored for different policy problems and policy types and applied across sectors and levels of government.

The framework is structured around five key domains:

  1. understanding the problem
  2. engagement with stakeholders and partners
  3. outcomes focus
  4. evidence for the solution, and
  5. design and communication…(More)”.

On conspiracy theories of ignorance


Essay: “In “On the Sources of Knowledge and Ignorance”, Karl Popper identifies a kind of “epistemological optimism”—an optimism about “man’s power to discern truth and to acquire knowledge”—that has played a significant role in the history of philosophy. At the heart of this optimistic view, Popper argues, is the “doctrine that truth is manifest”:

“Truth may perhaps be veiled, and removing the veil may not be easy. But once the naked truth stands revealed before our eyes, we have the power to see it, to distinguish it from falsehood, and to know that it is truth.”

According to Popper, this doctrine inspired the birth of modern science, technology, and liberalism. If the truth is manifest, there is “no need for any man to appeal to authority in matters of truth because each man carried the sources of knowledge in himself”:

“Man can know: thus he can be free. This is the formula which explains the link between epistemological optimism and the ideas of liberalism.”

Although a liberal himself, Popper argues that the doctrine of manifest truth is false. “The simple truth,” he writes, “is that truth is often hard to come by, and that once found it may easily be lost again.” Moreover, he argues that the doctrine is pernicious. If we think the truth is manifest, we create “the need to explain falsehood”:

“Knowledge, the possession of truth, need not be explained. But how can we ever fall into error if truth is manifest? The answer is: through our own sinful refusal to see the manifest truth; or because our minds harbour prejudices inculcated by education and tradition, or other evil influences which have perverted our originally pure and innocent minds.”

In this way, the doctrine of manifest truth inevitably gives rise to “the conspiracy theory of ignorance”…

In previous work, I have criticised how the concept of “misinformation” is applied by researchers and policy-makers. Roughly, I think that narrow applications of the term (e.g., defined in terms of fake news) are legitimate but focus on content that is relatively rare and largely symptomatic of other problems, at least in Western democracies. In contrast, broad definitions inevitably get applied in biased and subjective ways, transforming misinformation research and policy-making into “partisan combat by another name”…(More)”

Tab the lab: A typology of public sector innovation labs


Paper by Aline Stoll and Kevin C. Andermatt: “Many public sector organizations set up innovation laboratories in response to the pressure to tackle societal problems and the high expectations placed on them to innovate public services. Our understanding of the role public sector innovation laboratories play in enhancing the innovation capacity of administrations is still limited. Assessing or comparing the impact of innovation laboratories is challenging because of the diversity in how they operate and what they do. This paper closes this research gap by offering a typology that organizes the diverse nature of innovation labs and makes it possible to compare various lab settings. The proposed typology identifies factors that may be relevant to increasing the innovation capacity of public organizations. The findings are based on a literature review of primarily explorative papers and case studies, which made it possible to identify the relevant criteria. The proposed typology covers three dimensions: (1) value (intended innovation impact of the labs); (2) governance (role of government and financing model); and (3) network (stakeholders in the collaborative arrangements). Comparing the distribution of labs across European countries and regions shows that labs in Nordic and British countries tend to have a broader scope than those in continental European countries…(More)”.

Social Informatics


Book edited by Noriko Hara and Pnina Fichman: “Social informatics examines how society is influenced by digital technologies and how digital technologies are shaped by political, economic, and socio-cultural forces. The chapters in this edited volume use social informatics approaches to analyze recent issues in our increasingly data-intensive society.

Taking a social informatics perspective, this edited volume investigates the interaction between society and digital technologies and includes research that examines individuals, groups, organizations, and nations, as well as their complex relationships with pervasive mobile and wearable devices, social media platforms, artificial intelligence, and big data. This volume’s contributors range from seasoned and renowned researchers to emerging researchers in social informatics. Readers of the book will understand theoretical frameworks of social informatics; gain insights into recent empirical studies of social informatics in specific areas such as big data and its effects on privacy, ethical issues related to digital technologies, and the implications of digital technologies for daily practices; and learn how the social informatics perspective informs research and practice…(More)”.

Handbook on Governance and Data Science


Handbook edited by Sarah Giest, Bram Klievink, Alex Ingrams, and Matthew M. Young: “This book is based on the idea that there are considerable overlaps and connections between the fields of governance studies and data science. Data science, with its focus on extracting insights from large datasets through sophisticated algorithms and analytics (Provost and Fawcett 2013), provides government with tools to potentially make more informed decisions, enhance service delivery, and foster transparency and accountability. Governance studies, concerned with the processes and structures through which public policy and services are formulated and delivered (Osborne 2006), increasingly rely on data-driven insights to address complex societal challenges, optimize resource allocation, and engage citizens more effectively (Meijer and Bolívar 2016). However, research insights in journals or at conferences remain largely separate, leaving limited space for interconnected conversations. In addition, unprecedented societal challenges demand not only innovative solutions but also new approaches to problem-solving.

In this context, data science techniques emerge as a crucial element in crafting a modern governance paradigm, offering predictive insights, revealing hidden patterns, and enabling real-time monitoring of public sentiment and service effectiveness, which are invaluable for public administrators (Kitchin 2014). However, the integration of data science into public governance also raises important considerations regarding data privacy, ethical use of data, and the need for transparency in algorithmic decision-making processes (Zuiderwijk and Janssen 2014). In short, this book is a space where governance and data science studies intersect, highlighting relevant opportunities and challenges at the intersection of the two fields. Contributors to this book discuss the types of data science techniques applied in a governance context and the implications these have for government decisions and services. This also includes questions around the types of data that are used in government and how certain processes and challenges are measured…(More)”.

Being an Effective Policy Analyst in the Age of Information Overload


Blog by Adam Thierer: “The biggest challenge of being an effective technology policy analyst, academic, or journalist these days is that the shelf life of your products is measured in weeks — and sometimes days — instead of months. Because of that, I’ve been adjusting my own strategies over time to remain effective.

The thoughts and advice I offer here are meant mostly for other technology policy analysts, whether you are a student or young professional just breaking into the field, or someone in the middle of your career looking to take it to the next level. But much of what I’ll say here is generally applicable across the field of policy analysis. It’s just a lot more relevant for people in the field of tech policy because of its fast-moving, ever-changing nature.

This essay will repeatedly reference two realities that have shaped my life both as an average citizen and as an academic and policy analyst: First, we used to live in a world of information scarcity, but we now live in a world of information abundance, and that trend is only accelerating. Second, life and work in a world of information overload are simultaneously wonderful and awful, but one thing is for sure: there is absolutely no going back to the sleepy days of information scarcity.

If you care to be an effective policy analyst today, then you have to come to grips with these new realities. Here are a few tips…(More)”.

How Philanthropy Built, Lost, and Could Reclaim the A.I. Race


Article by Sara Herschander: “How do we know you won’t pull an OpenAI?”

It’s the question Stella Biderman has gotten used to answering when she seeks funding from major foundations for EleutherAI, her two-year-old nonprofit A.I. lab that has developed open-source artificial intelligence models.

The irony isn’t lost on her. Not long ago, she declined a deal dangled by one of Silicon Valley’s most prominent venture capitalists who, with the snap of his fingers, promised to raise $100 million for the fledgling nonprofit lab — over 30 times EleutherAI’s current annual budget — if only the lab’s leaders would agree to drop its 501(c)(3) status.

In today’s A.I. gold rush, where tech giants spend billions on increasingly powerful models and top researchers command seven-figure salaries, to be a nonprofit A.I. lab is to be caught in a Catch-22: defend your mission to increasingly wary philanthropic funders or give in to temptation and become a for-profit company.

Philanthropy once played an outsize role in building major A.I. research centers and nurturing influential theorists — by donating hundreds of millions of dollars, largely to university labs — yet today those dollars are dwarfed by the billions flowing from corporations and venture capitalists. For tech nonprofits and their philanthropic backers, this has meant embracing a new role: pioneering the research and safeguards the corporate world won’t touch.

“If making a lot of money was my goal, that would be easy,” said Biderman, whose employees have seen their pay packages triple or quadruple after being poached by companies like OpenAI, Anthropic, and Google.

But EleutherAI doesn’t want to join the race to build ever-larger models. Instead, backed by grants from Open Philanthropy, Omidyar Network, and A.I. companies Hugging Face and StabilityAI, the group has carved out a different niche: researching how A.I. systems make decisions, maintaining widely used training datasets, and shaping global policy around A.I. safety and transparency…(More)”.

Policy design labs and uncertainty: can they innovate, and retain and circulate learning?


Paper by Jenny Lewis: “Around the world in recent times, numerous policy design labs have been established, related to a rising focus on the need for public sector innovation. These labs are a response to the challenging nature of many societal problems and often aim to navigate uncertainty. They do this by “labbing” ill-structured problems: moving them into an experimental environment outside of traditional government structures and applying a design-for-policy approach. Labs can therefore be considered a particular type of procedural policy tool, used in attempts to change how policy is formulated and implemented in order to address uncertainty. This paper considers the role of policy design labs in learning and explores the broader governance context they are embedded within. It examines whether labs have the capacity to innovate and also retain and circulate learning to other policy actors. It argues that labs have considerable potential to change the spaces of policymaking at the micro level and innovate, but for learning to be kept rather than lost, innovation needs to be institutionalized in governing structures at higher levels…(More)”.

Net zero: the role of consumer behaviour


Horizon Scan by the UK Parliament: “According to research from the Centre for Climate Change and Social Transformations, reaching net zero by 2050 will require individual behaviour change, particularly when it comes to aviation, diet and energy use.

The government’s 2023 Powering Up Britain: Net Zero Growth Plan referred to low-carbon choices as ‘green choices’, describing them as the public and businesses choosing green products, services, and goods. The plan sets out six principles regarding policies to facilitate green choices. Both the Climate Change Committee and the House of Lords Environment and Climate Change Committee have recommended that government strategies should incorporate greater societal and behavioural change policies and guidance.

Contributors to the horizon scan identified managing consumer behaviour and habits to help achieve net zero as a topic of importance for parliament over the next five years. A change in consumer behaviour could deliver approximately 60% of the emission reductions required to reach net zero.[5] Behaviour change will be needed from the wealthiest in society, who, according to Oxfam, typically lead higher-carbon lifestyles.

Incorporating behavioural science principles into policy levers is a well-established method of encouraging desired behaviours. Common examples of policies aiming to influence behaviour include subsidies, regulation and information campaigns (see below).

However, others suggest deliberative public engagement approaches, such as the UK Climate Change Assembly,[7] may be needed to determine which pro-environmental policies are acceptable.[8] Repeated public engagement is seen as key to achieve a just transition as different groups will need different support to enable their green choices (PN 706).

Researchers debate the extent to which individuals should be responsible for making green choices as opposed to the regulatory and physical environment facilitating them, or whether markets, businesses and governments should be the main actors responsible for driving action. They highlight the need for different actions based on the context and the different ways individuals act as consumers, citizens, and within organisations and groups. Health, time, comfort and status can strongly influence individual decisions while finance and regulation are typically stronger motivations for organisations (PN 714)…(More)”