Reimagining the Policy Cycle in the Age of Artificial Intelligence


Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial intelligence (AI) offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications beyond generative AI—such as natural language processing (NLP), decision trees, and basic machine learning algorithms—that continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision support.
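To make the contrast concrete, here is a minimal sketch of the kind of transparent model the paper has in mind: a shallow decision tree over hypothetical district indicators (all data, feature names, and thresholds below are invented for illustration).

```python
# Illustrative sketch only: a small, interpretable decision-tree model of the
# kind the paper groups under "basic machine learning algorithms". All data
# and feature names here are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical district-level indicators (rows = districts).
X = rng.random((200, 3))  # e.g. [unemployment, air_quality, clinic_access]
feature_names = ["unemployment", "air_quality", "clinic_access"]

# Hypothetical label: whether a district was flagged for intervention.
y = (0.6 * X[:, 0] + 0.4 * (1 - X[:, 2]) > 0.5).astype(int)

# A shallow tree keeps the model auditable, which is the point of using
# transparent models in policy settings.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned rules can be printed and inspected by non-specialists.
print(export_text(tree, feature_names=feature_names))
```

The point of the sketch is the last line: unlike a black-box model, the fitted rules can be read, challenged, and revised by the people responsible for the policy.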

This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.

Gather, Share, Build


Article by Nithya Ramanathan & Jim Fruchterman: “Recent milestones in generative AI have sent nonprofits, social enterprises, and funders alike scrambling to understand how these innovations can be harnessed for global good. Along with this enthusiasm, there is also warranted concern that AI will greatly increase the digital divide and fail to improve the lives of 90 percent of the people on our planet. The current focus on funding AI intelligently and strategically in the social sector is critical, and it will help ensure that money has the largest impact.

So how can the social sector meet the current moment?

AI is already good at a lot of things. Plenty of social impact organizations are using AI right now, with positive results. Great resources exist for developing a useful understanding of the current landscape and how existing AI tech can serve your mission, including this report from Stanford HAI and Project Evident and this AI Treasure Map for Nonprofits from Tech Matters.

While some tech-for-good companies are creating AI and thriving—Digital Green, Khan Academy, and Jacaranda Health, among many others—most social sector companies are not ready to build AI solutions. But even organizations that don’t have AI on their radar need to be thinking about how to address one of the biggest challenges to harnessing AI to solve social sector problems: insufficient data…(More)”.

Advanced Flood Hub features for aid organizations and governments


Announcement by Alex Diaz: “Floods continue to devastate communities worldwide, and many are pursuing advancements in AI-driven flood forecasting, enabling faster, more efficient detection and response. Over the past few years, Google Research has focused on harnessing AI modeling and satellite imagery to dramatically improve the reliability of flood forecasting — while working with partners to expand coverage for people in vulnerable communities around the world.

Today, we’re rolling out new advanced features in Flood Hub designed to allow experts to understand flood risk in a given region via inundation history maps, and to understand how a given flood forecast on Flood Hub might propagate throughout a river basin. With the inundation history maps, Flood Hub expert users can view flood-risk areas in high resolution over the map regardless of a current flood event. This is useful in cases where our flood forecasting does not include real-time inundation maps, or for pre-planning of humanitarian work. You can find more explanations about the inundation history maps and more in the Flood Hub Help Center…(More)”.
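The basin-propagation idea lends itself to a simple mental model. The sketch below is not Flood Hub's implementation; it only illustrates, with invented gauge IDs, how a forecast at one gauge can be traced downstream through a river-basin graph.

```python
# Minimal sketch (not Flood Hub's implementation): treat a river basin as a
# directed graph whose edges point downstream, then walk downstream from the
# gauge where a flood is forecast to see which sub-basins the event might
# reach. All gauge IDs and topology here are invented.
from collections import deque

downstream = {
    "gauge_A": ["gauge_B"],
    "gauge_B": ["gauge_C", "gauge_D"],
    "gauge_C": [],
    "gauge_D": [],
}

def reachable_downstream(start: str) -> list[str]:
    """Breadth-first walk downstream from a forecast location."""
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        for nxt in downstream.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                order.append(nxt)
    return order

print(reachable_downstream("gauge_A"))  # ['gauge_B', 'gauge_C', 'gauge_D']
```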

Policymaking assessment framework


Guide by the Susan McKinnon Foundation: “This assessment tool supports the measurement of the quality of policymaking processes – both existing and planned – across sectors. It provides a flexible framework for rating public policy processes using information available in the public domain. The framework’s objective is to simplify the path towards best-practice, evidence-informed policy.

It is intended to accommodate the complexity of policymaking processes and reflect the realities and context within which policymaking is undertaken. The criteria can be tailored for different policy problems and policy types and applied across sectors and levels of government.

The framework is structured around five key domains:

  1. understanding the problem
  2. engagement with stakeholders and partners
  3. outcomes focus
  4. evidence for the solution, and
  5. design and communication…(More)”.
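For readers who want to operationalize the framework, one possible encoding of the five domains as a simple rubric is sketched below; the 1-5 scale and equal weighting are assumptions of this sketch, not part of the Foundation's guide.

```python
# Hedged sketch: one way to encode the framework's five domains as a rubric
# and aggregate ratings. The 1-5 scale and equal weighting are assumptions.
DOMAINS = [
    "understanding the problem",
    "engagement with stakeholders and partners",
    "outcomes focus",
    "evidence for the solution",
    "design and communication",
]

def score_process(ratings: dict[str, int]) -> float:
    """Average a 1-5 rating across the five domains."""
    missing = [d for d in DOMAINS if d not in ratings]
    if missing:
        raise ValueError(f"unrated domains: {missing}")
    if any(not 1 <= ratings[d] <= 5 for d in DOMAINS):
        raise ValueError("ratings must be on a 1-5 scale")
    return sum(ratings[d] for d in DOMAINS) / len(DOMAINS)

print(score_process({d: 4 for d in DOMAINS}))  # 4.0
```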

What 40 Million Devices Can Teach Us About Digital Literacy in America


Blog by Juan M. Lavista Ferres: “…For the first time, Microsoft is releasing a privacy-protected dataset that provides new insights into digital engagement across the United States. This dataset, built from anonymized usage data from 40 million Windows devices, offers the most comprehensive view ever assembled of how digital tools are being used across the country. It goes beyond surveys and self-reported data to provide a real-world look at software application usage across 28,000 ZIP codes, creating a more detailed and nuanced understanding of digital engagement than any existing commercial or government study.

In collaboration with leading researchers at Harvard University and the University of Pennsylvania, we analyzed this dataset and developed two key indices to measure digital literacy:

  • Media & Information Composite Index (MCI): This index captures general computing activity, including media consumption, information gathering, and usage of productivity applications like word processing, spreadsheets, and presentations.
  • Content Creation & Computation Index (CCI): This index measures engagement with more specialized digital applications, such as content creation tools like Photoshop and software development environments. (A toy sketch of how such composites can be formed follows this list.)
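The excerpt does not spell out how the indices are computed, so the following is only a toy sketch of the general shape of a composite usage index per ZIP code; the category names, groupings, and numbers are all hypothetical.

```python
# Illustrative only: the MCI/CCI methodology is not given in the excerpt.
# This sketch shows the general shape of a per-ZIP composite usage index.
import statistics

# Hypothetical per-ZIP usage shares by application category.
usage = {
    "02139": {"media": 0.7, "productivity": 0.5, "creation": 0.30, "dev": 0.20},
    "59901": {"media": 0.6, "productivity": 0.3, "creation": 0.05, "dev": 0.02},
}

MCI_CATEGORIES = ("media", "productivity")  # general computing activity
CCI_CATEGORIES = ("creation", "dev")        # specialized creation/computation

def composite(zip_code: str, categories: tuple[str, ...]) -> float:
    """Unweighted mean of category usage shares for one ZIP code."""
    return statistics.mean(usage[zip_code][c] for c in categories)

for z in usage:
    print(z,
          round(composite(z, MCI_CATEGORIES), 2),
          round(composite(z, CCI_CATEGORIES), 2))
```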

By combining these indices with demographic data, several important insights emerge:

Urban-Rural Disparities Exist—But the Gaps Are Uneven: While rural areas often lag in digital engagement, disparities within urban areas are just as pronounced. Some city neighborhoods have digital activity levels on par with major tech hubs, while others fall significantly behind, revealing a more complex digital divide than previously understood.

Income and Education Are Key Drivers of Digital Engagement: Higher-income and higher-education areas show significantly greater engagement in content creation and computational tasks. This suggests that digital skills—not just access—are critical in shaping economic mobility and opportunity. Even in places where broadband availability is the same, digital usage patterns vary widely, demonstrating that access alone is not enough.

Infrastructure Alone Won’t Close the Digital Divide: Providing broadband connectivity is essential, but it is not a sufficient solution to the challenges of digital literacy. Our findings show that even in well-connected regions, significant skill gaps persist. This means that policies and interventions must go beyond infrastructure investments to include comprehensive digital education, skills training, and workforce development initiatives…(More)”.

Patients’ Trust in Health Systems to Use Artificial Intelligence


Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.

We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care….Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.

On conspiracy theories of ignorance


Essay: “In ‘On the Sources of Knowledge and Ignorance’, Karl Popper identifies a kind of “epistemological optimism”—an optimism about “man’s power to discern truth and to acquire knowledge”—that has played a significant role in the history of philosophy. At the heart of this optimistic view, Popper argues, is the “doctrine that truth is manifest”:

“Truth may perhaps be veiled, and removing the veil may not be easy. But once the naked truth stands revealed before our eyes, we have the power to see it, to distinguish it from falsehood, and to know that it is truth.”

According to Popper, this doctrine inspired the birth of modern science, technology, and liberalism. If the truth is manifest, there is “no need for any man to appeal to authority in matters of truth because each man carried the sources of knowledge in himself”:

“Man can know: thus he can be free. This is the formula which explains the link between epistemological optimism and the ideas of liberalism.”

Although a liberal himself, Popper argues that the doctrine of manifest truth is false. “The simple truth,” he writes, “is that truth is often hard to come by, and that once found it may easily be lost again.” Moreover, he argues that the doctrine is pernicious. If we think the truth is manifest, we create “the need to explain falsehood”:

“Knowledge, the possession of truth, need not be explained. But how can we ever fall into error if truth is manifest? The answer is: through our own sinful refusal to see the manifest truth; or because our minds harbour prejudices inculcated by education and tradition, or other evil influences which have perverted our originally pure and innocent minds.”

In this way, the doctrine of manifest truth inevitably gives rise to “the conspiracy theory of ignorance”…

In previous work, I have criticised how the concept of “misinformation” is applied by researchers and policy-makers. Roughly, I think that narrow applications of the term (e.g., defined in terms of fake news) are legitimate but focus on content that is relatively rare and largely symptomatic of other problems, at least in Western democracies. In contrast, broad definitions inevitably get applied in biased and subjective ways, transforming misinformation research and policy-making into “partisan combat by another name”…(More)”

Using human mobility data to quantify experienced urban inequalities


Paper by Fengli Xu et al: “The lived experience of urban life is shaped by personal mobility through dynamic relationships and resources, marked not only by access and opportunity, but also by inequality and segregation. The recent availability of fine-grained mobility data and context attributes ranging from venue type to demographic mixture offers researchers a deeper understanding of experienced inequalities at scale, and poses many new questions. Here we review emerging uses of urban mobility behaviour data, and propose an analytic framework to represent mobility patterns as a temporal bipartite network between people and places. As this network reconfigures over time, analysts can track experienced inequality along three critical dimensions: social mixing with others from specific demographic backgrounds, access to different types of facilities, and spontaneous adaptation to unexpected events, such as epidemics, conflicts or disasters. This framework traces the dynamic, lived experiences of urban inequality and complements prior work on static inequalities experienced at home and work…(More)”.
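The paper's core representation can be sketched compactly. Under simplifying assumptions (invented visit records, facility-type entropy as the access measure), a temporal bipartite network of person-place visits might be handled like this:

```python
# Sketch of the paper's proposed representation, with invented data: mobility
# as a temporal bipartite network of (person, place, time) visit edges, from
# which one can read off, e.g., the mix of facility types a person reaches.
from collections import Counter
from math import log

# Hypothetical visit edges: (person, place, hour). People sit on one side of
# the bipartite network, places on the other; time makes it reconfigure.
visits = [
    ("p1", "park_3", 9), ("p1", "clinic_7", 11), ("p1", "park_3", 18),
    ("p2", "mall_2", 10), ("p2", "mall_2", 15),
]

place_type = {"park_3": "park", "clinic_7": "health", "mall_2": "retail"}

def access_entropy(person: str) -> float:
    """Shannon entropy of facility types visited: one simple access measure."""
    counts = Counter(place_type[p] for who, p, _ in visits if who == person)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

print(access_entropy("p1"))  # visits two facility types -> entropy > 0
print(access_entropy("p2"))  # visits one facility type  -> entropy == 0
```

The same structure extends naturally to the other two dimensions the paper names: restricting edges to a time window captures how the network reconfigures around an event, and joining on visitor demographics captures social mixing.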

Why these scientists devote time to editing and updating Wikipedia


Article by Christine Ro: “…A 2018 survey of more than 4,000 Wikipedians (as the site’s editors are called) found that 12% had a doctorate. Scientists made up one-third of the Wikimedia Foundation’s 16 trustees, according to Doronina.

Although Wikipedia is the best-known project under the Wikimedia umbrella, there are other ways for scientists to contribute besides editing Wikipedia pages. For example, an entomologist could upload photos of little-known insect species to Wikimedia Commons, a collection of images and other media. A computer scientist could add a self-published book to the digital textbook site Wikibooks. Or a linguist could explain etymology on the collaborative dictionary Wiktionary. All of these are open access, a key part of Wikimedia’s mission.

Although Wikipedia’s structure might seem daunting for new editors, there are parallels with academic documents.

For instance, Jess Wade, a physicist at Imperial College London, who focuses on creating and improving biographies of female scientists and scientists from low- and middle-income countries, says that the talk page, which is the behind-the-scenes portion of a Wikipedia page on which editors discuss how to improve it, is almost like the peer-review file of an academic paper…However, scientists have their own biases about aspects such as how to classify certain topics. This matters, Harrison says, because “Wikipedia is intended to be a general-purpose encyclopaedia instead of a scientific encyclopaedia.”

One example is a long-standing battle over Wikipedia pages on cryptids and folklore creatures such as Bigfoot. Labels such as ‘pseudoscience’ have angered cryptid enthusiasts and raised questions about different types of knowledge. One suggestion is for the pages to feature a disclaimer that says that a topic is not accepted by mainstream science.

Wade raises a point about resourcing, saying it’s especially difficult for the platform to retain academics who might be enthusiastic about editing Wikipedia initially, but then drop off. One reason is time. For full-time researchers, Wikipedia editing could be an activity best left to evenings, weekends and holidays…(More)”.

Conflicts over access to Americans’ personal data emerging across federal government


Article by Caitlin Andrews: “The Trump administration’s fast-moving efforts to limit the size of the U.S. federal bureaucracy, primarily through the recently minted Department of Government Efficiency, are raising privacy and data security concerns among current and former officials across the government, particularly as the administration scales back positions charged with privacy oversight. Efforts to limit the independence of a host of federal agencies through a new executive order — including the independence of the Federal Trade Commission and Securities and Exchange Commission — are also ringing alarm bells among civil society and some legal experts.

According to CNN, several staff within the Office of Personnel Management’s privacy and records-keeping department were fired last week. Staff who handle communications and respond to Freedom of Information Act requests were also let go. Though the entire privacy team was not fired, according to the OPM, details about what kind of oversight will remain within the department were limited. The report also states the staff’s termination date is 15 April.

It is one of several moves the Trump administration has made in recent days reshaping how entities access, and provide oversight of, government agencies’ information.

The New York Times reports on a wide range of incidents within the government where DOGE’s efforts to limit fraudulent government spending by accessing sensitive agency databases have run up against staffers who are concerned about the privacy of Americans’ personal information. In one incident, Social Security Administration acting Commissioner Michelle King was fired after resisting a request from DOGE to access the agency’s database. “The episode at the Social Security Administration … has played out repeatedly across the federal government,” the Times reported…(More)”.