Reimagining the Policy Cycle in the Age of Artificial Intelligence


Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial Intelligence offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications beyond generative AI—such as natural language processing (NLP), decision trees, and basic machine learning algorithms—that continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision support.

This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.

Gather, Share, Build


Article by Nithya Ramanathan & Jim Fruchterman: “Recent milestones in generative AI have sent nonprofits, social enterprises, and funders alike scrambling to understand how these innovations can be harnessed for global good. Along with this enthusiasm, there is also warranted concern that AI will greatly increase the digital divide and fail to improve the lives of 90 percent of the people on our planet. The current focus on funding AI intelligently and strategically in the social sector is critical, and it will help ensure that money has the largest impact.

So how can the social sector meet the current moment?

AI is already good at a lot of things. Plenty of social impact organizations are using AI right now, with positive results. Great resources exist for developing a useful understanding of the current landscape and how existing AI tech can serve your mission, including this report from Stanford HAI and Project Evident and this AI Treasure Map for Nonprofits from Tech Matters.

While some tech-for-good companies are creating AI and thriving—Digital Green, Khan Academy, and Jacaranda Health, among many—most social sector companies are not ready to build AI solutions. But even organizations that don’t have AI on their radar need to be thinking about how to address one of the biggest challenges to harnessing AI to solve social sector problems: insufficient data…(More)”.

Advanced Flood Hub features for aid organizations and governments


Announcement by Alex Diaz: “Floods continue to devastate communities worldwide, and many are pursuing advancements in AI-driven flood forecasting, enabling faster, more efficient detection and response. Over the past few years, Google Research has focused on harnessing AI modeling and satellite imagery to dramatically improve the reliability of flood forecasting — while working with partners to expand coverage for people in vulnerable communities around the world.

Today, we’re rolling out new advanced features in Flood Hub designed to allow experts to understand flood risk in a given region via inundation history maps, and to understand how a given flood forecast on Flood Hub might propagate throughout a river basin. With the inundation history maps, Flood Hub expert users can view flood-risk areas in high resolution on the map even when there is no current flood event. This is useful where our flood forecasting does not include real-time inundation maps, or for pre-planning of humanitarian work. You can find more explanations about the inundation history maps and more in the Flood Hub Help Center…(More)”.

What 40 Million Devices Can Teach Us About Digital Literacy in America


Blog by Juan M. Lavista Ferres: “…For the first time, Microsoft is releasing a privacy-protected dataset that provides new insights into digital engagement across the United States. This dataset, built from anonymized usage data from 40 million Windows devices, offers the most comprehensive view ever assembled of how digital tools are being used across the country. It goes beyond surveys and self-reported data to provide a real-world look at software application usage across 28,000 ZIP codes, creating a more detailed and nuanced understanding of digital engagement than any existing commercial or government study.

In collaboration with leading researchers at Harvard University and the University of Pennsylvania, we analyzed this dataset and developed two key indices to measure digital literacy:

  • Media & Information Composite Index (MCI): This index captures general computing activity, including media consumption, information gathering, and usage of productivity applications like word processing, spreadsheets, and presentations.
  • Content Creation & Computation Index (CCI): This index measures engagement with more specialized digital applications, such as content creation tools like Photoshop and software development environments.
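The blog post does not publish the MCI/CCI formulas, so as a purely illustrative sketch, the idea of a composite engagement index per ZIP code might look like the following. The category names, weights, and usage rates are all hypothetical stand-ins, not Microsoft's actual methodology:

```python
# Illustrative sketch only: category names, weights, and usage numbers
# are hypothetical; the actual MCI/CCI construction is not described here.

MCI_WEIGHTS = {"media": 0.4, "information": 0.3, "productivity": 0.3}
CCI_WEIGHTS = {"content_creation": 0.5, "software_dev": 0.5}

def composite_index(usage, weights):
    """Weighted average of per-category usage rates, each scaled to [0, 1]."""
    return sum(w * usage.get(cat, 0.0) for cat, w in weights.items())

# Hypothetical per-ZIP usage rates (share of devices active in each category).
zip_usage = {
    "98052": {"media": 0.9, "information": 0.8, "productivity": 0.7,
              "content_creation": 0.6, "software_dev": 0.5},
    "59201": {"media": 0.5, "information": 0.4, "productivity": 0.2,
              "content_creation": 0.1, "software_dev": 0.0},
}

scores = {z: {"MCI": composite_index(u, MCI_WEIGHTS),
              "CCI": composite_index(u, CCI_WEIGHTS)}
          for z, u in zip_usage.items()}
```

A weighted-average design like this keeps the two indices comparable across ZIP codes while letting media/information use and specialized creation tools contribute on separate scales.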

By combining these indices with demographic data, several important insights emerge:

Urban-Rural Disparities Exist—But the Gaps Are Uneven

While rural areas often lag in digital engagement, disparities within urban areas are just as pronounced. Some city neighborhoods have digital activity levels on par with major tech hubs, while others fall significantly behind, revealing a more complex digital divide than previously understood.

Income and Education Are Key Drivers of Digital Engagement

Higher-income and higher-education areas show significantly greater engagement in content creation and computational tasks. This suggests that digital skills—not just access—are critical in shaping economic mobility and opportunity. Even in places where broadband availability is the same, digital usage patterns vary widely, demonstrating that access alone is not enough.

Infrastructure Alone Won’t Close the Digital Divide

Providing broadband connectivity is essential, but it is not a sufficient solution to the challenges of digital literacy. Our findings show that even in well-connected regions, significant skill gaps persist. This means that policies and interventions must go beyond infrastructure investments to include comprehensive digital education, skills training, and workforce development initiatives…(More)”.

Patients’ Trust in Health Systems to Use Artificial Intelligence


Paper by Paige Nong and Jodyn Platt: “The growth and development of artificial intelligence (AI) in health care introduces a new set of questions about patient engagement and whether patients trust systems to use AI responsibly and safely. The answer to this question is embedded in patients’ experiences seeking care and trust in health systems. Meanwhile, the adoption of AI technology outpaces efforts to analyze patient perspectives, which are critical to designing trustworthy AI systems and ensuring patient-centered care.

We conducted a national survey of US adults to understand whether they trust their health systems to use AI responsibly and protect them from AI harms. We also examined variables that may be associated with these attitudes, including knowledge of AI, trust, and experiences of discrimination in health care…Most respondents reported low trust in their health care system to use AI responsibly (65.8%) and low trust that their health care system would make sure an AI tool would not harm them (57.7%)…(More)”.

Using human mobility data to quantify experienced urban inequalities


Paper by Fengli Xu et al: “The lived experience of urban life is shaped by personal mobility through dynamic relationships and resources, marked not only by access and opportunity, but also by inequality and segregation. The recent availability of fine-grained mobility data and context attributes, ranging from venue type to demographic mixture, offers researchers a deeper understanding of experienced inequalities at scale, and poses many new questions. Here we review emerging uses of urban mobility behaviour data, and propose an analytic framework to represent mobility patterns as a temporal bipartite network between people and places. As this network reconfigures over time, analysts can track experienced inequality along three critical dimensions: social mixing with others from specific demographic backgrounds, access to different types of facilities, and spontaneous adaptation to unexpected events, such as epidemics, conflicts or disasters. This framework traces the dynamic, lived experiences of urban inequality and complements prior work on static inequalities experienced at home and work…(More)”.
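The temporal bipartite people-places network the abstract describes can be represented minimally as time-stamped visit edges. The sketch below is only an illustration of that representation, not the paper's method; the visit records, demographic labels, and the simple "mixing share" measure are all invented for the example:

```python
from collections import defaultdict

# Minimal sketch of a temporal bipartite network: one node set is people,
# the other is places, and each edge is a time-stamped visit.
# All data and the mixing measure below are invented for illustration.

visits = [
    # (person, place, hour)
    ("p1", "cafe", 9), ("p2", "cafe", 9), ("p3", "gym", 9),
    ("p1", "park", 17), ("p3", "park", 17), ("p2", "gym", 17),
]
income_group = {"p1": "low", "p2": "high", "p3": "high"}

def co_presence(visits):
    """Group visitors by (place, hour): each hour is one network snapshot."""
    snapshot = defaultdict(set)
    for person, place, hour in visits:
        snapshot[(place, hour)].add(person)
    return snapshot

def mixing_share(people, group="low"):
    """Share of co-present visitors belonging to a given demographic group."""
    return sum(income_group[p] == group for p in people) / len(people)

snap = co_presence(visits)
# As the network reconfigures hour by hour, per-place mixing shares trace
# the "social mixing" dimension of experienced inequality described above.
```

Tracking how `mixing_share` shifts across snapshots (e.g. before and after a disruption such as an epidemic) corresponds to the dynamic, rather than static, view of inequality the framework emphasizes.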

Why these scientists devote time to editing and updating Wikipedia


Article by Christine Ro: “…A 2018 survey of more than 4,000 Wikipedians (as the site’s editors are called) found that 12% had a doctorate. Scientists made up one-third of the Wikimedia Foundation’s 16 trustees, according to Doronina.

Although Wikipedia is the best-known project under the Wikimedia umbrella, there are other ways for scientists to contribute besides editing Wikipedia pages. For example, an entomologist could upload photos of little-known insect species to Wikimedia Commons, a collection of images and other media. A computer scientist could add a self-published book to the digital textbook site Wikibooks. Or a linguist could explain etymology on the collaborative dictionary Wiktionary. All of these are open access, a key part of Wikimedia’s mission.

Although Wikipedia’s structure might seem daunting for new editors, there are parallels with academic documents.

For instance, Jess Wade, a physicist at Imperial College London, who focuses on creating and improving biographies of female scientists and scientists from low- and middle-income countries, says that the talk page, which is the behind-the-scenes portion of a Wikipedia page on which editors discuss how to improve it, is almost like the peer-review file of an academic paper…However, scientists have their own biases about aspects such as how to classify certain topics. This matters, Harrison says, because “Wikipedia is intended to be a general-purpose encyclopaedia instead of a scientific encyclopaedia.”

One example is a long-standing battle over Wikipedia pages on cryptids and folklore creatures such as Bigfoot. Labels such as ‘pseudoscience’ have angered cryptid enthusiasts and raised questions about different types of knowledge. One suggestion is for the pages to feature a disclaimer that says that a topic is not accepted by mainstream science.

Wade raises a point about resourcing, saying it’s especially difficult for the platform to retain academics who might be enthusiastic about editing Wikipedia initially, but then drop off. One reason is time. For full-time researchers, Wikipedia editing could be an activity best left to evenings, weekends and holidays…(More)”.

Regulatory Markets: The Future of AI Governance


Paper by Gillian K. Hadfield and Jack Clark: “Appropriately regulating artificial intelligence is an increasingly urgent policy challenge. Legislatures and regulators lack the specialized knowledge required to best translate public demands into legal requirements. Overreliance on industry self-regulation fails to hold producers and users of AI systems accountable to democratic demands. This paper proposes regulatory markets, in which governments require the targets of regulation to purchase regulatory services from a private regulator. This approach to AI regulation could overcome the limitations of both command-and-control regulation and self-regulation. Regulatory markets could enable governments to establish policy priorities for the regulation of AI, whilst relying on market forces and industry R&D efforts to pioneer the methods of regulation that best achieve policymakers’ stated objectives…(More)”.

Social Informatics


Book edited by Noriko Hara and Pnina Fichman: “Social informatics examines how society is influenced by digital technologies and how digital technologies are shaped by political, economic, and socio-cultural forces. The chapters in this edited volume use social informatics approaches to analyze recent issues in our increasingly data-intensive society.

Taking a social informatics perspective, this edited volume investigates the interaction between society and digital technologies and includes research that examines individuals, groups, organizations, and nations, as well as their complex relationships with pervasive mobile and wearable devices, social media platforms, artificial intelligence, and big data. This volume’s contributors range from seasoned and renowned researchers to up-and-coming researchers in social informatics. The readers of the book will understand theoretical frameworks of social informatics; gain insights into recent empirical studies of social informatics in specific areas such as big data and its effects on privacy, ethical issues related to digital technologies, and the implications of digital technologies for daily practices; and learn how the social informatics perspective informs research and practice…(More)”.

Handbook on Governance and Data Science


Handbook edited by Sarah Giest, Bram Klievink, Alex Ingrams, and Matthew M. Young: “This book is based on the idea that there are quite a few overlaps and connections between the field of governance studies and data science. Data science, with its focus on extracting insights from large datasets through sophisticated algorithms and analytics (Provost and Fawcett 2013), provides government with tools to potentially make more informed decisions, enhance service delivery, and foster transparency and accountability. Governance studies, concerned with the processes and structures through which public policy and services are formulated and delivered (Osborne 2006), increasingly rely on data-driven insights to address complex societal challenges, optimize resource allocation, and engage citizens more effectively (Meijer and Bolívar 2016). However, research insights in journals or at conferences remain quite separate, and thus there are limited spaces for having interconnected conversations. In addition, unprecedented societal challenges demand not only innovative solutions but new approaches to problem-solving.

In this context, data science techniques emerge as a crucial element in crafting a modern governance paradigm, offering predictive insights, revealing hidden patterns, and enabling real-time monitoring of public sentiment and service effectiveness, which are invaluable for public administrators (Kitchin 2014). However, the integration of data science into public governance also raises important considerations regarding data privacy, ethical use of data, and the need for transparency in algorithmic decision-making processes (Zuiderwijk and Janssen 2014). In short, this book is a space where governance and data science studies intersect and highlight relevant opportunities and challenges in this space at the intersection of both fields. Contributors to this book discuss the types of data science techniques applied in a governance context and the implications these have for government decisions and services. This also includes questions around the types of data that are used in government and how certain processes and challenges are measured…(More)”.