Stefaan Verhulst
Primer by Meg Young with Sarah Fox, Vinhcent Le and Oscar J. Romero Jr.: “As government technology increasingly mediates people’s access to essential services — and impacts their rights — technology purchasing has never been more high stakes. Yet government technology decision-making processes rarely feature robust public input. Gear Shift: Driving Change in Public Sector Technology through Community Input argues that such input is essential, and that the most strategically important time to elicit it is before a procurement process begins.
This primer explores why public agencies do not typically look to affected people for input on technology design, and explains why technology purchasing will be a focal point for needed change. We call for a deeper gear shift, in which community input is prioritized before the government has even begun a pilot project, and outline specific opportunities and tactics to this end.
Ultimately, this challenge is not technical but democratic, and requires a reconfiguration of how power is distributed in decisions about the technologies that shape public life…(More)”.
Pew Research Center: “Last year, Google introduced “AI Overviews,” a feature that displays an artificial intelligence-generated result summary at the top of many Google search pages. This feature is available to millions of U.S. Google users. Online publishers recently have attributed declining web traffic to these summaries replacing traditional search results, claiming that many users are relying on the summaries instead of following links to the publishers’ websites.
A Pew Research Center report published this spring analyzed data from 900 U.S. adults who agreed to share their online browsing activity. About six-in-ten respondents (58%) conducted at least one Google search in March 2025 that produced an AI-generated summary. Additional analysis found that Google users were less likely to click on result links when visiting search pages with an AI summary compared with those without one. For searches that resulted in an AI-generated summary, users very rarely clicked on the sources cited…(More)”.
Paper by Aaron Martin: “Technological interventions in aid are both complex and deeply ambiguous. Nonetheless, many contemporary controversies surrounding humanitarian data reflect underlying tensions that stem from competing claims over sovereignty. That is, where disputes arise in humanitarian contexts following the unauthorized access to data by a third party, the unconsented sharing of humanitarian data, or the imposition of interoperability requirements on the technical systems of humanitarian agencies, these disputes regularly exhibit deeper concerns about power and authority that go beyond traditional privacy or data protection claims. This article explains the interpretive value of such a sovereignty lens on humanitarian data. To do so, it first provides an overview of how humanitarian data is shared by different actors involved in aid. Then it unpacks the meanings of sovereignty in the humanitarian domain while highlighting the emergence of “pseudo-sovereigns,” that is, actors who assert sovereignty over data in ways that challenge established norms and practices. The analysis reinterprets recent controversies surrounding the collection and sharing of biometrics, namely concerning the Rohingya in Bangladesh, the Houthis in Yemen, “double-registered” people in Kenya, and as part of the humanitarian response in Ukraine, through a sovereignty lens to demonstrate the utility of this perspective on humanitarian data. To better account for the complexities of power, I encourage scholars to center sovereignty considerations in their analyses of surveillance and privacy in humanitarian innovation…(More)”.
Article by Ian Leavitt and Margaret Arnesen: “Comprehensive, timely data helps policymakers and public health officials identify, track, prevent, and treat a variety of health issues, from communicable diseases such as measles to maternal and child health concerns to the opioid epidemic. But throughout the country, this data exists in separate agencies and departments, and numerous barriers prevent connecting the different data sources.
Some states are implementing promising approaches to help agencies better share data. A new brief from The Pew Charitable Trusts examines how Massachusetts supports the use of cross-sector data from various agencies and departments, analyzed holistically, to better target public health efforts.
Fully understanding these types of health threats—where they’re concentrated, how they’re spreading, and who’s at greatest risk—requires many different types of data. Health care providers, public health scientists, social workers, and insurers can analyze their own data but often cannot easily share, connect, and compare information with each other. This makes it difficult to get a more nuanced understanding of a locality’s health—and the threats it faces.
In 2017, the Massachusetts Department of Public Health (DPH) created the Public Health Data Warehouse. By linking data from multiple sources, including health, housing, family services, and other public agencies, the warehouse allows state and local health departments, colleges and universities, health care providers, foundations, private companies, think tanks, and other interested parties to analyze and address priority health and quality of life issues in a comprehensive way.
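To give a sense of what “linking data from multiple sources” can look like in practice, here is a minimal sketch of cross-agency record linkage on a shared de-identified key. The table names, columns, and risk threshold are hypothetical illustrations, not the warehouse’s actual schema or linkage method.

```python
import pandas as pd

# Hypothetical extracts from two agencies, joined on a de-identified key.
# Illustrative only; not the Public Health Data Warehouse's actual design.
health = pd.DataFrame({
    "person_key": ["A1", "A2", "A3"],      # de-identified linkage key
    "opioid_rx_count": [0, 4, 1],
})
housing = pd.DataFrame({
    "person_key": ["A1", "A2", "A4"],
    "eviction_filings": [0, 2, 1],
})

# Link the de-identified records across agencies, then examine
# co-occurrence of risk factors in the combined view.
linked = health.merge(housing, on="person_key", how="outer")
at_risk = linked[(linked["opioid_rx_count"] > 2) & (linked["eviction_filings"] > 0)]
print(linked)
print(f"Records flagged for combined risk: {len(at_risk)}")
```

The point of the sketch is simply that a shared key lets analysts see patterns no single agency’s data would reveal on its own.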
“The driving force behind [the warehouse] was realizing that we had all of these disparate data points telling a piece of the story, but not the whole story together,” Dana Bernson, director of the Data Science, Research, and Epidemiology Division in the Office of Population Health, Massachusetts DPH, said in an interview with Pew…(More)”.
Paper by Federica Zavattaro, Viktor von Wyl, and Felix Gille: “Public trust is crucial for the success of health data-sharing initiatives (HDSIs), as it influences public participation. Although the potential for policies to actively foster trust is widely acknowledged, recent policy analyses suggest that this opportunity is often overlooked in practice.
The study investigates whether and how health policymakers at the European Union level and in France, Italy, and Switzerland prioritise and integrate public trust into their policy work, identifying key gaps and offering preliminary guidance to bridge them.
We conducted 57 semi-structured online interviews with policymakers involved in HDSIs at different stages of the policy process: 20 at the European level, 11 in France, 13 in Italy, and 13 in Switzerland. An inductive thematic approach was employed to identify emerging themes.
Policymakers recognise public trust as crucial for public participation in HDSIs, yet no shared definition of trust in health data-sharing emerged. In France, trust-building is treated as a policy priority and embedded in stakeholder and public engagement processes prior to legislation. At the European, Italian, and Swiss levels, trust remains a vague concept, addressed implicitly without clear strategies. Policymakers highlighted the absence of specific guidance on trust-building and called for its development…(More)”.
Book by Matt Biggar: “…looks at place-based systems change as a real-world solution to our growing environmental and social crises. Distilling lessons from a thirty-year career in the social sector, Matt Biggar offers a practical guide to creating conditions for societal transformation. He presents a vision that reorients people’s daily lives around their neighborhoods and communities, and he shows us how we can get there.
When thinking about place-based systems change, many questions arise. What systems do we change? How do we change them? What outcomes are we seeking? In Connected to Place, Biggar answers these questions for advocates, planners, policymakers, educators, and others interested in systems change. Readers will learn about the ideas, tools, and pathways imperative to creating lasting, regenerative change. By reframing our approach to social progress, Connected to Place outlines the way toward rebuilding connection with nature and local community and revitalizing local and regional economies…(More)”.
Book by Tim Danton: “…tells the story of the birth of the technological world we now live in, all through the origins of twelve influential computers built between 1939 and 1950.
This book transports you back to a time when computers were not mass produced, but lovingly built by hand with electromechanical relays or thermionic valves (aka vacuum tubes). These were large computers, far bigger than a desktop computer. Most would occupy (and warm!) a room. Despite their size, and despite the fact that some of them would help win a war, they had a minuscule fraction of the power of modern computers: back then, a computer with one kilobyte of memory and the ability to process one or two thousand instructions per second was on the cutting edge. The processor in your mobile phone probably processes billions of instructions per second, and has a lot more than one kilobyte of main memory.
In 1940, a computer was someone who ploughed through gruelling calculations each day. A decade later, a computer was a buzzing machine that filled a room. This book tells the story of how our world was reshaped by such computers — and the geniuses who brought them into being, from Alan Turing to John von Neumann.
You’ll discover how these pioneers shortened World War II, and learn hidden truths that governments didn’t want you to know. But this isn’t just a story about how these computers came to be, or the fascinating people behind them: it’s a story about how a new world order, built on technology, sprang into being.

This book is a world tour through the modern history of computing, and it begins in 1939 with the first electronic digital computer, the Atanasoff-Berry computer (ABC). From there, the book moves on to the Berlin-born Zuse Z3 and the Bell Labs’ Complex Number Calculator, before we enter the World War II era with Colossus, Harvard Mark I, and then ENIAC, the first general-purpose digital computer…(More)”
Q and A by George Hobor: “Data help us understand how healthy people and communities are. They show where problems are and help guide support to the right places. They also help us see what’s working and what needs to change. Philanthropy has played a key role in elevating the importance of data.
Over 1.2 million people died during COVID-19, partly because the health system lacked complete and reliable information. The crisis revealed deep flaws in how we collect and use health data—especially for communities of color. In response, the Robert Wood Johnson Foundation (RWJF) created the National Commission to Transform Public Health Data Systems to reimagine a better health data system that represents—and serves—everyone.
Significant progress has been achieved since then, but new threats to public health data have emerged, with the purge and alteration of critical federal data sets. In this Q&A, I reflect on why these data matter, how philanthropy can help and protect them, and what RWJF is doing to respond.
Why are good public health data important for communities?
Public health data track issues that affect us all—from infectious diseases like measles to opioid use to gun violence. These are not rare or isolated events. They are public or social issues and not personal troubles. Thus, they require social interventions to be effectively resolved. Data show us social problems and the limits of personal efficacy…(More)”.
Report by Samantha Shorey: “Public administrators are the primary point of contact between constituents and the government, and their work is crucial to the day-to-day functioning of the state. AI technologies have been touted as a way to increase worker productivity and improve customer service in the public sector, particularly in the face of limited funding for state and local governments. However, previous deployments of automated tools and current AI use cases indicate the reality will be more complicated. This report scans the landscape of AI use in the public sector at the state and local level, evaluating its benefits and harms through the examples of chatbots and automated tools that transcribe audio, summarize policies, and determine eligibility for benefits. These examples reveal how AI can make the experience of work more stressful, devalue workers’ skills, increase individual responsibility, and decrease decision-making quality. Public sector jobs have been an important source of security for middle-class Americans, especially women of color and Indigenous women, for decades. Without an understanding of what is at stake for government workers, what they need to effectively accomplish their tasks, and how hard they already work to provide crucial citizen services, the deployment of AI technologies—sold as a solution in the public sector—will simply create new problems…(More)”.
Article by Eileen Guo: “Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.
Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. The study that details the breach was published on arXiv earlier this month.
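The scale of that estimate follows from straightforward extrapolation: if the audited 0.1% slice is roughly representative, any count found in it scales up by a factor of about 1,000 across the full set. A minimal sketch of that back-of-the-envelope logic, using a hypothetical sample count rather than the study’s actual figures:

```python
# Back-of-the-envelope extrapolation from an audited sample to a full data set.
# The sample count below is a hypothetical placeholder, not a figure from the study.
audited_fraction = 0.001           # researchers audited 0.1% of CommonPool
pii_items_in_sample = 250_000      # hypothetical count of PII items found in that slice

estimated_total = pii_items_in_sample / audited_fraction
print(f"Estimated PII items across the full set: {estimated_total:,.0f}")  # ~250,000,000
```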
The bottom line, says William Agnew, a postdoctoral fellow in AI ethics at Carnegie Mellon University and one of the coauthors, is that “anything you put online can [be] and probably has been scraped.”
The researchers found thousands of instances of validated identity documents—including images of credit cards, driver’s licenses, passports, and birth certificates—as well as over 800 validated job application documents (including résumés and cover letters), which were confirmed through LinkedIn and other web searches as being associated with real people. (In many more cases, the researchers did not have time to validate the documents or were unable to because of issues like image clarity.)
A number of the résumés disclosed sensitive information including disability status, the results of background checks, birth dates and birthplaces of dependents, and race. When résumés were linked to people with online presences, researchers also found contact information, government identifiers, sociodemographic information, face photographs, home addresses, and the contact information of other people (like references).

When it was released in 2023, DataComp CommonPool, with its 12.8 billion data samples, was the largest existing data set of publicly available image-text pairs, which are often used to train generative text-to-image models…(More)”.