Stefaan Verhulst

Book by Matt Biggar: “…looks at place-based systems change as a real-world solution to our growing environmental and social crises. Distilling lessons from a thirty-year career in the social sector, Matt Biggar offers a practical guide to creating conditions for societal transformation. He presents a vision that reorients people’s daily lives around their neighborhoods and communities, and he shows us how we can get there.

When thinking about place-based systems change, many questions arise. What systems do we change? How do we change them? What outcomes are we seeking? In Connected to Place, Biggar answers these questions for advocates, planners, policymakers, educators, and others interested in systems change. Readers will learn about the ideas, tools, and pathways imperative to creating lasting, regenerative change. By reframing our approach to social progress, Connected to Place outlines the way toward rebuilding connection with nature and local community and revitalizing local and regional economies…(More)”.

Connected to Place

Book by Tim Danton: “…tells the story of the birth of the technological world we now live in, all through the origins of twelve influential computers built between 1939 and 1950.

This book transports you back to a time when computers were not mass produced, but lovingly built by hand with electromechanical relays or thermionic valves (aka vacuum tubes). These were large computers, far bigger than a desktop computer. Most would occupy (and warm!) a room. Despite their size, and despite the fact that some of them would help win a war, they had a minuscule fraction of the power of modern computers: back then, a computer with one kilobyte of memory and the ability to process one or two thousand instructions per second was on the cutting edge. The processor in your mobile phone probably processes billions of instructions per second, and has a lot more than one kilobyte of main memory.

In 1940, a computer was someone who ploughed through gruelling calculations each day. A decade later, a computer was a buzzing machine that filled a room. This book tells the story of how our world was reshaped by such computers — and the geniuses who brought them into being, from Alan Turing to John von Neumann.

You’ll discover how these pioneers shortened World War II, and learn hidden truths that governments didn’t want you to know. But this isn’t just a story about how these computers came to be, or the fascinating people behind them: it’s a story about how a new world order, built on technology, sprang into being.

A photo spread from the book shows a 1997 replica of the Atanasoff–Berry Computer behind glass, with its drum memory unit, a panel of switches and knobs, and rows of vacuum tubes, opening the chapter “As difficult as ABC: designing the first electronic digital computer.”

This book is a world tour through the modern history of computing, and it begins in 1939 with the first electronic digital computer, the Atanasoff–Berry Computer (ABC). From there, the book moves on to the Berlin-born Zuse Z3 and Bell Labs’ Complex Number Calculator, before we enter the World War II era with Colossus, Harvard Mark I, and then ENIAC, the first general-purpose digital computer…(More)”

The Computers that Made the World

Q&A by George Hobor: “Data help us understand how healthy people and communities are. They show where problems are and help guide support to the right places. They also help us see what’s working and what needs to change. Philanthropy has played a key role in elevating the importance of data.

Over 1.2 million people died during COVID-19, partly because the health system lacked complete and reliable information. The crisis revealed deep flaws in how we collect and use health data—especially for communities of color. In response, the Robert Wood Johnson Foundation (RWJF) created the National Commission to Transform Public Health Data Systems to reimagine a better health data system that represents—and serves—everyone.

Significant progress has been achieved since then, but new threats to public health data have emerged, with the purge and alteration of critical federal data sets. In this Q&A, I reflect on why these data matter, how philanthropy can help and protect them, and what RWJF is doing to respond.

Why are good public health data important for communities?

Public health data track issues that affect us all—from infectious diseases like measles to opioid use to gun violence. These are not rare or isolated events. They are public or social issues and not personal troubles. Thus, they require social interventions to be effectively resolved. Data show us social problems and the limits of personal efficacy…(More)”.

How Philanthropy Can Stand Up for Public Health Data

Report by Samantha Shorey: “Public administrators are the primary point of contact between constituents and the government, and their work is crucial to the day-to-day functioning of the state. AI technologies have been touted as a way to increase worker productivity and improve customer service in the public sector, particularly in the face of limited funding for state and local governments. However, previous deployments of automated tools and current AI use cases indicate the reality will be more complicated. This report scans the landscape of AI use in the public sector at the state and local level, evaluating its benefits and harms through the examples of chatbots and automated tools that transcribe audio, summarize policies, and determine eligibility for benefits. These examples reveal how AI can make the experience of work more stressful, devalue workers’ skills, increase individual responsibility, and decrease decision-making quality. Public sector jobs have been an important source of security for middle-class Americans, especially women of color and Indigenous women, for decades. Without an understanding of what is at stake for government workers, what they need to effectively accomplish their tasks, and how hard they already work to provide crucial citizen services, the deployment of AI technologies—sold as a solution in the public sector—will simply create new problems…(More)”.

AI and Government Workers: Use Cases in Public Administration

Article by Eileen Guo: “Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.

Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. The study that details the breach was published on arXiv earlier this month.
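
The headline estimate follows from a simple scale-up: whatever is counted in the audited slice is divided by the sampling fraction. A back-of-envelope sketch of that logic, using placeholder counts rather than the study’s actual figures:

```python
# Back-of-envelope extrapolation from an audited sample to the full data set.
# Both numbers below are illustrative placeholders, not the study's figures.

sample_fraction = 0.001        # roughly 0.1% of CommonPool was audited
pii_found_in_sample = 3_000    # hypothetical count of PII items in that sample

# Naive scale-up assumes PII is spread evenly across the whole data set.
estimated_total = pii_found_in_sample / sample_fraction
print(f"Estimated PII items in the full set: ~{estimated_total:,.0f}")
```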

The bottom line, says William Agnew, a postdoctoral fellow in AI ethics at Carnegie Mellon University and one of the coauthors, is that “anything you put online can [be] and probably has been scraped.”

The researchers found thousands of instances of validated identity documents—including images of credit cards, driver’s licenses, passports, and birth certificates—as well as over 800 validated job application documents (including résumés and cover letters), which were confirmed through LinkedIn and other web searches as being associated with real people. (In many more cases, the researchers did not have time to validate the documents or were unable to because of issues like image clarity.) 

A number of the résumés disclosed sensitive information including disability status, the results of background checks, birth dates and birthplaces of dependents, and race. When résumés were linked to people with online presences, researchers also found contact information, government identifiers, sociodemographic information, face photographs, home addresses, and the contact information of other people (like references).

""
Examples of identity-related documents found in CommonPool’s small-scale data set show a credit card, a Social Security number, and a driver’s license. For each sample, the type of URL site is shown at the top, the image in the middle, and the caption in quotes below. All personal information has been replaced, and text has been paraphrased to avoid direct quotations. Images have been redacted to show the presence of faces without identifying the individuals.

When it was released in 2023, DataComp CommonPool, with its 12.8 billion data samples, was the largest existing data set of publicly available image-text pairs, which are often used to train generative text-to-image models…(More)”.

A major AI training data set contains millions of examples of personal data

Article by Diyi Liu and Shashank Mohan: “As the world becomes increasingly interconnected, the rules and norms governing technology no longer remain confined to their jurisdictions of origin. Instead, they ripple outward, creating what scholars have labelled national and regional effects that reflect both governance philosophies and geopolitical ambitions. They represent the process of norm externalization—whereby regulations, standards, and governance approaches developed in one jurisdiction influence or are adopted by others, either through market mechanisms, deliberate policy diffusion, or in response to capacity constraints and power asymmetries. 

These effects are not merely academic constructs, but powerful forces reshaping the global digital order: They enable mapping pathways of policy transfer across borders, and shed light on how external influences interact with domestic politics in regulatory outcomes. Further, norm externalization characterizes geopolitical influence and economic and technological leverage of exporting countries, and reveals normative alignments between socio-political systems that seek to adopt these governance models. 

When the European Union (EU) implements stringent data protection standards through the General Data Protection Regulation (GDPR), companies worldwide often find it more efficient and cost-effective to apply these standards globally rather than maintain separate systems for each jurisdiction, as witnessed when Microsoft extended GDPR rights to all users worldwide. As China extends its Digital Silk Road (DSR) through infrastructure investments across the Majority World (e.g., the cross-border cable projects), its technological standards and governance approaches could be transferred to recipient countries. Similarly, as India develops and exports its digital public infrastructure (DPI), it exerts influence over other countries in the Majority World to adopt similar population-wide digital welfare schemes that are based on open-standards and operationalize interoperability. 

The competition between different regulatory models is particularly consequential for countries in the Majority World, which find themselves navigating competing governance frameworks while attempting to assert their own digital sovereignty…(More)”

Regional Power, Policy Shaping and Digital Futures: Norm Externalization through the Delhi and Beijing Effects

Article by Bloomberg Cities Network: “…The following three lessons from Ho’s work offer practical guidance for local leaders interested in realizing this potential.

Inserting AI at key moments to unlock action.

Most city leaders already know about the more common applications of AI. But Ho argues that being more ambitious with the technology doesn’t always mean developing a new “end-to-end solution,” as he describes them. In fact, sometimes, local leaders can achieve massive impact when they insert the technology in a strategic way at a critical point in a complex workflow. And that, in turn, can create space for civil servants to deliver what people need.

For example, as part of a partnership with California’s Santa Clara County, Ho and his team at Stanford RegLab adapted a large language model so it could quickly parse through millions of records. The objective? Identifying property deeds containing discriminatory language that was intended, decades ago, to limit who could purchase certain homes. Doing so manually could consume nearly 10 years of staff time. The new tool did an initial analysis with near-perfect accuracy in just a few days. And while that work still calls for human review, the lesson for Ho is that this sort of approach can help ensure civil servants aren’t so consumed with bureaucracy that they are unavailable for frontline service delivery.
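
The workflow described here, a fast automated first pass followed by human review, can be sketched in a few lines. The phrase list, function names, and stand-in classifier below are hypothetical illustrations, not RegLab’s actual pipeline:

```python
from typing import Iterable

# Illustrative phrases only; a real screen would use far richer criteria and context.
SUSPECT_PHRASES = ["shall not be sold to", "restricted to persons of", "no person of"]

def keyword_prefilter(deed_text: str) -> bool:
    """Cheap first pass: flag deeds containing any suspect phrase."""
    text = deed_text.lower()
    return any(phrase in text for phrase in SUSPECT_PHRASES)

def llm_flags_covenant(deed_text: str) -> bool:
    """Stand-in for a language-model classifier.

    A real pipeline would send the deed text to a prompted or fine-tuned LLM
    and parse its yes/no answer; here the keyword check is reused so the
    sketch runs end to end.
    """
    return keyword_prefilter(deed_text)

def triage(deeds: Iterable[str]) -> list[str]:
    """Return the deeds that should go to human reviewers."""
    candidates = [d for d in deeds if keyword_prefilter(d)]
    return [d for d in candidates if llm_flags_covenant(d)]

if __name__ == "__main__":
    sample = ["This parcel shall not be sold to any person of ...",
              "Standard grant deed. No use restrictions recorded."]
    print(triage(sample))  # only the first deed is routed to human review
```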

“If you are expending your time on these internal tasks, there are other services that necessarily have to take a hit,” Ho explains. Instead, tools such as this one can help ensure local leaders have capacity to maintain adequate staffing at frontline service counters that provide everything from birth certificates to public benefits.

Creating space for human discretion.

Ho cautions against relying on AI to independently manage some of local government’s most sensitive responsibilities, public benefits chief among them. But even in these complex areas, he argues, AI can play a valuable supporting role by streamlining delivery workflows and freeing up civil servants to focus on the human side of service….

Triggering conversations about how to improve policies.

Ho’s work isn’t only showing how cities can free teams from red-tape tasks. It’s also demonstrating how cities can get to the root of those problems: by improving over-complicated policies and programs for the long term. 

As part of his team’s collaboration focused on San Francisco’s municipal code, the city has been using a search system developed by RegLab to identify every case where legislation requires agency personnel to produce potentially time-consuming reports. Some of the findings would be comical if they weren’t in danger of using precious capacity, such as the rule calling for regular updates on the state of city newspaper racks that no longer exist…(More)”

3 ways AI can help cities add a human touch to service delivery

Press Release: “The U.S. Patent and Trademark Office (USPTO) is launching DesignVision, the first artificial intelligence (AI)-based image search tool available to design patent examiners via the Patents End-to-End (PE2E) search suite. DesignVision is the latest step in the agency’s broader efforts to streamline and modernize examination and reduce application pendency. 

DesignVision is an AI-powered tool that is capable of searching U.S. and foreign industrial design collections using image(s) as an input query. The tool provides centralized access and federated searching of design patents, registrations, trademarks, and industrial designs from over 80 global registers, and returns search results based on, and sortable by, image similarity. 
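
The core mechanic of such a tool, ranking registered designs by visual similarity to a query image, can be illustrated with image embeddings and cosine similarity. The embedding function below is a hypothetical stand-in; the announcement does not describe DesignVision’s actual models or interfaces:

```python
import numpy as np

def embed_image(image_path: str) -> np.ndarray:
    """Hypothetical stand-in for a visual-embedding model.

    A real system would run the image through a trained vision model and
    return a fixed-length feature vector; here a fake vector is derived from
    the file name so the sketch runs without any model weights.
    """
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    vec = rng.normal(size=512)
    return vec / np.linalg.norm(vec)

def rank_by_similarity(query_path: str, register_paths: list[str]) -> list[tuple[str, float]]:
    """Rank registered design images by cosine similarity to the query image."""
    query = embed_image(query_path)
    scored = [(p, float(np.dot(query, embed_image(p)))) for p in register_paths]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Example: rank a few hypothetical register entries against a query drawing.
results = rank_by_similarity("query_design.png", ["us_D123.png", "eu_00456.png", "jp_7890.png"])
for path, score in results:
    print(f"{path}\t{score:.3f}")
```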

DesignVision will augment—not replace—design examiners’ other search tools. Examiners can continue using other PE2E search tools and non-patent literature when conducting their research. The complete text of the Official Gazette Notice can be found on the patent related notices page of the USPTO website…(More)”.

USPTO launches new design patent examination AI tool

Reports by GMF Technology: “The People’s Republic of China (PRC) builds and exports digital infrastructure as part of its Belt and Road Initiative. While the PRC sells technology deals as a “win-win” with partner countries, the agreements also serve Beijing’s geopolitical interests.

That makes it imperative for European and US policymakers to understand the PRC’s tech footprint and assess its global economic and security impact.

To support policymakers in this endeavor, GMF Technology, using a “technology stack” or “tech stack” framework, has produced a series of reports that map the presence of the PRC and its affiliated entities across countries’ technology domains.

Newly released reports on Kazakhstan, Kyrgyzstan, Serbia, and Uzbekistan are built on previous work in two studies by GMF’s Alliance for Securing Democracy (ASD) on the future internet and the digital information stack released, respectively, in 2020 and 2022. The new reports on Central Asian countries, where Russia maintains significant influence as a legacy of Soviet rule, also examine Kremlin influence there. 

The “tech stack” framework features five layers:  

  • Network infrastructure: including optical cables (terrestrial and undersea), telecommunications equipment, satellites, and space-based connectivity infrastructure
  • Data infrastructure: including cloud technology and data centers
  • Devices: including hand-held consumer instruments such as mobile phones, tablets, and laptops, and more advanced Internet-of-Things and AI-enabled devices such as electric vehicles and surveillance cameras
  • Applications: including hardware, software, data analytics, and digital platforms that deliver tailored solutions to consumers, sectors, and industries (e.g., robotic manufacturing)
  • Governance: including the legal and normative framework that governs technology use across the entire tech stack…(More)”.

Wired for Influence: Assessing the People’s Republic of China’s Technology Footprint in Europe and Central Asia

Article by Ali Shiri: “…There are two categories of emerging LLM-enhanced tools that support academic research:

1. AI research assistants: The number of AI research assistants that support different aspects and steps of the research process is growing at an exponential rate. These technologies have the potential to enhance and extend traditional research methods in academic work. Examples include AI assistants that support:

  • Concept mapping (Kumu, GitMind, MindMeister);
  • Literature and systematic reviews (Elicit, Undermind, NotebookLM, SciSpace);
  • Literature search (Consensus, ResearchRabbit, Connected Papers, Scite);
  • Literature analysis and summarization (Scholarcy, Paper Digest, Keenious);
  • And research topic and trend detection and analysis (Scinapse, tlooto, Dimension AI).

2. ‘Deep research’ AI agents: The field of artificial intelligence is advancing quickly with the rise of “deep research” AI agents. These next-generation agents combine LLMs, retrieval-augmented generation and sophisticated reasoning frameworks to conduct in-depth, multi-step analyses.
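
As a rough illustration of how those pieces fit together, the sketch below stubs out both the retriever and the language-model call; it is a schematic of the plan, retrieve, synthesize loop, not any particular product’s architecture:

```python
# Schematic "deep research" loop: plan sub-questions, retrieve evidence, synthesize.
# Both the retriever and the llm() call below are toy placeholders.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by crude keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(terms & set(kv[1].lower().split())),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

def llm(prompt: str) -> str:
    """Placeholder for a language-model call; a real agent would use an LLM API."""
    return f"[model output for: {prompt[:60]}...]"

def deep_research(question: str, corpus: dict[str, str]) -> str:
    # 1. Plan: ask the model to break the question into sub-questions.
    sub_questions = llm(f"List sub-questions for: {question}").splitlines()
    # 2. Retrieve: gather candidate evidence for each sub-question.
    evidence = {sq: retrieve(sq, corpus) for sq in sub_questions}
    # 3. Synthesize: draft an answer grounded in the retrieved documents.
    return llm(f"Answer '{question}' using evidence: {evidence}")

corpus = {"doc1": "large language models and retrieval augmented generation",
          "doc2": "evaluation criteria and benchmarks for research agents"}
print(deep_research("How are deep research agents evaluated?", corpus))
```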

Research is currently being conducted to evaluate the quality and effectiveness of deep research tools. New evaluation criteria are being developed to assess their performance and quality.

Criteria include elements such as cost, speed, editing ease and overall user experience — as well as citation and writing quality, and how these deep research tools adhere to prompts…(More)”.

AI in universities: How large language models are transforming research
