
Stefaan Verhulst

Article by Dane Gambrell: “On a sweltering August afternoon in Williamsburg, Brooklyn, technologist Chris Whong led a small group of researchers, students, and local community members on an unusual walking tour. We weren’t visiting the neighborhood’s trendy restaurants or thrift shops. Instead, we were hunting for overlooked public spaces: pocket parks, street plazas, and other spaces that many New Yorkers walk past without even realizing they’re open to the public.

Our map for this expedition was a new app called NYC Public Space. Whong, a former public servant in NYC’s Department of City Planning, built the platform using generative AI tools to write code he didn’t know how to write himself – a practice often called “vibe coding.” The result is a searchable dataset and map of roughly 2,800 public spaces across New York City, from massive green spaces like Flushing Meadows–Corona Park to tiny triangular plazas you’ve probably never noticed.

New York City has no shortage of places to sit, relax, or eat lunch outside. The city’s public realm includes more than 2,000 parks, hundreds of street plazas, playgrounds, and waterfront areas, as well as roughly 600 privately owned public spaces (POPS) created by developers in exchange for zoning benefits.

What it lacks is an easy way for people to discover these spaces. Some public spaces appear on Google Maps or Apple Maps, but many don’t. Even when they do, it’s often unclear what amenities they offer and whether they’re actually publicly accessible. You might walk by a building in your neighborhood every day but have no idea that it contains a courtyard or indoor plaza open to the public…(More)”

Vibe Coding the City: How One Developer Used Open Data to Map Every Public Space in New York City

Article by Maxim Samson: “Mountains, meridians, rivers, and borders—these are some of the features that divide the world on our maps and in our minds. But geography is far less set in stone than we might believe, and, as Maxim Samson’s Earth Shapers contends, in our relatively short time on this planet, humans have become experts at fundamentally reshaping our surroundings.

From the Qhapaq Ñan, the Inca’s “great road,” and Mozambique’s colonial railways to a Saudi Arabian smart city, and from Korea’s sacred Baekdu-daegan mountain range and the Great Green Wall in Africa to the streets of Chicago, Samson explores how we mold the world around us. And how, as we etch our needs onto the natural landscape, we alter the course of history. These fascinating stories of connectivity show that in our desire to make geographical connections, humans have broken through boundaries of all kinds, conquered treacherous terrain, and carved up landscapes. We crave linkages, and though we do not always pay attention to the in-between, these pathways—these ways of “earth shaping,” in Samson’s words—are key to understanding our relationship with the planet we call home.

An immense work of cultural geography touching on ecology, sociology, history, and politics, Earth Shapers argues that, far from being constrained by geography, we are instead its creators…(More)”.

Earth Shapers: How We Mapped and Mastered the World, from the Panama Canal to the Baltic Way

Paper by Mattia Mazzoli et al: “The pandemic served as an important test case of complementing traditional public health data with non-traditional data (NTD) such as mobility traces, social media activity, and wearables data to inform decision-making. Drawing on an expert workshop and a targeted survey of European modelers, we assess the promise and persistent limitations of such data in pandemic preparedness and response. We distinguish between “first-mile” challenges (accessing and harmonizing data) and “last-mile” challenges (translating insights into actionable interventions). The expert workshop, held in 2024, brought together public health practitioners, academics, policymakers, and industry representatives to reflect on lessons learned and define strategies for translating NTD insights into policymaking. The survey offers evidence of the barriers faced during COVID-19 and highlights key data unavailability and underuse. Our findings reveal ongoing issues with data access, quality, and interoperability, as well as institutional and cognitive barriers to evidence-based decision-making. Around 66% of datasets suffered access problems, with data-sharing reluctance for NTD double that for traditional data (30% vs 15%). Only 10% of respondents reported they could use all the data they needed. We propose a set of recommendations: for first-mile challenges, solutions focus on technical and legal frameworks for data access; for last-mile challenges, we recommend fusion centers, decision accelerator labs, and networks of scientific ambassadors to bridge the gap between analysis and action. Realizing the full value of NTD requires sustained investment in institutional readiness, cross-sectoral collaboration, and a shift toward a culture of data solidarity. Grounded in the lessons of COVID-19, the article can be used to design a roadmap for using NTD to confront a broader array of public health emergencies, from climate shocks to humanitarian crises…(More)”

Non-traditional data in pandemic preparedness and response: identifying and addressing first and last-mile challenges

Essay by Dominic Burbidge: “…The starting point for AI ethics must be the recognition that AI is a simple and limited instrument. Until we master this point, we cannot hope to work back toward a type of ethics that best fits the industry.

Unfortunately, we are constantly being bombarded with the exact opposite: an image of AI as neither simple nor limited. We are told instead that AI is an all-purpose tool that is now taking over everything. There are two prominent versions of this image and both are misguided.

The first is the appeal of the technology’s exponential improvement. Moore’s Law is a good example of this kind of widespread sentiment, a law that more or less successfully predicted that the number of transistors in an integrated circuit would double approximately every two years. That looks like a lot, but remember: all you have in front of you is more transistors. The curve of exponential change looks impressive on a graph, but really the most important change was when we had no transistors and then William Shockley, John Bardeen, and Walter Brattain invented one. The multiple of change from zero to one is infinite, so any subsequent “exponential” rate of change is a climb-down from that original invention.

When technology becomes faster, smaller, or lighter, it gives us the impression of ever-faster change, but all we are really doing is failing to come up with new inventions, such that we have to rely on reworking and remarketing our existing products. That is not exactly progress of the innovative kind, and it by no means suggests that a given technology is unlimited in future potential.

The second argument we often hear is that AI is taking on more and more tasks, which is why it is unlimited in a way that is different from other, more single-use technologies of the past. We are also told that AI is likely to adopt ever more cognitively demanding activities, which seems to be further proof of its open-ended possibilities.

This is sort of true but actually a rather banal point, in the sense that technologies typically take on more and more uses than the original designers could have expected. But that is not evidence that the technology itself has changed. The commercially available microwave oven, for example, came about when American electrical engineer Percy Spencer developed it from British radar technology used in the Second World War, allegedly discovering the heating effect when the candy in his pocket melted in front of a radar set. So technology shifts and reapplies itself, and in this way naturally takes on all kinds of unexpected uses. But new uses of something do not mean its possible uses will be infinite…(More)”.

AI Ethics Is Simpler Than You Think

Paper by Nicolas Steinacker-Olsztyn, Devashish Gosain, Ha Dao: “Large Language Models (LLMs) are increasingly relying on web crawling to stay up to date and accurately answer user queries. These crawlers are expected to honor robots.txt files, which govern automated access. In this study, for the first time, we investigate whether reputable news websites and misinformation sites differ in how they configure these files, particularly in relation to AI crawlers. Analyzing a curated dataset, we find a stark contrast: 60.0% of reputable sites disallow at least one AI crawler, compared to just 9.1% of misinformation sites in their robots.txt files. Reputable sites forbid an average of 15.5 AI user agents, while misinformation sites prohibit fewer than one. We then measure active blocking behavior, where websites refuse to return content when HTTP requests include AI crawler user agents, and reveal that both categories of websites utilize it. Notably, the behavior of reputable news websites in this regard aligns more closely with their declared robots.txt directive than that of misinformation websites. Finally, our longitudinal analysis reveals that this gap has widened over time, with AI-blocking by reputable sites rising from 23% in September 2023 to nearly 60% by May 2025. Our findings highlight a growing asymmetry in content accessibility that may shape the training data available to LLMs, raising essential questions for web transparency, data ethics, and the future of AI training practices…(More)”
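
To see the gatekeeping mechanism in practice, here is a minimal sketch (not the authors’ code) of how such a robots.txt check can be run with Python’s standard library; the target site and the short user-agent list are illustrative, and the paper’s measurement tracks far more AI crawlers:

```python
# Check which well-known AI crawler user agents a site's robots.txt disallows.
# Illustrative sketch only; the paper's measurement covers many more agents.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "PerplexityBot"]

def blocked_ai_crawlers(site: str) -> list[str]:
    """Return the AI user agents not allowed to fetch the site's front page."""
    rp = RobotFileParser()
    rp.set_url(f"https://{site}/robots.txt")
    rp.read()  # fetch and parse robots.txt (everything is allowed if none exists)
    return [ua for ua in AI_CRAWLERS if not rp.can_fetch(ua, f"https://{site}/")]

print(blocked_ai_crawlers("example.com"))  # hypothetical target site
```

Note that this only reads the declared policy; the paper’s second measurement, active blocking, requires sending HTTP requests with each crawler’s user-agent string and comparing the responses returned.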

Is Misinformation More Open? A Study of robots.txt Gatekeeping on the Web

Article by Andy Masley: “Suppose I want to run my own tiny AI model. Not one a lab made, just my own model on a personal device in my home. I go out and buy a second very small computer to run it. I use it a lot. Each day, I ask 100 questions to my mini AI model. Each prompt uses about ten times as much energy as a Google search, but a Google search is so tiny that the prompt also uses a tiny amount of energy. Altogether, my 100 prompts use the same energy as running a microwave for 4 minutes, or playing a video game for about 10 minutes.
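
For readers who want the per-prompt number implied by that comparison, here is a rough back-of-envelope sketch (ours, not the article’s; the 1,100-watt microwave rating is an assumed figure):

```python
# Back-of-envelope: derive per-prompt energy from the article's equivalence
# that 100 prompts ~= 4 minutes of microwave use. The 1,100 W rating is an
# assumption for illustration, not a figure from the article.
microwave_watts = 1100
minutes = 4

energy_100_prompts_wh = microwave_watts * minutes / 60  # ~73 Wh in total
per_prompt_wh = energy_100_prompts_wh / 100             # ~0.73 Wh per prompt
per_search_wh = per_prompt_wh / 10                      # a search is ~1/10 of a prompt

print(f"per prompt: {per_prompt_wh:.2f} Wh, per search: {per_search_wh:.3f} Wh")
```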

Sending 100 prompts to this AI model every single day adds 1/1000th to my daily emissions.

Is what I’m doing wrong?

I think the average person would say no. This is such a tiny addition that I personally should be able to decide whether it’s worthwhile for me. We don’t go around policing whether people have used microwaves a few seconds too long, or played a video game for a few minutes too long. Why try to decide whether it’s evil for me to spend a tiny fraction of my daily energy on a computer program I personally think is valuable? …

A data center is just all these tiny hyper-optimized computers gathered in a single central location, serving hundreds of thousands of people at once. Data centers are concentrations of hyper-efficient computer processes that no one would have any issue with at all if they were in the homes of the individual people using them. If you wouldn’t have a problem with these tiny computers, you shouldn’t have a problem with data centers either. The only difference is the physical location of the computers themselves, and the fact that these tiny computers are actually combined into larger ones to serve multiple people at once. The only reason they stand out is that these processes are concentrated in one building, which makes them look large if you don’t consider how many people are using them. If instead you see data centers as what they really are, building-sized hyper-efficient computers that hundreds of thousands of people are using at any given moment, they stop looking bad for the environment. In fact, they are the most energy-efficient way to do large-scale computing, and computing is already very energy efficient. The larger the data center, the more energy-efficient it is…(More)”.

What a data center is

Google Cloud: “A year and a half ago…we published this list for the first time. It numbered 101 entries. 

It felt like a lot at the time, and served as a showcase of how much momentum both Google and the industry were seeing around generative AI adoption. In the brief period then of gen AI being widely available, organizations of all sizes had begun experimenting with it and putting it into production across their work and across the world, doing so at a speed rarely seen with new technology.

What a difference these past months have made. Our list has now grown by 10X. And still, that’s just scratching the surface of what’s becoming possible with AI across the enterprise, or what might be coming in the next year and a half. [Looking for how to build AI use cases just like these? Check out our handy guide with 101 technical blueprints from some of these real-world examples.]

  • Arizona Health Care Cost Containment System (AHCCCS), Arizona’s Medicaid agency serving more than 2 million people, built an Opioid Use Disorder Service Provider Locator using Vertex AI and Gemini to connect residents with local treatment options. Since its late 2021 launch, the platform has reached over 20,000 unique individuals across 120 Arizona cities with a 55%+ engaged session rate.
  • Arizona State University’s ASU Prep division developed Archie, a Gemini-powered chatbot that provides real-time math tutoring support for middle school and high school students. The AI tutor identifies errors, provides hints and guidance, and increased students’ first-attempt correct answers by 6%.
  • CareerVillage is building an app called Coach to empower job seekers, especially underrepresented youth, in their career preparedness; already featuring 35 career development activities, the aim is to have more than 100 by next year.
  • The Minnesota Division of Driver and Vehicle Services helps non-English speakers get licenses and other services with two-way, real-time translation.
  • mRelief has built an SMS-accessible AI chatbot to simplify the application process for the SNAP food assistance program in the U.S., featuring easy-to-understand eligibility information and direct assistance within minutes rather than days.
  • Nanyang Technological University, Singapore, deployed the Lyon Housing chatbot using Dialogflow CX and Gemini to handle student housing queries. The generative AI solution enhances student experience and saves the customer service workforce more than 100 hours per month.
  • The Qatari Ministry of Labour has launched “Ouqoul,” an AI-powered platform designed to connect expatriate university graduates with job opportunities in the private sector. This platform streamlines the hiring process by integrating AI-driven candidate matching with ministry services for contract authentication and work permit issuance.
  • Tabiya has built a conversational interface, Compass, that helps young people find employment opportunities; the platform asks questions and requests information, drawing out skills and experiences and matching those to appropriate roles.
  • The University of Hawaii System uses Vertex AI and BigQuery to analyze labor market data and build the Hawaii Career Pathways platform, which evaluates student qualifications and interests to create customized profiles. Gemini provides personalized guidance to align students’ academic paths with career opportunities in Hawaii, helping retain graduates in the state.
  • The University of Hawaii System also uses Google Translate to communicate with Pacific Islander students in their native languages, including Hawaiian, Māori, Samoan, Tongan, Cook Islands Māori, Cantonese, and Marshallese. The AI makes career guidance and communication more accessible to the diverse student population.
  • West Sussex County Council, which serves 890,000 residents in the UK, uses Dialogflow to power an online chatbot that engages residents in real-time conversations to determine eligibility for adult social care services and benefits. The conversational AI helps residents quickly understand their qualification status among the 5% of inquiries that actually qualify, reducing pressure on the Customer Service Center.
  • The Fulton Theatre cuts grant-writing time in half by using Gemini in Docs to fill in routine information, helping the team focus on growing the theatre and putting on shows that bring communities together…(More)”.
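
Many of the deployments above pair Gemini with Vertex AI. As a minimal, hedged sketch of what such a call looks like through the Vertex AI Python SDK (the project ID, region, and model name are placeholders; real systems like those listed add grounding data, translation layers, or Dialogflow front ends):

```python
# Minimal Gemini call via the Vertex AI Python SDK (google-cloud-aiplatform).
# Project, region, and model name below are placeholders, not from the list.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "A resident asks whether they qualify for food assistance. "
    "List the eligibility questions a caseworker should ask next."
)
print(response.text)  # text of the top candidate response
```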
1,001 real-world gen AI use cases from the world’s leading organizations

About: “…a website that uses AI to track and map protests and activist movements around the world. The goal is to provide a clean, visual representation of these events for researchers. A non-profit project mapping protests and civic actions globally, we aggregate publicly reported events to help researchers, journalists and communities understand patterns of civic voice…(More)”.

Archive for Voice and Silence

Article by Viktor Mayer-Schönberger: “Much is being said about AI governance these days – by experts and pundits, lobbyists, journalists and policymakers. Convictions run high. Fundamental values are said to be at stake. Increasingly, AI governance statutes are passed. But what do we really mean – or ought to mean – when we speak about AI governance? In the West, at least three distinct categories of meaning can be identified.

The first is by far the most popular and it comes in many different variations and flavours. It’s what has been driving recent legislation, such as the European Union AI Act. And perhaps surprisingly, it has quite little to do with artificial intelligence. Its proponents scrutinise the output of AI processing and find it wanting, for a variety of reasons. …But expecting near perfection from machines while accepting much less from humans does not lead to better outcomes overall. Rather, it keeps us stuck with more flawed, albeit human outputs… Moreover, terms such as ‘fair’ and ‘responsible’, frequently used in such AI governance debates, offer the advantage of vast interpretative flexibility, facilitating their use by many groups in support of their very diverse agendas. These different AI governance voices mean very different things when they use the same words – and from their vantage point that’s more often a feature than a bug, because it gives them and their cause anchorage in the public debates.

The second flavour of AI governance offers a very different take. By focusing on the current economic landscape of digital and online services, its proponents suggest that AI governance is less novel and rather a continuation of digital and internet governance debates that have been raging for decades (Mueller 2025). They argue that most building blocks of AI have been around for some time – data, processing power and self-learning algorithms – and been utilised quite unevenly in the digital economy, often to the effect that large economic players got larger…

The third flavour of AI governance shifts the focus away from how technology affects fairness or markets, yet again. Instead, the attention is on decision-making. If AI is much about helping humans make better decisions, either by guiding them to the supposedly best choice or by choosing for them, AI governance isn’t so much about technology as about how and to what extent individual decision-making processes are shaped by outside influence. It situates the governance question apart from the specifics of a particular technology and asks: How are others, especially society, shaping and altering individual decision-making processes?…(More)”.

Of forests and trees in AI governance

Essay by Andrew Sorota: “…But a quieter danger lies in wait, one that may ultimately prove more corrosive to the human spirit than any killer robot or bioweapon. The risk is that we will come to rely on AI not merely to assist us but to decide for us, surrendering ever larger portions of collective judgment to systems that, by design, cannot acknowledge our dignity.

The tragedy is that we are culturally prepared for such abdication. Our political institutions already depend on what might be called a “paradigm of deference,” in which ordinary citizens are invited to voice preferences episodically — through ballots every few years — while day-to-day decisions are made by elected officials, regulators and technical experts.

Many citizens have even come to defer their civic role entirely by abstaining from voting, whether for symbolic meaning or due to sheer apathy. AI slots neatly into this architecture, promising to supercharge the convenience of deferring while further distancing individuals from the levers of power.

Modern representative democracy itself emerged in the 18th century as a solution to the logistical impossibility of assembling the entire citizenry in one place; it scaled the ancient city-state to the continental republic. That solution carried a price: The experience of direct civic agency was replaced by periodic, symbolic acts of consent. Between elections, citizens mostly observe from the sidelines. Legislative committees craft statutes, administrative agencies draft rules, central banks decide the price of money — all with limited direct public involvement.

This arrangement has normalized an expectation that complex questions belong to specialists. In many domains, that reflex is sensible — neurosurgeons really should make neurosurgical calls. But it also primes us to cede judgment even where the stakes are fundamentally moral or distributive. The democratic story we tell ourselves — that sovereignty rests with the people — persists, but the lived reality is an elaborate hierarchy of custodians. Many citizens have internalized that gap as inevitable…(More)”.

Rescuing Democracy From The Quiet Rule Of AI
