Stefaan Verhulst

Article by Thomas R. Karl, Stephen C. Diggs, Franklin Nutter, Kevin Reed, and Terence Thompson: “From farming and engineering to emergency management and insurance, many industries critical to daily life rely on Earth system and related socioeconomic datasets. NOAA has linked its data, information, and services to trillions of dollars in economic activity each year, and roughly three quarters of U.S. Fortune 100 companies use NASA Earth data, according to the space agency.

Such data are collected in droves every day by an array of satellites, aircraft, and surface and subsurface instruments. But for many applications, not just any data will do.

Trusted, long-standing datasets known as reference quality datasets (RQDs) form the foundation of hazard prediction and planning and are used in designing safety standards, planning agricultural operations, and performing insurance and financial risk assessments, among many other applications. They are also used to validate weather and climate models, calibrate data from other observations that are of less than reference quality, and ground-truth hazard projections. Without RQDs, risk assessments grow more uncertain, emergency planning and design standards can falter, and potential harm to people, property, and economies becomes harder to avoid.
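
To give a concrete sense of what "calibrating data from other observations that are of less than reference quality" can mean in practice, here is a minimal Python sketch. The sensor values and the simple linear correction are illustrative assumptions, not the methods described in the article:

```python
import numpy as np

# Hypothetical example: calibrate a lower-quality temperature series against
# an overlapping reference-quality series using a simple linear fit.
reference = np.array([14.2, 15.1, 16.0, 15.4, 14.8])    # RQD values (deg C)
uncalibrated = np.array([13.1, 14.0, 15.2, 14.3, 13.7])  # co-located cheaper sensor

# Fit uncalibrated -> reference over the overlap period (ordinary least squares).
slope, intercept = np.polyfit(uncalibrated, reference, deg=1)

def calibrate(raw):
    """Apply the linear correction learned from the reference overlap."""
    return slope * np.asarray(raw) + intercept

new_readings = [12.9, 15.5]
print(calibrate(new_readings))  # bias-corrected estimates
```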

Yet some well-established, federally supported RQDs in the United States are now slated to be, or already have been, decommissioned, or they are no longer being updated or maintained because of cuts to funding and expert staff. Leaving these datasets to languish, or losing them altogether, would represent a dramatic—and potentially very costly—shift in the country’s approach to managing environmental risk…(More)”.

The Looming Data Loss That Threatens Public Safety and Prosperity

Paper by Cheng-Chun Lee et al: “Using novel data and artificial intelligence (AI) technologies in crisis resilience and management is increasingly prominent. AI technologies have broad applications, from detecting damages to prioritizing assistance, and have increasingly supported human decision-making. Understanding how AI amplifies or diminishes specific values and how responsible AI practices and governance can mitigate harmful outcomes and protect vulnerable populations is critical. This study presents a responsible AI roadmap embedded in the Crisis Information Management Circle. Through three focus groups with participants from diverse organizations and sectors and a literature review, we develop six propositions addressing important challenges and considerations in crisis resilience and management. Our roadmap covers a broad spectrum of interwoven challenges and considerations on collecting, analyzing, sharing, and using information. We discuss principles including equity, fairness, explainability, transparency, accountability, privacy, security, inter-organizational coordination, and public engagement. Through examining issues around AI systems for crisis management, we dissect the inherent complexities of information management, governance, and decision-making in crises and highlight the urgency of responsible AI research and practice. The ideas presented in this paper are among the first attempts to establish a roadmap for actors, including researchers, governments, and practitioners, to address important considerations for responsible AI in crisis resilience and management…(More)”.

Roadmap Towards Responsible AI in Crisis Resilience and Management

Article by Dilek Fraisl et al: “The termination in February 2025 of the Demographic and Health Surveys, a critical source of data on population, health, HIV, and nutrition in over 90 countries, supported by the United States Agency for International Development, constitutes a crisis for official statistics. This is particularly true for low- and middle-income countries that lack their own survey infrastructure [1]. At a national level, in the United States, proposed cuts to the Environmental Protection Agency by the current administration further threaten the capacity to monitor and achieve environmental sustainability and implement the SDGs [2,3]. Citizen science—data collected through voluntary public contributions—now can and must step up to fill the gap and play a more central role in official statistics.

Demographic and Health Surveys contribute directly to the calculation of around 30 of the indicators that underpin the Sustainable Development Goals (SDGs) [4]. More generally, a third of SDG indicators rely on household survey data [5].

Recent political changes, particularly in the United States, have exposed the risks of relying too heavily on a single country or institution to run global surveys and placing minimal responsibility on individual countries for their own data collection.

Many high-income countries, particularly European ones, are experiencing similar challenges and financial pressures on their statistical systems as their national budgets are increasingly prioritizing defense spending [6]. Along with these budget cuts comes a risk that perceived efficiency gains from artificial intelligence are increasingly viewed as a pretense to put further budgetary pressure on official statistical agencies [7].

In this evolving environment, we argue that citizen science can become an essential part of national data gathering efforts. To date, policymakers, researchers, and agencies have viewed it as supplementary to official statistics. Although self-selected participation can introduce bias, citizen science provides fine-scale, timely, cost-efficient, and flexible data that can fill gaps and help validate official statistics. We contend that, rather than an optional complement, citizen science data should be systematically integrated into national and global data ecosystems…(More)”.
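
As a rough illustration of how the self-selection bias noted above can be partially corrected before citizen science data feed into official statistics, the sketch below post-stratifies a hypothetical sample to known population shares. The strata, counts, and shares are invented for illustration; real reweighting schemes used by statistical offices are more sophisticated:

```python
import pandas as pd

# Hypothetical citizen-science sample: participation skews urban, so responses
# are reweighted to match known population shares (post-stratification).
sample = pd.DataFrame({
    "stratum": ["urban"] * 700 + ["rural"] * 300,
    "indicator": [1] * 420 + [0] * 280 + [1] * 120 + [0] * 180,
})

population_share = {"urban": 0.55, "rural": 0.45}  # e.g. from a census

sample_share = sample["stratum"].value_counts(normalize=True)
sample["weight"] = sample["stratum"].map(
    lambda s: population_share[s] / sample_share[s]
)

naive = sample["indicator"].mean()
weighted = (sample["indicator"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"naive estimate: {naive:.3f}, post-stratified estimate: {weighted:.3f}")
```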

Why citizen science is now essential for official statistics

Working Paper by Geoff Mulgan and Caio Werneck: “City governments across the world usually organise much of their work through functional hierarchies – departments or secretariats with specialised responsibility for transport, housing, sanitation, education, environment and so on. Their approaches mirror those of national governments and the traditional multi-divisional business which had separate teams for manufacturing, marketing, sales, and for different product lines. 

Those hierarchical structures became the norm in the late 19th century and they still work well for stable, bounded problems. They ensure clear accountability; a concentration of specialised knowledge; and a means to engage relevant stakeholders. Often, they bring together officials and professionals with a strong shared ethos – whether for policing or education, transport or housing. 

But vertical silos have also always created problems. Many priorities don’t fit them neatly. Sometimes departments clash, or dump costs onto each other. They may fail to share vital information. 

There is a long history of attempts to create more coherent, coordinated ways of working, and as cities face overlapping emergencies (from pandemics to climate disasters), and slow-burning crises (in jobs, care, security and housing) that cut across these silos, many are looking for new ways to coordinate action. 

Some of the new options make the most of digital technologies which make it much easier to organise horizontally – with shared platforms, data or knowledge, or one-stop shops or portals for citizens. Some involve new roles (for digital, heat or resilience) and new types of team or task force (such as I-Teams for innovation). And many involve new kinds of partnership or collaboration, with mesh-like structures instead of the traditional pyramid hierarchies of public administration…(More)”.

The city as mesh

Paper by Edith Darin: “The digital era has transformed the production and governance of demographic figures, shifting it from a collective, state-led endeavour to one increasingly shaped by private actors and extractive technologies. This paper analyses the implications of these shifts by tracing the evolving status of demographic figures through the lens of Ostrom’s typology of goods: from a club good in royal censuses, to a public good under democratic governance, and now towards a private asset whose collection has become rivalrous and its dissemination excludable. Drawing on case studies involving satellite imagery, mobile phone data, and social media platforms, the study shows how new forms of passive data collection, while providing previously unseen data opportunities, also disrupt traditional relationships between states and citizens, raise ethical and epistemic concerns, and challenge the legitimacy of national statistical institutes. In response, the paper advocates for the reconstitution of demographic figures as a common good, proposing a collective governance model that includes increased transparency, the sharing of anonymised aggregates, and the creation of a Public Demographic Data Library to support democratic accountability and technical robustness in demographic knowledge production…(More)”.

Demographic figures at risk in the digital era: Resisting commodification, reclaiming the common good

Report by OpenAI: “More than 5% of all ChatGPT messages globally are about healthcare, averaging billions of messages each week. Of our more than 800 million regular users, one in four submits a prompt about healthcare every week. More than 40 million turn to ChatGPT every day with healthcare questions.
In the United States, the healthcare system is a long-standing and worsening pain point for many. Gallup finds that views of US healthcare quality have sunk to a 24-year low; that Americans give the system a C+ on access and a D+ on costs; and that a combined 70% believe the system has major problems or is in a state of crisis. In our own research, three in five Americans say the current system is broken, and strong majorities tell us that hospital costs (87%), poor healthcare access (77%), and a lack of nurses (75%) are all serious problems.
For both patients and providers in the US, ChatGPT has become an important ally, helping people navigate the healthcare system, enabling them to self-advocate, and supporting both patients and providers for better health outcomes.

Based on anonymized ChatGPT message data:
– Nearly 2 million messages per week focus on health insurance, including comparing plans, understanding prices, handling claims and billing, checking eligibility and enrollment, and reviewing coverage and cost-sharing details.
– In underserved rural communities, users send an average of nearly 600,000 healthcare-related messages every week.
– And seven in 10 healthcare conversations in ChatGPT happen outside of normal clinic hours.

This report details: (1) how users are turning to ChatGPT for help in navigating the US healthcare system; (2) how they’re turning to ChatGPT to help them close healthcare access gaps, including in “hospital deserts” across the country; and (3) how healthcare providers and workers are using AI in their roles now…(More)”.

AI as a Healthcare Ally

Paper by Hangcheng Zhao and Ron Berman: “Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers’ websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impact on news publishers in the form of lowered consumer demand, reduced demand for newsroom employees, and an increase in news “slop.” Consequently, some publishers strategically responded by blocking LLM access to their websites using the robots.txt file standard.
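
For readers unfamiliar with the mechanism, a publisher that blocks a GenAI crawler typically adds a stanza such as "User-agent: GPTBot" followed by "Disallow: /" to its robots.txt. The sketch below uses Python's standard-library parser to check whether a given bot is blocked; the publisher URL is a placeholder and the list of bot tokens is only indicative:

```python
from urllib.robotparser import RobotFileParser

# Check whether a (hypothetical) publisher's robots.txt blocks common crawlers.
rp = RobotFileParser()
rp.set_url("https://news-publisher.example.com/robots.txt")  # placeholder URL
rp.read()  # fetch and parse the live robots.txt

article_url = "https://news-publisher.example.com/some-article"
for agent in ["GPTBot", "CCBot", "Googlebot"]:
    verdict = "allowed" if rp.can_fetch(agent, article_url) else "blocked"
    print(f"{agent}: {verdict}")
```
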
Using high-frequency granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a consistent and moderate decline in traffic to news publishers occurring after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots can have adverse effects on large publishers by reducing total website traffic by 23% and real consumer traffic by 14% compared to not blocking. Third, on the hiring side, we do not find evidence that LLMs are replacing editorial or content-production jobs yet. The share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and use more advertising and targeting technologies.
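
The following is a stylized sketch of the kind of difference-in-differences comparison described above, not the authors' actual specification. The variable names and toy data are invented; the coefficient on the blocked-by-post interaction plays the role of the estimated effect of blocking:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy panel: log traffic for publishers that do / do not block GenAI bots,
# observed before and after blocking begins.
df = pd.DataFrame({
    "log_traffic": [10.2, 10.1, 10.3, 10.0, 9.8, 9.7, 10.2, 10.1],
    "blocked":     [1,    1,    0,    0,    1,   1,   0,    0],
    "post":        [0,    0,    0,    0,    1,   1,   1,    1],
})

# Two-by-two difference-in-differences via OLS with an interaction term.
model = smf.ols("log_traffic ~ blocked + post + blocked:post", data=df).fit()
print(model.params)  # "blocked:post" is the difference-in-differences estimate
```
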
Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption…(More)”.

The Impact of LLMs on Online News Consumption and Production

Whitepaper by Frontiers: “…shows that AI has rapidly become part of everyday peer review, with 53% of reviewers now using AI tools. The findings in “Unlocking AI’s untapped potential: responsible innovation in research and publishing” point to a pivotal moment for research publishing. Adoption is accelerating and the opportunity now is to translate this momentum into stronger, more transparent, and more equitable research practices as demonstrated in Frontiers’ policy outlines.

Drawing on insights from 1,645 active researchers worldwide, the whitepaper identifies a global community eager to use AI confidently and responsibly. While many reviewers currently rely on AI for drafting reports or summarizing findings, the report highlights significant untapped potential for AI to support rigor, reproducibility, and deeper methodological insight.

The study shows broad enthusiasm for using AI more effectively, especially among early-career researchers (87% adoption) and in rapidly growing research regions such as China (77%) and Africa (66%). Researchers in all regions see clear benefits, from reducing workload to improving communication, and many express a desire for clear, consistent policy recommendations that would enable more advanced use…(More)”.

Most peer reviewers now use AI, and publishing policy must keep pace

Press Release by IBM: “Record-setting wildfires across Bolivia last year scorched an area the size of Greece, displacing thousands of people and leading to widespread loss of crops and livestock. The cause of the fires was attributed to land clearing, pasture burning, and a severe drought during what was Earth’s warmest year on record.

The Bolivia wildfires are just one among hundreds of extreme flood and wildfire events captured in a new global, multi-modal dataset called ImpactMesh, open-sourced this week by IBM Research in Europe and the European Space Agency (ESA). The dataset is also multi-temporal, meaning it features before-and-after snapshots of flooded or fire-scorched areas. The imagery was captured by the Copernicus Sentinel-1 and Sentinel-2 Earth-orbiting satellites over the last decade.

To provide a clearer picture of landscape-level changes, each of the extreme events in the dataset is represented by three types of observations — optical images, radar images, and an elevation map of the impacted area. When storm clouds and smoke block optical sensors from seeing the extent of floods and wildfires from space, radar images and the altitude of the terrain can help to reveal the severity of what just happened…(More)”.
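
As a very rough illustration of why radar and elevation complement optical imagery, the sketch below thresholds a synthetic Sentinel-1-style backscatter grid to flag likely open water and uses elevation to drop implausible pixels. The arrays and thresholds are invented and have no connection to the ImpactMesh data or the IBM/ESA models:

```python
import numpy as np

# Synthetic stand-ins for co-registered observations of one flood event:
# SAR backscatter (dB), where open water typically appears very dark,
# and a digital elevation model (metres).
backscatter_db = np.array([[-22.0, -8.5, -7.9],
                           [-21.5, -20.8, -9.2],
                           [-6.4, -19.9, -8.8]])
elevation_m = np.array([[3.0, 40.0, 55.0],
                        [2.5, 4.0, 60.0],
                        [80.0, 5.0, 45.0]])

# Crude flood mask: low backscatter suggests water; drop high-elevation pixels
# that are unlikely to be flooded. Thresholds here are illustrative only.
water_candidate = backscatter_db < -15.0
flood_mask = water_candidate & (elevation_m < 10.0)

print(flood_mask)
print(f"estimated flooded fraction: {flood_mask.mean():.2f}")
```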

IBM and ESA open-source AI models trained on a new dataset for analyzing extreme floods and wildfires

Article by Louis Menand: “Once, every middle-class home had a piano and a dictionary. The purpose of the piano was to be able to listen to music before phonographs were available and affordable. Later on, it was to torture young persons by insisting that they learn to do something few people do well. The purpose of the dictionary was to settle intra-family disputes over the spelling of words like “camaraderie” and “sesquipedalian,” or over the correct pronunciation of “puttee.” (Dad wasn’t always right!) Also, it was sometimes useful for doing homework or playing Scrabble.

This was the state of the world not that long ago. In the late nineteen-eighties, Merriam-Webster’s Collegiate Dictionary was on the Times best-seller list for a hundred and fifty-five consecutive weeks. Fifty-seven million copies were sold, a number believed to be second only, in this country, to sales of the Bible. (The No. 1 print dictionary in the world is the Chinese-language Xinhua Dictionary; more than five hundred million copies have sold since it was introduced, in 1953.)

There was good money in the word business. Then came the internet and, with it, ready-to-hand answers to all questions lexical. If you are writing on a computer, it’s almost impossible to misspell a word anymore. It’s hard even to misplace a comma, although students do manage it. And, if you run across an unfamiliar word, you can type it into your browser and get a list of websites with information about it, often way more than you want or need. Like the rest of the analog world, legacy dictionaries have had to adapt or perish. Stefan Fatsis’s “Unabridged: The Thrill of (and Threat to) the Modern Dictionary” (Atlantic Monthly Press) is a good-natured and sympathetic account of what seems to be a losing struggle…(More)”.

Is the Dictionary Done For?
