
Stefaan Verhulst

Press Release by IBM: “Record-setting wildfires across Bolivia last year scorched an area the size of Greece, displacing thousands of people and leading to widespread loss of crops and livestock. The cause of the fires was attributed to land clearing, pasture burning, and a severe drought during what was Earth’s warmest year on record.

The Bolivia wildfires are just one among hundreds of extreme flood and wildfire events captured in a new global, multi-modal dataset called ImpactMesh, open-sourced this week by IBM Research in Europe and the European Space Agency (ESA). The dataset is also multi-temporal, meaning it features before-and-after snapshots of flooded or fire-scorched areas. The imagery was captured by the Copernicus Sentinel-1 and Sentinel-2 Earth-orbiting satellites over the last decade.

To provide a clearer picture of landscape-level changes, each of the extreme events in the dataset is represented by three types of observations — optical images, radar images, and an elevation map of the impacted area. When storm clouds and smoky fires block optical sensors from seeing the extent of floods and wildfires from space, radar images and the altitude of the terrain can help to reveal the severity of what just happened…(More)”.
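To make that structure concrete, here is a minimal Python sketch of how a single event record of this kind might be laid out. The field names, band counts, and array shapes are illustrative assumptions, not the dataset’s published schema:

```python
# Hypothetical layout of one multi-modal, multi-temporal event record,
# following the description above: optical (Sentinel-2) and radar
# (Sentinel-1) snapshots before and after the event, plus an elevation map.
# All field names and shapes are assumptions, not ImpactMesh's real schema.
from dataclasses import dataclass

import numpy as np


@dataclass
class EventSample:
    event_type: str             # e.g. "wildfire" or "flood"
    optical_before: np.ndarray  # Sentinel-2 multispectral bands, (bands, H, W)
    optical_after: np.ndarray
    radar_before: np.ndarray    # Sentinel-1 SAR backscatter, (polarizations, H, W)
    radar_after: np.ndarray
    elevation: np.ndarray       # terrain elevation over the same area, (H, W)


# Placeholder wildfire sample: 13 Sentinel-2 bands, 2 SAR polarizations.
sample = EventSample(
    event_type="wildfire",
    optical_before=np.zeros((13, 256, 256)),
    optical_after=np.zeros((13, 256, 256)),
    radar_before=np.zeros((2, 256, 256)),
    radar_after=np.zeros((2, 256, 256)),
    elevation=np.zeros((256, 256)),
)
```

The redundancy is the point: when clouds or smoke degrade the optical pair, a model can still fall back on the radar pair and the elevation layer.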

IBM and ESA open-source AI models trained on a new dataset for analyzing extreme floods and wildfires

Article by Louis Menand: “Once, every middle-class home had a piano and a dictionary. The purpose of the piano was to be able to listen to music before phonographs were available and affordable. Later on, it was to torture young persons by insisting that they learn to do something few people do well. The purpose of the dictionary was to settle intra-family disputes over the spelling of words like “camaraderie” and “sesquipedalian,” or over the correct pronunciation of “puttee.” (Dad wasn’t always right!) Also, it was sometimes useful for doing homework or playing Scrabble.

This was the state of the world not that long ago. In the late nineteen-eighties, Merriam-Webster’s Collegiate Dictionary was on the Times best-seller list for a hundred and fifty-five consecutive weeks. Fifty-seven million copies were sold, a number believed to be second only, in this country, to sales of the Bible. (The No. 1 print dictionary in the world is the Chinese-language Xinhua Dictionary; more than five hundred million copies have sold since it was introduced, in 1953.)

There was good money in the word business. Then came the internet and, with it, ready-to-hand answers to all questions lexical. If you are writing on a computer, it’s almost impossible to misspell a word anymore. It’s hard even to misplace a comma, although students do manage it. And, if you run across an unfamiliar word, you can type it into your browser and get a list of websites with information about it, often way more than you want or need. Like the rest of the analog world, legacy dictionaries have had to adapt or perish. Stefan Fatsis’s “Unabridged: The Thrill of (and Threat to) the Modern Dictionary” (Atlantic Monthly Press) is a good-natured and sympathetic account of what seems to be a losing struggle…(More)”.

Is the Dictionary Done For?

Book by C. Thi Nguyen: “…takes us deep into the heart of games, and into the depths of bureaucracy, to see how scoring systems shape our desires.

Games are the most important art form of our era. They embody the spirit of free play. They show us the subtle beauty of action everywhere in life: in video games, sports, and board games, but also in cooking, gardening, fly-fishing, and running. They remind us that it isn’t always about outcomes, but about how glorious it feels to be doing the thing. And the scoring systems help get us there by giving us new goals to try on.

Scoring systems are also at the center of our corporations and bureaucracies—in the form of metrics and rankings. They tell us exactly how to measure our success. They encourage us to outsource our values to an external authority. And they push us to value simple, countable things. Metrics don’t capture what really matters; they only capture what’s easy to measure. The price of that clarity is our independence.

The Score asks us: is this the game you really want to be playing?…(More)”.

The Score

Crust News: “At some point this year it became obvious that simply writing about immigration enforcement in the United States was no longer enough. Every time something happened, it happened in isolation. A raid on Canal Street, an abduction in Chicago, an ICE agent who made a single headline for his actions. But the reality of authoritarianism is that it is an entire system: there are no isolated incidents; they are all connected.

There was nowhere to see these connections in their entirety, so we built a place for them to live. And best of all, we’re doing so outside the USA, where Trump’s regime can’t get to us.

The ICE List Wiki is now public. It documents immigration enforcement activity across the United States: not just ICE, but Border Patrol, HSI, DHS more broadly, and the hundreds of local police departments operating under 287(g) agreements. Agent identities, incidents, raids, vehicles, supporting agencies, and the companies propping up the regime are recorded as the interconnected system that they are. Entries are linked to each other so that nothing exists on its own anymore. This is our Christmas gift to the USA: a record that refuses to forget.

The reason this became necessary has everything to do with the political moment we are in. Trump’s return to power has accelerated an enforcement machine that was already dangerous: not yet authoritarian, but ripe to become so. What exists now is a system that moves quickly and loudly, with very little interest in being legible to the public, avoiding accountability at every step.

Authoritarianism doesn’t usually arrive with a single dramatic act. It arrives through administration, repetition, and exhaustion. They break you down, and you forget how bad those initial steps were, because they have become normalised.

There is a huge misconception that this is just ICE; it’s not. ICE shows up, but so do CBP, HSI, the DEA, the FBI, local police, and even postmasters. 287(g) agreements turn police officers into extensions of Trump’s extremism while allowing everyone involved to hide behind the headlines about ICE, as if that is all that is going wrong in this moment. Together, these corrupted organisations are forming something much larger, much darker, and much more frightening than anything the USA has seen at home, but reminiscent of what the USA has seen in historical wars abroad.

We want to remove the misconception and track the whole thing. As much as we possibly can…(More)”.

The ICE List Wiki 

Article by Shana Lynch: “…After years of fast expansion and billion-dollar bets, 2026 may mark the moment artificial intelligence confronts its actual utility. In their predictions for the next year, Stanford faculty across computer science, medicine, law, and economics converge on a striking theme: The era of AI evangelism is giving way to an era of AI evaluation. Whether it’s standardized benchmarks for legal reasoning, real-time dashboards tracking labor displacement, or clinical frameworks for vetting the flood of medical AI startups, the coming year demands rigor over hype. The question is no longer “Can AI do this?” but “How well, at what cost, and for whom?”

Learn more about what Stanford HAI faculty expect in the new year…As the buzz around the use of GenAI builds, the creators of the technologies will get frustrated with the long decision cycles at health systems and begin going directly to end users, with applications made available for “free.” Consider, for example, efforts such as literature summaries by OpenEvidence and on-demand answers to clinical questions by AtroposHealth.

On the technology side, we will see a rise in generative transformers that have the potential to forecast diagnoses, treatment response, or disease progression without needing any task-specific labels.

Given this rise in available solutions, the need for patients to know the basis on which AI “help” is being provided will become crucial (see my prior commentary on this). Researchers’ ability to keep up with technology developments via good benchmarking will be stretched thin, even though it is widely recognized to be important. And we will see a rise in solutions that empower patients to have agency in their own care (e.g., this example involving cancer treatment)…(More)”.

Stanford AI Experts Predict What Will Happen in 2026 

Book by Ben Zweig: “…offers a revolutionary approach to transforming human capital management through the power of taxonomies. The book follows the experience and ideas of key individuals―from the founders of Wall Street, to the original management consultant, to a young data scientist just out of grad school looking to make sense of the modern workforce―in order to illustrate why our current human capital infrastructure is not serving employees well and what we can do to change that.

By categorizing and organizing workforce data, Zweig provides a practical roadmap for creating a more efficient and data-driven labor market. This book includes key insights on how to:

  • Use AI and similar large language model technologies to support businesses with appropriate categorization and regimentation of data
  • Know whether or not a taxonomy can be useful and functional for an organization in its ability to be flexible, auditable, and adaptable
  • Build a taxonomy that meets the needs of a workforce or organization through clustering, labeling, and production
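As an illustration of that last point, here is a minimal, hedged sketch of the clustering-and-labeling step: grouping job titles and naming each group by its dominant terms. The titles, the TF-IDF representation, and the cluster count are assumptions chosen for demonstration, not Zweig’s actual method:

```python
# Illustrative clustering-and-labeling step for a small job taxonomy.
# The titles, vectorizer, and cluster count are demonstration assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

job_titles = [
    "Software Engineer", "Backend Developer", "Data Scientist",
    "Machine Learning Engineer", "Account Executive", "Sales Manager",
    "Sales Development Representative", "HR Business Partner",
    "Recruiter", "Talent Acquisition Specialist",
]

# Clustering: embed each title as a TF-IDF vector and group similar ones.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(job_titles)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Labeling: name each cluster after its two highest-weight terms.
terms = vectorizer.get_feature_names_out()
for cluster_id in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[cluster_id].argsort()[::-1][:2]
    members = [t for t, c in zip(job_titles, kmeans.labels_) if c == cluster_id]
    print(f"Cluster '{' / '.join(terms[i] for i in top)}': {members}")
```

In a production job architecture, the choice of embedding model, the number of clusters, and human review of the resulting labels would all carry far more weight than this toy example suggests.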

Combining storytelling with real-world examples, theoretical analysis, and a practical framework, Job Architecture is an essential guide for companies to manage a competitive, modern workforce that improves the working experience for all employees…(More)”.

Job Architecture

World Bank Report: “This brief presents the 2025 update of the GovTech Maturity Index (GTMI), offering a global snapshot of public sector digital transformation across 197 economies. The GTMI assesses four focus areas: Core Government Systems (CGSI), Online Public Service Delivery (PSDI), Digital Citizen Engagement (DCEI), and GovTech Enablers (GTEI), using 48 indicators. The methodology combines self-reported survey data from 158 economies with publicly available information for the remaining 39. Findings indicate overall progress since 2022 but widening disparities between higher-income (Group A) and lower-income (Group D) economies. Advances are noted in core systems (e.g., government cloud) and service delivery (e.g., customs services, digital ID), while digital citizen engagement remains the least mature area and adoption of a whole-of-government approach is limited. The brief recommends accelerating implementation of interoperability frameworks, strengthening sustainability of online service portals, and updating GovTech strategies in line with evolving technologies. It underscores the need for targeted support to low-income regions, particularly in Africa, and calls for clear monitoring frameworks to track progress and inform evidence-based policymaking…(More)”.
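As a rough illustration of how a composite index like this can be assembled from its four focus-area scores, consider the sketch below. The equal weights and sample values are assumptions for demonstration; the World Bank’s actual normalization and weighting are more involved:

```python
# Illustrative aggregation of the four GTMI focus areas into one composite
# score. Equal weights and the sample values are assumptions, not the
# World Bank's published methodology.
WEIGHTS = {"CGSI": 0.25, "PSDI": 0.25, "DCEI": 0.25, "GTEI": 0.25}

def composite_gtmi(scores: dict[str, float]) -> float:
    """Weighted average of the four sub-indices, each scored 0 to 1."""
    return sum(WEIGHTS[area] * scores[area] for area in WEIGHTS)

# Example: an economy strong on core systems but weaker on citizen
# engagement, mirroring the report's finding that DCEI lags behind.
economy = {"CGSI": 0.82, "PSDI": 0.74, "DCEI": 0.41, "GTEI": 0.66}
print(f"GTMI = {composite_gtmi(economy):.2f}")  # GTMI = 0.66
```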

GovTech Maturity Index 2025 : Tracking Public Sector Digital Transformation Worldwide

Article by Mira Mohsini & Andres Lopez: “When the Coalition of Communities of Color (CCC) began a multi-year collaboration with the Oregon Health Authority (OHA), they worked together to modernize a critical public health information source: the Oregon Student Health Survey. This survey, disseminated annually across Oregon, was designed to track health trends and inform policy decisions affecting thousands of young people and families.

But there was a problem. Year after year, this survey illuminated inequities, showing, for example, that students of color experienced higher rates of bullying or mental health challenges, without providing any insight into why these inequities existed, how they were experienced, or what communities wanted done about them. The data revealed gaps but offered no pathways to close them.

Working alongside other culturally specific organizations within their coalition and researchers of color in their region, CCC set out to demonstrate what better data could look like for the Oregon Student Health Survey. They worked with high school teachers who had deep relationships with students and met with students to understand what kinds of questions mattered most to them. Simple and straightforward questions like “How are you doing?” and “What supports do you need?” revealed issues that the state’s standardized surveys had completely missed. The process generated rich, contextual data showing not just that systems were failing, but how they were failing and how students desired their needs to be met. The process also demonstrated that working with people with lived experiences of the issues being researched generated better questions and, therefore, better data about these issues.

And the improvements resulting from better data were tangible. OHA created a Youth Data Council, involving young people directly in designing aspects of the next version of the Student Health Survey. CCC documented the survey modernization process in a detailed community brief. For the first time ever, the Oregon Student Health Survey included three open-ended questions, yielding over 4,000 qualitative responses. OHA published a groundbreaking analysis of what students actually wanted to say when given the chance…(More)”.

Community Data Is Trusted Evidence

Paper by Emilio Ferrara: “Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as “deepfakes” or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift. GenAI enables synthetic realities: coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023-2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether…(More)”.

The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth

Article by Thijs van de Graaf: “Artificial intelligence is often cast as intangible, a technology that lives in the cloud and thinks in code. The reality is more grounded. Behind every chatbot or image generator lie servers that draw electricity, cooling systems that consume water, chips that rely on fragile supply chains, and minerals dug from the earth.

That physical backbone is rapidly expanding. Data centers are multiplying in number and in size. The largest ones, “hyperscale” centers, have power needs in the tens of megawatts, at the scale of a small city. Amazon, Microsoft, Google, and Meta already run hundreds worldwide, but the next wave is far larger, with projects at gigawatt scale. In Abu Dhabi, OpenAI and its partners are planning a 5-gigawatt campus, matching the output of five nuclear reactors and sprawling across 10 square miles.

Economists debate when, if ever, these vast investments will pay off in productivity gains. Even so, governments are treating AI as the new frontier of industrial policy, with initiatives on a scale once reserved for aerospace or nuclear power. The United Arab Emirates appointed the world’s first minister for artificial intelligence in 2017. France has pledged more than €100 billion in AI spending. And in the two countries at the forefront of AI, the race is increasingly geopolitical: The United States has wielded export controls on advanced chips, while China has responded with curbs on sales of key minerals.

The contest in algorithms is just as much a competition for energy, land, water, semiconductors, and minerals. Supplies of electricity and chips will determine how fast the AI revolution moves and which countries and companies will control it…(More)”.

Inside the AI-Led Resource Race
