
Stefaan Verhulst

Crust News: “At some point this year it became obvious that simply writing about immigration enforcement in the United States was no longer enough. Every time something happened, it happened in isolation. A raid on Canal Street, an abduction in Chicago, an ICE agent who made a single headline for his actions. But the reality of authoritarianism is that it is an entire system: there are no isolated incidents; they are all connected.

There was nowhere to see these connections in their entirety, so we built a place for them to stay. And best of all, we’re doing so outside the USA, where Trump’s regime can’t get to us.

The ICE List Wiki is now public. It documents immigration enforcement activity across the United States: not just ICE, but Border Patrol, HSI, DHS more broadly, and the hundreds of local police departments operating under 287(g) agreements. Agent identities, incidents, raids, vehicles, supporting agencies, and the companies propping up the regime are recorded as the interconnected system that they are. Entries are linked to each other so that nothing exists on its own anymore. This is our Christmas gift to the USA: a record that refuses to forget.

The reason this became necessary has everything to do with the political moment we are in. Trump’s return to power has accelerated an enforcement machine that was already dangerous: not yet authoritarian, but ripe to become so. What exists now is a system that moves quickly and loudly, with very little interest in being legible to the public, and that avoids accountability at every step.

Authoritarianism doesn’t usually arrive with a single dramatic act. It arrives through administration, repetition, and exhaustion. They break you down, and you forget how bad those initial steps were, because they have become normalised.

There is a huge misconception that this is just ICE; it’s not. ICE shows up, but so do CBP, HSI, DEA, FBI, local police, and even postmasters. 287(g) agreements turn police officers into extensions of Trump’s extremism while allowing everyone involved to hide behind the headlines about ICE, as if that is all that is going wrong in this moment. Together, these corrupted organisations are forming something much larger, much darker, and much more frightening than anything the USA has seen at home, but reminiscent of what the USA has seen in historical wars abroad.

We want to remove the misconception and track the whole thing. As much as we possibly can…(More)”.

The ICE List Wiki 

Article by Shana Lynch: “…After years of fast expansion and billion-dollar bets, 2026 may mark the moment artificial intelligence confronts its actual utility. In their predictions for the next year, Stanford faculty across computer science, medicine, law, and economics converge on a striking theme: The era of AI evangelism is giving way to an era of AI evaluation. Whether it’s standardized benchmarks for legal reasoning, real-time dashboards tracking labor displacement, or clinical frameworks for vetting the flood of medical AI startups, the coming year demands rigor over hype. The question is no longer “Can AI do this?” but “How well, at what cost, and for whom?”

Learn more about what Stanford HAI faculty expect in the new year…As the buzz around the use of GenAI builds, the creators of the technologies will get frustrated with the long decision cycles at health systems and begin going directly to the user in the form of applications that are made available for “free” to end users. Consider, for example, efforts such as literature summaries by OpenEvidence and on-demand answers to clinical questions by AtroposHealth.

On the technology side, we will see a rise in generative transformers that have the potential to forecast diagnoses, treatment response, or disease progression without needing any task-specific labels.
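The label-free pattern described above can be illustrated, very loosely, with an off-the-shelf zero-shot classifier: no task-specific training data, just a list of candidate conditions supplied at inference time. This is only a sketch of the idea, not any of the clinical models the commentary has in mind; the model name, the note, and the candidate labels are illustrative assumptions.

```python
# Rough illustration of label-free ("zero-shot") prediction over clinical text.
# Not a clinical product and not for clinical use; the model and candidate labels
# are assumptions chosen only to show prediction without task-specific labels.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

note = (
    "72-year-old with progressive shortness of breath on exertion, "
    "bilateral ankle swelling, and orthopnea over the past two weeks."
)
candidate_conditions = ["heart failure", "pneumonia", "asthma", "pulmonary embolism"]

result = classifier(note, candidate_labels=candidate_conditions)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```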

Given this rise in available solutions, the need for patients to know the basis on which AI “help” is being provided will become crucial (see my prior commentary on this). The ability for researchers to keep up with technology developments via good benchmarking will be stretched thin, even if it is widely recognized to be important. And we will see a rise in solutions that empower patients to have agency in their own care (e.g., this example involving cancer treatment)…(More)”.

Stanford AI Experts Predict What Will Happen in 2026 

Book by Ben Zweig: “…offers a revolutionary approach to transforming human capital management through the power of taxonomies. The book follows the experience and ideas of key individuals―from the founders of Wall Street, to the original management consultant, to a young data scientist just out of grad school looking to make sense of the modern workforce―in order to illustrate why our current human capital infrastructure is not serving employees well and what we can do to change that.

By categorizing and organizing workforce data, Zweig provides a practical roadmap for creating a more efficient and data-driven labor market. This book includes key insights on how to:

  • Use AI and similar large language model technologies to support businesses with appropriate categorization and regimentation of data
  • Know whether a taxonomy can be useful and functional for an organization, and whether it can remain flexible, auditable, and adaptable
  • Build a taxonomy that meets the needs of a workforce or organization through clustering, labeling, and production (a rough clustering sketch follows this list)
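As a rough illustration of the clustering and labeling steps mentioned above (not Zweig’s actual method), free-text job titles can be embedded and grouped into draft taxonomy nodes; the embedding model, sample titles, and cluster count below are assumptions made for the sketch.

```python
# Minimal sketch: cluster raw job titles into draft taxonomy nodes.
# Not the book's methodology; the embedding model, titles, and cluster count
# are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

titles = [
    "Software Engineer II", "Backend Developer", "Data Scientist",
    "ML Engineer", "HR Business Partner", "Talent Acquisition Specialist",
    "Recruiter", "Site Reliability Engineer",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # embed titles into vectors
embeddings = model.encode(titles)

kmeans = KMeans(n_clusters=3, random_state=0, n_init=10).fit(embeddings)

# Group titles by cluster; a human reviewer (or an LLM) would then label each
# cluster as a taxonomy node, e.g. "Engineering" or "People Operations".
clusters = {}
for title, cluster_id in zip(titles, kmeans.labels_):
    clusters.setdefault(int(cluster_id), []).append(title)
for cluster_id, members in sorted(clusters.items()):
    print(cluster_id, members)
```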

Combining storytelling with real-world examples, theoretical analysis, and a practical framework, Job Architecture is an essential guide for companies to manage a competitive, modern workforce that improves the working experience for all employees…(More)”.

Job Architecture

World Bank Report: “This brief presents the 2025 update of the GovTech Maturity Index (GTMI), offering a global snapshot of public sector digital transformation across 197 economies. The GTMI assesses four focus areas: Core Government Systems (CGSI), Online Public Service Delivery (PSDI), Digital Citizen Engagement (DCEI), and GovTech Enablers (GTEI), using 48 indicators. The methodology combines self-reported survey data from 158 economies with publicly available information for the remaining 39. Findings indicate overall progress since 2022 but widening disparities between higher-income (Group A) and lower-income (Group D) economies. Advances are noted in core systems (e.g., government cloud) and service delivery (e.g., customs services, digital ID), while digital citizen engagement remains the least mature area and adoption of a whole-of-government approach is limited. The brief recommends accelerating implementation of interoperability frameworks, strengthening the sustainability of online service portals, and updating GovTech strategies in line with evolving technologies. It underscores the need for targeted support to low-income regions, particularly in Africa, and calls for clear monitoring frameworks to track progress and inform evidence-based policymaking…(More)”.
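For readers curious how a composite index of this kind is typically rolled up, the sketch below averages hypothetical indicator scores into the four focus-area sub-indices and an overall score. The values and the equal weighting are assumptions for illustration only, not the GTMI’s published data or formula.

```python
# Illustrative roll-up of a GovTech-style maturity score.
# The indicator values and equal weighting are assumptions for this sketch;
# they are not the GTMI's published weights or data.
from statistics import mean

# Hypothetical normalized indicator scores (0-1), grouped by focus area.
indicators = {
    "CGSI": [0.8, 0.7, 0.9, 0.6],   # Core Government Systems
    "PSDI": [0.5, 0.6, 0.7],        # Online Public Service Delivery
    "DCEI": [0.3, 0.4],             # Digital Citizen Engagement
    "GTEI": [0.6, 0.5, 0.7],        # GovTech Enablers
}

# Sub-index = mean of its indicators; overall score = mean of the four sub-indices.
sub_indices = {area: mean(scores) for area, scores in indicators.items()}
overall = mean(sub_indices.values())

print(sub_indices)
print(f"Overall maturity score: {overall:.2f}")
```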

GovTech Maturity Index 2025: Tracking Public Sector Digital Transformation Worldwide

Article by Mira Mohsini & Andres Lopez: “When the Coalition of Communities of Color (CCC) began a multi-year collaboration with the Oregon Health Authority (OHA), they worked together to modernize a critical public health information source: the Oregon Student Health Survey. This survey, disseminated annually across Oregon, was designed to track health trends and inform policy decisions affecting thousands of young people and families.

But there was a problem. Year after year, this survey illuminated inequities, showing, for example, that students of color experienced higher rates of bullying or mental health challenges, without providing any insight into why these inequities existed, how they were experienced, or what communities wanted done about them. The data revealed gaps but offered no pathways to close them.

Working alongside other culturally specific organizations within their coalition and researchers of color in their region, CCC set out to demonstrate what better data could look like for the Oregon Student Health Survey. They worked with high school teachers who had deep relationships with students and met with students to understand what kinds of questions mattered most to them. Simple and straightforward questions like “How are you doing?” and “What supports do you need?” revealed issues that the state’s standardized surveys had completely missed. The process generated rich, contextual data showing not just that systems were failing, but how they were failing and how students desired their needs to be met. The process also demonstrated that working with people with lived experiences of the issues being researched generated better questions and, therefore, better data about these issues.

And the improvements resulting from better data were tangible. OHA created a Youth Data Council, involving young people directly in designing aspects of the next version of the Student Health Survey. CCC documented the survey modernization process in a detailed community brief. For the first time ever, the Oregon Student Health Survey included three open-ended questions, yielding over 4,000 qualitative responses. OHA published a groundbreaking analysis of what students actually wanted to say when given the chance…(More)”.

Community Data Is Trusted Evidence

Paper by Emilio Ferrara: “Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as “deepfakes” or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift. GenAI enables synthetic realities: coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023-2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether…(More)”.

The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth

Article by Thijs van de Graaf: “Artificial intelligence is often cast as intangible, a technology that lives in the cloud and thinks in code. The reality is more grounded. Behind every chatbot or image generator lie servers that draw electricity, cooling systems that consume water, chips that rely on fragile supply chains, and minerals dug from the earth.

That physical backbone is rapidly expanding. Data centers are multiplying in number and in size. The largest ones, “hyperscale” centers, have power needs in the tens of megawatts, at the scale of a small city. Amazon, Microsoft, Google, and Meta already run hundreds worldwide, but the next wave is far larger, with projects at gigawatt scale. In Abu Dhabi, OpenAI and its partners are planning a 5-gigawatt campus, matching the output of five nuclear reactors and sprawling across 10 square miles.
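To put the 5-gigawatt figure in perspective, a back-of-envelope check (assuming round-the-clock operation at full load and a typical large reactor output of roughly 1 GW) runs as follows; real-world utilization would be lower.

```python
# Back-of-envelope scale check for a 5 GW data-center campus.
# Assumes continuous operation at full load; actual utilization would be lower.
campus_power_gw = 5.0
hours_per_year = 8760
reactor_output_gw = 1.0   # rough figure for one large nuclear reactor

annual_energy_twh = campus_power_gw * hours_per_year / 1000   # GWh -> TWh
print(f"~{campus_power_gw / reactor_output_gw:.0f} large reactors' worth of capacity")
print(f"~{annual_energy_twh:.0f} TWh per year if run continuously")
```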

Economists debate when, if ever, these vast investments will pay off in productivity gains. Even so, governments are treating AI as the new frontier of industrial policy, with initiatives on a scale once reserved for aerospace or nuclear power. The United Arab Emirates appointed the world’s first minister for artificial intelligence in 2017. France has pledged more than €100 billion in AI spending. And in the two countries at the forefront of AI, the race is increasingly geopolitical: The United States has wielded export controls on advanced chips, while China has responded with curbs on sales of key minerals.

The contest in algorithms is just as much a competition for energy, land, water, semiconductors, and minerals. Supplies of electricity and chips will determine how fast the AI revolution moves and which countries and companies will control it…(More)”.

Inside the AI-Led Resource Race

Article by Jacob Taylor and Scott E. Page: “…Generative artificial intelligence (AI) does not transport bodies, but it is already starting to disrupt the physics of collective intelligence: How ideas, drafts, data, and perspectives move between people, how much information groups can process, and how quickly they can move from vague hunch to concrete product.

These shifts are thrilling and terrifying. It now feels easy to build thousands of new tools and workflows. Some will increase our capacity to solve problems. Some could transform our public spaces to be more inclusive and less polarizing. Some could also quietly hollow out the cultures, relationships, and institutions upon which our ability to solve problems together depends.

The challenge—and opportunity—for scientists and practitioners is to start testing how AI can advance collective intelligence in real policy domains, and how these mechanisms can be turned into new muscles and immune systems for shared problem-solving…(More)”.

AI is changing the physics of collective intelligence—how do we respond?

UNDP Report: “Artificial Intelligence is advancing rapidly, yet many countries remain without the infrastructure, skills, and governance systems needed to capture its benefits. At the same time, they are already feeling its economic and social disruptions. This uneven mix of slow adoption and high vulnerability may trigger a Next Great Divergence, where inequalities between countries widen in the age of AI. 

UNDP’s flagship report, The Next Great Divergence: Why AI May Widen Inequality Between Countries, highlights how these pressures are playing out most visibly in Asia and the Pacific, a region marked by vast differences in income, digital readiness, and institutional capacity. The report outlines practical pathways for countries to harness AI’s opportunities while managing its risks in support of broader human development. 

The result of a multinational effort spanning Asia, Europe and North America, the paper draws on nine background papers prepared with partners including the Massachusetts Institute of Technology (USA), the London School of Economics and Political Science (UK), the Max Planck Institute for Human Development (Germany), Tsinghua University and the Institute for AI International Governance (China), the University of Science and Technology of China, the Aapti Institute (India) and the Digital Future Lab (India)…(More)”.

The Next Great Divergence

Article by Hannah Devlin: “Hospitals in England are using artificial intelligence to help cut waiting times in emergency departments this winter.

The A&E forecasting tool predicts when demand will be highest, allowing trusts to better plan staffing and bed space. The prediction algorithm is trained on historical data including weather trends, school holidays, and rates of flu and Covid to determine how many people are likely to visit A&E.
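The article doesn’t describe the forecasting tool’s internals, so the following is only a minimal sketch of how demand forecasting on these kinds of features might look; the data, feature set, and choice of model are assumptions, not the NHS system.

```python
# Minimal sketch of an A&E attendance forecaster trained on historical features.
# Not the NHS tool; the data, features, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical data: one row per day.
history = pd.DataFrame({
    "mean_temp_c":    [4.0, 6.5, 2.1, 8.0, 3.3, 5.5],
    "school_holiday": [0, 0, 1, 1, 0, 0],
    "flu_rate":       [12.0, 14.5, 20.1, 9.8, 16.0, 11.2],   # cases per 100k
    "covid_rate":     [5.0, 4.2, 6.8, 3.1, 5.5, 4.0],
    "attendances":    [310, 295, 360, 270, 330, 300],        # daily A&E visits
})

features = ["mean_temp_c", "school_holiday", "flu_rate", "covid_rate"]
model = GradientBoostingRegressor(random_state=0)
model.fit(history[features], history["attendances"])

# Forecast tomorrow's attendances so staffing and bed space can be planned ahead.
tomorrow = pd.DataFrame([{"mean_temp_c": 1.5, "school_holiday": 1,
                          "flu_rate": 22.0, "covid_rate": 7.0}])
print(f"Predicted attendances: {model.predict(tomorrow)[0]:.0f}")
```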

The government said the technology allowed healthcare staff “to do the things that they’re trained to do, rather than having to be bogged down by bureaucratic processes”.

Ian Murray, the minister for digital government and data, said: “The front door of the NHS is the A&E department. You’ve no idea how many people will come through the door, although you can have some analytical evidence that Saturday nights might be busier than a Tuesday night, for example, and the winter might be busier than the summer, unless you have a heatwave, of course…(More)”.

AI being used to help cut A&E waiting times in England this winter
