Stefaan Verhulst
Book by Ben Zweig: “…offers a revolutionary approach to transforming human capital management through the power of taxonomies. The book follows the experience and ideas of key individuals―from the founders of Wall Street, to the original management consultant, to a young data scientist just out of grad school looking to make sense of the modern workforce―in order to illustrate why our current human capital infrastructure is not serving employees well and what we can do to change that.
By categorizing and organizing workforce data, Zweig provides a practical roadmap for creating a more efficient and data-driven labor market. This book includes key insights on how to:
- Use AI and large language model technologies to support businesses in appropriately categorizing and organizing data
- Know whether a taxonomy will be useful and functional for an organization, judged by its flexibility, auditability, and adaptability
- Build a taxonomy that meets the needs of a workforce or organization through clustering, labeling, and production
Combining storytelling with real-world examples, theoretical analysis, and a practical framework, Job Architecture is an essential guide for companies seeking to manage a competitive, modern workforce while improving the working experience for all employees…(More)”.
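To make the clustering and labeling steps listed above concrete, here is a minimal sketch (not the book's method: the sample titles, character n-gram embedding, and cluster count are illustrative assumptions) that groups raw job titles into candidate taxonomy nodes with scikit-learn, leaving the naming of each cluster to a human or an LLM:

```python
# A minimal sketch of the clustering-and-labeling workflow described above.
# Not the book's method: the sample titles, the TF-IDF embedding, and the
# cluster count are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

raw_titles = [
    "Data Scientist", "Senior Data Scientist", "Machine Learning Engineer",
    "Machine Learning Scientist", "Recruiter", "Technical Recruiter",
    "Talent Acquisition Partner", "Payroll Specialist", "Payroll Analyst",
]

# Clustering: embed titles as character n-gram TF-IDF vectors (tolerant of
# minor spelling variants) and group similar titles together.
vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(raw_titles)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Labeling: surface each cluster so a human (or an LLM) can name it as a
# node in the taxonomy, the step where domain judgment enters.
for k in sorted(set(labels)):
    print(f"cluster {k}:", [t for t, c in zip(raw_titles, labels) if c == k])
```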
World Bank Report: “This brief presents the 2025 update of the GovTech Maturity Index (GTMI), offering a global snapshot of public sector digital transformation across 197 economies. The GTMI assesses four focus areas: Core Government Systems (CGSI), Online Public Service Delivery (PSDI), Digital Citizen Engagement (DCEI), and GovTech Enablers (GTEI), using 48 indicators. The methodology combines self-reported survey data from 158 economies with publicly available information for the remaining 39. Findings indicate overall progress since 2022 but widening disparities between higher-income (Group A) and lower-income (Group D) economies. Advances are noted in core systems (e.g., government cloud) and service delivery (e.g., customs services, digital ID), while digital citizen engagement remains the least mature area and adoption of a whole-of-government approach is limited. The brief recommends accelerating implementation of interoperability frameworks, strengthening sustainability of online service portals, and updating GovTech strategies in line with evolving technologies. It underscores the need for targeted support to low-income regions, particularly in Africa, and calls for clear monitoring frameworks to track progress and inform evidence-based policymaking…(More)”.
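As a hedged illustration of how a composite index of this kind is assembled (the sketch below is not the GTMI's published formula; the indicator values and the equal weighting are assumptions), each focus-area score averages its indicator scores, and the overall index averages the four areas:

```python
# Illustrative composite-index construction, not the GTMI's published
# methodology: each focus-area score is the mean of its indicator scores
# (assumed pre-normalized to [0, 1]); the overall index is the equally
# weighted mean of the four focus areas. All numbers are invented.
focus_areas = {
    "CGSI": [0.9, 0.7, 0.8],  # Core Government Systems indicators
    "PSDI": [0.6, 0.5, 0.7],  # Online Public Service Delivery
    "DCEI": [0.3, 0.4, 0.2],  # Digital Citizen Engagement
    "GTEI": [0.7, 0.6, 0.8],  # GovTech Enablers
}

area_scores = {area: sum(v) / len(v) for area, v in focus_areas.items()}
composite = sum(area_scores.values()) / len(area_scores)

for area, score in area_scores.items():
    print(f"{area}: {score:.2f}")
print(f"Composite maturity score (illustrative): {composite:.2f}")
```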
Article by Mira Mohsini & Andres Lopez: “When the Coalition of Communities of Color (CCC) began a multi-year collaboration with the Oregon Health Authority (OHA), they worked together to modernize a critical public health information source: the Oregon Student Health Survey. This survey, disseminated annually across Oregon, was designed to track health trends and inform policy decisions affecting thousands of young people and families.
But there was a problem. Year after year, this survey illuminated inequities, showing, for example, that students of color experienced higher rates of bullying or mental health challenges, without providing any insight into why these inequities existed, how they were experienced, or what communities wanted done about them. The data revealed gaps but offered no pathways to close them.
Working alongside other culturally specific organizations within their coalition and researchers of color in their region, CCC set out to demonstrate what better data could look like for the Oregon Student Health Survey. They worked with high school teachers who had deep relationships with students and met with students to understand what kinds of questions mattered most to them. Simple and straightforward questions like “How are you doing?” and “What supports do you need?” revealed issues that the state’s standardized surveys had completely missed. The process generated rich, contextual data showing not just that systems were failing, but how they were failing and how students desired their needs to be met. The process also demonstrated that working with people with lived experiences of the issues being researched generated better questions and, therefore, better data about these issues.
And the improvements resulting from better data were tangible. OHA created a Youth Data Council, involving young people directly in designing aspects of the next version of the Student Health Survey. CCC documented the survey modernization process in a detailed community brief. For the first time ever, the Oregon Student Health Survey included three open-ended questions, yielding over 4,000 qualitative responses. OHA published a groundbreaking analysis of what students actually wanted to say when given the chance…(More)”.
Paper by Emilio Ferrara: “Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as “deepfakes” or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift: GenAI enables synthetic realities: coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023-2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether…(More)”.
Article by Thijs van de Graaf: “Artificial intelligence is often cast as intangible, a technology that lives in the cloud and thinks in code. The reality is more grounded. Behind every chatbot or image generator lie servers that draw electricity, cooling systems that consume water, chips that rely on fragile supply chains, and minerals dug from the earth.
That physical backbone is rapidly expanding. Data centers are multiplying in number and in size. The largest ones, “hyperscale” centers, have power needs in the tens of megawatts, at the scale of a small city. Amazon, Microsoft, Google, and Meta already run hundreds worldwide, but the next wave is far larger, with projects at gigawatt scale. In Abu Dhabi, OpenAI and its partners are planning a 5-gigawatt campus, matching the output of five nuclear reactors and sprawling across 10 square miles.
Economists debate when, if ever, these vast investments will pay off in productivity gains. Even so, governments are treating AI as the new frontier of industrial policy, with initiatives on a scale once reserved for aerospace or nuclear power. The United Arab Emirates appointed the world’s first minister for artificial intelligence in 2017. France has pledged more than €100 billion in AI spending. And in the two countries at the forefront of AI, the race is increasingly geopolitical: The United States has wielded export controls on advanced chips, while China has responded with curbs on sales of key minerals.
The contest in algorithms is just as much a competition for energy, land, water, semiconductors, and minerals. Supplies of electricity and chips will determine how fast the AI revolution moves and which countries and companies will control it…(More)”.
Article by Jacob Taylor and Scott E. Page: “…Generative artificial intelligence (AI) does not transport bodies, but it is already starting to disrupt the physics of collective intelligence: How ideas, drafts, data, and perspectives move between people, how much information groups can process, and how quickly they can move from vague hunch to concrete product.
These shifts are thrilling and terrifying. It now feels easy to build thousands of new tools and workflows. Some will increase our capacity to solve problems. Some could transform our public spaces to be more inclusive and less polarizing. Some could also quietly hollow out the cultures, relationships, and institutions upon which our ability to solve problems together depends.
The challenge—and opportunity—for scientists and practitioners is to start testing how AI can advance collective intelligence in real policy domains, and how these mechanisms can be turned into new muscles and immune systems for shared problem-solving…(More)”.
UNDP Report: “Artificial Intelligence is advancing rapidly, yet many countries remain without the infrastructure, skills, and governance systems needed to capture its benefits. At the same time, they are already feeling its economic and social disruptions. This uneven mix of slow adoption and high vulnerability may trigger a Next Great Divergence, where inequalities between countries widen in the age of AI.
UNDP’s flagship report, The Next Great Divergence: Why AI May Widen Inequality Between Countries, highlights how these pressures are playing out most visibly in Asia and the Pacific, a region marked by vast differences in income, digital readiness, and institutional capacity. The report outlines practical pathways for countries to harness AI’s opportunities while managing its risks in support of broader human development.
The result of a multinational effort spanning Asia, Europe and North America, the report draws on nine background papers prepared with partners including the Massachusetts Institute of Technology (USA), the London School of Economics and Political Science (UK), the Max Planck Institute for Human Development (Germany), Tsinghua University and the Institute for AI International Governance (China), the University of Science and Technology of China, the Aapti Institute (India) and the Digital Future Lab (India)…(More)”.
Article by Hannah Devlin: “Hospitals in England are using artificial intelligence to help cut waiting times in emergency departments this winter.
The A&E forecasting tool predicts when demand will be highest, allowing trusts to better plan staffing and bed space. The prediction algorithm is trained on historical data including weather trends, school holidays, and rates of flu and Covid to determine how many people are likely to visit A&E.
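The article does not detail the model itself, but the general pattern it describes is familiar: fit a regressor on historical context features and forecast forward. The sketch below illustrates that pattern only; the synthetic features (day of week, school-holiday flag, temperature, flu rate), the data, and the model choice are assumptions, not the NHS tool's actual implementation.

```python
# A hedged sketch of the kind of attendance forecasting the article
# describes: a regressor trained on historical context features to predict
# daily A&E attendances. All features, data, and the model choice are
# illustrative assumptions, not the NHS tool's implementation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_days = 730  # two years of simulated daily history

# Hypothetical feature matrix: day of week (0-6), school-holiday flag,
# mean temperature in C, and weekly flu rate per 100,000 people.
X = np.column_stack([
    np.arange(n_days) % 7,
    rng.integers(0, 2, n_days),
    rng.normal(10, 7, n_days),
    rng.gamma(2.0, 5.0, n_days),
])

# Synthetic target: attendances rise on weekends, in school holidays,
# during cold weather, and when flu circulates.
y = (300 + 25 * (X[:, 0] >= 5) + 20 * X[:, 1]
     - 1.5 * X[:, 2] + 2.0 * X[:, 3]
     + rng.normal(0, 10, n_days))

# Train on all but the final 30 days, then forecast those 30 days so
# staffing and bed capacity can be planned ahead of predicted peaks.
model = GradientBoostingRegressor(random_state=0).fit(X[:-30], y[:-30])
forecast = model.predict(X[-30:])
print(forecast.round().astype(int))
```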
The government said the technology allowed healthcare staff “to do the things that they’re trained to do, rather than having to be bogged down by bureaucratic processes”.
Ian Murray, the minister for digital government and data, said: “The front door of the NHS is the A&E department. You’ve no idea how many people will come through the door, although you can have some analytical evidence that Saturday nights might be busier than a Tuesday night, for example, and the winter might be busier than the summer, unless you have a heatwave, of course…(More)”.
Open access book by Anna Berti Suman: “…provides novel insights into the field, exploring the potential for the ‘sensing citizens’ to concretely influence risk governance by filling official informational gaps, whether intentional or accidental. Grassroots-driven environmental monitoring based on people’s own senses or on sensor technology, i.e., ‘citizen sensing’, can be considered a constructive response to crises. When lay people distrust official information or just want to fill data gaps, they may resort to sensors and data infrastructures to visualize, monitor, and report risks posed by environmental factors to public health. Although possibly through an initial conflict, citizen sensing may ultimately have the potential to contribute to institutional risk governance. Citizen sensing proves to be a practice able to address governance challenges in the way data on an (environmental) risk problem are gathered and provided to the public. This essentially unveils the issue of a perceived legitimacy gap in current (environmental) risk governance. Nonetheless, it also opens avenues for a more inclusive and transparent governmental response to pressing and complex risks, affecting first and foremost local people…(More)”.
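As a small, hedged sketch of the sensor-based side of citizen sensing (not from the book): a monitoring loop that checks low-cost air-quality readings against a health guideline and collects exceedances for public reporting. The readings are invented, and the threshold borrows the WHO 2021 24-hour PM2.5 guideline purely for illustration.

```python
# Illustrative citizen-sensing check, not from the book: compare hourly
# PM2.5 readings from a low-cost sensor against a reference threshold and
# collect exceedances for reporting to authorities or a community portal.
# Readings are invented; the threshold borrows the WHO 2021 24-hour PM2.5
# guideline (15 micrograms per cubic meter) purely for illustration.
PM25_GUIDELINE = 15.0

hourly_pm25 = [8.2, 12.5, 19.7, 31.0, 14.9, 22.3]  # hypothetical readings

exceedances = [(hour, value) for hour, value in enumerate(hourly_pm25)
               if value > PM25_GUIDELINE]

for hour, value in exceedances:
    print(f"hour {hour}: PM2.5 {value} ug/m3 exceeds the guideline")
```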
Paper by Juergen Schmidhuber: “Machine learning (ML) is the science of credit assignment. It seeks to find patterns in observations that explain and predict the consequences of events and actions. This then helps to improve future performance. Minsky’s so-called “fundamental credit assignment problem” (1963) surfaces in all sciences including physics (why is the world the way it is?) and history (which persons/ideas/actions have shaped society and civilisation?). Here I focus on the history of ML itself. Modern artificial intelligence (AI) is dominated by artificial neural networks (NNs) and deep learning, both of which are conceptually closer to the old field of cybernetics than what was traditionally called AI (e.g., expert systems and logic programming). A modern history of AI & ML must emphasize breakthroughs outside the scope of shallow AI textbooks. In particular, it must cover the mathematical foundations of today’s NNs such as the chain rule (1676), the first NNs (circa 1800), the first practical AI (1914), the theory of AI and its limitations (1931-34), and the first working deep learning algorithms (1965-). From the perspective of 2025, I provide a timeline of the most significant events in the history of NNs, ML, deep learning, AI, computer science, and mathematics in general, crediting the individuals who laid the field’s foundations. The text contains numerous hyperlinks to relevant overview sites. With a ten-year delay, it supplements my 2015 award-winning deep learning survey which provides hundreds of additional references. Finally, I will put things in a broader historical context, spanning from the Big Bang to when the universe will be many times older than it is now…(More)”.
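For readers wondering why a 1676 result anchors modern deep learning: backpropagation is an iterated application of the chain rule to a layered composition of functions. A standard statement (not quoted from the paper), for a network computing y = f_L(f_{L-1}(…f_1(x)…)):

```latex
% Chain rule for a depth-L composition, the computation at the heart of
% backpropagation: multiply the layer-by-layer derivatives together.
\frac{\partial y}{\partial x}
  = \frac{\partial f_L}{\partial f_{L-1}}
    \,\frac{\partial f_{L-1}}{\partial f_{L-2}}
    \cdots
    \frac{\partial f_1}{\partial x}
```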