Stefaan Verhulst
Book by C. Thi Nguyen: “…takes us deep into the heart of games, and into the depths of bureaucracy, to see how scoring systems shape our desires.
Games are the most important art form of our era. They embody the spirit of free play. They show us the subtle beauty of action everywhere in life: in video games, sports, and boardgames, but also in cooking, gardening, fly-fishing, and running. They remind us that it isn’t always about outcomes, but about how glorious it feels to be doing the thing. And the scoring systems help get us there, by giving us new goals to try on.
Scoring systems are also at the center of our corporations and bureaucracies—in the form of metrics and rankings. They tell us exactly how to measure our success. They encourage us to outsource our values to an external authority. And they push us to value simple, countable things. Metrics don’t capture what really matters; they only capture what’s easy to measure. The price of that clarity is our independence.
The Score asks us: is this the game you really want to be playing?…(More)”.
Paper by Juergen Schmidhuber: “Machine learning (ML) is the science of credit assignment. It seeks to find patterns in observations that explain and predict the consequences of events and actions. This then helps to improve future performance. Minsky’s so-called “fundamental credit assignment problem” (1963) surfaces in all sciences including physics (why is the world the way it is?) and history (which persons/ideas/actions have shaped society and civilisation?). Here I focus on the history of ML itself. Modern artificial intelligence (AI) is dominated by artificial neural networks (NNs) and deep learning, both of which are conceptually closer to the old field of cybernetics than what was traditionally called AI (e.g., expert systems and logic programming). A modern history of AI & ML must emphasize breakthroughs outside the scope of shallow AI textbooks. In particular, it must cover the mathematical foundations of today’s NNs such as the chain rule (1676), the first NNs (circa 1800), the first practical AI (1914), the theory of AI and its limitations (1931-34), and the first working deep learning algorithms (1965-). From the perspective of 2025, I provide a timeline of the most significant events in the history of NNs, ML, deep learning, AI, computer science, and mathematics in general, crediting the individuals who laid the field’s foundations. The text contains numerous hyperlinks to relevant overview sites. With a ten-year delay, it supplements my 2015 award-winning deep learning survey which provides hundreds of additional references. Finally, I will put things in a broader historical context, spanning from the Big Bang to when the universe will be many times older than it is now…(More)”.
Article by Hannah Devlin: “Hospitals in England are using artificial intelligence to help cut waiting times in emergency departments this winter.
The A&E forecasting tool predicts when demand will be highest, allowing trusts to better plan staffing and bed space. The prediction algorithm is trained on historical data including weather trends, school holidays, and rates of flu and Covid to determine how many people are likely to visit A&E.
The government said the technology allowed healthcare staff “to do the things that they’re trained to do, rather than having to be bound down by bureaucratic processes”.
Ian Murray, the minister for digital government and data, said: “The front door of the NHS is the A&E department. You’ve no idea how many people will come through the door, although you can have some analytical evidence that Saturday nights might be busier than a Tuesday night, for example, and the winter might be busier than the summer, unless you have a heatwave, of course…(More)”.
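The article describes the tool only in outline: a model trained on historical signals such as weather trends, school holidays, and flu and Covid rates to predict A&E attendance. As a minimal sketch of that idea (the NHS tool's actual model, features, and data are not public, and every number below is invented), one could fit a simple least-squares model on such signals:

```python
# Illustrative sketch only: the NHS forecasting tool's real model and
# features are not public. This toy fits ordinary least squares to daily
# A&E attendance (in hundreds of patients) from the kinds of signals the
# article mentions: a weekend flag, a school-holiday flag, and a flu rate.

def ols_fit(rows, y):
    """Ordinary least squares with a bias term, via the normal equations."""
    X = [[1.0] + list(r) for r in rows]  # prepend a bias column
    d = len(X[0])
    # Build X^T X and X^T y
    A = [[sum(x[i] * x[j] for x in X) for j in range(d)] for i in range(d)]
    b = [sum(x[i] * yi for x, yi in zip(X, y)) for i in range(d)]
    # Solve A w = b by Gaussian elimination with partial pivoting
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * d
    for r in range(d - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, d))) / A[r][r]
    return w

def predict(w, features):
    """Forecast attendance for one day's (weekend, holiday, flu_rate) signals."""
    return w[0] + sum(wi * f for wi, f in zip(w[1:], features))

# Invented history: (is_weekend, is_school_holiday, flu_rate) -> attendance
history = [(0, 0, 0.0), (1, 0, 0.0), (0, 1, 0.0), (0, 0, 1.0),
           (1, 0, 0.5), (0, 1, 0.8), (1, 1, 0.2), (0, 0, 0.3)]
attendance = [3.0, 3.8, 3.5, 5.0, 4.8, 5.1, 4.7, 3.6]

w = ols_fit(history, attendance)
# Forecast a flu-heavy Saturday in term time
print(round(predict(w, (1, 0, 0.5)), 2))  # prints 4.8
```

The point of even this toy version is the planning use the article describes: a forecast of busy days lets trusts schedule staffing and bed space ahead of demand rather than reacting to it.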
Open access book by Anna Berti Suman: “provides novel insights into the field, exploring the potential for ‘sensing citizens’ to concretely influence risk governance by filling – intentional or accidental – official informational gaps. Grassroots-driven environmental monitoring based on people’s own senses or on sensor technology, i.e., ‘citizen sensing’, can be considered a constructive response to crises. When lay people distrust official information or simply want to fill data gaps, they may resort to sensors and data infrastructures to visualize, monitor, and report the risks that environmental factors pose to public health. Although it may begin in conflict with authorities, citizen sensing may ultimately have the potential to contribute to institutional risk governance. Citizen sensing proves to be a practice able to address governance challenges in the way data on an (environmental) risk problem are gathered and provided to the public. This essentially unveils a perceived legitimacy gap in current (environmental) risk governance. Nonetheless, it also opens avenues for a more inclusive and transparent governmental response to pressing and complex risks, affecting first and foremost local people…(More)”.
Press Release by IBM: “Record-setting wildfires across Bolivia last year scorched an area the size of Greece, displacing thousands of people and leading to widespread loss of crops and livestock. The cause of the fires was attributed to land clearing, pasture burning, and a severe drought during what was Earth’s warmest year on record.
The Bolivia wildfires are just one of hundreds of extreme flood and wildfire events captured in a new global, multi-modal dataset called ImpactMesh, open-sourced this week by IBM Research in Europe and the European Space Agency (ESA). The dataset is also multi-temporal, meaning it features before-and-after snapshots of flooded or fire-scorched areas. The footage was captured by the Copernicus Sentinel-1 and Sentinel-2 Earth-orbiting satellites over the last decade.
To provide a clearer picture of landscape-level changes, each of the extreme events in the dataset is represented by three types of observations — optical images, radar images, and an elevation map of the impacted area. When storm clouds and smoky fires block optical sensors from seeing the extent of flood and wildfires from space, radar images and the altitude of the terrain can help to reveal the severity of what just happened…(More)”.
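The press release describes the shape of each record: before-and-after optical (Sentinel-2) and radar (Sentinel-1) imagery, plus an elevation map of the impacted area. A hypothetical sketch of how such an event record might be organised in code (the dataset's real schema and field names are defined by IBM and ESA, not shown here) could look like:

```python
# Hypothetical sketch of an ImpactMesh-style event record; the real schema
# and field names are defined by IBM Research and ESA, not by this example.
from dataclasses import dataclass
from typing import List

Grid = List[List[float]]  # simple stand-in for an H x W raster


@dataclass
class Acquisition:
    date: str      # ISO date of the satellite pass
    optical: Grid  # Sentinel-2 optical imagery
    radar: Grid    # Sentinel-1 SAR backscatter


@dataclass
class ExtremeEvent:
    event_type: str  # "flood" or "wildfire"
    region: str
    elevation: Grid  # terrain height of the impacted area
    before: Acquisition
    after: Acquisition

    def usable_layers(self, optics_blocked: bool) -> List[str]:
        """When clouds or smoke block optical sensors, fall back to radar
        and elevation, as the press release describes."""
        if optics_blocked:
            return ["radar", "elevation"]
        return ["optical", "radar", "elevation"]


# Invented toy values, purely to show the before/after structure
event = ExtremeEvent(
    event_type="wildfire",
    region="Bolivia",
    elevation=[[410.0, 412.5], [409.8, 411.2]],
    before=Acquisition("2024-08-01", [[0.3, 0.4], [0.2, 0.5]],
                       [[-11.2, -10.8], [-12.0, -11.5]]),
    after=Acquisition("2024-09-15", [[0.7, 0.8], [0.6, 0.9]],
                      [[-7.1, -6.9], [-8.3, -7.4]]),
)
print(event.usable_layers(optics_blocked=True))  # prints ['radar', 'elevation']
```

The fallback method mirrors the complementarity the release emphasises: radar sees through cloud and smoke, so the multi-modal pairing keeps an event observable even when optics fail.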
Article by Louis Menand: “Once, every middle-class home had a piano and a dictionary. The purpose of the piano was to be able to listen to music before phonographs were available and affordable. Later on, it was to torture young persons by insisting that they learn to do something few people do well. The purpose of the dictionary was to settle intra-family disputes over the spelling of words like “camaraderie” and “sesquipedalian,” or over the correct pronunciation of “puttee.” (Dad wasn’t always right!) Also, it was sometimes useful for doing homework or playing Scrabble.
This was the state of the world not that long ago. In the late nineteen-eighties, Merriam-Webster’s Collegiate Dictionary was on the Times best-seller list for a hundred and fifty-five consecutive weeks. Fifty-seven million copies were sold, a number believed to be second only, in this country, to sales of the Bible. (The No. 1 print dictionary in the world is the Chinese-language Xinhua Dictionary; more than five hundred million copies have sold since it was introduced, in 1953.)
There was good money in the word business. Then came the internet and, with it, ready-to-hand answers to all questions lexical. If you are writing on a computer, it’s almost impossible to misspell a word anymore. It’s hard even to misplace a comma, although students do manage it. And, if you run across an unfamiliar word, you can type it into your browser and get a list of websites with information about it, often way more than you want or need. Like the rest of the analog world, legacy dictionaries have had to adapt or perish. Stefan Fatsis’s “Unabridged: The Thrill of (and Threat to) the Modern Dictionary” (Atlantic Monthly Press) is a good-natured and sympathetic account of what seems to be a losing struggle…(More)”.
Whitepaper by Frontiers: “…shows that AI has rapidly become part of everyday peer review, with 53% of reviewers now using AI tools. The findings in Unlocking AI’s untapped potential: responsible innovation in research and publishing point to a pivotal moment for research publishing. Adoption is accelerating and the opportunity now is to translate this momentum into stronger, more transparent, and more equitable research practices as demonstrated in Frontiers’ policy outlines.
Drawing on insights from 1,645 active researchers worldwide, the whitepaper identifies a global community eager to use AI confidently and responsibly. While many reviewers currently rely on AI for drafting reports or summarizing findings, the report highlights significant untapped potential for AI to support rigor, reproducibility, and deeper methodological insight.
The study shows broad enthusiasm for using AI more effectively, especially among early-career researchers (87% adoption) and in rapidly growing research regions such as China (77%) and Africa (66%). Researchers in all regions see clear benefits, from reducing workload to improving communication, and many express a desire for clear, consistent policy recommendations that would enable more advanced use…(More)”.
Article by Mira Mohsini & Andres Lopez: “When the Coalition of Communities of Color (CCC) began a multi-year collaboration with the Oregon Health Authority (OHA), they worked together to modernize a critical public health information source: the Oregon Student Health Survey. This survey, disseminated annually across Oregon, was designed to track health trends and inform policy decisions affecting thousands of young people and families.
But there was a problem. Year after year, this survey illuminated inequities, showing, for example, that students of color experienced higher rates of bullying or mental health challenges, without providing any insight into why these inequities existed, how they were experienced, or what communities wanted done about them. The data revealed gaps but offered no pathways to close them.
Working alongside other culturally specific organizations within their coalition and researchers of color in their region, CCC set out to demonstrate what better data could look like for the Oregon Student Health Survey. They worked with high school teachers who had deep relationships with students and met with students to understand what kinds of questions mattered most to them. Simple and straightforward questions like “How are you doing?” and “What supports do you need?” revealed issues that the state’s standardized surveys had completely missed. The process generated rich, contextual data showing not just that systems were failing, but how they were failing and how students desired their needs to be met. The process also demonstrated that working with people with lived experiences of the issues being researched generated better questions and, therefore, better data about these issues.
And the improvements resulting from better data were tangible. OHA created a Youth Data Council, involving young people directly in designing aspects of the next version of the Student Health Survey. CCC documented the survey modernization process in a detailed community brief. For the first time ever, the Oregon Student Health Survey included three open-ended questions, yielding over 4,000 qualitative responses. OHA published a groundbreaking analysis of what students actually wanted to say when given the chance…(More)”.
Article by Thijs van de Graaf: “Artificial intelligence is often cast as intangible, a technology that lives in the cloud and thinks in code. The reality is more grounded. Behind every chatbot or image generator lie servers that draw electricity, cooling systems that consume water, chips that rely on fragile supply chains, and minerals dug from the earth.
That physical backbone is rapidly expanding. Data centers are multiplying in number and in size. The largest ones, “hyperscale” centers, have power needs in the tens of megawatts, at the scale of a small city. Amazon, Microsoft, Google, and Meta already run hundreds worldwide, but the next wave is far larger, with projects at gigawatt scale. In Abu Dhabi, OpenAI and its partners are planning a 5-gigawatt campus, matching the output of five nuclear reactors and sprawling across 10 square miles.
Economists debate when, if ever, these vast investments will pay off in productivity gains. Even so, governments are treating AI as the new frontier of industrial policy, with initiatives on a scale once reserved for aerospace or nuclear power. The United Arab Emirates appointed the world’s first minister for artificial intelligence in 2017. France has pledged more than €100 billion in AI spending. And in the two countries at the forefront of AI, the race is increasingly geopolitical: The United States has wielded export controls on advanced chips, while China has responded with curbs on sales of key minerals.
The contest in algorithms is just as much a competition for energy, land, water, semiconductors, and minerals. Supplies of electricity and chips will determine how fast the AI revolution moves and which countries and companies will control it…(More)”.
Article by Jacob Taylor and Scott E. Page: “…Generative artificial intelligence (AI) does not transport bodies, but it is already starting to disrupt the physics of collective intelligence: How ideas, drafts, data, and perspectives move between people, how much information groups can process, and how quickly they can move from vague hunch to concrete product.
These shifts are thrilling and terrifying. It now feels easy to build thousands of new tools and workflows. Some will increase our capacity to solve problems. Some could transform our public spaces to be more inclusive and less polarizing. Some could also quietly hollow out the cultures, relationships, and institutions upon which our ability to solve problems together depends.
The challenge—and opportunity—for scientists and practitioners is to start testing how AI can advance collective intelligence in real policy domains, and how these mechanisms can be turned into new muscles and immune systems for shared problem-solving…(More)”.