
Stefaan Verhulst

Article by Hannah Devlin: “Hospitals in England are using artificial intelligence to help cut waiting times in emergency departments this winter.

The A&E forecasting tool predicts when demand will be highest, allowing trusts to better plan staffing and bed space. The prediction algorithm is trained on historical data including weather trends, school holidays, and rates of flu and Covid to determine how many people are likely to visit A&E.

The government said the technology allowed healthcare staff “to do the things that they’re trained to do, rather than having to be bound down by bureaucratic processes”.

Ian Murray, the minister for digital government and data, said: “The front door of the NHS is the A&E department. You’ve no idea how many people will come through the door, although you can have some analytical evidence that Saturday nights might be busier than a Tuesday night, for example, and the winter might be busier than the summer, unless you have a heatwave, of course…(More)”.

AI being used to help cut A&E waiting times in England this winter

Open access book by Anna Berti Suman: “provides novel insights into the field, exploring the potential for ‘sensing citizens’ to concretely influence risk governance by filling – intentional or accidental – official informational gaps. Grassroots-driven environmental monitoring based on one’s own senses or on sensor technology, i.e., ‘citizen sensing’, can be considered a constructive response to crises. When lay people distrust official information or just want to fill data gaps, they may resort to sensors and data infrastructures to visualize, monitor, and report risks posed by environmental factors to public health. Even if it begins in conflict, citizen sensing may ultimately have the potential to contribute to institutional risk governance. Citizen sensing proves to be a practice able to address governance challenges in the way data about an (environmental) risk problem are gathered and provided to the public. This essentially unveils a perceived legitimacy gap in current (environmental) risk governance. Nonetheless, it also opens avenues for a more inclusive and transparent governmental response to pressing and complex risks, affecting first and foremost local people…(More)”.

Citizen Sensing for Risk Response

Paper by Juergen Schmidhuber: “Machine learning (ML) is the science of credit assignment. It seeks to find patterns in observations that explain and predict the consequences of events and actions. This then helps to improve future performance. Minsky’s so-called “fundamental credit assignment problem” (1963) surfaces in all sciences including physics (why is the world the way it is?) and history (which persons/ideas/actions have shaped society and civilisation?). Here I focus on the history of ML itself. Modern artificial intelligence (AI) is dominated by artificial neural networks (NNs) and deep learning, both of which are conceptually closer to the old field of cybernetics than what was traditionally called AI (e.g., expert systems and logic programming). A modern history of AI & ML must emphasize breakthroughs outside the scope of shallow AI textbooks. In particular, it must cover the mathematical foundations of today’s NNs such as the chain rule (1676), the first NNs (circa 1800), the first practical AI (1914), the theory of AI and its limitations (1931-34), and the first working deep learning algorithms (1965-). From the perspective of 2025, I provide a timeline of the most significant events in the history of NNs, ML, deep learning, AI, computer science, and mathematics in general, crediting the individuals who laid the field’s foundations. The text contains numerous hyperlinks to relevant overview sites. With a ten-year delay, it supplements my 2015 award-winning deep learning survey which provides hundreds of additional references. Finally, I will put things in a broader historical context, spanning from the Big Bang to when the universe will be many times older than it is now…(More)”.

Annotated History of Modern AI and Deep Learning

Paper by Katharina Fellnhofer, Emilia Vähämaa & Margarita Angelidou: “Trust serves both as a social signal and as an alternative governance mechanism, enhancing confidence in collective action and institutional commitment to the public good. This study investigates how trust—particularly in regional organizations—influences citizen engagement in policymaking processes. Drawing on survey data from 7729 respondents across four European regions, and using a Bayesian linear mixed-effects model, we find that higher levels of trust in regional organizations and perceived individual trust are significantly associated with higher citizen demand for engagement in policy development. However, a notable gender disparity emerges: while women report higher levels of trust in regional organizations, this does not translate into a greater demand for engagement. This finding underscores the need for more inclusive and equity-oriented engagement strategies that address gendered differences in political efficacy and perceived responsiveness. Our results have practical implications for participatory governance, particularly in the context of addressing complex urban sustainability challenges…(More)”. (See also: Making Civic Trust Less Abstract: A Framework for Measuring Trust Within Cities).

Public trust in regional policymaking and citizen demands for engagement

Article by Ellie McDonald and Lea Kaspar: “As the dust settles on the World Summit on the Information Society (WSIS) 20-year Review, attention is turning to what the final outcome document (adopted by consensus on 17 December) ultimately delivers. For much of the review, discussions were pragmatic and forward-looking, reflecting a shared interest in maintaining the relevance of the WSIS framework amid a rapidly evolving digital policy landscape. As negotiations moved into their final phase, focus narrowed to a smaller set of long-standing questions, shaping the contours of the text that was agreed.

The outcome document does not seek to resolve all of the issues raised during the review. Rather, it reaffirms core principles, clarifies institutional roles, and sets out expectations for implementation that will now need to be tested in practice.

As negotiations concluded, GPD intervened during the WSIS+20 high-level event this week, emphasising that legitimacy in digital governance is not secured by consensus alone, but depends on sustained participation, human rights anchoring, and accountability as frameworks move into implementation. Read the full intervention here…(More)”.

WSIS+20: What the Final Outcome Delivers – and What It Leaves Unresolved

Article by Aimee Levitt: “This past March, when Google began rolling out its AI Mode search capability, it began offering AI-generated recipes. The recipes were not all that intelligent. The AI had taken elements of similar recipes from multiple creators and Frankensteined them into something barely recognizable. In one memorable case, the Google AI failed to distinguish comments on a Reddit thread from legitimate recipe sites and advised users to cook with non-toxic glue.

Over the past few years, bloggers who have not secured their sites behind a paywall have seen their carefully developed and tested recipes show up, often without attribution and in a bastardized form, in ChatGPT replies. They have seen dumbed-down versions of their recipes in AI-assembled cookbooks available for digital downloads on Etsy or on AI-built websites that bear a superficial resemblance to an old-school human-written blog. Their photos and videos, meanwhile, are repurposed in Facebook posts and Pinterest pins that link back to this digital slop.

Recipe writers have no legal recourse because recipes generally are not copyrightable. Although copyright protects published or recorded work, it does not cover sets of instructions (though it can apply to the particular wording of those instructions)…(More)”.

Google AI summaries are ruining the livelihoods of recipe writers: ‘It’s an extinction event’

Article by James Grimmelmann: “…In response to these tussles, various groups have started trying to create new versions of robots.txt for the AI age. Many of these proposals focus on making REP more granular. Instead of just a binary decision—allow or disallow access—they add mechanisms for websites to place conditions on the use of the content scraped from them. This is not the first such attempt—a group of publishers proposed a system called the Automated Content Access Protocol in 2006 that was never widely adopted—but these new ones have more industry support and momentum.

Cloudflare’s Content Signals Policy (CSP) extends robots.txt with new syntax to differentiate using scraped content for search engines, AI model training, and AI inference. A group of publishers and content platforms has backed a more complicated set of extensions called Really Simple Licensing (RSL) that also includes restrictions on allowed users (for example, personal versus commercial versus educational) and countries or regions (for example, the U.S. but not the EU). And Creative Commons (disclosure: I am a member of its Board of Directors) is exploring a set of “preference signals” that would allow reuse of scraped content under certain conditions (for example, that any AI-generated outputs provide appropriate attribution of the source of data).
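To make the granularity concrete, here is a sketch of what a content-signals-extended robots.txt might look like. The directive and signal names (search, ai-input, ai-train) follow Cloudflare's published examples, but treat this as an illustration, not a normative reference:

```text
# Standard REP directives still apply
User-Agent: *
Allow: /

# Content signals layered on top: permit search indexing,
# permit use as AI inference input, forbid AI model training
Content-Signal: search=yes, ai-input=yes, ai-train=no
```

A compliant crawler would read these signals alongside the ordinary Allow/Disallow rules and restrict its downstream use of the fetched pages accordingly.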

At the same time, some of these same groups are trying to extend REP into something more ambitious: a framework for websites and scrapers to negotiate payment and content licensing terms. Cloudflare is experimenting with using the HTTP response code 402 PAYMENT REQUIRED to direct scrapers into a “pay per crawl” system. RSL, for its part, includes detailed provisions for publishers to specify commercial licensing terms; for example, they might require scrapers to pay a specified fee per AI output made based on the content.
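The "pay per crawl" flow can be sketched as a server-side check: ordinary visitors get the page, while an identified crawler without a valid payment token gets HTTP 402. The header name, token scheme, and pricing path below are hypothetical illustrations, not Cloudflare's actual protocol:

```python
# Minimal sketch of a pay-per-crawl gate. Header/token names are
# invented for illustration; real deployments negotiate terms out of band.
PAID_TOKENS = {"token-abc123"}  # tokens issued to crawlers that agreed to pay

PAGE = "<html>page content</html>"

def handle_request(user_agent: str, headers: dict) -> tuple[int, str]:
    """Return (status_code, body) for an incoming request."""
    # Crude crawler detection for the sketch: known bots end in "bot"
    is_crawler = user_agent.lower().endswith("bot")
    if not is_crawler:
        return 200, PAGE  # ordinary visitors pass through untouched
    token = headers.get("crawler-payment-token")
    if token in PAID_TOKENS:
        return 200, PAGE  # paying crawler gets the page
    # Unpaid crawler: signal that a license is required, per HTTP 402
    return 402, "Payment Required: see /crawl-pricing for licensing terms"
```

The key design point is that 402 turns a silent block into a negotiation hook: the response can carry pricing or licensing terms the crawler's operator can act on.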

Going even further, other extensions to RSL include protocols for crawlers to authenticate themselves, and for sites to provide trusted crawlers with access to encrypted content. This is a full-fledged copyright licensing scheme built on the foundation—or perhaps on the ruins—of REP.

Preserving the Best of the Open Web

CSP, RSL, and similar proposals are a meaningful improvement in the ongoing struggle between websites and AI companies. They could greatly reduce the technical burdens of rampant scraping, and they could resolve many disputes through licensing rather than litigation. A future where AI companies and authors agree on payment for training data is better than a future where the AI companies just grab everything they can and the authors respond only by suing.

But at the same time, RSL and similar proposals move away from something beautiful about REP: its commitment to the open web. The world of robots.txt was one where it was simply expected, as a matter of course, that people would put content on webpages and share it freely with the world. The legal system protected websites against egregious abuses—like denial-of-service attacks, or wholesale piracy—but it treated ordinary scraping as mostly harmless…(More)”

AI Scraping and the Open Web

Article by Jeff Jarvis: “News organizations will face an AI reckoning in 2026 and a choice: They can keep blocking AI crawlers, suing AI companies, and lobbying for protectionist AI legislation — all the while making themselves invisible to the publics they serve on the next medium that matters — or they can figure out how to play along.

Unsurprisingly, I hold a number of likely unpopular opinions on the matter:

  1. Journalists must address their civic, professional, even moral obligation to provide news, reporting, and information via AI. For — like it or not — AI is where more and more people will go for information. It is clear that competitors for attention — marketing and misinformation — are rushing to be included in the training and output of large language models. This study finds that “reputable sites forbid an average of 15.5 AI user agents, while misinformation sites prohibit fewer than one.” By blocking AI, journalism is abdicating control of society’s information ecosystem to pitchmen and propagandists.
  2. AI no longer needs news. Major models are already trained and in the future will be trained with synthetic data. Next frontiers in AI development — see, for example, the work of Yann LeCun — will be built on world models and experience, not text and content.
  3. Anyway, training models is fair use and transformative. This debate will not be fully adjudicated for some time, but the Anthropic decision makes it clear that media’s copyright fight against training is a tenuous strategy. Note well that the used books Anthropic legitimately acquired yielded no payment to authors or publishers, and if Anthropic had only bought one copy of each title in the purloined databases, it would not have been found liable and authors would have netted the royalties on just one book each.
  4. AI is the new means of discovery online. I had a conversation with a news executive recently who, in one breath, boasted of cutting off all the AI bots save one (Google’s), and in the next asked how his sites will be discovered online. The answer: AI. Rich Skrenta, executive director of the Common Crawl Foundation, writes that if media brands block crawlers, AI models will not know to search for them, quote them, or link to them when users ask relevant questions. He advises publishers to replace SEO with AIO: optimization for AI. Ah, but you say, AI doesn’t link. No. This study compared the links in AI against search and found that ChatGPT displayed “a systemic and overwhelming bias towards Earned media (third-party, authoritative sources) over Brand owned and Social content, a stark contrast to Google’s more balanced mix.” The links are there. Whether users click on them is, as ever, another question. But if your links aren’t there, no one will click anyway…(More)”.
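The blocking that the study in point 1 measures is done with ordinary REP directives: a site forbids an AI crawler by listing its published user-agent token. A sketch, using user-agent names that OpenAI, Anthropic, and Common Crawl have publicly documented:

```text
# Forbid common AI crawlers while leaving ordinary search crawlers alone
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else may crawl
User-agent: *
Allow: /
```

Jarvis's point is precisely that each such Disallow line trades away discoverability in AI answers for control, a trade reputable sites are making far more often than misinformation sites.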

APIs for news

Book Review by Charles Carman: “One day, Mrs. Pengelley came to London seeking the assistance of Hercule Poirot, Agatha Christie’s Belgian detective with the mustache, whose “little grey cells” assist him in solving mysteries. With a troubled look, she tells him that she fears she is being slowly poisoned. The doctor doesn’t see anything much the matter, she says. He attributes the stomach trouble to gastritis. She even sometimes improves, but strangely this happens during the absence of someone in her life, confirming in her a certain suspicion.

After listening to her tale with great interest, Poirot agrees to take up the case. He sends the lady back and plans to catch a train the following day to begin his investigation. Discussing the matter with his close friend, Captain Hastings, Poirot admits the case is especially interesting, even though “it has positively no new features,” because “if I mistake not, we have here a very poignant human drama.”

When Poirot arrives the next day, he discovers that the lady has been murdered after unwittingly taking the final dose of poison. Having found the case intriguing enough to look into it, Poirot chastises himself, a “criminal imbecile,” for not having taken her story more seriously. “May the good God forgive me,” he declares, “but I never believed anything would happen at all. Her story seemed to me artificial.” Had he been convinced enough to return with her right away, he might have saved her. All that remains for him now is to catch the murderer.

“The Cornish Mystery” occurred to me while reading Paul Kingsnorth’s new collection of essays, Against the Machine: On the Unmaking of Humanity. In the story he weaves, a sinister force has been lurking for some time within our civilization, especially in the West. His suspicion falls upon something to do with science, technology, and how we misapprehend the world. It has been slowly sapping away at our life, creating problems that have been diagnosed as this or that malady and treated with such and such a remedy. Sometimes we feel better. And yet, we sense we are being dehumanized, unmade, that something essential is being destroyed piece by piece. Such a process is hard to pin down. This is the genius of murder by slow poisoning: it leads to doubt and misattribution. There is little ambiguity about a gunshot to the heart. Yet when killing dose by dose, one easily mistakes murderous intent for the body’s frailty, a lingering affliction, or incidental complications: murder disguised as natural causes…(More)”.

The Cassandra of ‘The Machine’

Open Access Book edited by Ines Mergel and Carsten Schmidt: “…explores how national libraries digitally transform their processes and services by using artificial intelligence and shows how they integrate co-creation strategies and provide actionable insights and recommendations for policymakers and library managers to help shape the future of libraries. It is the result of the LibrarIN project, a Horizon Europe initiative focused on reimagining library services through social innovation and the co-creation of public value.

The book comprises three different parts. The first part focuses on the introduction, research design, and description of expectations. The second part consists of twelve in-depth illustrations of AI projects in national libraries of the European Union, associated countries, and the Library of Congress, United States. These case studies demonstrate how national libraries co-created the digital transformation of their services by including their stakeholders in the AI implementation steps to preserve the national values and heritage. The third part includes recommendations for implementation and provides insights into a “toolkit” for policymakers and innovators in libraries…(More)”.

AI Innovations in Public Services: The Case of National Libraries
