
Stefaan Verhulst

Article by Jacob Taylor and Scott E. Page: “…Generative artificial intelligence (AI) does not transport bodies, but it is already starting to disrupt the physics of collective intelligence: How ideas, drafts, data, and perspectives move between people, how much information groups can process, and how quickly they can move from vague hunch to concrete product.

These shifts are thrilling and terrifying. It now feels easy to build thousands of new tools and workflows. Some will increase our capacity to solve problems. Some could transform our public spaces to be more inclusive and less polarizing. Some could also quietly hollow out the cultures, relationships, and institutions upon which our ability to solve problems together depends.

The challenge—and opportunity—for scientists and practitioners is to start testing how AI can advance collective intelligence in real policy domains, and how these mechanisms can be turned into new muscles and immune systems for shared problem-solving…(More)”.

AI is changing the physics of collective intelligence—how do we respond?

UNDP Report: “Artificial Intelligence is advancing rapidly, yet many countries remain without the infrastructure, skills, and governance systems needed to capture its benefits. At the same time, they are already feeling its economic and social disruptions. This uneven mix of slow adoption and high vulnerability may trigger a Next Great Divergence, where inequalities between countries widen in the age of AI. 

UNDP’s flagship report, The Next Great Divergence: Why AI May Widen Inequality Between Countries, highlights how these pressures are playing out most visibly in Asia and the Pacific, a region marked by vast differences in income, digital readiness, and institutional capacity. The report outlines practical pathways for countries to harness AI’s opportunities while managing its risks in support of broader human development. 

The result of a multinational effort spanning Asia, Europe and North America, the paper draws on nine background papers prepared with partners including the Massachusetts Institute of Technology (USA), the London School of Economics and Political Science (UK), the Max Planck Institute for Human Development (Germany), Tsinghua University and the Institute for AI International Governance (China), the University of Science and Technology of China, the Aapti Institute (India) and the Digital Future Lab (India)…(More)”.

The Next Great Divergence

Article by Hannah Devlin: “Hospitals in England are using artificial intelligence to help cut waiting times in emergency departments this winter.

The A&E forecasting tool predicts when demand will be highest, allowing trusts to better plan staffing and bed space. The prediction algorithm is trained on historical data including weather trends, school holidays, and rates of flu and Covid to determine how many people are likely to visit A&E.

The government said the technology allowed healthcare staff “to do the things that they’re trained to do, rather than having to be bound down by bureaucratic processes”.

Ian Murray, the minister for digital government and data, said: “The front door of the NHS is the A&E department. You’ve no idea how many people will come through the door, although you can have some analytical evidence that Saturday nights might be busier than a Tuesday night, for example, and the winter might be busier than the summer, unless you have a heatwave, of course…(More)”.

AI being used to help cut A&E waiting times in England this winter

Open access book by Anna Berti Suman: “provides novel insights into the field, exploring the potential for ‘sensing citizens’ to concretely influence risk governance by filling official informational gaps, whether intentional or accidental. Grassroots-driven environmental monitoring based on one’s own senses or on sensor technology, i.e., ‘citizen sensing’, can be considered a constructive response to crises. When lay people distrust official information or simply want to fill data gaps, they may resort to sensors and data infrastructures to visualize, monitor, and report risks posed by environmental factors to public health. Though it may begin in conflict, citizen sensing ultimately has the potential to contribute to institutional risk governance. Citizen sensing proves to be a practice able to address governance challenges in the way data about an (environmental) risk problem are gathered and provided to the public. This essentially unveils a perceived legitimacy gap in current (environmental) risk governance. Nonetheless, it also opens avenues for a more inclusive and transparent governmental response to pressing and complex risks, which affect local people first and foremost…(More)”.

Citizen Sensing for Risk Response

Paper by Juergen Schmidhuber: “Machine learning (ML) is the science of credit assignment. It seeks to find patterns in observations that explain and predict the consequences of events and actions. This then helps to improve future performance. Minsky’s so-called “fundamental credit assignment problem” (1963) surfaces in all sciences including physics (why is the world the way it is?) and history (which persons/ideas/actions have shaped society and civilisation?). Here I focus on the history of ML itself. Modern artificial intelligence (AI) is dominated by artificial neural networks (NNs) and deep learning, both of which are conceptually closer to the old field of cybernetics than what was traditionally called AI (e.g., expert systems and logic programming). A modern history of AI & ML must emphasize breakthroughs outside the scope of shallow AI textbooks. In particular, it must cover the mathematical foundations of today’s NNs such as the chain rule (1676), the first NNs (circa 1800), the first practical AI (1914), the theory of AI and its limitations (1931-34), and the first working deep learning algorithms (1965-). From the perspective of 2025, I provide a timeline of the most significant events in the history of NNs, ML, deep learning, AI, computer science, and mathematics in general, crediting the individuals who laid the field’s foundations. The text contains numerous hyperlinks to relevant overview sites. With a ten-year delay, it supplements my 2015 award-winning deep learning survey which provides hundreds of additional references. Finally, I will put things in a broader historical context, spanning from the Big Bang to when the universe will be many times older than it is now…(More)”.

Annotated History of Modern AI and Deep Learning

Paper by Katharina Fellnhofer, Emilia Vähämaa & Margarita Angelidou: “Trust serves both as a social signal and as an alternative governance mechanism, enhancing confidence in collective action and institutional commitment to the public good. This study investigates how trust—particularly in regional organizations—influences citizen engagement in policymaking processes. Drawing on survey data from 7729 respondents across four European regions, via our Bayesian linear mixed-effect model, we find that higher levels of trust in regional organizations and perceived individual trust are significantly associated with higher citizen demand for engagement in policy development. However, a notable gender disparity emerges: while women report higher levels of trust in regional organizations, this does not translate into a greater demand for engagement. This finding underscores the need for more inclusive and equity-oriented engagement strategies that address gendered differences in political efficacy and perceived responsiveness. Our results have practical implications for participatory governance, particularly in the context of addressing complex urban sustainability challenges…(More)”. (See also: Making Civic Trust Less Abstract: A Framework for Measuring Trust Within Cities).

Public trust in regional policymaking and citizen demands for engagement

Article by Ellie McDonald and Lea Kaspar: “As the dust settles on the World Summit on the Information Society (WSIS) 20-year Review, attention is turning to what the final outcome document (adopted by consensus on 17 December) ultimately delivers. For much of the review, discussions were pragmatic and forward-looking, reflecting a shared interest in maintaining the relevance of the WSIS framework amid a rapidly evolving digital policy landscape. As negotiations moved into their final phase, focus narrowed to a smaller set of long-standing questions, shaping the contours of the text that was agreed.

The outcome document does not seek to resolve all of the issues raised during the review. Rather, it reaffirms core principles, clarifies institutional roles, and sets out expectations for implementation that will now need to be tested in practice.

As negotiations concluded, GPD intervened during the WSIS+20 high-level event this week, emphasising that legitimacy in digital governance is not secured by consensus alone, but depends on sustained participation, human rights anchoring, and accountability as frameworks move into implementation. Read the full intervention here…(More)”.

WSIS+20: What the Final Outcome Delivers – and What It Leaves Unresolved

Article by Aimee Levitt: “This past March, when Google began rolling out its AI Mode search capability, it started offering AI-generated recipes. The recipes were not all that intelligent. The AI had taken elements of similar recipes from multiple creators and Frankensteined them into something barely recognizable. In one memorable case, the Google AI failed to distinguish comments on a Reddit thread from legitimate recipe sites and advised users to cook with non-toxic glue.

Over the past few years, bloggers who have not secured their sites behind a paywall have seen their carefully developed and tested recipes show up, often without attribution and in a bastardized form, in ChatGPT replies. They have seen dumbed-down versions of their recipes in AI-assembled cookbooks available for digital downloads on Etsy or on AI-built websites that bear a superficial resemblance to an old-school human-written blog. Their photos and videos, meanwhile, are repurposed in Facebook posts and Pinterest pins that link back to this digital slop.

Recipe writers have no legal recourse because recipes generally are not copyrightable. Although copyright protects published or recorded work, it does not cover sets of instructions (though it can apply to the particular wording of those instructions)…(More)”.

Google AI summaries are ruining the livelihoods of recipe writers: ‘It’s an extinction event’

Article by James Grimmelmann: “…In response to these tussles, various groups have started trying to create new versions of robots.txt for the AI age. Many of these proposals focus on making REP more granular. Instead of just a binary decision—allow or disallow access—they add mechanisms for websites to place conditions on the usage of the contents scraped from them. This is not the first such attempt—a group of publishers proposed a system called the Automated Content Access Protocol in 2006 that was never widely adopted—but these new ones have more industry support and momentum.

Cloudflare’s Content Signals Policy (CSP) extends robots.txt with new syntax to differentiate using scraped content for search engines, AI model training, and AI inference. A group of publishers and content platforms has backed a more complicated set of extensions called Really Simple Licensing (RSL) that also includes restrictions on allowed users (for example, personal versus commercial versus educational) and countries or regions (for example, the U.S. but not the EU). And Creative Commons (disclosure: I am a member of its Board of Directors) is exploring a set of “preference signals” that would allow reuse of scraped content under certain conditions (for example, that any AI-generated outputs provide appropriate attribution of the source of data).
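For readers curious what these robots.txt extensions look like in practice, here is a minimal sketch of a scraper-side parser for Content-Signal-style directives. The directive names (`search`, `ai-train`) follow Cloudflare’s published Content Signals Policy, but the exact syntax handling here is an illustrative assumption, not a reference implementation.

```python
# Illustrative parser for "Content-Signal:" directives layered onto robots.txt.
# Directive names follow Cloudflare's Content Signals Policy; the parsing
# details are a sketch, not the canonical implementation.

def parse_content_signals(robots_txt: str) -> dict:
    """Return {signal_name: allowed?} from any 'Content-Signal:' lines."""
    signals = {}
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if not line.lower().startswith("content-signal:"):
            continue
        _, _, value = line.partition(":")
        for pair in value.split(","):
            if "=" in pair:
                name, _, setting = pair.strip().partition("=")
                signals[name.strip().lower()] = setting.strip().lower() == "yes"
    return signals

example = """
User-Agent: *
Content-Signal: search=yes, ai-train=no
Allow: /
"""
print(parse_content_signals(example))  # {'search': True, 'ai-train': False}
```

A scraper honoring these signals would check the relevant key (e.g., `ai-train`) before using fetched content for model training, falling back to plain REP allow/disallow rules when no signal is present.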

At the same time, some of these same groups are trying to extend REP into something more ambitious: a framework for websites and scrapers to negotiate payment and content licensing terms. Cloudflare is experimenting with using the HTTP response code 402 PAYMENT REQUIRED to direct scrapers into a “pay per crawl” system. RSL, for its part, includes detailed provisions for publishers to specify commercial licensing terms; for example, they might require scrapers to pay a specified fee per AI output made based on the content.
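To make the pay-per-crawl idea concrete, here is a hypothetical sketch of the decision logic a cooperating crawler might apply when a site answers with HTTP 402. The `crawler-price` header name is an assumption for illustration; Cloudflare’s actual experiment defines its own negotiation headers.

```python
# Hypothetical crawler-side logic for a "pay per crawl" scheme built on
# HTTP 402 PAYMENT REQUIRED. The "crawler-price" header is an assumed name
# used only for illustration.

def next_action(status: int, headers: dict) -> str:
    """Decide what a cooperating crawler does with a response."""
    if status == 200:
        return "crawl"            # content served normally
    if status == 402:
        price = headers.get("crawler-price")  # assumed negotiation header
        return f"negotiate:{price}" if price else "skip"
    if status in (401, 403):
        return "skip"             # access denied outright
    return "retry" if status >= 500 else "skip"

print(next_action(402, {"crawler-price": "0.01 USD"}))  # negotiate:0.01 USD
```

The point of the sketch is the shape of the protocol: 402 turns a refusal into an invitation to transact, which is what distinguishes these proposals from plain robots.txt blocking.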

Going even further, other extensions to RSL include protocols for crawlers to authenticate themselves, and for sites to provide trusted crawlers with access to encrypted content. This is a full-fledged copyright licensing scheme built on the foundation—or perhaps on the ruins—of REP.

Preserving the Best of the Open Web

CSP, RSL, and similar proposals mark a meaningful improvement in the ongoing struggle between websites and AI companies. They could greatly reduce the technical burdens of rampant scraping, and they could resolve many disputes through licensing rather than litigation. A future where AI companies and authors agree on payment for training data is better than one where the AI companies just grab everything they can and the authors respond only by suing.

But at the same time, RSL and similar proposals move away from something beautiful about REP: its commitment to the open web. The world of robots.txt was one where it was simply expected, as a matter of course, that people would put content on webpages and share it freely with the world. The legal system protected websites against egregious abuses—like denial-of-service attacks, or wholesale piracy—but it treated ordinary scraping as mostly harmless…(More)”

AI Scraping and the Open Web

Article by Jeff Jarvis: “News organizations will face an AI reckoning in 2026 and a choice: They can keep blocking AI crawlers, suing AI companies, and lobbying for protectionist AI legislation — all the while making themselves invisible to the publics they serve on the next medium that matters — or they can figure out how to play along.

Unsurprisingly, I hold a number of likely unpopular opinions on the matter:

  1. Journalists must address their civic, professional, even moral obligation to provide news, reporting, and information via AI. For — like it or not — AI is where more and more people will go for information. It is clear that competitors for attention — marketing and misinformation — are rushing to be included in the training and output of large language models. This study finds that “reputable sites forbid an average of 15.5 AI user agents, while misinformation sites prohibit fewer than one.” By blocking AI, journalism is abdicating control of society’s information ecosystem to pitchmen and propagandists.
  2. AI no longer needs news. Major models are already trained and in the future will be trained with synthetic data. Next frontiers in AI development — see, for example, the work of Yann LeCun — will be built on world models and experience, not text and content.
  3. Anyway, training models is fair use and transformative. This debate will not be fully adjudicated for some time, but the Anthropic decision makes it clear that media’s copyright fight against training is a tenuous strategy. Note well that the used books Anthropic legitimately acquired yielded no payment to authors or publishers, and if Anthropic had only bought one copy of each title in the purloined databases, it would not have been found liable and authors would have netted the royalties on just one book each.
  4. AI is the new means of discovery online. I had a conversation with a news executive recently who, in one breath, boasted of cutting off all the AI bots save one (Google’s), and in the next asked how his sites will be discovered online. The answer: AI. Rich Skrenta, executive director of the Common Crawl Foundation, writes that if media brands block crawlers, AI models will not know to search for them, quote them, or link to them when users ask relevant questions. He advises publishers to replace SEO with AIO: optimization for AI. Ah, but you say, AI doesn’t link. No. This study compared the links in AI against search and found that ChatGPT displayed “a systemic and overwhelming bias towards Earned media (third-party, authoritative sources) over Brand owned and Social content, a stark contrast to Google’s more balanced mix.” The links are there. Whether users click on them is, as ever, another question. But if your links aren’t there, no one will click anyway…(More)”.
APIs for news
