Stefaan Verhulst

Article by Felix Gille and Federica Zavattaro: “Trust is essential for successful digital health initiatives. It does not emerge spontaneously but demands deliberate, targeted efforts. A four-step approach (understand context, identify levers, implement trust indicators and refine actions) supports practical implementation. With a comprehensive understanding of user trust and traits of trustworthy initiatives, it is important to shift from abstraction to practical action using a stepwise method that delivers tangible benefits and keeps trust from remaining theoretical.

Key Points

  • Trust is vital for the acceptance of digital health initiatives and the broader digital transformation of health systems.
  • Frameworks define principles that support the development and implementation of trustworthy digital health initiatives.
  • Trust performance indicators enable ongoing evaluation and improvement of initiatives.
  • Building trust demands proactivity, leadership, resources, systems thinking and continuous action…(More)”.
How to Build Trustworthy Digital Health Initiatives?

White paper by The Impact Licensing Initiative (ILI): “…has introduced impact licensing as a strategic tool to maximise the societal and environmental value of research, innovation, and technology. Through this approach, ILI has successfully enabled the sustainable transfer of five technologies from universities and companies to impact-driven ventures, fostering a robust pipeline of innovations across diverse sectors such as clean energy, sustainable agriculture, healthcare, artificial intelligence, and education. This whitepaper, aimed at investment audiences and policymakers, outlines the basics of impact licensing and, based on early applications, demonstrates how its adoption can accelerate positive change in addressing societal challenges…(More)”.

Impact Licensing as a strategic instrument for Impact Investment

Report by Epoch: “What will happen if AI scaling persists to 2030? We are releasing a report that examines what this scale-up would involve in terms of compute, investment, data, hardware, and energy. We further examine the future AI capabilities this scaling will enable, particularly in scientific R&D, which is a focus for leading AI developers. We argue that AI scaling is likely to continue through 2030, despite requiring unprecedented infrastructure, and will deliver transformative capabilities across science and beyond.

Scaling is likely to continue until 2030: On current trends, frontier AI models in 2030 will require investments of hundreds of billions of dollars, and gigawatts of electrical power. Although these are daunting challenges, they are surmountable. Such investments will be justified if AI can generate corresponding economic returns by increasing productivity. If AI lab revenues keep growing at their current rate, they would generate returns that justify hundred-billion-dollar investments in scaling.

Scaling will lead to valuable AI capabilities: By 2030, AI will be able to implement complex scientific software from natural language, assist mathematicians formalising proof sketches, and answer open-ended questions about biology protocols. All of these examples are taken from existing AI benchmarks showing progress, where simple extrapolation suggests they will be solved by 2030. We expect AI capabilities will be transformative across several scientific fields, although it may take longer than 2030 to see them deployed to full effect…(More)”.

What will AI look like in 2030?

Paper by Jed Sundwall: “The COVID-19 pandemic revealed a striking lack of global data coordination among public institutions. While national governments and the World Health Organization struggled to collect and share timely information, a small team at Johns Hopkins University built a public dashboard that became the world’s most trusted source of COVID-19 data. Their work showed that planetary-scale data infrastructure can emerge quickly when practitioners act, even without formal authority. It also exposed a deeper truth: we cannot build global coordination on data without shared standards, yet it is unclear who gets to define what those standards might be.

This paper examines how the World Wide Web has created the conditions for useful data standards to emerge in the absence of any clear authority. It begins from the premise that standards are not just technical artefacts but the “connective tissue” that enables collaboration across institutions. They extend language itself, allowing people and systems to describe the world in compatible terms. Using the history of the web, the paper traces how small, loosely organised groups have repeatedly developed standards that now underpin global information exchange…(More)”.

Emergent standards: Enabling collaborations across institutions

Report by the Friedrich Naumann Foundation: “Despite the rapid global advancement of artificial intelligence, a significant disparity exists in its accessibility and benefits, disproportionately affecting low- and middle-income countries facing distinct socioeconomic hurdles. This policy paper compellingly argues for immediate, concerted international and national initiatives focused on strategic investments in infrastructure, capacity development, and ethical governance to ensure these nations can leverage AI as a powerful catalyst for equitable and sustainable progress…(More)”.

Examining AI in Low- and Middle-Income Countries

Blog by Beth Noveck: “Officials in Hamburg had long struggled with the fact that while citizens submitted thousands of comments on planning projects, only a fraction could realistically be read and processed. Making sense of feedback from a single engagement could once occupy five full-time employees for more than a week and chill any desire to do a follow-up conversation. Learn about how Hamburg built its own open source artificial intelligence to make sense of citizen feedback on a scale and speed that was once unimaginable…

The Digital Participation System (DIPAS) is Hamburg, Germany’s integrated digital participation platform, designed to let residents contribute ideas, comments, and feedback on urban development projects online or in workshops. It combines mapping, document sharing, and discussion tools so that citizens can engage directly with concrete plans for their neighborhoods.

City officials had long struggled with the fact that while citizens submitted thousands of comments on planning projects, only a fraction could realistically be read and processed.

“We take the promise of participation seriously,” explained Claudius Lieven, one of DIPAS’s creators in Hamburg’s Ministry of Urban Development and Housing. “If people contribute, their collective intelligence must count. But with so many inputs, we simply couldn’t keep up.” 

Making sense of feedback from a single engagement could once occupy five full-time employees for more than a week and chill any desire to do a follow-up conversation. 

As a result, Lieven and his team spent three years integrating AI into the open-source system to make the new analytics toolbox more useful and the government more responsive. They combined the fine-tuning of Facebook’s advanced open-source language models LLaMA and RoBERTa with topic modeling and geodata integration. 

With AI, DIPAS can cluster and summarize thousands of comments and distinguish between a “major position” about the current situation (for example, “The bike path is unsafe”) and an “idea” proposing what should be done (“The bike path should have better lighting”)…(More)”.
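To make the position/idea distinction concrete, here is a minimal sketch of that classification step. It uses a generic off-the-shelf zero-shot model as a stand-in; Hamburg's actual system relies on its own fine-tuned LLaMA- and RoBERTa-based models and German-language planning data, so the model name, labels, and comments below are illustrative only.

```python
# Minimal sketch: label each citizen comment as a "position" (statement about
# the current situation) or an "idea" (proposal for what should be done).
# A generic zero-shot classifier stands in for DIPAS's fine-tuned models.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

comments = [
    "The bike path is unsafe.",                    # example from the blog
    "The bike path should have better lighting.",  # example from the blog
]

labels = [
    "statement about the current situation",
    "proposal for what should be done",
]

for comment in comments:
    result = classifier(comment, candidate_labels=labels)
    print(f"{comment!r} -> {result['labels'][0]} (score {result['scores'][0]:.2f})")
```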

How Hamburg is Turning Resident Comments into Actionable Insight

Article by Aivin Solatorio: “As artificial intelligence (AI) becomes a new gateway to development data, a quiet but significant risk has emerged. Large language models (LLMs) can now summarize reports, answer data queries, and interpret indicators in seconds. But while these tools promise convenience, they also raise a fundamental question: How can we ensure that the numbers they produce remain true to the official data they claim to represent?

AI access does not equal data integrity

Many AI systems today use retrieval-augmented generation (RAG), a technique that feeds models with content from trusted sources or databases. While it is widely viewed as a safeguard against hallucinations, it does not eliminate them. Even when an AI model retrieves the correct data, it may still generate outputs that deviate from it. It might round numbers to sound natural, merge disaggregated values, or restate statistics in ways that subtly alter their meaning. These deviations often go unnoticed because the AI still appears confident and precise to the end user.
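As a simple illustration of that failure mode, the sketch below compares the numbers in a model's answer against the numbers in the snippet it retrieved; the figures and the exact-match rule are invented for the example, not drawn from any official dataset.

```python
# Illustration only: the retrieval step returns the correct figure, but the
# generated sentence restates it in a way that no longer matches exactly.
import re

retrieved_context = "GDP growth, 2023: 4.82 percent (official estimate)"
generated_answer = "According to the data, GDP grew by roughly 5 percent in 2023."

def extract_numbers(text: str) -> list[float]:
    """Pull every numeric literal out of a piece of text."""
    return [float(x) for x in re.findall(r"-?\d+(?:\.\d+)?", text)]

source_values = extract_numbers(retrieved_context)

# Flag any number in the answer that does not exactly match a retrieved value.
for value in extract_numbers(generated_answer):
    if value not in source_values:
        print(f"Deviation: answer states {value}, retrieved values are {source_values}")
```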

Developers often measure such errors through evaluation experiments (or “evals”), reporting aggregate accuracy rates. But those averages mean little to a policymaker, journalist, or citizen interacting with an AI tool. What matters is not whether the model is usually correct, but whether the specific number it just produced is faithful to the official data. 

Where Proof-Carrying Numbers come in

Proof-Carrying Numbers (PCN), a novel trust protocol developed by the AI for Data – Data for AI team, addresses this gap. It introduces a mechanism for verifying numerical faithfulness — that is, how closely the AI’s numbers match the trusted data they are based on — in real time.

Here’s how it works:

  • The data passed to the LLM must include a claim identifier and a policy that defines acceptable behavior (e.g., exact match required, rounding allowed, etc.). 
  • The model is instructed to follow the PCN protocol when generating numbers based on that data.
  • Each numeric output is checked against the reference data on which it was conditioned.
  • If the result satisfies the policy, PCN marks it as verified [✓].
  • If it deviates, PCN flags it for review [⚠️]. 
  • Any numbers produced without explicit marks are assumed unverified and should be treated cautiously.

This is a fail-closed mechanism, a built-in safeguard that errs on the side of caution. When faithfulness cannot be proven, PCN does not assume correctness; instead, it makes that failure visible. This feature changes how users interact with AI: instead of trusting responses blindly, they can immediately see whether a number aligns with official data…(More)”.
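To illustrate the marking logic, here is a toy sketch of a PCN-style check written from the bullet points above; it is not the team's actual protocol implementation, and the claim identifiers, policy names, and reference values are invented for the example.

```python
# Toy PCN-style verifier: each numeric claim carries an identifier and a
# policy; the check returns verified, flagged, or (fail-closed) unverified.
from dataclasses import dataclass

@dataclass
class Reference:
    value: float
    policy: str  # hypothetical policy names: "exact" or "round"

# Reference data the model was conditioned on (illustrative values).
REFERENCES = {
    "poverty_rate_2022": Reference(value=8.47, policy="round"),
    "population_2022": Reference(value=1_234_567, policy="exact"),
}

def verify(claim_id: str | None, stated: float) -> str:
    """Return a PCN-style mark for one numeric output."""
    if claim_id is None or claim_id not in REFERENCES:
        return "unverified"  # fail-closed: no proof, no checkmark
    ref = REFERENCES[claim_id]
    if ref.policy == "exact" and stated == ref.value:
        return "✓ verified"
    if ref.policy == "round" and round(stated, 1) == round(ref.value, 1):
        return "✓ verified"
    return "⚠️ flagged for review"

print(verify("poverty_rate_2022", 8.5))      # rounding allowed -> ✓ verified
print(verify("population_2022", 1_234_000))  # exact match required -> ⚠️ flagged
print(verify(None, 42.0))                    # no claim identifier -> unverified
```

The fail-closed default is the key design choice: anything the verifier cannot tie back to a claim identifier stays unmarked rather than being presumed correct.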

Strengthening governance and trust in AI-based data dissemination with Proof-Carrying Numbers

Paper by Margaret Hughes et al.: “Communities frequently report sending feedback “into a void” during community engagement processes like neighborhood planning, creating a critical disconnect between public input and decision-making. Voice to Vision addresses this gap with a sociotechnical system that comprises three integrated components: a flexible data architecture linking community input to planning outputs, a sensemaking interface for planners to analyze and synthesize feedback, and a community-facing platform that makes the entire engagement process transparent. By creating a shared information space between stakeholders, our system demonstrates how structured data and specialized interfaces can foster cooperation across stakeholder groups, while addressing tensions in accessibility and trust formation. Our CSCW demonstration will showcase this system’s ability to transform opaque civic decision-making processes into collaborative exchanges, inviting feedback on its potential applications beyond urban planning…(More)”.
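As a rough sketch of what "linking community input to planning outputs" could look like in data terms, the hypothetical records below tie a resident's comment to the planning decision that cites it; the field names are invented for illustration, not taken from the Voice to Vision schema, which the abstract does not detail.

```python
# Hypothetical linkage between community input and planning outputs, so that
# residents can trace what happened to their feedback.
from dataclasses import dataclass, field

@dataclass
class CommunityInput:
    input_id: str
    text: str
    theme: str | None = None  # assigned during the planners' sensemaking step

@dataclass
class PlanningOutput:
    output_id: str
    decision: str
    rationale: str
    linked_inputs: list[str] = field(default_factory=list)  # input_ids it responds to

comment = CommunityInput("in-014", "We need safer crossings near the school.",
                         theme="traffic safety")
decision = PlanningOutput("out-003", "Add a raised crosswalk on Elm St.",
                          "Recurring safety concerns raised in public comments.",
                          linked_inputs=[comment.input_id])

# The community-facing platform can then answer: which decisions cite my comment?
print([d.output_id for d in [decision] if comment.input_id in d.linked_inputs])
```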

Voice to Vision: A Sociotechnical System for Transparent Civic Decision-Making

Book by Maximilian Kasy: “AI is inescapable, from its mundane uses online to its increasingly consequential decision-making in courtrooms, job interviews, and wars. The ubiquity of AI is so great that it might produce public resignation—a sense that the technology is our shared fate.
 
As economist Maximilian Kasy shows in The Means of Prediction, artificial intelligence, far from being an unstoppable force, is irrevocably shaped by human decisions—choices made to date by the ownership class that steers its development and deployment. Kasy shows that the technology of AI is ultimately not that complex. It is insidious, however, in its capacity to steer results to its owners’ wants and ends. Kasy clearly and accessibly explains the fundamental principles on which AI works, and, in doing so, reveals that the real conflict isn’t between humans and machines, but between those who control the machines and the rest of us.
 
The Means of Prediction offers a powerful vision of the future of AI: a future not shaped by technology, but by the technology’s owners. Amid a deluge of debates about technical details, new possibilities, and social problems, Kasy cuts to the core issue: Who controls AI’s objectives, and how is this control maintained? The answer lies in what he calls “the means of prediction,” or the essential resources required for building AI systems: data, computing power, expertise, and energy. As Kasy shows, in a world already defined by inequality, one of humanity’s most consequential technologies has been and will be steered by those already in power.
 
Against those stakes, Kasy offers an elegant framework both for understanding AI’s capabilities and for designing its public control. He makes a compelling case for democratic control over AI objectives as the answer to mounting concerns about AI’s risks and harms. The Means of Prediction is a revelation, both an expert undressing of a technology that has masqueraded as more complicated and a compelling call for public oversight of this transformative technology…(More)”.

The Means of Prediction: How AI Really Works (and Who Benefits)

Paper by Rosa Vicari: “…Misinformation significantly challenges disaster risk management by increasing risks and complicating response efforts. This technical note introduces a methodology toolbox designed to help policy makers, decision makers, practitioners, and scientists systematically assess, prevent, and mitigate the risks and impacts of misinformation in disaster scenarios. The methodology consists of eight steps, each offering specific tools and strategies to help address misinformation effectively. The process begins with defining the communication context using PESTEL analysis and Berlo’s communication model to assess external factors and information flow. It then focuses on identifying misinformation patterns through data collection and analysis using advanced AI methods. The impact of misinformation on risk perceptions is assessed through established theoretical frameworks, guiding the development of targeted strategies. The methodology includes practical measures for mitigating misinformation, such as implementing AI tools for prebunking and debunking false information. Evaluating the effectiveness of these measures is crucial, and continuous monitoring is recommended to adapt strategies in real-time. Ethical considerations are outlined to ensure compliance with international laws and data privacy regulations. The final step emphasizes managerial aspects, including clear communication and public education, to build trust and promote reliable information sources. This structured approach provides practical insights for enhancing disaster response and reducing the risks associated with misinformation…(More)”.

A toolbox to deal with misinformation in disaster risk management
