
Stefaan Verhulst

OECD Report: “Countries count AI compute infrastructure as a strategic asset without systematically tracking its distribution, availability and access. A new OECD Working Paper presents a methodology to help fill this gap by tracking and estimating the availability and global physical distribution of public cloud compute for AI.

Compute infrastructure is a foundational input for AI development and deployment, alongside data and algorithms. “AI compute” refers to the specialised hardware and software stacks required to train and run AI models. But as AI systems become more complex, their need for AI compute grows exponentially. 

The OECD collaborated with researchers from Oxford University Innovation on this new Working Paper to help operationalise a data collection framework outlined in an earlier OECD paper, A blueprint for building national compute capacity for artificial intelligence.

Housed in data centres, AI compute comprises clusters of specialised semiconductors, or chips, known as AI accelerators. For the most part, three types of providers operate these clusters: government-funded computing facilities, private compute clusters, and public cloud providers (Figure 1). 

Public cloud AI compute refers to on-demand services from commercial providers, available to the general public.

Figure 1. Different types of AI compute and focus of this analysis

This paper focuses on public cloud AI compute, which is particularly relevant for policymakers because:

  • It is accessible to a wide range of actors, including SMEs, academic institutions, and public agencies. 
  • It plays a central role in the development and deployment of the generative AI systems quickly diffusing into economies and societies. 
  • It is more transparent and measurable than private compute clusters or government-funded facilities, which often lack publicly available data…(More)”.
The geography of AI compute: Mapping what is available and where  

Report and Framework by UNDP: “…to help ensure data exchange systems reflect core governance principles and universal safeguards. The framework enables countries to assess how effectively their data exchange systems support inclusive public service delivery, identify practical steps to enable efficient and representative data sharing, mitigate risks of misuse, and function as digital public goods. Through the framework, UNDP aims to put integrity and public trust at the core of unlocking the full potential of data as a driver of equitable, efficient, secure and rights-based data exchange systems…(More)”

Explore the Governance Assessment Framework for Data Exchange Systems here 

Governance Assessment Framework for Data Exchange Systems

About: “CityMetrics is an online data platform to explore indicators and geospatial datasets related to the urban environment of many cities with which WRI works, including Cities4Forests, UrbanShift and WRI Ross Center’s Deep Dive Cities Initiative. A previous iteration of CityMetrics was known as Cities Indicators. CityMetrics is in an open beta phase and we welcome all feedback and requests for improvements and bug fixes….

Indicators are organized into seven themes. The menus on the left side of the screen allow users to select a city of interest, indicator themes and a specific indicator. 

Indicator results can be viewed at the city scale—either for the political jurisdiction or the urban area—as a summary value in comparison to the other cities in the selected city groups. Results can also be reviewed at the sub-city scale as a table, chart and, for many indicators, a map. All metrics for a particular city can also be reviewed in the City Overview. Users can navigate between these views using the menu on the right side of the main window. Geospatial and tabular versions of the data in each view can be downloaded for offline use. Details about each indicator and the methods behind it are available by clicking on the “Information” icon next to the indicator description. Additional information on this project, the general methods used, and the methods and limitations of specific indicators is available in the associated WRI Technical Note…(More)”.
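For readers working with those offline downloads, a minimal sketch of how an exported tabular file and the matching geospatial file might be combined is shown below. The file names, the subcity_id and indicator_value columns, and the use of pandas and geopandas are illustrative assumptions, not part of the CityMetrics documentation:

```python
# Minimal sketch: joining a hypothetical CityMetrics tabular export with the
# matching geospatial export for offline analysis. File names, column names
# and the pandas/geopandas workflow are assumptions for illustration only.
import pandas as pd
import geopandas as gpd

# Tabular indicator values downloaded from an indicator view (assumed CSV layout)
indicators = pd.read_csv("citymetrics_indicator_export.csv")

# Sub-city boundaries downloaded from the same view (assumed GeoJSON layout)
boundaries = gpd.read_file("citymetrics_boundaries_export.geojson")

# Join indicator values onto their geometries and map the result
joined = boundaries.merge(indicators, on="subcity_id", how="left")
joined.plot(column="indicator_value", legend=True)
```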

CityMetrics

Blog by Mayara Soares Faria, Ricardo Poppi and Carla de Paiva Bezerra: “We have heard it before and will likely hear it again: democracy is facing serious challenges. Around the world, levels of trust in governments and institutions are low. One of the most telling findings on how to overcome this, highlighted in the latest OECD Survey on Drivers of Trust, is that people trust governments more when they feel their voices are genuinely heard.

Participation, therefore, has become a key ingredient for strengthening democracy and rebuilding trust. However, participation on its own does not guarantee trust. On the contrary, poorly designed processes can backfire, creating frustration and enhancing mistrust. Meaningful participation requires careful design, transparency, and a real link between what citizens ask for and what governments do.

It was to address this challenge that Brazil placed social participation at the heart of the government’s agenda. Within the General Secretariat of the Presidency, the National Secretariat of Social Participation was entrusted with a bold mission: to make policymaking more inclusive, reflective of the country’s regional and social diversity, and more effective by grounding it in the reality of each territory. To achieve this, a federal strategy of social participation was designed to foster dialogue between civil society and government, reduce barriers to participation and empower citizens. This required a concerted effort to rebuild the participatory structures that had been dismantled in previous years…(More)”.

Creating meaningful participation and building trust: The journey of Brasil Participativo

Article by Jacob Mchangama: “…The Trump administration has moved with startling speed from trumpeting free speech to seeking to criminalize it. At first glance, that might seem to vindicate the arguments in the historian Fara Dabhoiwala’s new book, What Is Free Speech? The History of a Dangerous Idea. Dabhoiwala believes that the modern obsession with free speech—particularly the American belief that almost any restriction on it threatens democracy—has blinded its defenders to how often that right is invoked cynically in pursuit of antidemocratic ends. In his view, the right to free speech has most often been wielded as “a weaponized mantra” by people motivated by “greed, technological change and political expediency” rather than as a principle invoked sincerely to restrain tyranny.

Although Dabhoiwala acknowledges that pre-Enlightenment peoples such as the Athenians valued forms of freedom of expression, his main story begins in the eighteenth century. In that era, he writes, the idea that freedom of speech was necessary for human flourishing went viral across Europe and the United States, despite the fact that the theorists who made the argument often did so “for personal gain, to silence others, to sow dissension or to subvert the truth.” A robust and civil-libertarian interpretation of it became entrenched in twentieth-century American culture and legal doctrine, but Dabhoiwala contends that modern First Amendment jurisprudence undermined the very democratic values it was supposed to safeguard. Rather than fulfilling its promise as an “antidote to misinformation and falsehood,” he writes, the American approach to free speech “often amplifies it.”…

Today’s crisis of free speech in America is not the legacy of John Stuart Mill or First Amendment fetishism. It has arisen because too many Americans have lost their faith in free-speech exceptionalism—at the very moment when the First Amendment remains the strongest constitutional barrier to Trump’s censorious agenda. Yet the First Amendment’s text alone cannot guarantee robust debate. Time and again, unpopular and persecuted groups—political, racial, and religious—have fought to strengthen its practical force. Americans must work again to secure that inheritance…(More)”.

Who Has Free Speech?

Blog by EuroCities: “Digitalisation has made it easier than ever to share information. And just as easy to spread falsehoods. Cities are now facing the consequences as disinformation undermines public trust.

Local governments are often the first to feel the impact of misinformation, from confusion over public health advice to growing scepticism toward official information online. But as the level of government closest to citizens, and the one they trust the most according to the 2024 OECD Survey on Drivers of Trust in Public Institutions, cities are also in a strong position to respond. Across Europe, they are finding practical ways to strengthen transparency, improve communication, and help citizens navigate the digital world with confidence.

The trust crisis

False information spreads quickly through social media and online platforms. It fuels polarisation and confusion, and makes people question not only what is true, but who to trust. For local governments, this has a direct impact: if citizens lose confidence in their city’s information, services or institutions, democracy itself becomes weaker.

“Trust is fragile and being tested every day by the spread of misinformation,” said Sophie Woodville, Digital Programme Manager at Bordeaux Métropole. “We need to protect and strengthen that trust by rethinking how we deliver services, engage with citizens and build ecosystems that are transparent, inclusive and resilient.”

Cities’ proximity to citizens allows them to respond faster than national governments and to adapt messages to local realities and communities…

City representatives shared how misinformation takes shape at the local level.

In Ghent, false rumours during the Covid pandemic, from vaccine myths to confusion about lockdown rules, spread through neighbourhood networks and community groups. The city responded with clear, multilingual messages and direct outreach through schools, community influencers, and even printed flyers in eight languages.

“We chose not to attack the disinformation,” explained Mieke Hullebroeck, General Manager of the City of Ghent. “Instead, we built a communication strategy that was fair, transparent and clear, both internally to our staff and externally to our citizens. We made our messages as accessible as possible, using images and icons so that everyone could understand them.”

In Helsinki, misinformation has also taken new forms. “The amount of misinformation online multiplied during Covid, and we are still struggling with its effects,” said Jasmin Repo, Senior Advisor for Data Policy at the City of Helsinki. “Just recently, a deep fake video featuring a government official went viral. The quality of these fakes is improving so fast that it’s getting harder to know what is real. Combatting this requires not only digital skills, but critical thinking and understanding.”…(More)”.

Cities step up to rebuild trust in the digital age

Article by Robert Booth: “Experts have found weaknesses, some serious, in hundreds of tests used to check the safety and effectiveness of new artificial intelligence models being released into the world.

Computer scientists from the British government’s AI Security Institute, and experts at universities including Stanford, Berkeley and Oxford, examined more than 440 benchmarks that provide an important safety net.

They found flaws that “undermine the validity of the resulting claims”, that “almost all … have weaknesses in at least one area”, and that resulting scores might be “irrelevant or even misleading”.

Many of the benchmarks are used to evaluate the latest AI models released by the big technology companies, said the study’s lead author, Andrew Bean, a researcher at the Oxford Internet Institute…(More)”

Experts find flaws in hundreds of tests that check AI safety and effectiveness

Article by Michael Stebbins & Eric Perakslis: “By shifting funding from small, underpowered randomized controlled trials to large field experiments in which many different treatments are tested synchronously in a large population using the same objective measure of success, so-called megastudies can start to drive people toward healthier lifestyles. Megastudies will allow us to more quickly determine what works, in whom, and when for health-related behavioral interventions, saving tremendous dollars over traditional randomized controlled trial (RCT) approaches because of the scalability. But doing so requires the government to back the establishment of a research platform that sits on top of a large, diverse cohort of people with deep demographic data.

According to the National Research Council, almost half of premature deaths (< 86 years of age) are caused by behavioral factors. Poor diet, high blood pressure, sedentary lifestyle, obesity, and tobacco use are the primary causes of early death for most of these people. Yet, despite studying these factors for decades, we know surprisingly little about what can be done to turn these unhealthy behaviors into healthier ones. This has not been due to a lack of effort. Thousands of randomized controlled trials intended to uncover messaging and incentives that can be used to steer people towards healthier behaviors have failed to yield impactful steps that can be broadly deployed to drive behavioral change across our diverse population. For sure, changing human behavior through such mechanisms is controversial, and difficult. Nonetheless, studying how to bend behavior should be a national imperative if we are to extend healthspan and address the declining lifespan of Americans at scale…. There is substantial risk when bringing together such deep personal data on a large population of people. While companies compile deep data all the time, it is unusual to do so for research purposes and will, for sure, raise some eyebrows, as has been the case for large studies like the aforementioned All of Us and the Million Veteran Program.

Patients fear misuse of their data, inaccurate recommendations, and biased algorithms—especially among historically marginalized populations. Patients must trust that their data is being used for good, not for marketing purposes or determining their insurance rates.

Need for Data Interoperability

Many healthcare and community systems operate in data silos, and data integration is a perennial challenge in healthcare. Patient-generated data from wearables, apps, or remote sensors often do not integrate with electronic health record data or demographic data gathered from elsewhere, limiting the precision and personalization of behavior-change interventions. This lack of interoperability undermines both provider engagement and user benefit…(More)”.

Behavioral Economics Megastudies are Necessary to Make America Healthy

Report by Open Data Watch: “In early 2025, an abrupt withdrawal of development assistance—driven by pauses in foreign aid and wider donor retrenchment—triggered a systemic shock to global health data systems. These systems, already reliant on a concentrated set of bilateral and multilateral funders for surveys, civil registration and vital statistics (CRVS), health management information systems (HMIS), and disease surveillance, now face immediate interruptions and heightened medium-term risks to data continuity, quality, openness, and use.

This report synthesizes early disclosures from major agencies, data from the Organisation for Economic Co-operation and Development / Development Assistance Committee (OECD/DAC), and a rapid assessment survey covering more than half of national statistical offices (NSOs). Evidence on philanthropic and domestic financing is incomplete, and survey nonresponse may introduce bias, but convergent signals show broad exposure. Three unknowns will shape the next 12–18 months: the duration of donor withdrawals, the degree of philanthropic bridging, and the extent of government backfilling to protect core functions…(More)”.

Rebuilding Global Health Data: Scale, Risks, and Paths to Recovery

Report by Brookings: “Cities in the U.S. and globally face a severe, system-wide housing shortfall—exacerbated by siloed, proprietary, and fragile data practices that impede coordinated action. Recent advances in artificial intelligence (AI) promise to increase the speed and effectiveness of data integration and decisionmaking for optimizing housing supply. But unlocking the value of these tools requires a common infrastructure of (i) shared computational assets (data, protocols, models) required to develop AI systems and (ii) institutional capabilities to deploy these systems to unlock housing supply. This memo develops a policy and implementation proposal for a “Home Genome Project” (Home GP): a cohort of cities building open standards, shared datasets and models, and an institutional playbook for operationalizing these assets using AI. Beginning with an initial pilot cohort of four to six cities, a Home GP-type initiative could help 50 partner cities identify and develop additional housing supply relative to business-as-usual projections by 2030. The open data infrastructure and AI tools developed through this approach could help cities better understand the on-the-ground impacts of policy decisions, while also providing a constructive way to track progress and stay accountable to longer-term housing supply goals…(More)”.

A home genome project: How a city learning cohort can create AI systems for optimizing housing supply
