
Essay by Marthe Smedinga, Angela Ballantyne and Owen Schaefer: “‘Advancing the public interest’ is a criterion for de-identified data use for research via several national data platforms and biobanks. This may be referred to via cognate terms such as public benefit, public good or social value. The criterion is often adopted without being a legal requirement. In some jurisdictions it is a legal requirement for sharing identifiable data without consent, but that requirement does not apply to de-identified data. We argue that, even in circumstances where there are few or no legal restrictions on the sharing of de-identified data, there is a sound ethical reason for platforms to nevertheless impose a public interest criterion on data sharing. We argue that a public interest test is ethically essential for justifying research use of de-identified data via government-funded platforms because (1) it allows platforms to promote the public good and to minimise potential harmful consequences of research for both individuals and groups, for example, by offering grounds to reject research that could lead to stigmatisation of marginalised populations; (2) national data platforms hold public data and are made possible by government funds, and therefore should be used to support public interests; and (3) it can demonstrate trustworthiness and contribute to promoting the social licence for data platforms to operate, which is especially important for efforts to align data governance policies with public norms and expectations…(More)”.

Why de-identified data sharing for research should be in the public interest

Press Release: “On Saturday, 9 August, National Women’s Day, the Department of Planning, Monitoring and Evaluation (DPME), together with the Pan African Collective for Evidence (PACE), unveiled South Africa’s first AI-driven Living Evidence Map aimed at tackling Gender-Based Violence and Femicide (GBVF).

Minister Maropene Ramokgopa emphasised that the launch goes beyond mere technology—it’s about taking action. “It is a tool for action, a tool for justice, and a tool that puts survivors first. We will not end GBVF with words alone. We need evidence, accountability, and the courage to act.”

The cutting-edge digital platform was developed to bolster Pillar 6 of the National Strategic Plan on Gender-Based Violence and Femicide (NSP GBVF), which focuses on improving research and information management.

With this initiative, South Africa now boasts its largest centralised and regularly updated gender evidence database, consolidating academic research, community insights, evaluations, and government data into a single platform.

“To end GBVF, we need to understand what works, for whom, and why. This platform gives us the power to base decisions on evidence rather than assumptions,” stated the department.

The Living Evidence Map is a collaborative effort involving researchers, civil society, and government departments. It is powered by ChatEIDM, an AI engine that enables real-time interaction with the data.

The platform is designed to assist:

  • Policymakers in creating targeted interventions
  • Civil society organisations in developing evidence-based strategies
  • Researchers and evaluators in spotting gaps and trends
  • The general public in gaining insight into the scale of GBVF and potential solutions…(More)”

South Africa launches AI-powered Living Evidence Map to combat GBVF

Article by Haishan Fu, Aivin Solatorio, Olivier Dupriez and Craig Hammer: “AI, particularly large language models (LLMs), is completely transforming the way people interact with data. Data users at all levels of experience and expertise—from first-timers to power users—can now pose complex questions in natural language to chatbots, expecting them to promptly find, interpret, and present data-driven insights packaged as pithy, accurate responses.

For this evolution to be successful, AI systems need to get it right. This means the data being accessed and interpreted by AI systems must first be evaluated, validated, structured, governed, and shared in ways that support the responsible and effective use of AI. In short, the data must be “AI-ready.” 

AI-ready data does not supplant earlier advancements, foundational concepts, or standards—such as the Fundamental Principles of Official Statistics, open data frameworks, or the FAIR (Findable, Accessible, Interoperable, and Reusable) principles—but rather it builds on them. By extending established foundations and standards, AI-ready data means that development data is continuously open, discoverable, and reusable, while ensuring that it is systematically organized and well-documented, to facilitate seamless use by both people and AI systems. Ensuring AI-readiness can thus shorten the distance between development data and decision-making for better policies and faster innovation, democratizing development insights. The World Bank, in its efforts to become a bigger, better “Data Bank,” is already working to make this happen, in partnership with country partners and the global development community…(More)” See also: Moving Toward the FAIR-R principles: Advancing AI-Ready Data.
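
To make the idea of “AI-ready” data concrete, here is a minimal sketch of what a machine-readable metadata record aligned with the FAIR principles might look like, together with a simple completeness check. The field names, example values, and the check itself are illustrative assumptions, not an official World Bank or FAIR specification.

```python
# Illustrative sketch of an "AI-ready" metadata record and a minimal
# completeness check. Field names and values are hypothetical, not an
# official World Bank or FAIR specification.

REQUIRED_FIELDS = {
    "identifier",   # Findable: persistent, unique ID (e.g. a DOI)
    "title",        # Findable: human-readable name
    "description",  # Well-documented: summary for people and machines
    "license",      # Reusable: explicit terms of reuse
    "access_url",   # Accessible: where the data can be retrieved
    "schema",       # Interoperable: documented structure of the data
}

def is_ai_ready(record: dict) -> tuple[bool, list[str]]:
    """Return whether the record carries the minimum documentation an
    AI system would need to find, interpret, and reuse the dataset."""
    missing = sorted(REQUIRED_FIELDS - record.keys())
    return not missing, missing

example_record = {
    "identifier": "doi:10.0000/example-dev-data",  # hypothetical DOI
    "title": "Household survey, country X, 2024",
    "description": "Annual household income and expenditure survey.",
    "license": "CC-BY-4.0",
    "access_url": "https://example.org/data/hh-survey-2024",
    "schema": {"household_id": "string", "income_usd": "float"},
}

ready, missing = is_ai_ready(example_record)
print(f"AI-ready: {ready}; missing fields: {missing}")
```

In practice, records like this would also carry provenance and governance information, consistent with the article’s emphasis on data that is evaluated, validated, and governed before AI systems consume it.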

From open data to AI-ready data: Building the foundations for responsible AI in development

Paper by Alisha Suhag, Romana Burgess and Anya Skatova: “The growing ubiquity of digital footprint data presents new opportunities for behavioral epidemiology and public health research. Among these, supermarket loyalty card data—passively collected records of consumer purchases—offer objective, high-frequency insights into health-related behaviors at both individual and population levels. This paper explores the potential of loyalty card data to strengthen public health surveillance across 4 key behavioral risk domains: diet, alcohol, tobacco, and over-the-counter medication use. Drawing on recent empirical studies, we outline how these data can complement traditional epidemiological data sources by improving exposure assessment, enabling real-time trend monitoring, and supporting intervention evaluation. We also discuss critical methodological challenges, including issues of representativeness, data integration, and privacy, as well as the need for robust validation strategies. By synthesizing the current evidence base and offering practical recommendations for researchers, this paper highlights how loyalty card data can be responsibly leveraged to advance behavioral risk monitoring and support the adaptation of epidemiological practice to contemporary digital data environments…(More)”.
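
As a rough illustration of the kind of high-frequency trend monitoring the paper describes, the sketch below aggregates a synthetic loyalty-card transaction log into weekly purchase counts per behavioural risk category. The column names, categories, and data are assumptions for demonstration, not the authors’ actual pipeline.

```python
# Illustrative sketch: turning a loyalty-card transaction log into weekly
# category-level purchase trends. Data, column names, and categories are
# synthetic assumptions, not the authors' actual pipeline.
import pandas as pd

transactions = pd.DataFrame({
    "card_id": [1, 1, 2, 2, 3, 3],
    "date": pd.to_datetime([
        "2024-01-02", "2024-01-09", "2024-01-03",
        "2024-01-10", "2024-01-04", "2024-01-11",
    ]),
    "category": ["alcohol", "alcohol", "otc_medication",
                 "tobacco", "alcohol", "otc_medication"],
    "units": [2, 1, 1, 1, 3, 2],
})

# Weekly units purchased per category: the kind of high-frequency series
# that could complement slower survey-based exposure measures.
weekly = (
    transactions
    .set_index("date")
    .groupby("category")
    .resample("W")["units"]
    .sum()
    .unstack(level="category", fill_value=0)
)
print(weekly)
```

In real applications such series would need to be linked to validated purchase-to-behaviour mappings and assessed for representativeness and privacy risk, the methodological challenges the paper discusses.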

Shopping Data for Population Health Surveillance: Opportunities, Challenges, and Future Directions

Paper by Elena Murray, Moiz Raja Shaikh, Stefaan Verhulst, Hinali Dosh, Romeo Leapciuc, Perizat Mamutalieva, and Mahadia Tunga: “As data-driven service delivery expands, data reuse holds significant potential to improve access to and quality of essential services for young people. However, limited youth involvement in decisions about how their data is reused risks perpetuating mistrust and deepening the inequalities that these services seek to address, particularly if young people choose to avoid seeking services or withhold critical information out of fear of misuse. Responsible data reuse to enhance service delivery must therefore be grounded in methodologies that meaningfully engage youth and reflect their preferences and expectations. This paper presents findings from the NextGenData project, which developed and piloted a scalable methodology for engaging young people aged 19-24 in co-designing responsible data reuse strategies. Conducted as a year-long participatory action research initiative across India, Tanzania, Moldova, and Kyrgyzstan, the approach implemented youth assemblies, deliberative methods, and localized facilitation by national partners to engage young people. The study emphasizes the importance of context-specific, culturally responsive facilitation, and sustained, multi-phase engagement as the foundation for establishing a social license for data reuse. We present recommendations for practitioners to embed youth-centered approaches in data governance and offer a publicly available toolkit for replication. By centering young people in data decisions, this methodology advances ethical, inclusive, and effective service delivery and digital self-determination for young generations…(More)”.

Who Decides What and How Data is Re-Used? Lessons Learned from Youth-Led Co-Design for Responsible Data Reuse in Services

Article by The Financial Times Editorial Board: “Pity anyone tasked with delivering bad news about the US economy to Donald Trump. For months, Federal Reserve chair Jay Powell has drawn the president’s ire by failing to engineer cuts to interest rates — prompting childish name-calling and threats to his job. On Friday, it was the Bureau of Labor Statistics’ turn. The agency published sluggish non-farm payroll numbers for July, and reduced its estimates for job creation in the prior two months by a chunky 258,000. Erika McEntarfer, the agency’s commissioner, was spared the insults only to be fired on the spot. A replacement is expected to be announced soon.

Trump claimed, without evidence, that McEntarfer massaged the figures. The most likely explanation is that the US president simply did not like the numbers. It was only a matter of time before the administration’s cuts to civil service jobs, downbeat surveys of private sector hiring plans and the strain of elevated interest rates showed up in the headline numbers. And although last week’s data downgrades were large, non-farm payrolls are notoriously volatile and revisions are common. The president said: “Important numbers like this must be fair and accurate, they can’t be manipulated for political purposes.” Yet by sacking the BLS chief on dubious grounds, he has undermined trust in America’s economic data, and politicised it.

First, the drastic move creates a culture of fear around the production of national economic statistics. This gives investors, businesses and the Fed reason to doubt whether concerns around a presidential backlash might influence forthcoming data releases not just from the BLS but also from other public bodies, including the Bureau of Economic Analysis, which produces the GDP numbers. Second, it is likely that Trump’s replacement for McEntarfer might be more pliant to his demands. That threatens the integrity of the BLS’s data itself, not just how it is perceived.

The president’s actions are unhelpful for his own ambitions too. The BLS produces reports on the labour market and inflation, which underpin the pricing of trillions of dollars in assets globally. While private data sources can plug some gaps, stoking doubts over the credibility of national data still erodes the ability of investors, businesses and policymakers to make informed decisions. Ironically, the central bank is looking for clear signs of weakness in the labour market before making the rate cuts that Trump so desires. Just as worrying is the imminent replacement of Fed governor Adriana Kugler — who, on Friday, stepped down early — with what Trump hopes will be a puppet rate-setter.

The BLS is not without flaws. Like many national statistics bodies, it has faced falling survey response rates, especially since the pandemic. This has raised questions over the representativeness of its samples and the accuracy of its aggregations. A funding squeeze — exacerbated by the Trump administration’s own public sector cutbacks — hasn’t helped. In February, several advisory councils to federal statistical agencies were also terminated. Rather than engaging in a useful revamp of national statistics, Trump has gone for the heavy-handed option.

The president isn’t alone. He joins a list of leaders, including from Turkey, Iran, Russia, Argentina and China, accused in recent years of meddling with economic institutions in order to control the public narrative…(More)”

Trump’s chilling assault on economic data

Article by Cade Metz: “In downtown Berkeley, an old hotel has become a temple to the pursuit of artificial intelligence and the future of humanity. Its name is Lighthaven.

Covering much of a city block, this gated complex includes five buildings and a small park dotted with rose bushes, stone fountains and neoclassical statues. Stained glass windows glisten on the top floor of the tallest building, called Bayes House after an 18th-century mathematician and philosopher. Lighthaven is the de facto headquarters of a group who call themselves the Rationalists. This group has many interests involving mathematics, genetics and philosophy. One of their overriding beliefs is that artificial intelligence can deliver a better life if it doesn’t destroy humanity first. And the Rationalists believe it is up to the people building A.I. to ensure that it is a force for the greater good.

The Rationalists were talking about A.I. risks years before OpenAI created ChatGPT, which brought A.I. into the mainstream and turned Silicon Valley on its head. Their influence has quietly spread through many tech companies, from industry giants like Google to A.I. pioneers like OpenAI and Anthropic.

Many of the A.I. world’s biggest names — including Shane Legg, a co-founder of Google’s DeepMind; Anthropic’s chief executive, Dario Amodei; and Paul Christiano, a former OpenAI researcher who now leads safety work at the U.S. Center for A.I. Standards and Innovation — have been influenced by Rationalist philosophy. Elon Musk, who runs his own A.I. company, said that many of the community’s ideas align with his own.

Mr. Musk met his former partner, the pop star Grimes, after they made the same cheeky reference to a Rationalist belief called Roko’s Basilisk. This elaborate thought experiment argues that when an all-powerful A.I. arrives, it will punish everyone who has not done everything they can to bring it into existence.

But these tech industry leaders stop short of calling themselves Rationalists, often because that label has over the years invited ridicule…(More)”.

The Rise of Silicon Valley’s Techno-Religion

Article by David Adam: “Attached to the Very Large Telescope in Chile, the Multi Unit Spectroscopic Explorer (MUSE) allows researchers to probe the most distant galaxies. It’s a popular instrument: for its next observing session, from October to April, scientists have applied for more than 3,000 hours of observation time. That’s a problem. Even though it’s dubbed a cosmic time machine, not even MUSE can squeeze 379 nights of work into just seven months.

The European Southern Observatory (ESO), which runs the Chile telescope, usually asks panels of experts to select the worthiest proposals. But as the number of requests has soared, so has the burden on the scientists asked to grade them.

“The load was simply unbearable,” says astronomer Nando Patat at ESO’s Observing Programmes Office in Garching, Germany. So, in 2022, ESO passed the work back to the applicants. Teams that want observing time must also assess related applications from rival groups.

The change is one increasingly popular answer to the labour crisis engulfing peer review — the process by which grant applications and research manuscripts are assessed and filtered by specialists before a final decision is made about funding or publication.

With the number of scholarly papers rising each year, publishers and editors complain that it’s getting harder to get everything reviewed. And some funding bodies, such as ESO, are struggling to find reviewers.

As pressure on the system grows, many researchers point to low-quality or error-strewn research appearing in journals as an indictment of their peer-review systems failing to uphold rigour. Others complain that clunky grant-review systems are preventing exciting research ideas from being funded…(More)”.

The peer-review crisis: how to fix an overloaded system

Article by Sophia Fox-Sowell: “Illinois Gov. JB Pritzker last Friday signed a bill into law banning artificial intelligence from providing mental health services, aiming to protect residents from potentially harmful advice.

Known as the Wellness and Oversight for Psychological Resources Act, the law prohibits AI systems from delivering therapeutic treatment or making clinical decisions. The legislation still allows AI tools to be used in administrative roles, such as scheduling or note-taking, but draws a clear boundary around direct patient care.

Companies or individuals found to be in violation could face $10,000 in fines, enforced by the Illinois Department of Financial and Professional Regulation.

“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” Mario Treto, Jr., Illinois’ financial regulation secretary, said in a press release. “This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”

The new legislation is a response to growing concerns over the use of AI in sensitive areas like health care. The Washington Post reported last May that an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict.

Last year, the Illinois House Health Care Licenses and Insurance Committees held a joint hearing on AI in health insurance in which legislators and experts warned that AI systems lack the empathy, accountability or clinical oversight necessary for safe mental health treatment…(More)”.

Illinois bans AI from providing mental health services

Report by the National Academies of Sciences, Engineering, and Medicine: “As the artificial intelligence (AI) landscape rapidly evolves, many state and local governments are exploring how to use these technologies to enhance public services and governance. Alongside the potential to improve efficiency, responsiveness, and decision-making, AI adoption also brings challenges including concerns about privacy, bias, transparency, public trust, and long-term oversight. This guidance is intended for those involved in shaping, implementing, or managing AI in state and local government. By following structured, evidence-informed strategies, governments can integrate AI tools responsibly and in ways that reflect community values and institutional goals…(More)”.

Strategies for Integrating AI into State and Local Government Decision Making
