
Stefaan Verhulst

Article by Elsevier: “… An understanding of how UK research and innovation supports the Government’s five missions would encompass many dimensions, such as the people, infrastructure and resources of the research system, as well as its research output.

A new methodology has been developed to map the UK’s research publications to the Government’s five missions. This offers an approach to understanding one dimension of this question and offers insights into others.

Elsevier has created a new AI-powered methodology using Large Language Models to classify research papers into mission areas. Currently in development, we welcome your feedback to help improve it.

For the first time, this approach enables research outputs to be mapped at scale to narrative descriptions of policy priorities, and it has the potential to be applied to other government or policy priorities. This analysis was produced by classifying 20 million articles published between 2019 and 2023…(More)”.

UK research and innovation capability to support Government’s five missions

About: “…is a decentralized artist-owned archive…Our mission is to cooperatively maintain artworks in perpetuity, ensuring their preservation and access across generations. We manage and grow the value of digital assets by leveraging decentralized storage and encryption. Backed by a network of care, our model offers a new approach to media art valuation, conservation and governance…

Built on a foundation of more than a decade of mutual support and value exchange, our governance model offers a bold proposition of distributive justice in contemporary art. The model allows artists to own the value of their work and grow the equity of their studio practice through sustainable care…(More)”.

TRANSFER Data Trust

Paper by Aileen Nielsen, Chelse Swoopes and Elena Glassman: “As large language models (LLMs) enter judicial workflows, courts face mounting risks of uncritical reliance, conceptual brittleness, and procedural opacity in the unguided use of these tools. Jurists’ early ventures have attracted both praise and scrutiny, yet they have unfolded without critical attention to the role of interface design. This Essay argues that interface design is not a neutral conduit but rather a critical variable in shaping how judges can and will interact with LLM-generated content. Using Judge Newsom’s recent concurrences in Snell and Deleon as case studies, we show how more thoughtfully designed, AI-resilient interfaces could have mitigated problems of opacity, reproducibility, and conceptual brittleness identified in his explorative LLM-informed adjudication.

We offer a course correction on the legal community’s uncritical acceptance of the chat interface for LLM-assisted work. Proprietary consumer-facing chat interfaces are deeply problematic when used for adjudication. Such interfaces obscure the underlying stochasticity of model outputs and fail to support critical engagement with such outputs. In contrast, we describe existing, open-source interfaces designed to support reproducible workflows, enhance user awareness of LLM limitations, and preserve interpretive agency. Such tools could encourage judges to scrutinize LLM outputs, in part by offering affordances for scaling, archiving, and visualizing LLM outputs that are lacking in proprietary chat interfaces. We particularly caution against the uncritical use of LLMs in “hard cases,” where human uncertainty may perversely increase reliance on AI tools just when those tools may be more likely to fail. 

Beyond critique, we chart a path forward by articulating a broader vision for AI-resilient law: a system of incorporating law that would support judicial transparency, improve efficiency without compromising legitimacy, and open new possibilities for LLM-augmented legal reading and writing. Interface design is essential to legal AI governance. By foregrounding the design of human-AI interactions, this work proposes to reorient the legal community toward a more principled and truly generative approach to integrating LLMs into legal practice…(More)“.

Law is vulnerable to AI influence; interface design can help

Paper by Susan Ariel Aaronson: “Following decades of US government-led initiatives to open data and to encourage data flows, it came as a shock to many US trade partners when the Biden administration announced in October 2023 that it would withdraw its support for proposals to encourage cross-border free flow of data being discussed at the World Trade Organization. US policy makers questioned whether supporting these proposals was still in the country’s national interest. The United States followed through on its concerns by issuing executive orders to restrict the sale and transfer of various types of data to China and several other adversary nations. This paper examines how and why the United States became increasingly concerned about the national security risks of cross-border free flow of data and the impact of such restrictions on data…(More)”.

A Difficult Balance: Privacy, National Security and the Free Flow of Data

Article by Stefaan Verhulst: “At the turn of the 20th century, Andrew Carnegie was one of the richest men in the world. He was also one of the most reviled, infamous for the harsh labor conditions and occasional violence at his steel mills. Determined to rehabilitate his reputation, Carnegie embarked upon a number of ambitious philanthropic ventures that would redefine his legacy, and leave a lasting impact on the United States and the world.

Among the most ambitious of these were the Carnegie Libraries. Between 1860 and 1930, Carnegie spent almost $60 million (equivalent to around $2.3 billion today) to build a network of 2,509 libraries globally — 1,689 in the United States and the rest in places as diverse as Australia, Fiji, South Africa, and his native Scotland. Carnegie supported these libraries for a number of reasons: to burnish his own reputation, because he thought they would help support immigrant integration into the US, but most of all because he was “dedicated to the diffusion of knowledge.” For Carnegie, greater knowledge was key to fostering all manner of social goods — everything from a healthier democracy to more innovation and better health. Today, many of those libraries still stand in communities across the country, a testament to the lasting impact of Carnegie’s generosity.

The story of Carnegie’s libraries might seem a happy tale from the past, a quaint period piece. But it has resonance in the present.

Today, we are once again presented with a landscape in which information is both abundant and scarce, offering tremendous potential for the public good yet largely accessible and reusable only to a small (corporate) minority. This paradox stems from the fact that while more and more aspects of our lives are captured in digital form, the resulting data is increasingly locked away, or inaccessible.

The centrality of data to public life is now undeniable, particularly with the rise of generative artificial intelligence, which relies on vast troves of high-quality, diverse, and timely datasets. Yet access to such data is being steadily eroded as governments, corporations, and institutions impose new restrictions on what can be accessed and reused. In some cases, open data portals and official statistics once celebrated as milestones of transparency have been defunded or scaled back, with fewer datasets published and those that remain limited to low-risk, non-sensitive material. At the same time, private platforms that once offered public APIs for research — such as Twitter (now X), Meta and Reddit — have closed or heavily monetized access, cutting off academics, civil society groups, and smaller enterprises from vital resources.

The drivers of this shift are varied but interlinked. The rise of generative AI has triggered what some call “generative AI-nxiety,” prompting news organizations, academic institutions, and other data custodians to block crawlers and restrict even non-sensitive repositories, often in (understandable) reaction to unconsented scraping for commercial model training. This is compounded by a broader research data lockdown, in which critical resources such as social media datasets used to study misinformation, political discourse, or mental health, and open environmental data essential for climate modeling, are increasingly subject to paywalls, restrictive licensing, or geopolitical disputes.

Rising calls for digital sovereignty have also led to a proliferation of data localization laws that prevent cross-border flows, undermining collaborative efforts on urgent global challenges like pandemic preparedness, disaster response, and environmental monitoring. Meanwhile, in the private sector, data is increasingly treated as a proprietary asset to be hoarded or sold, rather than a shared resource that can be stewarded responsibly for mutual benefit.

Indeed, we may be entering a new “data winter,” one marked by the emergence of new silos and gatekeepers and by a relentless — and socially corrosive — erosion of the open, interoperable data infrastructures that once seemed to hold so much promise.

This narrowing of the data commons comes precisely at a moment when global challenges demand greater openness, collaboration, and trust. Left unchecked, it risks stalling scientific breakthroughs, weakening evidence-based policymaking, deepening inequities in access to knowledge, and entrenching power in the hands of a few large actors, reshaping not only innovation but our collective capacity to understand and respond to the world.

A Carnegie commitment to the “diffusion of knowledge”, updated for the digital age, can help avert this dire situation. Building modern data libraries that embed principles of the commons could restore openness while safeguarding privacy and security. Without such action, the promise of our data-rich era may curdle into a new form of information scarcity, with deep and lasting societal costs…(More)”.

Why We Need a Carnegie Moment for the Age of AI

Article by Paula Dupraz-Dobias: “Scrambling for solutions to reduce spending at the United Nations, a plan presented in late July – part of the UN80 reform initiative – referred to the use of artificial intelligence (AI) to cut duplication in reporting across the body’s numerous divisions without providing much detail as to how this may be achieved.

Tech tools involving data sourcing are increasingly being used to improve efficiency in humanitarian response. But partnering with private technology firms may pose risks for aid organisations as they increase their digital engagement.

Six years after the World Food Programme (WFP) announced an agreement with United States tech and data analysis firm Palantir to help streamline its logistics management, sparking a barrage of concerns over data protection, questions about humanitarians working with private companies have again resurfaced.

On Thursday, Amnesty International condemned the use by the US government of tools developed by Palantir and other tech firms to monitor non-citizens at pro-Palestinian demonstrations as well as migrants…Experts have recognised that more needs to be done to define the parameters of partnerships between aid organisations and the private sector to mitigate risks.

“The challenge today is how do you improve the way you make decisions or design services or develop policies that leverage new tools such as data and AI in a systematic, sustainable and responsible way … where no one is left behind,” says Stefaan Verhulst, co-founder of The GovLab, a research centre at New York University focused on how to improve decision-making and leverage data in the public interest.

Organisations, he explains, need to develop a more comprehensive data governance framework. That includes a clearer articulation of the reasons for collecting data and using AI as well as governance principles for the use of data based on human rights.

Local stakeholders, such as potential aid beneficiaries, should also be included in the decision-making, he says, while data should be managed using a life-cycle approach, from collection to processing to analysis and use. Sourcing data from vulnerable populations requires a “social license”, or contract, to use and reuse data, which Verhulst recognised as one of the key elements currently missing…(More)”

Donor crisis prompts rethink on humanitarian data partnership rules

Article by Rida Qadri, Michael Madaio, and Mary L. Gray: “Imagine you are a marketing professional prompting an artificial intelligence (AI) image generator to produce different images of Pakistani urban streetscapes. What if the model, despite the prompting for specificity, produces Orientalist scene after scene of dusty streets, poverty, and chaos—missing important landmarks, social scenes, and the human diversity that makes a Pakistani city unique? This example illustrates a growing concern about the cultural inclusivity of AI systems, which fail to work for global populations and instead reinforce stereotypes that erase swaths of particular populations in AI-generated output.

To address such issues of cultural inclusion in AI, the field has attempted to incorporate cultural knowledge into models through a common tool in its arsenal: datasets. Datasets of, for instance, global values, offensive terms, and cultural artifacts are all attempts to incorporate cultural awareness into models.

But trying to capture culture in datasets is akin to believing you have captured everything important about the world in a map. A map is an abstracted and simplified two-dimensional representation of a multidimensional world. While a valuable tool, using maps effectively requires understanding the limits of their correspondence with the physical world. One must know, for example, how the Mercator projection map, created in the 1500s and adopted in the 1700s as the global standard for navigation, distorted the relative sizes of the continents. Confusing the abstraction for the reality has led to all sorts of trouble. Colonial powers used the Mercator projection maps of the physical world to demarcate social worlds—drawing lines through simplified representations on a map, separating communities and leading to decades of ethnic strife, all to make navigation supposedly more efficient…(More)”.

Confusing the Map for the Territory

Article by Tima Bansal and Julian Birkinshaw: “In this article we’ll look at the strengths and weaknesses of the two dominant approaches that businesses apply to innovation—breakthrough thinking and design thinking—which often produce socially and environmentally dysfunctional outcomes in complex systems. To avoid them, innovators should apply systems thinking, a methodology that has been around for decades but is rarely used today. It addresses the fact that in the modern economy every organization is part of a network of people, products, finances, and data, and changes in one area of the network can have side effects in others. For example, recent attempts by the U.S. government to impose tariffs on foreign imports have had ripple effects on the supply chains of major products like cars and iPhones, whose components are sourced from multiple countries. The tariff plans have also led to a spiral of complex and unpredictable reactions in financial markets.

Systems thinking helps predict and solve problems in dynamic, interconnected environments. It’s especially relevant to innovation for sustainability challenges. Electric vehicles, for example, have attracted a lot of investment, notably in China, because they are seen as a green technology. But their net effect on carbon emissions is highly contingent on how green a country’s power supply is. What’s more, their technology requires raw materials whose processing is highly polluting. Solar panels also look like an environmental silver bullet, but the rapidly growing scale of their manufacturing threatens to produce a tsunami of electronic waste. Truly sustainable technology solutions for environmental challenges require a systems-led approach that explicitly recognizes that the benefits of an innovation in one part of the planet’s ecology may be outweighed by the harm done elsewhere…(More)”.

Why You Need Systems Thinking Now

Article by Toluwani Aliu: “From plotting missing bridges in Rwanda to community championed geospatial initiatives in Nigeria, AI is tackling decades-old local issues…AI-supported platform Darli, which supports 20+ African languages, has given over 110,000 farmers access to advice, logistics and finance…Tailoring AI to underserved areas creates scalable public benefits, fosters equity and offers frameworks for sustainable digital transformation…(More)”.

How AI is powering grassroots solutions for underserved communities

Paper by Enikő Kovács-Szépvölgyi, Dorina Anna Tóth, and Roland Kelemen: “While the digital environment offers new opportunities to realise children’s rights, their right to participation remains insufficiently reflected in digital policy frameworks. This study analyses the right of the child to be heard in the academic literature and in the existing international legal and EU regulatory frameworks. It explores how children’s right to participation is incorporated into EU and national digital policies and examines how genuine engagement can strengthen children’s digital resilience and support their well-being. By applying the 7C model of coping skills and analysing its interaction with the right to participation, the study highlights how these elements mutually reinforce the achievement of the Sustainable Development Goals (SDGs). Through a qualitative analysis of key strategic documents and the relevant policy literature, the research identifies a tension between the formal acknowledgment of children’s right to participate and its practical implementation at law- and policy-making levels within the digital context. Although the European Union’s examined strategies emphasise children’s participation, their practical implementation often remains abstract and fragmented at the state level. While the new BIK+ strategy shows a stronger formal emphasis on child participation, this positive development in policy language has not yet translated into a substantive change in children’s influence at the state level. This nuance highlights that despite a positive trend in policy rhetoric, the essential dimension of genuine influence remains underdeveloped…(More)”. See also: Who Decides What and How Data is Re-Used? Lessons Learned from Youth-Led Co-Design for Responsible Data Reuse in Services

From Voice to Action: Upholding Children’s Right to Participation in Shaping Policies and Laws for Digital Safety and Well-Being
