Article by Stefaan Verhulst: “At the turn of the 20th century, Andrew Carnegie was one of the richest men in the world. He was also one of the most reviled, infamous for the harsh labor conditions and occasional violence at his steel mills. Determined to rehabilitate his reputation, Carnegie embarked upon a number of ambitious philanthropic ventures that would redefine his legacy and leave a lasting impact on the United States and the world.
Among the most ambitious of these were the Carnegie Libraries. Between 1883 and 1929, Carnegie spent almost $60 million (equivalent to around $2.3 billion today) to build a network of 2,509 libraries globally — 1,689 in the United States and the rest in places as diverse as Australia, Fiji, South Africa, and his native Scotland. Carnegie supported these libraries for a number of reasons: partly to burnish his own reputation, partly because he thought they would help support immigrant integration into the US, but most of all because he was “dedicated to the diffusion of knowledge.” For Carnegie, greater knowledge was key to fostering all manner of social goods — everything from a healthier democracy to more innovation and better health. Today, many of those libraries still stand in communities across the country, a testament to the lasting impact of Carnegie’s generosity.
The story of Carnegie’s libraries might seem a happy tale from the past, a quaint period piece. But it has resonance in the present.
Today, we are once again presented with a landscape in which information is both abundant and scarce, offering tremendous potential for the public good yet accessible and reusable largely only to a small (corporate) minority. This paradox stems from the fact that while more and more aspects of our lives are captured in digital form, the resulting data is increasingly locked away or otherwise inaccessible.
The centrality of data to public life is now undeniable, particularly with the rise of generative artificial intelligence, which relies on vast troves of high-quality, diverse, and timely datasets. Yet access to such data is being steadily eroded as governments, corporations, and institutions impose new restrictions on what can be accessed and reused. In some cases, open data portals and official statistics once celebrated as milestones of transparency have been defunded or scaled back, with fewer datasets published and those that remain limited to low-risk, non-sensitive material. At the same time, private platforms that once offered public APIs for research — such as Twitter (now X), Meta, and Reddit — have closed or heavily monetized access, cutting off academics, civil society groups, and smaller enterprises from vital resources.
The drivers of this shift are varied but interlinked. The rise of generative AI has triggered what some call “generative AI-nxiety,” prompting news organizations, academic institutions, and other data custodians to block crawlers and restrict even non-sensitive repositories, often in (understandable) reaction to unconsented scraping for commercial model training. This is compounded by a broader research data lockdown, in which critical resources such as social media datasets used to study misinformation, political discourse, or mental health, and open environmental data essential for climate modeling, are increasingly subject to paywalls, restrictive licensing, or geopolitical disputes.
Rising calls for digital sovereignty have also led to a proliferation of data localization laws that prevent cross-border flows, undermining collaborative efforts on urgent global challenges like pandemic preparedness, disaster response, and environmental monitoring. Meanwhile, in the private sector, data is increasingly treated as a proprietary asset to be hoarded or sold, rather than a shared resource that can be stewarded responsibly for mutual benefit.
Indeed, we may be entering a new “data winter,” one marked by the emergence of new silos and gatekeepers and by a relentless — and socially corrosive — erosion of the open, interoperable data infrastructures that once seemed to hold so much promise.
This narrowing of the data commons comes precisely at a moment when global challenges demand greater openness, collaboration, and trust. Left unchecked, it risks stalling scientific breakthroughs, weakening evidence-based policymaking, deepening inequities in access to knowledge, and entrenching power in the hands of a few large actors, reshaping not only innovation but our collective capacity to understand and respond to the world.
A Carnegie commitment to the “diffusion of knowledge,” updated for the digital age, can help avert this dire situation. Building modern data libraries that embed the principles of the commons could restore openness while safeguarding privacy and security. Without such action, the promise of our data-rich era may curdle into a new form of information scarcity, with deep and lasting societal costs…(More)”.
Article by Paula Dupraz-Dobias: “As the United Nations scrambles for solutions to reduce spending, a plan presented in late July – part of the UN80 reform initiative – referred to the use of artificial intelligence (AI) to cut duplication in reporting across the body’s numerous divisions, without providing much detail as to how this might be achieved.
Tech tools involving data sourcing are increasingly being used to improve efficiency in humanitarian response. But partnering with private technology firms may pose risks for aid organisations as they increase their digital engagement.
Six years after the World Food Programme (WFP) announced an agreement with United States tech and data analysis firm Palantir to help streamline its logistics management, sparking a barrage of concerns over data protection, questions about humanitarians working with private companies have again resurfaced.
On Thursday, Amnesty International condemned the use by the US government of tools developed by Palantir and other tech firms to monitor non-citizens at pro-Palestinian demonstrations as well as migrants…Experts have recognised that more needs to be done to define the parameters of partnerships between aid organisations and the private sector to mitigate risks.
“The challenge today is how do you improve the way you make decisions or design services or develop policies that leverage new tools such as data and AI in a systematic, sustainable and responsible way … where no one is left behind,” says Stefaan Verhulst, co-founder of The GovLab, a research centre at New York University focused on improving decision-making and leveraging data in the public interest.
Organisations, he explains, need to develop a more comprehensive data governance framework. That includes a clearer articulation of the reasons for collecting data and using AI as well as governance principles for the use of data based on human rights.
Local stakeholders, such as potential aid beneficiaries, should also be included in the decision-making, he says, while data should be managed using a life-cycle approach – from collection to processing to analysis and use. Sourcing data from vulnerable populations requires a “social license,” or contract, to use and reuse that data, which Verhulst recognised is currently one of the key missing elements…(More)”
Article by Rida Qadri, Michael Madaio, and Mary L. Gray: “Imagine you are a marketing professional prompting an artificial intelligence (AI) image generator to produce different images of Pakistani urban streetscapes. What if the model, despite the prompting for specificity, produces Orientalist scene after scene of dusty streets, poverty, and chaos—missing important landmarks, social scenes, and the human diversity that makes a Pakistani city unique? This example illustrates a growing concern about the cultural inclusivity of AI systems: rather than working for global populations, they reinforce stereotypes that erase swaths of particular populations from AI-generated output.
To address such issues of cultural inclusion in AI, the field has attempted to incorporate cultural knowledge into models through a common tool in its arsenal: datasets. Datasets of, for instance, global values, offensive terms, and cultural artifacts are all attempts to incorporate cultural awareness into models.
But trying to capture culture in datasets is akin to believing you have captured everything important about the world in a map. A map is an abstracted and simplified two-dimensional representation of a multidimensional world. While a valuable tool, using maps effectively requires understanding the limits of their correspondence with the physical world. One must know, for example, how the Mercator projection map, created in the 1500s and adopted in the 1700s as the global standard for navigation, distorted the relative sizes of the continents. Mistaking the abstraction for the reality has led to all sorts of trouble. Colonial powers used the Mercator projection maps of the physical world to demarcate social worlds—drawing lines through simplified representations on a map, separating communities and leading to decades of ethnic strife, all to make navigation supposedly more efficient…(More)”.
Article by Tima Bansal and Julian Birkinshaw: “In this article we’ll look at the strengths and weaknesses of the two dominant approaches that businesses apply to innovation—breakthrough thinking and design thinking—which often produce socially and environmentally dysfunctional outcomes in complex systems. To avoid them, innovators should apply systems thinking, a methodology that has been around for decades but is rarely used today. It addresses the fact that in the modern economy every organization is part of a network of people, products, finances, and data, and changes in one area of the network can have side effects in others. For example, recent attempts by the U.S. government to impose tariffs on foreign imports have had ripple effects on the supply chains of major products like cars and iPhones, whose components are sourced from multiple countries. The tariff plans have also led to a spiral of complex and unpredictable reactions in financial markets.
Systems thinking helps predict and solve problems in dynamic, interconnected environments. It’s especially relevant to innovation for sustainability challenges. Electric vehicles, for example, have attracted a lot of investment, notably in China, because they are seen as a green technology. But their net effect on carbon emissions is highly contingent on how green a country’s power supply is. What’s more, their technology requires raw materials whose processing is highly polluting. Solar panels also look like an environmental silver bullet, but the rapidly growing scale of their manufacturing threatens to produce a tsunami of electronic waste. Truly sustainable technology solutions for environmental challenges require a systems-led approach that explicitly recognizes that the benefits of an innovation in one part of the planet’s ecology may be outweighed by the harm done elsewhere…(More)”.
Article by Toluwani Aliu: “From plotting missing bridges in Rwanda to community championed geospatial initiatives in Nigeria, AI is tackling decades-old local issues…AI-supported platform Darli, which supports 20+ African languages, has given over 110,000 farmers access to advice, logistics and finance…Tailoring AI to underserved areas creates scalable public benefits, fosters equity and offers frameworks for sustainable digital transformation…(More)”.
Paper by Enikő Kovács-Szépvölgyi, Dorina Anna Tóth, and Roland Kelemen: “While the digital environment offers new opportunities to realise children’s rights, their right to participation remains insufficiently reflected in digital policy frameworks. This study analyses the right of the child to be heard in the academic literature and in the existing international legal and EU regulatory frameworks. It explores how children’s right to participation is incorporated into EU and national digital policies and examines how genuine engagement can strengthen children’s digital resilience and support their well-being. By applying the 7C model of coping skills and analysing its interaction with the right to participation, the study highlights how these elements mutually reinforce the achievement of the Sustainable Development Goals (SDGs). Through a qualitative analysis of key strategic documents and the relevant policy literature, the research identifies the tension between the formal acknowledgment of children’s right to participate and its practical implementation at law- and policy-making levels within the digital context. Although the European Union’s examined strategies emphasise children’s participation, their practical implementation often remains abstract and fragmented at the state level. While the new BIK+ strategy shows a stronger formal emphasis on child participation, this positive development in policy language has not yet translated into a substantive change in children’s influence at the state level. This nuance highlights that despite a positive trend in policy rhetoric, the essential dimension of genuine influence remains underdeveloped…(More)”. See also: Who Decides What and How Data is Re-Used? Lessons Learned from Youth-Led Co-Design for Responsible Data Reuse in Services
Article by Henry Farrell: “Dan Wang’s new book, Breakneck: China’s Quest to Engineer the Future, came out last week…The biggest lesson I took from Breakneck was not about China, or the U.S., but the importance of “process knowledge.” That is not a concept that features much in the existing debates about trans-Pacific geopolitics, nor discussions about what America ought to do to revitalize its economy. Dan makes a very strong case that it should.
As I’ve said twice, I’m biased. I’m fascinated by process knowledge and manufacturing because I spent a chunk of the late 1990s talking to manufacturers in Bologna and Baden-Württemberg for my Ph.D. dissertation.
I was carrying out research in the twilight of a long period of interest in so-called “industrial districts,” small localized regions with lots of small firms engaged in a particular sector of the economy. Paul Krugman’s Geography and Trade (maybe my favorite of his books) talks about some of the economic theory behind this form of concentrated production; economic sociologists and economic geographers had their own arguments. Economists, sociologists, and geographers all emphasized the crucial importance of local diffuse knowledge about how to do things in making these economies successful. Such knowledge was in part the product of market interactions, but it wasn’t itself a commodity that could be bought and sold. It was more often tacit: a sense of how to do things, and who best to talk to, which could not easily be articulated. The sociologists were particularly interested in the informal institutions, norms and social practices that held this together. They identified different patterns of local institutional development, which the Communist party in Emilia-Romagna and Tuscany, and the Christian Democrats in the Veneto and Marche, had built on to foster vibrant local economies.
I was interested in Bologna because it had a heavy concentration of small manufacturers of packaging machinery, which could be compared, if you squinted a little, with the bigger and more famous cluster of engineering firms around Stuttgart in Germany. There were myriad small companies in the unlovely industrial outskirts of Bologna, each with its own particular line of products. Most of these companies had been founded by people who had apprenticed and worked for someone else, spotted their own opportunity to iterate on their know-how, and gone independent…(More)”.
Paper by Melisa Basol: “Artificial intelligence (AI) is reshaping digital autonomy. This article examines AI-driven manipulation, exploring its mechanisms, ethical challenges, and potential safeguards. AI systems are increasingly integrated into personal, social, and political domains, shaping decision-making processes. While these systems are often framed as neutral tools, AI systems can manipulate on three levels: (1) structural manipulation, where AI systems shape decisions through design choices like ranking algorithms and engagement-driven models; (2) exploitation by external actors, where AI is leveraged to amplify the spread of harmful falsehoods, automated deception, and personalized manipulation attempts; and (3) emergent manipulation, where AI systems may exhibit unexpected or autonomous influence over users, even in the absence of human intent. The article underscores the lack of a cohesive framework for defining and regulating AI manipulation, complicating efforts to mitigate its risks. To counteract these risks, this article draws on psychological theories of manipulation, persuasion, and social influence while examining the power asymmetries, systemic biases, and epistemic instability that complicate efforts to safeguard digital agency in the age of generative AI. Additionally, this article proposes a tiered intervention approach at three levels: (1) capability-level safeguards; (2) human-interaction interventions; and (3) systemic governance frameworks. Ultimately, this article challenges the prevailing narrative of an AI ‘revolution’, arguing that AI does not inherently democratize knowledge or expand autonomy but instead risks consolidating control under the guise of progress. Effective governance must move beyond transparency towards proactive regulatory mechanisms that prevent manipulation, curb power asymmetries, and protect human agency…(More)”.
Paper by Margot E. Kaminski and Gianclaudio Malgieri: “Privacy law has long centered on the individual. But we observe a meaningful shift toward group harm and rights. There is growing recognition that data-driven practices, including the development and use of artificial intelligence (AI) systems, affect not just atomized individuals but also their neighborhoods and communities, including and especially situationally vulnerable and historically marginalized groups.
This Article explores a recent shift in both data privacy law and the newly developing law of AI: a turn towards stakeholder participation in the governance of AI and data systems, specifically by impacted groups often though not always representing historically marginalized communities. In this Article we chart this development across an array of recent laws in both the United States and the European Union. We explain reasons for the turn, both theoretical and practical. We then offer analysis of the legal scaffolding of impacted stakeholder participation, establishing a catalog of both existing and possible interventions. We close with a call for reframing impacted stakeholders as rights-holders, and for recognizing several variations on a group right to contest AI systems, among other collective means of leveraging and invoking rights individuals have already been afforded…(More)”.
Article by Rainer Kattel: “Europe today faces no shortage of crises. From climate breakdown and geopolitical instability to social fragmentation and digital disruption, the continent is being reshaped by forces that defy easy policy responses. In this increasingly turbulent landscape, innovation is no longer a technocratic pursuit confined to boosting productivity or improving competitiveness, as the consensus of the early twenty-first century prescribed. It has become a political, economic and institutional necessity—one that demands not only new ideas but new ways of organising the public institutions that can turn those ideas into systemic change.
At the heart of this challenge lie innovation agencies. Long the workhorses of science, technology and industrial policy, these agencies—often semi-autonomous and operating at arm’s length from ministries—have traditionally focused on supporting firms, facilitating research and distributing grants. Yet over the past decade, their mandates have expanded dramatically. Now tasked with delivering missions such as decarbonising mobility, transforming food systems or building digital sovereignty, innovation agencies are being asked to operate not just as funders or intermediaries but as architects of change across complex socio-technical systems.
This shift is long overdue. In theory, Europe has embraced the logic of mission-oriented innovation. Horizon Europe, the EU’s flagship research programme, commits over €50 billion to grand societal challenges. The idea is straightforward: set ambitious, shared goals and allow national and regional actors to develop locally appropriate solutions.
But the reality on the ground is starkly different. At the EU level, implementation remains locked in rigid frameworks of compliance and administrative oversight. At the local and regional level, innovation flourishes—labs, pilots and experiments abound—but rarely scales beyond the project phase. The result is a peculiar imbalance: too much stability at the top, too much agility at the bottom and too little capacity in the middle…(More)”.