
Stefaan Verhulst

A guide for policy makers by the OECD: “This working paper examines how horizon scanning can help governments anticipate and respond to emerging and converging technologies. Drawing on 129 international exercises (2020–2025), it highlights the diversity of practices, from government units to international initiatives, and explores methodological advances, including AI-enabled analytics. The paper identifies key technologies such as advanced materials, quantum technologies and bioengineering, and their policy implications for security, resilience and sustainability. It also highlights challenges in interpreting early signals and the need for robust standards and international collaboration to ensure horizon scanning effectively informs STI policy and decision-making…(More)”.

Building capacity in technology horizon scanning

Article by Nikolaj Moesgaard & Güliz Berfin Koldaş: “Addressing complex social challenges at scale requires strong, well-connected networks that can coordinate action, share learning, and adapt as conditions change. Whether in philanthropy, social investment, member alliances, or regional platforms, networks play a vital role in mobilizing resources, surfacing innovation, and supporting solutions across diverse contexts.

Yet a fundamental challenge persists: Many networks lack clear, current visibility into who their members are, what they do, and how their efforts align. Reliance on outdated directories, infrequent surveys, or anecdotal knowledge limits collaboration, progress tracking, and access to relevant opportunities. These gaps are often most acute in fast-changing or under-resourced environments, where information is fragmented or rarely updated.

In response, a growing range of networks—including grantee communities, professional alliances, funder collaboratives, and industry-wide partnerships focused on shared social or environmental goals—are beginning to adopt more dynamic, data-informed approaches. Tools such as AI-enabled analysis, automated research, and real-time feedback mechanisms are helping these groups replace static records with living, evolving views of network activity tailored to their unique data environments, geographies, and organizational types…(More)”.

Supercharging Network Intelligence

Book by Burcu Baykurt: “…provides a rich ethnographic investigation into how smartness is received and negotiated in a midsize US city. Burcu Baykurt follows the work of civic entrepreneurs, local residents, and city officials in Kansas City, Missouri, where Google tested a citywide gigabit service and the local government launched a series of smart city pilot projects in transportation, public housing, and municipal services. Baykurt redefines smartness as a collective effort to spotlight a city’s enduring local problems and align solutions with the often buggy, partially developed systems offered by tech companies. She shows that success in matching civic concerns with flawed tech systems is hard-won and ambiguous, and that the techniques of data capitalism extract value from urban inequalities rather than solve them…(More)”.

Smart as a City: The Politics of Test-Bed Urbanism

Blog by Tosin Gbadegesin, Namita Datta and Manjula K. Nettikumara: “Gender data has come a long way. We can now compare women’s labor force participation across countries, track legal reforms that affect women’s work, and see how many women sit on boards or work in certain firms. Yet the uncomfortable truth is this: we are measuring women in the system far better than we are measuring whether the system is working for women.

A recently published International Finance Corporation (IFC) report, Closing the Gap: A Private Sector Data Outlook on Women’s Economic Opportunities, highlights both the progress and the problem. The report maps where gender data exists and where it falls short. The bigger message, however, is not just that we need more data. It’s that we need better measurement, one that shifts from compliance to performance, from snapshots to trajectories, and from averages to lived realities.

The gender data paradox: more indicators, less clarity

The assessment identified 20 data sources that meet minimum thresholds for transparency, comparability, and relevance to private-sector decision making. These sources provide 429 indicators, covering an average of 129 countries over roughly 18 years. That sounds like abundance.

But here’s the paradox: as the number of indicators grows, actionable clarity does not always improve. Why? Because availability is not the same as adequacy. Many indicators tell us what exists (a policy, a disclosure, a headcount), but not what changes (women’s outcomes, mobility, safety, productivity, or constraints over time and the underlying social norms that shape these outcomes). And the areas that matter most for modern businesses, including supply chains, digital access, care systems, and gender-based violence affecting women’s ability to work, remain among the least measured…(More)”.

From data gaps to action: measuring women’s economic opportunities better

Article by Shannon Dosemagen and Gwen Ottinger: “…maintaining healthy communities requires information. To respond effectively to accidents and releases, residents and emergency responders need real-time measurements of hazardous chemicals in the air. Residents who observe soot or dust blanketing their communities need information about its chemical composition to share with their health providers. Closing the refinery in Benicia requires still more information, so the city can understand the levels of toxins left in the soil and the risks of further exposures from clean-up processes. Faced with the choice between the razing of an oil refinery and its conversion to a renewables facility, communities should be able to compare the status quo with expected emissions and safety risks for multiple future scenarios.

Creating the kind of knowledge base necessary for such consequential decisions would require long-term coordination across the many communities affected by energy infrastructure. Places like Benicia, Martinez, and Rodeo would need a place to store data about pollution before, during, and after major changes at nearby energy facilities. They would need to have a way of sharing their data and analyses with other similarly situated communities if they chose to do so, and they would need to be able to access data and analyses from other communities just as easily. Academic and nonprofit researchers with a bird’s eye view of the issues could also enhance knowledge infrastructures if they had access to data shared by communities and a way not only to disseminate their findings, but to share their methodologies for communities to adapt and deploy.

Existing data infrastructures can’t support this kind of collective learning about environmental issues. Both the technical and governance aspects of the infrastructure would need significant upgrades, and the customary models for funding science in the United States don’t offer the kinds of investment that would be necessary. Funding is typically structured around short grant cycles and discrete deliverables, making it difficult to support the long-term, shared stewardship that this infrastructure requires. Addressing these hurdles could enable creation of a robust environmental knowledge commons maintained by a plethora of users and contributors. Such a commons could ensure the continued capacity to generate new insights about the impacts of pollution and environmental change, forming a durable basis for evidence-informed public policy, whether or not the federal government continues to support environmental science. An environmental knowledge commons could, moreover, offer a model for ongoing advancement in other fields of science where traditional funding models have become precarious, even as their knowledge remains essential to public well-being…(More)”.

Constructing a New Knowledge Infrastructure

Paper by Mattia Mazzoli et al: “The COVID-19 pandemic served as an important test case of complementing traditional public health data with nontraditional data, such as mobility traces, social media activity, and wearable data, to inform real-time decision-making. Drawing on an expert workshop and a targeted survey of epidemic modelers in Europe, this study assesses the promise and the persistent limitations of such data in pandemic preparedness and response. We distinguish between “first-mile” challenges (obstacles to accessing and harmonizing data) and “last-mile” challenges (difficulties in translating insights into actionable policy interventions). The expert workshop, convened in March 2024 in Brussels, brought together 50 participants, including public health professionals, data scientists, policymakers, and industry leaders, to reflect on lessons learned and define strategies for better integration of nontraditional data into epidemic modeling and policymaking. The accompanying survey, gathering experiences from 29 modelers, offers empirical evidence of the barriers faced by modelers during the COVID-19 pandemic and highlights areas where key data were unavailable or underused. The experiences collected through the survey and workshop resulted in ten key actions and three overarching recommendations for public entities, data providers, and stakeholders. Our findings reveal ongoing issues with data access, quality, and interoperability, as well as institutional and cognitive barriers to evidence-based decision-making. Approximately 66% of all datasets had at least one access problem, and reluctance to share nontraditional data was double that for traditional data (30% vs. 15%). Only 10% of respondents reported that they could use all the data they needed. These limitations included issues related to the timeliness and granularity of data, as well as issues with linkage, comparability, and biases.
To overcome these hurdles, we propose a set of enabling mechanisms, including data inventories, standardization protocols, simulation exercises, data stewardship roles, and data collaboratives. For first-mile challenges, solutions focus on technical and legal frameworks for data access. For last-mile challenges, we recommend fusion centers, decision accelerator laboratories, and networks of scientific ambassadors to bridge the gap between analysis and action. We argue that realizing the full value of nontraditional data requires a sustained investment in institutional readiness, cross-sectoral collaboration, and a shift toward a culture of data solidarity. Grounded in the lessons of the COVID-19 pandemic, the study can be used to design a roadmap for using nontraditional data to confront a broader array of public health emergencies, from climate shocks to humanitarian crises…(More)”.

Non-Traditional Data in Pandemic Preparedness and Response: Identifying and Addressing First- and Last-Mile Challenges

Report by Bronwyn Carlson and Tamika Worrell: “Artificial intelligence is increasingly embedded in everyday life in Australia, shaping communication, services, and relationships. This report presents findings from the Relational Futures project, an Indigenous-led study examining how Aboriginal and Torres Strait Islander peoples are encountering and responding to AI, including generative systems, automated decision-making tools, and AI companions. The research draws on a mixed-methods approach combining an online survey with 36 respondents and yarning circles with 22 participants, providing both broad and in-depth insight into Indigenous experiences of AI across community and professional contexts. This report presents the initial findings as the project continues…(More)”.

Relational futures: Indigenous sovereignty and the governance of artificial intelligence (AI)

Book edited by Daryl Lim and Peter K. Yu: “As artificial intelligence and big data analytics reshape economies and societies, the promise of innovation is increasingly shadowed by concerns over inclusion, equity, and global justice. This accessible, interdisciplinary volume brings together established and emerging voices from across the world to critically examine issues lying at the intersection of innovation, intellectual property, and inequality in the age of artificial intelligence and big data. Featuring empirical studies, legal analyses, policy critiques, interdisciplinary perspectives, and global insights, Inclusive Innovation in the Age of AI and Big Data underscores the tremendous impact gender, race, and other socioeconomic factors have on innovation and intellectual property ecosystems. This volume also explores structural barriers in these ecosystems, diversity initiatives in the patent area, metrics for measuring inclusivity and diversity in innovation, changes brought about by artificial intelligence and big data, and the evolution of the global innovation and intellectual property systems. In an era marked by rapid technological change, extraordinary opportunities, and deepening inequality, this volume offers carefully designed reform strategies and policy recommendations to make innovation and intellectual property ecosystems more equitable, effective, and socially responsive…(More)”.

Inclusive Innovation in the Age of AI and Big Data

Book by Carissa Véliz: “For thousands of years, oracles, seers, and astrologers advised leaders and commoners alike about the future. But predictions are often power plays in disguise, obfuscating accountability and stripping individuals of their agency. Today we face the same threat of powerful prophets but under a new facade: tech.

Not only do modern predictions made by tech companies advise on war, industry, and marriages, but artificial intelligence also now determines whether we can get a loan, a job, an apartment, or an organ transplant. And when we cede ground to these predictions, we lose control of our own lives.

Drawing on history’s cautionary tales and modern-day tech companies’ malfeasance—from surveillance and biased algorithms to a startling lack of accountability—Carissa Véliz demonstrates that big tech’s prophecies are just as shallow, dangerous, and unjust as their ancient counterparts’. What she uncovers in the process is chilling. Artificial intelligence is increasing risk in business and society while creating a false sense of security. In this incisive, witty, and bracingly original book, Véliz contends that the main promise of prediction is not knowledge of the future but domination over others. Powerful people use predictions to determine our future. Prophecy is an invitation to defy those orders and live life on our own terms…(More)”.

Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI

Article by Stefaan G. Verhulst and Roshni Singh: “Artificial intelligence systems are increasingly designed to remember us. Whether answering a question, drafting an email, or recommending a course of action, modern AI systems draw on accumulated knowledge about a user’s preferences, behaviors, goals, and past interactions to function effectively. This capacity for context — persistent memory about who we are and what we do — is not a secondary feature. It is foundational to how these systems generate value.

But context in AI is more than a technical convenience or feature. It is also a source of risk. The accumulation and reuse of personal information introduces privacy vulnerabilities, particularly when data from different domains is aggregated into a single, unified memory. This is, of course, not a new concern: as Helen Nissenbaum argued in Privacy in Context, privacy depends on maintaining appropriate information flows within specific social contexts, and risks emerge when those boundaries are collapsed. What AI changes is the scale, speed, and inferential power of such aggregation, turning what were once discrete data linkages into continuous, dynamic systems capable of generating new insights, predictions, and vulnerabilities far beyond the original contexts in which the data was produced.

And the persistence of context raises deeper questions about cognitive dependence: when AI systems continuously shape the informational environment in which users think, they do not merely respond to us but influence how we understand ourselves and make decisions. In doing so, they risk constraining what we have described as digital self-determination: the ability of individuals and communities to meaningfully shape the conditions under which their data is (re)used and how it, in turn, shapes them — shifting agency from the user to the system in often opaque and difficult-to-contest ways.

These risks are not limited to one category of AI. They apply across AI systems that store and reuse user data — from large language models and recommendation engines to agentic systems that act autonomously on a user’s behalf. What this article examines is not a particular technology, but a structural feature common to many: the use of context as memory, and the tradeoffs that follow.

Context is often treated as the accumulation of user data, but this framing is incomplete. Context is better understood as the relational structure that gives information meaning by situating it within social, temporal, and functional relationships. It is not simply what is stored, but how information is organized, linked, and interpreted within a given frame. Without these relationships, data may remain present but lose meaning or be misapplied across situations. As Jessica Talisman further elaborates, this spectrum runs from statistical proximity to formal logical commitment; AI systems that conflate these distinct levels of relational strength risk treating correlation as meaning.

In what follows, we draw on emerging writing on AI memory, context, and human-AI interaction to explore three interconnected dimensions of this problem. First, we examine why context matters so much for AI performance, and why it is better understood as a relational structure than as simple data storage. Second, we analyze the privacy risks that arise when contextual boundaries collapse. Third, we consider the cognitive risks of persistent memory: the possibility that AI systems come to shape not only what users do, but how they think. Across these dimensions, we also consider the implications for digital self-determination — that is, the extent to which individuals and communities retain meaningful agency over how they are represented, interpreted, and acted upon within context-aware AI systems. These concerns are especially acute for children and young users, for whom both data exposure and cognitive development are at stake…(More)”.

The Context Loop: How AI Remembers Us, and Shapes Digital Self-Determination
