Anticipatory Governance: Shaping a Responsible Future


Book edited by Melodena Stephens, Raed Awamleh and Frederic Sicre: “Anticipatory Governance is the systemic process of future shaping built on the understanding that the future is not a continuation of the past or present, thus making foresight a complex task requiring the engagement of the whole of government with its constituents in a constructive and iterative manner to achieve collective intelligence. Effective anticipatory governance amplifies the fundamental properties of agile government to build trust, challenge assumptions, and reach consensus. Moreover, anticipatory governance sets the foundation to adapt to exponential change. This seismic shift in the governance environment should lead to urgent rethinking of the ways and means governments and large corporate players formulate strategies, design processes, develop human capital and shape institutional culture to achieve public value.

From a long-term multigenerational perspective, anticipatory governance is a key component to ensure guardrails for the future. Systems thinking is needed to harness our collective intelligence by tapping into knowledge trapped within nations, organizations, and people. Many of the wicked problems governments and corporations are grappling with, such as artificial intelligence applications and ethics, climate change, refugee migration, education for future skills, and health care for all, require a “system of systems”, or anticipatory governance.

Yet, no matter how much we invest in foresight and shaping the future, we still need an agile government approach to manage unintended outcomes and people’s expectations. Crisis management, which begins with listening to weak signals, sensemaking, intelligence management, reputation enhancement, and public value alignment and delivery, is critical. This book dives into the theory and practice of anticipatory governance and sets the agenda for future research…(More)”

Data solidarity: Operationalising public value through a digital tool


Paper by Seliem El-Sayed, Ilona Kickbusch & Barbara Prainsack: “Most data governance frameworks are designed to protect the individuals from whom data originates. However, the impacts of digital practices extend to a broader population and are embedded in significant power asymmetries within and across nations. Further, inequities in digital societies impact everyone, not just those directly involved. Addressing these challenges requires an approach which moves beyond individual data control and is grounded in the values of equity and a just contribution of benefits and risks from data use. Solidarity-based data governance (in short: data solidarity) suggests prioritising data uses over data types and proposes that data uses that generate public value should be actively facilitated, those that generate significant risks and harms should be prohibited or strictly regulated, and those that generate private benefits with little or no public value should be ‘taxed’ so that profits generated by corporate data users are reinvested in the public domain. In the context of global health data governance, the public value generated by data use is crucial. This contribution clarifies the meaning, importance, and potential of public value within data solidarity and outlines methods for its operationalisation through the PLUTO tool, specifically designed to assess the public value of data uses…(More)”.

Kickstarting Collaborative, AI-Ready Datasets in the Life Sciences with Government-funded Projects


Article by Erika DeBenedictis, Ben Andrew & Pete Kelly: “In the age of Artificial Intelligence (AI), large, high-quality datasets are needed to move the field of life science forward. However, the research community lacks strategies to incentivize collaboration on high-quality data acquisition and sharing. The government should fund collaborative roadmapping, certification, collection, and sharing of large, high-quality datasets in life science. In such a system, nonprofit research organizations engage scientific communities to identify key types of data that would be valuable for building predictive models, and define quality control (QC) and open science standards for collection of that data. Projects are designed to develop automated methods for data collection, certify data providers, and facilitate data collection in consultation with researchers throughout various scientific communities. Hosting of the resulting open data is both subsidized and protected by security measures. This system would provide crucial incentives for the life science community to identify and amass large, high-quality open datasets that will immensely benefit researchers…(More)”.

The world of tomorrow


Essay by Virginia Postrel: “When the future arrived, it felt… ordinary. What happened to the glamour of tomorrow?

Progress used to be glamorous. For the first two-thirds of the twentieth century, the terms modern, future, and world of tomorrow shimmered with promise.

Glamour is more than a synonym for fashion or celebrity, although these things can certainly be glamorous. So can a holiday resort, a city, or a career. The military can be glamorous, as can technology, science, or the religious life. It all depends on the audience. Glamour is a form of communication that, like humor, we recognize by its characteristic effect. Something is glamorous when it inspires a sense of projection and longing: if only . . .

Whatever its incarnation, glamour offers a promise of escape and transformation. It focuses deep, often unarticulated longings on an image or idea that makes them feel attainable. Both the longings – for wealth, happiness, security, comfort, recognition, adventure, love, tranquility, freedom, or respect – and the objects that represent them vary from person to person, culture to culture, era to era. In the twentieth century, ‘the future’ was a glamorous concept…

Much has been written about how and why culture and policy repudiated the visions of material progress that animated the first half of the twentieth century, including a special issue of this magazine inspired by J Storrs Hall’s book Where Is My Flying Car? The subtitle of James Pethokoukis’s recent book The Conservative Futurist is ‘How to create the sci-fi world we were promised’. Like Peter Thiel’s famous complaint that ‘we wanted flying cars, instead we got 140 characters’, the phrase captures a sense of betrayal. Today’s techno-optimism is infused with nostalgia for the retro future.

But the most common explanations for the anti-Promethean backlash fall short. It’s true but incomplete to blame the environmental consciousness that spread in the late sixties…

How exactly today’s longings might manifest themselves, whether in glamorous imagery or real-life social evolution, is hard to predict. But one thing is clear: For progress to be appealing, it must offer room for diverse pursuits and identities, permitting communities with different commitments and values to enjoy a landscape of pluralism without devolving into mutually hostile tribes. The ideal of the one best way passed long ago. It was glamorous in its day but glamour is an illusion…(More)”.

The AI tool that can interpret any spreadsheet instantly


Article by Duncan C. McElfresh: “Say you run a hospital and you want to estimate which patients have the highest risk of deterioration so that your staff can prioritize their care. You create a spreadsheet in which there is a row for each patient, and columns for relevant attributes, such as age or blood-oxygen level. The final column records whether the person deteriorated during their stay. You can then fit a mathematical model to these data to estimate an incoming patient’s deterioration risk. This is a classic example of tabular machine learning, a technique that uses tables of data to make inferences. This usually involves developing — and training — a bespoke model for each task. Writing in Nature, Hollmann et al. report a model that can perform tabular machine learning on any data set without being trained specifically to do so.

Tabular machine learning shares a rich history with statistics and data science. Its methods are foundational to modern artificial intelligence (AI) systems, including large language models (LLMs), and its influence cannot be overstated. Indeed, many online experiences are shaped by tabular machine-learning models, which recommend products, generate advertisements and moderate social-media content. Essential industries such as healthcare and finance are also steadily, if cautiously, moving towards increasing their use of AI.

Despite the field’s maturity, Hollmann and colleagues’ advance could be revolutionary. The authors’ contribution is known as a foundation model, which is a general-purpose model that can be used in a range of settings. You might already have encountered foundation models, perhaps unknowingly, through AI tools, such as ChatGPT and Stable Diffusion. These models enable a single tool to offer varied capabilities, including text translation and image generation. So what does a foundation model for tabular machine learning look like?

Let’s return to the hospital example. With spreadsheet in hand, you choose a machine-learning model (such as a neural network) and train the model with your data, using an algorithm that adjusts the model’s parameters to optimize its predictive performance (Fig. 1a). Typically, you would train several such models before selecting one to use — a labour-intensive process that requires considerable time and expertise. And of course, this process must be repeated for each unique task.
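
To make that conventional workflow concrete, here is a minimal sketch of fitting one such bespoke model with scikit-learn, assuming a hypothetical patient table; the file, column names and model choice are illustrative assumptions, not taken from Hollmann and colleagues’ paper.

```python
# Illustrative sketch of conventional tabular machine learning (the Fig. 1a workflow).
# Hypothetical data and column names; not the authors' code or data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# One row per patient, one column per attribute, final column records deterioration.
df = pd.read_csv("patients.csv")  # hypothetical spreadsheet
X = df[["age", "blood_oxygen", "heart_rate"]]
y = df["deteriorated"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a bespoke model for this one task: an optimization algorithm adjusts the
# model's parameters to improve predictive performance on the training data.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# In practice you would train and compare several such models before choosing one.
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```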

Figure 1 | A foundation model for tabular machine learning. a, Conventional machine-learning models are trained on individual data sets using mathematical optimization algorithms. A different model needs to be developed and trained for each task, and for each data set. This practice takes years to learn and requires extensive time and computing resources. b, By contrast, a ‘foundation’ model could be used for any machine-learning task and is pre-trained on the types of data used to train conventional models. This type of model simply reads a data set and can immediately produce inferences about new data points. Hollmann et al. developed a foundation model for tabular machine learning, in which inferences are made on the basis of tables of data. Tabular machine learning is used for tasks as varied as social-media moderation and hospital decision-making, so the authors’ advance is expected to have a profound effect in many areas…(More)”
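
By way of contrast with the sketch above, here is a hedged sketch of how the foundation-model workflow of Fig. 1b is typically exposed to users: a single pre-trained model simply reads the table and predicts, with no task-specific training loop. The package and class names below are assumptions based on the openly released TabPFN interface; the article itself does not prescribe an API.

```python
# Illustrative sketch of the foundation-model workflow (Fig. 1b).
# Assumes a scikit-learn-style interface such as the open-source TabPFN package;
# the import and class name are assumptions, not specified in the article.
from tabpfn import TabPFNClassifier

# Reuse the hypothetical patient table from the previous sketch. There is no
# task-specific training: the model was pre-trained on many tabular prediction
# problems, so "fit" amounts to reading this data set into the model's context.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)

# Immediate inferences for new patients, without the model-selection loop above.
risk_scores = clf.predict_proba(X_test)[:, 1]
```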

Comparative perspectives on the regulation of large language models


Editorial to Special Issue by Cristina Poncibò and Martin Ebers: “Large language models (LLMs) represent one of the most significant technological advancements in recent decades, offering transformative capabilities in natural language processing and content generation. Their development has far-reaching implications across technological, economic and societal domains, simultaneously creating opportunities for innovation and posing profound challenges for governance and regulation. As LLMs become integral to various sectors, from education to healthcare to entertainment, regulators are scrambling to establish frameworks that ensure their safe and ethical use.

Our issue primarily examines the private ordering, regulatory responses and normative frameworks for LLMs from a comparative law perspective, with a particular focus on the European Union (EU), the United States (US) and China. An introductory part explores the technical principles that underpin LLMs, as well as their epistemological foundations. It also addresses key sector-specific legal challenges posed by LLMs, including their implications for criminal law, data protection and copyright law…(More)”.

The Future of Jobs Report 2025


Report by the World Economic Forum: “Technological change, geoeconomic fragmentation, economic uncertainty, demographic shifts and the green transition – individually and in combination – are among the major drivers expected to shape and transform the global labour market by 2030. The Future of Jobs Report 2025 brings together the perspective of over 1,000 leading global employers—collectively representing more than 14 million workers across 22 industry clusters and 55 economies from around the world—to examine how these macrotrends impact jobs and skills, and the workforce transformation strategies employers plan to embark on in response, across the 2025 to 2030 timeframe…(More)”.

The Bridging Dictionary


About: “What if generative AI could help us understand people with opposing views better just by showing how they use common words and phrases differently? That’s the deceptively simple-sounding idea behind a new experiment from MIT’s Center for Constructive Communication (CCC). 

It’s called the Bridging Dictionary (BD), a research prototype that’s still very much a work in progress – one we hope your feedback will help us improve.

The Bridging Dictionary identifies words and phrases that both reflect and contribute to sharply divergent views in our fractured public sphere. That’s the “dictionary” part. If that’s all it did, we could just call it the “Frictionary.” But the large language model (LLM) that undergirds the BD also suggests less polarized alternatives – hence “bridging.” 

In this prototype, research scientist Doug Beeferman and a team at CCC led by Maya Detwiller and Dennis Jen used thousands of transcripts and opinion articles from foxnews.com and msnbc.com as proxies for the conversation on the right and the left. You’ll see the most polarized words and phrases when you sample the BD for yourself, but you can also plug any term of your choosing into the search box. (For a more complete explanation of the methodology behind the BD, see https://bridgingdictionary.org/info/.)…(More)”.
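
The BD’s actual methodology is documented at the link above. Purely to illustrate the underlying idea of surfacing terms that two corpora use very differently, here is a minimal sketch based on a simple smoothed log-ratio of word frequencies; this is not the Bridging Dictionary’s method, and the data and names below are hypothetical.

```python
# Minimal illustrative sketch: score how differently each word is used in two corpora.
# NOT the Bridging Dictionary's methodology; hypothetical data, for illustration only.
import math
from collections import Counter

def word_counts(texts):
    """Count lowercase word tokens across a list of documents."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def usage_score(word, counts_a, counts_b, alpha=0.5):
    # Smoothed log-ratio of relative frequencies: positive values lean toward
    # corpus A, negative toward corpus B, near zero means broadly shared usage.
    freq_a = (counts_a[word] + alpha) / (sum(counts_a.values()) + alpha)
    freq_b = (counts_b[word] + alpha) / (sum(counts_b.values()) + alpha)
    return math.log(freq_a / freq_b)

# Hypothetical stand-ins for the transcript and opinion-article corpora described above.
right_texts = ["... transcript text ..."]
left_texts = ["... transcript text ..."]

counts_r, counts_l = word_counts(right_texts), word_counts(left_texts)
vocab = set(counts_r) | set(counts_l)
scores = {w: usage_score(w, counts_r, counts_l) for w in vocab}

# Terms with the largest absolute scores are candidates for "divergent" usage.
most_divergent = sorted(vocab, key=lambda w: abs(scores[w]), reverse=True)[:20]
print(most_divergent)
```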

The People Say


About: “The People Say is an online research hub that features first-hand insights from older adults and caregivers on the issues most important to them, as well as feedback from experts on policies affecting older adults. 

This project particularly focuses on the experiences of communities often under-consulted in policymaking, including older adults of color, those who are low income, and/or those who live in rural areas where healthcare isn’t easily accessible. The People Say is funded by The SCAN Foundation and developed by researchers and designers at the Public Policy Lab.

We believe that effective policymaking listens to most-affected communities—but policies and systems that serve older adults are typically formed with little to no input from older adults themselves. We hope The People Say will help policymakers hear the voices of older adults when shaping policy…(More)”

Government reform starts with data, evidence


Article by Kshemendra Paul: “It’s time to strengthen the use of data, evidence and transparency to stop driving with mud on the windshield and to steer the government toward improving management of its programs and operations.

Existing Government Accountability Office and agency inspectors general reports identify thousands of specific evidence-based recommendations to improve efficiency, economy and effectiveness, and reduce fraud, waste and abuse. Many of these recommendations aim at program design and requirements, highlighting specific instances of overlap, redundancy and duplication. Others describe inadequate internal controls to balance program integrity with the experience of the customer, contractor or grantee. While progress is being reported in part due to stronger partnerships with IGs, much remains to be done. Indeed, GAO’s 2023 High Risk List, which it has produced going back to 1990, shows surprisingly slow progress of efforts to reduce risk to government programs and operations.

Here are a few examples:

  • GAO estimates recent annual fraud of between $233 billion and $521 billion, or about 3% to 7% of federal spending. On the other hand, identified fraud with high-risk Recovery Act spending was held under 1% using data, transparency and partnerships with Offices of Inspectors General.
  • GAO and IGs have collectively identified hundreds of billions in potential cost savings or improvements not yet addressed by federal agencies.
  • GAO has recently described shortcomings with the government’s efforts to build evidence. While federal policymakers need good information to inform their decisions, the Commission on Evidence-Based Policymaking previously said, “too little evidence is produced to meet this need.”

One of the main reasons for agency sluggishness is the lack of agency and governmentwide use of synchronized, authoritative and shared data to support how the government manages itself.

For example, the Energy Department IG found that, “[t]he department often lacks the data necessary to make critical decisions, evaluate and effectively manage risks, or gain visibility into program results.” It is past time for the government to commit to moving away from its widespread use of data calls, the error-prone, costly and manual aggregation of data used to support policy analysis and decision-making. Efforts to embrace data-informed approaches to manage government programs and operations are stymied by a lack of basic agency and governmentwide data hygiene. While bright pockets exist, management gaps, as DOE OIG stated, “create blind spots in the universe of data that, if captured, could be used to more efficiently identify, track and respond to risks…”

The proposed approach starts with current agency operating models, then drives into management process integration to tackle root causes of dysfunction from the bottom up. It recognizes that inefficiency, fraud and other challenges are diffused, deeply embedded and have non-obvious interrelationships within the federal complex…(More)”