Book edited by Kostina Prifti, Esra Demir, Julia Krämer, Klaus Heine, and Evert Stamhuis: “This book explores the structure and frameworks of digital governance, focusing on various regulatory patterns, with the aim of tackling the disruptive impact of artificial intelligence (AI) technologies. Addressing the challenges posed by AI technologies, it explores potential avenues for crafting legal remedies and solutions, spanning the liability of AI, platform governance, and the implications for data protection and privacy…(More)”.
Anticipatory Governance: Shaping a Responsible Future
Book edited by Melodena Stephens, Raed Awamleh and Frederic Sicre: “Anticipatory Governance is the systemic process of future shaping built on the understanding that the future is not a continuation of the past or present, thus making foresight a complex task requiring the engagement of the whole of government with its constituents in a constructive and iterative manner to achieve collective intelligence. Effective anticipatory governance amplifies the fundamental properties of agile government to build trust, challenge assumptions, and reach consensus. Moreover, anticipatory governance sets the foundation to adapt to exponential change. This seismic shift in the governance environment should lead to urgent rethinking of the ways and means governments and large corporate players formulate strategies, design processes, develop human capital and shape institutional culture to achieve public value.
From a long-term, multigenerational perspective, anticipatory governance is a key component in ensuring guardrails for the future. Systems thinking is needed to harness our collective intelligence, by tapping into knowledge trapped within nations, organizations, and people. Many of the wicked problems governments and corporations are grappling with, such as artificial intelligence applications and ethics, climate change, refugee migration, education for future skills, and health care for all, require a “system of systems”, or anticipatory governance.
Yet, no matter how much we invest in foresight and shaping the future, we still need an agile government approach to manage unintended outcomes and people’s expectations. Crisis management, which begins with listening to weak signals and extends through sensemaking, intelligence management, reputation enhancement, and public-value alignment and delivery, is critical. This book dives into the theory and practice of anticipatory governance and sets the agenda for future research…(More)”.
The world of tomorrow
Essay by Virginia Postrel: “When the future arrived, it felt… ordinary. What happened to the glamour of tomorrow?
Progress used to be glamorous. For the first two-thirds of the twentieth century, the terms modern, future, and world of tomorrow shimmered with promise.
Glamour is more than a synonym for fashion or celebrity, although these things can certainly be glamorous. So can a holiday resort, a city, or a career. The military can be glamorous, as can technology, science, or the religious life. It all depends on the audience. Glamour is a form of communication that, like humor, we recognize by its characteristic effect. Something is glamorous when it inspires a sense of projection and longing: if only . . .
Whatever its incarnation, glamour offers a promise of escape and transformation. It focuses deep, often unarticulated longings on an image or idea that makes them feel attainable. Both the longings – for wealth, happiness, security, comfort, recognition, adventure, love, tranquility, freedom, or respect – and the objects that represent them vary from person to person, culture to culture, era to era. In the twentieth century, ‘the future’ was a glamorous concept…
Much has been written about how and why culture and policy repudiated the visions of material progress that animated the first half of the twentieth century, including a special issue of this magazine inspired by J. Storrs Hall’s book Where Is My Flying Car? The subtitle of James Pethokoukis’s recent book The Conservative Futurist is ‘How to create the sci-fi world we were promised’. Like Peter Thiel’s famous complaint that ‘we wanted flying cars, instead we got 140 characters’, the phrase captures a sense of betrayal. Today’s techno-optimism is infused with nostalgia for the retro future.
But the most common explanations for the anti-Promethean backlash fall short. It’s true but incomplete to blame the environmental consciousness that spread in the late sixties…
How exactly today’s longings might manifest themselves, whether in glamorous imagery or real-life social evolution, is hard to predict. But one thing is clear: for progress to be appealing, it must offer room for diverse pursuits and identities, permitting communities with different commitments and values to enjoy a landscape of pluralism without devolving into mutually hostile tribes. The ideal of the one best way passed long ago. It was glamorous in its day, but glamour is an illusion…(More)”.
The AI tool that can interpret any spreadsheet instantly
Article by Duncan C. McElfresh: “Say you run a hospital and you want to estimate which patients have the highest risk of deterioration so that your staff can prioritize their care. You create a spreadsheet in which there is a row for each patient, and columns for relevant attributes, such as age or blood-oxygen level. The final column records whether the person deteriorated during their stay. You can then fit a mathematical model to these data to estimate an incoming patient’s deterioration risk. This is a classic example of tabular machine learning, a technique that uses tables of data to make inferences. This usually involves developing — and training — a bespoke model for each task. Writing in Nature, Hollmann et al. report a model that can perform tabular machine learning on any data set without being trained specifically to do so.
Tabular machine learning shares a rich history with statistics and data science. Its methods are foundational to modern artificial intelligence (AI) systems, including large language models (LLMs), and its influence cannot be overstated. Indeed, many online experiences are shaped by tabular machine-learning models, which recommend products, generate advertisements and moderate social-media content. Essential industries such as healthcare and finance are also steadily, if cautiously, moving towards increasing their use of AI.
Despite the field’s maturity, Hollmann and colleagues’ advance could be revolutionary. The authors’ contribution is known as a foundation model, which is a general-purpose model that can be used in a range of settings. You might already have encountered foundation models, perhaps unknowingly, through AI tools such as ChatGPT and Stable Diffusion. These models enable a single tool to offer varied capabilities, including text translation and image generation. So what does a foundation model for tabular machine learning look like?
Let’s return to the hospital example. With spreadsheet in hand, you choose a machine-learning model (such as a neural network) and train the model with your data, using an algorithm that adjusts the model’s parameters to optimize its predictive performance (Fig. 1a). Typically, you would train several such models before selecting one to use — a labour-intensive process that requires considerable time and expertise. And of course, this process must be repeated for each unique task.
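To make that contrast concrete, here is a minimal sketch of the conventional workflow, using scikit-learn on synthetic data; the column names, candidate models, and deterioration threshold are illustrative assumptions, not details from Hollmann and colleagues’ paper.

```python
# A sketch of the conventional tabular-ML workflow: train several
# candidate models on the same table, then keep the best performer.
# The columns, models, and label rule are invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.normal(96, 3, n),      # blood-oxygen level (%)
])
y = (X[:, 1] < 94).astype(int)  # toy "deteriorated" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, model in candidates.items():
    score = cross_val_score(model, X_train, y_train, cv=5).mean()
    print(f"{name}: mean CV accuracy {score:.3f}")
# The winner would then be retrained on all training data and deployed;
# the whole loop must be repeated for every new prediction task.
```

A foundation model of the kind Hollmann and colleagues report removes this loop entirely: a single pretrained model is applied to a new table directly, with no per-task training.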

The Future of Jobs Report 2025
Report by the World Economic Forum: “Technological change, geoeconomic fragmentation, economic uncertainty, demographic shifts and the green transition – individually and in combination – are among the major drivers expected to shape and transform the global labour market by 2030. The Future of Jobs Report 2025 brings together the perspectives of over 1,000 leading global employers—collectively representing more than 14 million workers across 22 industry clusters and 55 economies from around the world—to examine how these macrotrends impact jobs and skills, and the workforce transformation strategies employers plan to embark on in response, across the 2025 to 2030 timeframe…(More)”.
In the hands of a few: Disaster recovery committee networks
Paper by Timothy Fraser, Daniel P. Aldrich, Andrew Small and Andrew Littlejohn: “When disaster strikes, urban planners often rely on feedback and guidance from committees of officials, residents, and interest groups when crafting reconstruction policy. Focusing on recovery planning committees after Japan’s 2011 earthquake, tsunami, and nuclear disasters, we compile and analyze a dataset on committee membership patterns across 39 committees with 657 members. Using descriptive statistics and social network analysis, we examine 1) how community representation through membership varied among committees, and 2) in what ways committees shared members, interlinking members from certain interest groups. This study finds that community representation varies considerably among committees and is negatively related to the prevalence of experts, bureaucrats, and business interests. Committee membership overlap occurred heavily along geographic boundaries, bridged by engineers and government officials. Engineers and government bureaucrats also tend to be connected to more members of the committee network than community representatives, giving them prized positions to disseminate ideas about best practices in recovery. This study underscores the importance of diversity and community representation in disaster recovery planning to facilitate equal participation, information access, and policy implementation across communities…(More)”.
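For readers unfamiliar with the method, here is a minimal sketch of this kind of co-membership analysis, using networkx on invented toy data; the committee and member names are hypothetical, and the real dataset covers 39 committees and 657 members.

```python
# Toy illustration of committee-overlap analysis as a bipartite network:
# committees on one side, members on the other. All names are invented.
import networkx as nx
from networkx.algorithms import bipartite

memberships = {
    "Committee A": ["engineer_1", "official_1", "resident_1"],
    "Committee B": ["engineer_1", "official_1", "business_1"],
    "Committee C": ["official_1", "resident_2"],
}

B = nx.Graph()
B.add_nodes_from(memberships, bipartite=0)  # committee nodes
for committee, members in memberships.items():
    B.add_edges_from((committee, m) for m in members)

# Project onto members: an edge's weight counts shared committees.
member_nodes = {n for n, d in B.nodes(data=True) if d.get("bipartite") != 0}
M = bipartite.weighted_projected_graph(B, member_nodes)

# Degree centrality flags brokers; in the paper's data, engineers and
# bureaucrats occupy these bridging positions.
for node, c in sorted(nx.degree_centrality(M).items(), key=lambda kv: -kv[1]):
    print(node, round(c, 2))
```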
The Circle of Sharing: How Open Datasets Power AI Innovation
A Sankey diagram developed by AI World and Hugging Face: “…illustrating the flow from top open-source datasets through AI organizations to their derivative models, showcasing the collaborative nature of AI development…(More)”.

Distorted insights from human mobility data
Paper by Riccardo Gallotti, Davide Maniscalco, Marc Barthelemy & Manlio De Domenico: “The description of human mobility is at the core of many fundamental applications, ranging from urbanism and transportation to epidemic containment. Data about human movements, once scarce, is now widely available thanks to new sources such as phone call detail records, GPS devices, and smartphone apps. Nevertheless, it is still common to rely on a single dataset by implicitly assuming that the statistical properties observed are robust regardless of data-gathering and processing techniques. Here, we test this assumption on a broad scale by comparing human mobility datasets obtained from seven different data sources, tracing more than 500 million individuals in 145 countries. We report wide quantifiable differences in the resulting mobility networks and in the displacement distribution. These variations impact processes taking place on these networks, such as epidemic spreading. Our results point to the need to disclose data-processing steps and, more generally, to follow good practices to ensure robust and reproducible results…(More)”.
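As a minimal sketch of the paper’s core comparison, the snippet below contrasts displacement distributions from two synthetic data sources with a two-sample Kolmogorov–Smirnov test; the “GPS-like” and “CDR-like” labels and the truncation effect are illustrative assumptions, not the authors’ pipeline.

```python
# Sketch: compare displacement distributions from two hypothetical
# mobility data sources. Real pipelines differ; this only illustrates
# why conclusions drawn from a single source can mislead.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Source 1 (GPS-like): heavy-tailed displacements, in km.
gps_like = rng.pareto(1.5, 10_000) + 1
# Source 2 (CDR-like): coarser spatial resolution truncates short
# trips, shifting the observed distribution.
cdr_like = (rng.pareto(1.5, 10_000) + 1).clip(min=2.0)

# The KS statistic quantifies the mismatch between the two sources.
stat, p = stats.ks_2samp(gps_like, cdr_like)
print(f"KS statistic: {stat:.3f} (p = {p:.2g})")
print(f"median displacement: {np.median(gps_like):.2f} km vs "
      f"{np.median(cdr_like):.2f} km")
```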
Academic writing is getting harder to read—the humanities most of all
The Economist: “Academics have long been accused of jargon-filled writing that is impossible to understand. A recent cautionary tale was that of Ally Louks, a researcher who set off a social media storm with an innocuous post on X celebrating the completion of her PhD. If it was Ms Louks’s research topic (“olfactory ethics”—the politics of smell) that caught the attention of online critics, it was her verbose thesis abstract that further provoked their ire. In two weeks, the post received more than 21,000 retweets and 100m views.
Although the abuse directed at Ms Louks reeked of misogyny and anti-intellectualism—which she admirably shook off—the reaction was also a backlash against an academic use of language that is removed from normal life. Inaccessible writing is part of the problem. Research has become harder to read, especially in the humanities and social sciences. Though authors may argue that their work is written for expert audiences, much of the general public suspects that some academics use gobbledygook to disguise the fact that they have nothing useful to say. The trend towards more opaque prose hardly allays this suspicion…(More)”.
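Readability claims of this kind are usually quantified with formulas such as Flesch reading-ease. Below is a minimal sketch using the textstat package on two invented sample sentences; the Economist’s own methodology is not detailed in the excerpt.

```python
# Sketch: score prose with the Flesch reading-ease formula (higher is
# easier; scores near 30 or below read as very difficult). Both sample
# sentences are invented, not drawn from the article or the thesis.
import textstat

plain = "We studied how smell shapes social judgements about people."
opaque = ("This thesis problematises the olfactory dimension of "
          "intersubjective valorisation within hegemonic corporeal "
          "epistemologies.")

for label, text in [("plain", plain), ("opaque", opaque)]:
    print(label, textstat.flesch_reading_ease(text))
```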
To Whom Does the World Belong?
Essay by Alexander Hartley: “For an idea of the scale of the prize, it’s worth remembering that 90 percent of recent U.S. economic growth, and 65 percent of the value of its largest 500 companies, are already accounted for by intellectual property. By any estimate, AI will vastly increase the speed and scale at which new intellectual products can be minted. The provision of AI services themselves is estimated to become a trillion-dollar market by 2032, but the value of the intellectual property created by those services—all the drug and technology patents; all the images, films, stories, virtual personalities—will eclipse that sum. It is possible that the products of AI may, within my lifetime, come to represent a substantial portion of all the world’s financial value.
In this light, the question of ownership takes on its true scale, revealing itself as a version of Bertolt Brecht’s famous query: To whom does the world belong?
Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce. Call these, respectively, the input and output problems.
So far, attention—and lawsuits—have clustered around the input problem. The basic business model for LLMs relies on the mass appropriation of human-written text, and there simply isn’t anywhere near enough in the public domain. OpenAI hasn’t been very forthcoming about its training data, but GPT-4 was reportedly trained on around thirteen trillion “tokens,” roughly the equivalent of ten trillion words. This text is drawn in large part from online repositories known as “crawls,” which scrape the internet for troves of text from news sites, forums, and other sources. Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue after the fact. Lawyer Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water in an overheating planet. (In September, Bloomberg reported that plans for new gas-fired power plants have exploded as energy companies are “racing to meet a surge in demand from power-hungry AI data centers.”)…(More)”.
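The token-to-word conversion above is simple arithmetic, roughly 0.75 to 0.8 English words per token. Here is a minimal sketch checking that ratio with OpenAI’s tiktoken tokenizer; the sample sentence and choice of encoding are illustrative assumptions.

```python
# Sketch: estimate the words-per-token ratio that turns "thirteen
# trillion tokens" into "roughly ten trillion words" (about 0.77).
# The ratio varies with text and encoding; this is illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = ("Fully aware that vast data scraping is legally untested, "
        "developers charged ahead anyway.")
n_tokens = len(enc.encode(text))
n_words = len(text.split())
ratio = n_words / n_tokens
print(f"{n_words} words / {n_tokens} tokens = {ratio:.2f} words per token")
print(f"13 trillion tokens * {ratio:.2f} ≈ {13e12 * ratio:.2e} words")
```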