Artificial Intelligence and the Future of Work


Report by the National Academies of Sciences, Engineering, and Medicine: “Advances in artificial intelligence (AI) promise to improve productivity significantly, but many questions remain about how AI could affect jobs and workers.

Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market.

Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to informing and equipping workers and policymakers to respond flexibly to AI developments…(More)”.

AI Is Evolving — And Changing Our Understanding Of Intelligence


Essay by Blaise Agüera y Arcas and James Manyika: “Dramatic advances in artificial intelligence today are compelling us to rethink our understanding of what intelligence truly is. Our new insights will enable us to build better AI and understand ourselves better.

In short, we are in paradigm-shifting territory.

Paradigm shifts are often fraught because it’s easier to adopt new ideas when they are compatible with one’s existing worldview but harder when they’re not. A classic example is the collapse of the geocentric paradigm, which dominated cosmological thought for roughly two millennia. In the geocentric model, the Earth stood still while the Sun, Moon, planets and stars revolved around us. The belief that we were at the center of the universe — bolstered by Ptolemy’s theory of epicycles, a major scientific achievement in its day — was both intuitive and compatible with religious traditions. Hence, Copernicus’s heliocentric paradigm wasn’t just a scientific advance but a hotly contested heresy and perhaps even, for some, as Benjamin Bratton notes, an existential trauma. So, today, artificial intelligence.

In this essay, we will describe five interrelated paradigm shifts informing our development of AI:

  1. Natural Computing — Computing existed in nature long before we built the first “artificial computers.” Understanding computing as a natural phenomenon will enable fundamental advances not only in computer science and AI but also in physics and biology.
  2. Neural Computing — Our brains are an exquisite instance of natural computing. Redesigning the computers that power AI so they work more like a brain will greatly increase AI’s energy efficiency — and its capabilities too.
  3. Predictive Intelligence — The success of large language models (LLMs) shows us something fundamental about the nature of intelligence: it involves statistical modeling of the future (including one’s own future actions) given evolving knowledge, observations and feedback from the past. This insight suggests that current distinctions between designing, training and running AI models are transitory; more sophisticated AI will evolve, grow and learn continuously and interactively, as we do.
  4. General Intelligence — Intelligence does not necessarily require biologically based computation. Although AI models will continue to improve, they are already broadly capable, tackling an increasing range of cognitive tasks with a skill level approaching and, in some cases, exceeding individual human capability. In this sense, “Artificial General Intelligence” (AGI) may already be here — we just keep shifting the goalposts.
  5. Collective Intelligence — Brains, AI agents and societies can all become more capable through increased scale. However, size alone is not enough. Intelligence is fundamentally social, powered by cooperation and the division of labor among many agents. In addition to causing us to rethink the nature of human (or “more than human”) intelligence, this insight suggests social aggregations of intelligences and multi-agent approaches to AI development that could reduce computational costs, increase AI heterogeneity and reframe AI safety debates.

But to understand our own “intelligence geocentrism,” we must begin by reassessing our assumptions about the nature of computing, since it is the foundation of both AI and, we will argue, intelligence in any form…(More)”.

Energy and AI


Report by the International Energy Agency (IEA): “The development and uptake of artificial intelligence (AI) have accelerated in recent years – elevating the question of what widespread deployment of the technology will mean for the energy sector. There is no AI without energy – specifically electricity for data centres. At the same time, AI could transform how the energy industry operates if it is adopted at scale. Until now, however, policy makers and other stakeholders have often lacked the tools to analyse both sides of this issue due to a lack of comprehensive data.

This report from the International Energy Agency (IEA) aims to fill this gap, drawing on new global and regional modelling and datasets as well as extensive consultation with governments and regulators, the tech sector, the energy industry and international experts. It includes projections for how much electricity AI could consume over the next decade, as well as which energy sources are set to help meet that demand. It also analyses what the uptake of AI could mean for energy security, emissions, innovation and affordability…(More)”.

We Must Steward, Not Subjugate Nor Worship AI


Essay by Brian J. A. Boyd: “…How could stewardship of artificially living AI be pursued on a broader, even global, level? Here, the concept of “integral ecology” is helpful. Pope Francis uses the phrase to highlight the ways in which everything is connected, both through the web of life and in that social, political, and environmental challenges cannot be solved in isolation. The immediate need for stewardship over AI is to ensure that its demands for power and industrial production are addressed in a way that benefits those most in need, rather than de-prioritizing them further. For example, the energy requirements to develop tomorrow’s AI should spur research into small modular nuclear reactors and updated distribution systems, making energy abundant rather than causing regressive harms by driving up prices on an already overtaxed grid. More broadly, we will need to find the right institutional arrangements and incentive structures to make AI Amistics possible.

We are having a painfully overdue conversation about the nature and purpose of social media, and tech whistleblowers like Tristan Harris have offered grave warnings about how the “race to the bottom of the brain stem” is underway in AI as well. The AI equivalent of the addictive “infinite scroll” design feature of social media will likely be engagement with simulated friends — but we need not resign ourselves to its becoming part of our lives as social media did. And as there are proposals to switch from privately held Big Data to a public Data Commons, so perhaps could there be space for AI that is governed not for maximizing profit but for being sustainable as a common-pool resource, with applications and protocols ordered toward long-run benefit as defined by local communities…(More)”.

Global data-driven prediction of fire activity


Paper by Francesca Di Giuseppe, Joe McNorton, Anna Lombardi & Fredrik Wetterhall: “Recent advancements in machine learning (ML) have expanded its potential use across scientific applications, including weather and hazard forecasting. The ability of these methods to extract information from diverse and novel data types enables the transition from forecasting fire weather to predicting actual fire activity. In this study we demonstrate that this shift is also feasible within an operational context. Traditional fire forecasting methods tend to overpredict high fire danger, particularly in fuel-limited biomes, often resulting in false alarms. By using data on fuel characteristics, ignitions and observed fire activity, data-driven predictions reduce the false-alarm rate of high-danger forecasts, enhancing their accuracy. This is made possible by high-quality global datasets of fuel evolution and fire detection. We find that the quality of the input data matters more for improving forecasts than the complexity of the ML architecture. While the focus on ML advancements is often justified, our findings highlight the importance of investing in high-quality data and, where necessary, creating it through physical models. Neglecting this aspect would undermine the potential gains from ML-based approaches, emphasizing that data quality is essential to achieving meaningful progress in fire activity forecasting…(More)”.
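The paper’s central claim – that folding fuel availability into the forecast cuts the false-alarm rate of weather-only danger indices – can be illustrated with a minimal toy sketch. Everything here (the synthetic data, the thresholds, the variable names) is our own assumption for illustration, not the authors’ model or data:

```python
import random

random.seed(0)

# Synthetic days: each has a fire-weather danger score and a
# fuel-availability score. In this toy world, actual fire activity
# requires BOTH dangerous weather AND sufficient dry fuel.
days = []
for _ in range(10_000):
    weather = random.random()   # 1.0 = extreme fire weather
    fuel = random.random()      # 1.0 = abundant dry fuel
    fire = weather > 0.8 and fuel > 0.5
    days.append((weather, fuel, fire))

def false_alarm_rate(predict):
    """Share of 'high danger' alarms not followed by actual fire."""
    alarms = [fire for w, f, fire in days if predict(w, f)]
    return 1 - sum(alarms) / len(alarms)

# Weather-only index: the traditional fire-weather approach, which
# raises alarms in fuel-limited conditions where nothing can burn.
far_weather = false_alarm_rate(lambda w, f: w > 0.8)

# Fuel-aware forecast: also conditions on fuel characteristics.
far_fuel_aware = false_alarm_rate(lambda w, f: w > 0.8 and f > 0.5)

print(f"weather-only false-alarm rate: {far_weather:.2f}")
print(f"fuel-aware false-alarm rate:   {far_fuel_aware:.2f}")
```

Because the toy ground truth is deterministic, the fuel-aware rule here drives false alarms to zero; the real operational result is a reduction, not elimination, but the mechanism – alarms suppressed where fuel is limiting – is the same.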

LLM Social Simulations Are a Promising Research Method


Paper by Jacy Reese Anthis et al: “Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted these methods. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a literature survey of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions with prompting, fine-tuning, and complementary methods. We believe that LLM social simulations can already be used for exploratory research, such as pilot experiments for psychology, economics, sociology, and marketing. More widespread use may soon be possible with rapidly advancing LLM capabilities, and researchers should prioritize developing conceptual models and evaluations that can be iteratively deployed and refined at pace with ongoing AI advances…(More)”.

AI Liability Along the Value Chain


Report by Beatriz Botero Arcila: “…explores how liability law can help solve the “problem of many hands” in AI: determining who is responsible for harm caused along a value chain in which many different companies and actors may contribute to the development of any given AI system. The problem is aggravated by the fact that AI systems are both opaque and technically complex, making their behavior hard to predict.

Why AI Liability Matters

To find meaningful solutions to this problem, different kinds of experts have to come together. This resource is designed for a wide audience, but we indicate how specific audiences can best make use of different sections, overviews, and case studies.

Specifically, the report:

  • Proposes a 3-step analysis for how liability should be allocated along the value chain: 1) the choice of liability regime, 2) how liability should be shared amongst actors along the value chain, and 3) whether and how information asymmetries will be addressed.
  • Argues that where ex-ante AI regulation is already in place, policymakers should consider how liability rules will interact with these rules.
  • Proposes a baseline liability regime where actors along the AI value chain share responsibility if fault can be demonstrated, paired with measures to alleviate or shift the burden of proof and to enable better access to evidence — which would incentivize companies to act with sufficient care and address information asymmetries between claimants and companies.
  • Argues that in some cases, courts and regulators should extend a stricter regime, such as product liability or strict liability.
  • Analyzes liability rules in the EU based on this framework…(More)”.

How crawlers impact the operations of the Wikimedia projects


Article by the Wikimedia Foundation: “Since the beginning of 2024, the demand for the content created by the Wikimedia volunteer community – especially for the 144 million images, videos, and other files on Wikimedia Commons – has grown significantly. In this post, we’ll discuss the reasons for this trend and its impact.

The Wikimedia projects are the largest collection of open knowledge in the world. Our sites are an invaluable destination for humans searching for information, and for all kinds of businesses that access our content automatically as a core input to their products. Most notably, the content has been a critical component of search engine results, which in turn has brought users back to our sites. But with the rise of AI, the dynamic is changing: We are observing a significant increase in request volume, with most of this traffic being driven by scraping bots collecting training data for large language models (LLMs) and other use cases. Automated requests for our content have grown exponentially, alongside the broader technology economy, via mechanisms including scraping, APIs, and bulk downloads. This expansion has happened largely without sufficient attribution, which is key to driving new users to participate in the movement, and it is placing significant load on the underlying infrastructure that keeps our sites available for everyone.

When Jimmy Carter died in December 2024, his page on English Wikipedia saw more than 2.8 million views over the course of a day. This was relatively high, but manageable. At the same time, quite a few users played a 1.5-hour-long video of Carter’s 1980 presidential debate with Ronald Reagan. This caused a surge in network traffic, doubling its normal rate. As a consequence, for about one hour a small number of Wikimedia’s connections to the Internet filled up entirely, causing slow page load times for some users. The sudden traffic surge alerted our Site Reliability team, who were swiftly able to address it by changing the paths our internet connections go through to reduce the congestion. Still, this should not have caused any issues, as the Foundation is well equipped to handle high traffic spikes during exceptional events. So what happened?…

Since January 2024, we have seen the bandwidth used for downloading multimedia content grow by 50%. This increase is not coming from human readers, but largely from automated programs that scrape the Wikimedia Commons catalog of openly licensed images to feed AI models. Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs…(More)”.

AI, Innovation and the Public Good: A New Policy Playbook


Paper by Burcu Kilic: “When Chinese start-up DeepSeek released R1 in January 2025, the groundbreaking open-source artificial intelligence (AI) model rocked the tech industry as a more cost-effective alternative to models running on more advanced chips. The launch coincided with industrial policy gaining popularity as a strategic tool for governments aiming to build AI capacity and competitiveness. Once dismissed under neoliberal economic frameworks, industrial policy is making a strong comeback, with more governments worldwide embracing it to build digital public infrastructure and foster local AI ecosystems. This paper examines how the national innovation system framework can guide AI industrial policy to foster innovation and reduce reliance on dominant tech companies…(More)”.

Oxford Intersections: AI in Society


Series edited by Philipp Hacker: “…provides an interdisciplinary corpus for understanding artificial intelligence (AI) as a global phenomenon that transcends geographical and disciplinary boundaries. Edited by a consortium of experts hailing from diverse academic traditions and regions, the 11 edited and curated sections provide a holistic view of AI’s societal impact. Critically, the work goes beyond the often Eurocentric or U.S.-centric perspectives that dominate the discourse, offering nuanced analyses that encompass the implications of AI for a range of regions of the world. Taken together, the sections of this work seek to move beyond the state of the art in three specific respects. First, they venture decisively beyond existing research efforts to develop a comprehensive account and framework for the rapidly growing importance of AI in virtually all sectors of society. Going beyond a mere mapping exercise, the curated sections assess opportunities, critically discuss risks, and offer solutions to the manifold challenges AI harbors in various societal contexts, from individual labor to global business, law and governance, and interpersonal relationships. Second, the work tackles specific societal and regulatory challenges triggered by the advent of AI and, more specifically, large generative AI models and foundation models, such as ChatGPT or GPT-4, which have so far received limited attention in the literature, particularly in monographs or edited volumes. Third, the novelty of the project is underscored by its decidedly interdisciplinary perspective: each section, whether covering Conflict; Culture, Art, and Knowledge Work; Relationships; or Personhood—among others—will draw on various strands of knowledge and research, crossing disciplinary boundaries and uniting perspectives most appropriate for the context at hand…(More)”.