Paper by Daron Acemoglu: “This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of new tasks that may have negative social value…(More)”.
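The Hulten-style estimate in the abstract reduces to a simple product: aggregate TFP gains are roughly the share of tasks affected by AI multiplied by the average task-level cost savings on those tasks. A minimal sketch of that arithmetic is below; the two input values are illustrative placeholders, not the paper's own estimates (the excerpt reports only the resulting bound of roughly 0.66%).

```python
# Hulten-style back-of-envelope: aggregate TFP gains from AI are roughly the
# share of tasks affected times the average cost savings on those tasks.
# Both inputs below are illustrative placeholders, NOT the paper's estimates;
# the excerpt reports only the resulting ~0.66% upper bound.
share_of_tasks_affected = 0.046  # hypothetical: fraction of tasks where AI is actually adopted
avg_task_cost_savings = 0.145    # hypothetical: average cost savings on those tasks

tfp_gain = share_of_tasks_affected * avg_task_cost_savings
print(f"Implied TFP gain over the horizon: {tfp_gain:.2%}")  # ~0.67% with these inputs
```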
The citizen’s panel on AI issues its report
Belgian presidency of the European Union: “Randomly select 60 citizens from all four corners of Belgium. Give them an exciting topic to explore. Add a few local players. Season with participation experts. Bake for three weekends at the Egmont Palace conference centre. And you’ll end up with the rich and ambitious views of citizens on the future of artificial intelligence (AI) in the European Union.
This is the recipe that has been followed since February 2024, led by the Belgian presidency of the European Union, with the ambition of involving citizens in this strategic field and enriching the debate on AI, which has been particularly lively in recent months during the drafting of the AI Act recently adopted by the European Parliament.
And the initiative really cut the mustard, as the 60 citizens worked enthusiastically, overcoming their apprehensions about a subject as complex as AI. In a spirit of collective intelligence, they dove right into the subject, listening to speakers from academia, government, civil society and the private sector, and sharing their experiences and knowledge. Some of them were just discovering AI, while others were already using it. They turned this diversity into a strength, enabling them to write a report on citizens’ views that reflects the various aspirations of the Belgian population.
At the end of the three weekends, the citizens almost unanimously adopted a precise and ambitious report containing nine key messages focusing on the need for a responsible, ambitious and beneficial approach to AI, ensuring that it serves the interests of all and leaves no one behind…(More)”
Determinants of behaviour and their efficacy as targets of behavioural change interventions
Paper by Dolores Albarracín, Bita Fayaz-Farkhad & Javier A. Granados Samayoa: “Unprecedented social, environmental, political and economic challenges — such as pandemics and epidemics, environmental degradation and community violence — require taking stock of how to promote behaviours that benefit individuals and society at large. In this Review, we synthesize multidisciplinary meta-analyses of the individual and social-structural determinants of behaviour (for example, beliefs and norms, respectively) and the efficacy of behavioural change interventions that target them. We find that, across domains, interventions designed to change individual determinants can be ordered by increasing impact as those targeting knowledge, general skills, general attitudes, beliefs, emotions, behavioural skills, behavioural attitudes and habits. Interventions designed to change social-structural determinants can be ordered by increasing impact as legal and administrative sanctions; programmes that increase institutional trustworthiness; interventions to change injunctive norms; monitors and reminders; descriptive norm interventions; material incentives; social support provision; and policies that increase access to a particular behaviour. We find similar patterns for health and environmental behavioural change specifically. Thus, policymakers should focus on interventions that enable individuals to circumvent obstacles to enacting desirable behaviours rather than targeting salient but ineffective determinants of behaviour such as knowledge and beliefs…(More)”.
Quantum Policy
A Primer by Jane Bambauer: “Quantum technologies have received billions in private and public investments and have caused at least some ambient angst about how they will disrupt an already fast-moving economy and uncertain social order. Some consulting firms are already offering “quantum readiness” services, even though the potential applications for quantum computing, networking, and sensing technologies are still somewhat speculative, in part because the impact of these technologies may be mysterious and profound. Law and policy experts have begun to offer advice about how the development of quantum technologies should be regulated through ethical norms or laws. This report builds on the available work by providing a brief summary of the applications that seem potentially viable to researchers and companies and cataloging the effects—both positive and negative—that these applications may have on industry, consumers, and society at large.
As the report will show, quantum technologies (like many information technologies that have come before) will produce benefits and risks and will inevitably require developers and regulators to make trade-offs between several legitimate but conflicting goals. Some of these policy decisions can be made in advance, but some will have to be reactive in nature, as unexpected risks and benefits will emerge…(More)”.
A New National Purpose: Harnessing Data for Health
Report by the Tony Blair Institute: “We are at a pivotal moment where the convergence of large health and biomedical data sets, artificial intelligence and advances in biotechnology is set to revolutionise health care, drive economic growth and improve the lives of citizens. And the UK has strengths in all three areas. The immense potential of the UK’s health-data assets, from the NHS to biobanks and genomics initiatives, can unlock new diagnostics and treatments, deliver better and more personalised care, prevent disease and ultimately help people live longer, healthier lives.
However, realising this potential is not without its challenges. The complex and fragmented nature of the current health-data landscape, coupled with legitimate concerns around privacy and public trust, has made for slow progress. The UK has had a tendency to provide short-term funding across multiple initiatives, which has led to an array of individual projects – many of which have struggled to achieve long-term sustainability and deliver tangible benefits to patients.
To overcome these challenges, it will be necessary to be bold and imaginative. We must look for ways to leverage the unique strengths of the NHS, such as its nationwide reach and cradle-to-grave data coverage, to create a health-data ecosystem that is much more than the sum of its many parts. This will require us to think differently about how we collect, manage and utilise health data, and to create new partnerships and models of collaboration that break down traditional silos and barriers. It will mean treating data as a key health resource and managing it accordingly.
One model to do this is the proposed sovereign National Data Trust (NDT) – an endeavour to streamline access to and curation of the UK’s valuable health-data assets…(More)”.
Artificial intelligence, the common good, and the democratic deficit in AI governance
Paper by Mark Coeckelbergh: “There is a broad consensus that artificial intelligence should contribute to the common good, but it is not clear what is meant by that. This paper discusses this issue and uses it as a lens for analysing what it calls the “democracy deficit” in current AI governance, which includes a tendency to deny the inherently political character of the issue and to take a technocratic shortcut. It indicates what we may agree on and what is and should be up to (further) deliberation when it comes to AI ethics and AI governance. Inspired by the republican tradition in political theory, it also argues for a more active role of citizens and (end-)users: not only as participants in deliberation but also in ensuring, creatively and communicatively, that AI contributes to the common good…(More)”.
Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science
Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China.
While the comparison to CERN is flawed in some respects–see below–the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only ease these immediate challenges but also, more generally, advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.
Access to Public Information, Open Data, and Personal Data Protection: How do they dialogue with each other?
Report by Open Data Charter and Civic Compass: “In this study, we aim to examine data protection policies in the European Union and Latin America juxtaposed with initiatives concerning open government data and access to public information. To achieve this, we analyse the regulatory landscape, international rankings, and commitments related to each right in four countries from each region. Additionally, we explore how these institutions interact with one another, considering their respective stances while delving into existing tensions and exploring possibilities for achieving a balanced approach…(More)”.
Middle Tech: Software Work and the Culture of Good Enough
Book by Paula Bialski: “Contrary to much of the popular discourse, not all technology is seamless and awesome; some of it is simply “good enough.” In Middle Tech, Paula Bialski offers an ethnographic study of software developers at a non-flashy, non-start-up corporate tech company. Their stories reveal why software isn’t perfect and how developers communicate, care, and compromise to make software work—or at least work until the next update. Exploring the culture of good enoughness at a technology firm she calls “MiddleTech,” Bialski shows how doing good-enough work is a collectively negotiated resistance to the organizational ideology found in corporate software settings.
The truth, Bialski reminds us, is that technology breaks due to human-related issues: staff cutbacks cause media platforms to crash, in-car GPS systems cause catastrophic incidents, and chatbots can be weird. Developers must often labor to patch and repair legacy systems rather than dream up killer apps. Bialski presents a less sensationalist, more empirical portrait of technology work than the frequently told Silicon Valley narratives of disruption and innovation. She finds that software engineers at MiddleTech regard technology as an ephemeral object that only needs to be good enough to function until its next iteration. As a result, they don’t feel much pressure to make it perfect. Through the deeply personal stories of people and their practices at MiddleTech, Bialski traces the ways that workers create and sustain a complex culture of good enoughness…(More)”
How the war on drunk driving was won
Blog by Nick Cowen: “…Viewed from the 1960s it might have seemed like ending drunk driving would be impossible. Even in the 1980s, the movement seemed unlikely to succeed, and many researchers questioned whether drunk driving constituted a social problem at all.
Yet things did change: in 1980, 1,450 fatalities were attributed to drunk driving accidents in the UK. In 2020, there were 220. Road deaths in general declined much more slowly, from around 6,000 in 1980 to 1,500 in 2020. Drunk driving fatalities dropped overall and as a percentage of all road deaths.
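The UK figures quoted above are enough to check the claim that drunk-driving deaths fell both in absolute terms and as a share of all road deaths; a quick calculation, using only the numbers from the excerpt:

```python
# UK figures quoted in the excerpt above.
drunk_1980, drunk_2020 = 1450, 220
all_road_1980, all_road_2020 = 6000, 1500

decline_drunk = 1 - drunk_2020 / drunk_1980      # ~0.85: an ~85% fall
decline_all = 1 - all_road_2020 / all_road_1980  # 0.75: a 75% fall
share_1980 = drunk_1980 / all_road_1980          # ~0.24 of all road deaths
share_2020 = drunk_2020 / all_road_2020          # ~0.15 of all road deaths

print(f"Drunk-driving deaths fell {decline_drunk:.0%}; all road deaths fell {decline_all:.0%}")
print(f"Share of road deaths: {share_1980:.0%} in 1980 -> {share_2020:.0%} in 2020")
```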
The same thing happened in the United States, though not to quite the same extent. In 1980, there were around 28,000 drunk driving deaths there, while in 2020, there were 11,654. Despite this progress, drunk driving remains a substantial public threat, comparable in scale to homicide (of which in 2020 there were 594 in Britain and 21,570 in America).
Of course, many things have happened in the last 40 years that contributed to this reduction. Vehicles are better designed to prioritize life preservation in the event of a collision. Emergency hospital care has improved so that people are more likely to survive serious injuries from car accidents. But, above all, driving while drunk has become stigmatized.

This stigma didn’t come from nowhere. Governments across the Western world, along with many civil society organizations, engaged in hard-hitting education campaigns about the risks of drunk driving. And they didn’t just talk. Tens of thousands of people faced criminal sanctions, and many were even put in jail.
Two underappreciated ideas stick out from this experience. First, deterrence works: incentives matter to offenders much more than many scholars found initially plausible. Second, the long-run impact that successful criminal justice interventions have is not primarily in rehabilitation, incapacitation, or even deterrence, but in altering the social norms around acceptable behavior…(More)”.