
Stefaan Verhulst

Book edited by Jessamy Perriam and Katrine Meldgaard Kjær: “…shows that as welfare is increasingly digitalized, an investigation of the social implications of this digitalization becomes increasingly pertinent. The book offers chapters on how the state operates, from the day-to-day practices of governance to keeping registers of businesses, from overarching and sometimes contradictory policies to considering how to best include citizens in digitalized processes. Moreover, the book takes a citizen perspective on key issues of access, identification and social harm to consider the social implications of digitalization in the everyday. The diversity of topics in Digitalization in Practice reflects how digitalization as an ongoing process and practice fundamentally impacts and often reshapes the relationship between states and citizens.

  • Provides much needed critical perspectives on digital states in practice.
  • Opens up provocative questions for further studies and research topics in digital states.
  • Showcases empirical studies of situations where digital states are enacted…(More)”.
Digitalization in Practice

Article by Ramsha Jahangir, Elodie Vialle and Dylan Moses: “It’s been 100 days since the Digital Services Act (DSA) came into effect, and many of us are still wondering how the Trusted Flagger mechanism is taking shape, particularly for civil society organizations (CSOs) that could be potential applicants.

With an emphasis on accountability and transparency, the DSA requires national coordinators to appoint Trusted Flaggers, designated entities whose requests to flag illegal content must be prioritized. “Notices submitted by Trusted Flaggers acting within their designated area of expertise … are given priority and are processed and decided upon without undue delay,” according to the DSA. Trusted Flaggers can include non-governmental organizations, industry associations, private or semi-public bodies, and law enforcement agencies. For instance, a private company that focuses on finding child sexual abuse material (CSAM) or terrorist content, or on tracking groups that traffic in such content, could be eligible for Trusted Flagger status under the DSA. To be appointed, entities need to meet certain criteria, including being independent, accurate, and objective.

Trusted escalation channels are a key mechanism for CSOs supporting vulnerable users, such as human rights defenders and journalists targeted by online attacks on social media, particularly in electoral contexts. However, existing channels could be far more efficient. The DSA is a unique opportunity to redesign these mechanisms for reporting illegal or harmful content at scale, and they need to be rethought for CSOs that hope to become Trusted Flaggers. Platforms often require, for instance, that content be translated into English and that context be made intelligible to English-speaking audiences (mainly because key decision-makers are based in the US), which creates an added burden for resource-strapped CSOs. The lack of transparency in the reporting process can be distressing for the victims those CSOs advocate for, and the lack of a timely response can have dramatic consequences for human rights defenders and information integrity. Several CSOs we spoke with were not even aware of these escalation channels – and platforms are not incentivized to promote them, given their inability to vet, prioritize and resolve every potential issue sent to them….(More)”.

More Questions Than Flags: Reality Check on DSA’s Trusted Flaggers

Paper by Daron Acemoglu: “This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of new tasks that may have negative social value…(More)”.
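The arithmetic behind that Hulten-style estimate is simple enough to sketch. The example below uses hypothetical placeholder values (an assumed exposure share, adoption share and task-level cost saving), not the paper’s own inputs, simply to show how those pieces multiply into an aggregate TFP gain of well under one percent.

```python
# Hulten-style back-of-the-envelope for AI's aggregate productivity effect.
# All input values are hypothetical placeholders, not the paper's estimates.

task_exposure_share = 0.20   # hypothetical: GDP-weighted share of tasks exposed to AI
adoption_share = 0.25        # hypothetical: fraction of exposed tasks where AI is actually adopted
avg_cost_savings = 0.14      # hypothetical: average task-level cost saving where adopted

# Hulten's theorem: aggregate TFP gain ~= (share of affected tasks) x (average cost savings)
tfp_gain = task_exposure_share * adoption_share * avg_cost_savings

print(f"Implied aggregate TFP gain: {tfp_gain:.2%}")  # prints 0.70% with these placeholders
```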

The Simple Macroeconomics of AI

Belgian presidency of the European Union: “Randomly select 60 citizens from all four corners of Belgium. Give them an exciting topic to explore. Add a few local players. Season with participation experts. Bake for three weekends at the Egmont Palace conference centre. And you’ll end up with the rich and ambitious views of citizens on the future of artificial intelligence (AI) in the European Union.

This is the recipe followed since February 2024, led by the Belgian presidency of the European Union, with the ambition of involving citizens in this strategic field and enriching the debate on AI – a debate that has been particularly lively in recent months during the drafting of the AI Act, recently adopted by the European Parliament.

And the initiative really cut the mustard: the 60 citizens worked enthusiastically, overcoming their apprehensions about a subject as complex as AI. In a spirit of collective intelligence, they dove right into the subject, listening to speakers from academia, government, civil society and the private sector, and sharing their own experiences and knowledge. Some were just discovering AI, while others were already using it. They turned this diversity into an asset, enabling them to write a report on citizens’ views that reflects the varied aspirations of the Belgian population.

At the end of the three weekends, the citizens almost unanimously adopted a precise and ambitious report containing nine key messages focusing on the need for a responsible, ambitious and beneficial approach to AI, ensuring that it serves the interests of all and leaves no one behind…(More)”

The citizens’ panel on AI issues its report

Paper by Dolores Albarracín, Bita Fayaz-Farkhad & Javier A. Granados Samayoa: “Unprecedented social, environmental, political and economic challenges — such as pandemics and epidemics, environmental degradation and community violence — require taking stock of how to promote behaviours that benefit individuals and society at large. In this Review, we synthesize multidisciplinary meta-analyses of the individual and social-structural determinants of behaviour (for example, beliefs and norms, respectively) and the efficacy of behavioural change interventions that target them. We find that, across domains, interventions designed to change individual determinants can be ordered by increasing impact as those targeting knowledge, general skills, general attitudes, beliefs, emotions, behavioural skills, behavioural attitudes and habits. Interventions designed to change social-structural determinants can be ordered by increasing impact as legal and administrative sanctions; programmes that increase institutional trustworthiness; interventions to change injunctive norms; monitors and reminders; descriptive norm interventions; material incentives; social support provision; and policies that increase access to a particular behaviour. We find similar patterns for health and environmental behavioural change specifically. Thus, policymakers should focus on interventions that enable individuals to circumvent obstacles to enacting desirable behaviours rather than targeting salient but ineffective determinants of behaviour such as knowledge and beliefs…(More)”.

Determinants of behaviour and their efficacy as targets of behavioural change interventions

A Primer by Jane Bambauer: “Quantum technologies have received billions in private and public investments and have caused at least some ambient angst about how they will disrupt an already fast-moving economy and uncertain social order. Some consulting firms are already offering “quantum readiness” services, even though the potential applications for quantum computing, networking, and sensing technologies are still somewhat speculative, in part because the impact of these technologies may be mysterious and profound. Law and policy experts have begun to offer advice about how the development of quantum technologies should be regulated through ethical norms or laws. This report builds on the available work by providing a brief summary of the applications that seem potentially viable to researchers and companies and cataloging the effects—both positive and negative—that these applications may have on industry, consumers, and society at large.

As the report will show, quantum technologies (like many information technologies that have come before) will produce benefits and risks and will inevitably require developers and regulators to make trade-offs between several legitimate but conflicting goals. Some of these policy decisions can be made in advance, but some will have to be reactive in nature, as unexpected risks and benefits will emerge…(More)”.

Quantum Policy

Report by the Tony Blair Institute: “We are at a pivotal moment where the convergence of large health and biomedical data sets, artificial intelligence and advances in biotechnology is set to revolutionise health care, drive economic growth and improve the lives of citizens. And the UK has strengths in all three areas. The immense potential of the UK’s health-data assets, from the NHS to biobanks and genomics initiatives, can unlock new diagnostics and treatments, deliver better and more personalised care, prevent disease and ultimately help people live longer, healthier lives.

However, realising this potential is not without its challenges. The complex and fragmented nature of the current health-data landscape, coupled with legitimate concerns around privacy and public trust, has made for slow progress. The UK has had a tendency to provide short-term funding across multiple initiatives, which has led to an array of individual projects – many of which have struggled to achieve long-term sustainability and deliver tangible benefits to patients.

To overcome these challenges, it will be necessary to be bold and imaginative. We must look for ways to leverage the unique strengths of the NHS, such as its nationwide reach and cradle-to-grave data coverage, to create a health-data ecosystem that is much more than the sum of its many parts. This will require us to think differently about how we collect, manage and utilise health data, and to create new partnerships and models of collaboration that break down traditional silos and barriers. It will mean treating data as a key health resource and managing it accordingly.

One model to do this is the proposed sovereign National Data Trust (NDT) – an endeavour to streamline access to and curation of the UK’s valuable health-data assets…(More)”.

A New National Purpose: Harnessing Data for Health

Paper by Mark Coeckelbergh: “There is a broad consensus that artificial intelligence should contribute to the common good, but it is not clear what is meant by that. This paper discusses this issue and uses it as a lens for analysing what it calls the “democracy deficit” in current AI governance, which includes a tendency to deny the inherently political character of the issue and to take a technocratic shortcut. It indicates what we may agree on and what is and should be up to (further) deliberation when it comes to AI ethics and AI governance. Inspired by the republican tradition in political theory, it also argues for a more active role of citizens and (end-)users: not only as participants in deliberation but also in ensuring, creatively and communicatively, that AI contributes to the common good…(More)”.

Artificial intelligence, the common good, and the democratic deficit in AI governance

Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China. 

While the comparison to CERN is flawed in some respects–see below–the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only help address the immediate challenges, but more generally advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.

Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science

Report by Open Data Charter and Civic Compass: “In this study, we examine data protection policies in the European Union and Latin America juxtaposed with initiatives concerning open government data and access to public information. To do so, we analyse the regulatory landscape, international rankings, and commitments concerning each right in four countries from each region. Additionally, we explore how these institutions interact with one another, considering their respective stances while delving into existing tensions and exploring possibilities for achieving a balanced approach…(More)”.

Access to Public Information, Open Data, and Personal Data Protection: How do they dialogue with each other?
