Enhancing the European Administrative Space (ComPAct)


European Commission: “Efficient national public administrations are critical to transform EU and national policies into reality, to implement reforms to the benefit of people and business alike, and to channel investments towards the achievement of the green and digital transition, and greater competitiveness. At the same time, national public administrations are also under increasing pressure to deal with the polycrisis and with many competing priorities.

For the first time, with the ComPAct, the Commission is proposing a strategic set of actions not only to support public administrations in the Member States in becoming more resilient, innovative and skilled, but also to strengthen administrative cooperation between them, thereby helping to close existing gaps in policies and services at European level.

With the ComPAct, the Commission aims to enhance the European Administrative Space by promoting a common set of overarching principles underpinning the quality of public administration and reinforcing its support for the administrative modernisation of the Member States. The ComPAct will help Member States address the EU Skills Agenda and the actions under the European Year of Skills, deliver on the targets of the Digital Decade to have 100% of key public services accessible online by 2030, and shape the conditions for economies and societies to deliver on the ambitious 2030 climate and energy targets. The ComPAct will also help EU enlargement countries on their path to building better public administrations…(More)”.

Learning Like a State: Statecraft in the Digital Age


Paper by Marion Fourcade and Jeff Gordon: “What does it mean to sense, see, and act like a state in the digital age? We examine the changing phenomenology, governance, and capacity of the state in the era of big data and machine learning. Our argument is threefold. First, what we call the dataist state may be less accountable than its predecessor, despite its promise of enhanced transparency and accessibility. Second, a rapid expansion of the data collection mandate is fueling a transformation in political rationality, in which data affordances increasingly drive policy strategies. Third, the turn to dataist statecraft facilitates a corporate reconstruction of the state. On the one hand, digital firms attempt to access and capitalize on data “minted” by the state. On the other hand, firms compete with the state in an effort to reinvent traditional public functions. Finally, we explore what it would mean for this dataist state to “see like a citizen” instead…(More)”.

Shifting policy systems – a framework for what to do and how to do it


Blog by UK Policy Lab: “Systems change is hard work, and it takes time. The reality is that no single system map or tool is enough to get you from point A to point B, from system now to system next. Over the last year, we have explored the latest in systems change theory and applied it to policymaking. In this four-part blog series, we share our reflections on the wealth of knowledge we’ve gained working on intractable issues surrounding how support is delivered for people experiencing multiple disadvantage. Along the way, we realised that we need to make new tools to support policy teams in doing this deep work in the future, and to see afresh the limitations of existing mental models for change and transformation.

Policy Lab has previously written about systems mapping as a useful process for understanding the interconnected nature of factors and actors that make up policy ecosystems. Here, we share our latest experimentation on how we can generate practical ideas for long-lasting and systemic change.

This blog includes:

  • An overview of what we did on our latest project – including the policy context, systems change frameworks we experimented with, and the bespoke project framework we created;
  • Our reflections on how we carried out the project;
  • A matrix which provides a practical guide for you to use this approach in your own work…(More)”.

Artificial intelligence in government: Concepts, standards, and a unified framework


Paper by Vincent J. Straub, Deborah Morgan, Jonathan Bright, Helen Margetts: “Recent advances in artificial intelligence (AI), especially in generative language modelling, hold the promise of transforming government. Given the advanced capabilities of new AI systems, it is critical that these are embedded using standard operational procedures and clear epistemic criteria, and that they behave in alignment with the normative expectations of society. Scholars in multiple domains have subsequently begun to conceptualize the different forms that AI applications may take, highlighting both their potential benefits and pitfalls. However, the literature remains fragmented, with researchers in social science disciplines like public administration and political science, and in the fast-moving fields of AI, ML, and robotics, all developing concepts in relative isolation. Although there are calls to formalize the emerging study of AI in government, a balanced account that captures the full depth of theoretical perspectives needed to understand the consequences of embedding AI into a public sector context is lacking. Here, we unify efforts across social and technical disciplines by first conducting an integrative literature review to identify and cluster 69 key terms that frequently co-occur in the multidisciplinary study of AI. We then build on the results of this bibliometric analysis to propose three new multifaceted concepts for understanding and analysing AI-based systems for government (AI-GOV) in a more unified way: (1) operational fitness, (2) epistemic alignment, and (3) normative divergence. Finally, we put these concepts to work by using them as dimensions in a conceptual typology of AI-GOV and connecting each with emerging AI technical measurement standards to encourage operationalization, foster cross-disciplinary dialogue, and stimulate debate among those aiming to rethink government with AI…(More)”.
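The term-clustering step the authors describe lends itself to a small illustration. The sketch below is not the paper’s pipeline: it builds a toy term co-occurrence matrix from a handful of invented abstracts and groups the terms with hierarchical clustering, with the abstracts, term list, and cluster count assumed purely for demonstration.

```python
# Illustrative sketch only (not the paper's pipeline): cluster terms by how often
# they co-occur in the same abstract, loosely mirroring a bibliometric clustering step.
# The abstracts, term list, and cluster count below are invented for demonstration.
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

abstracts = [
    "machine learning and automation in public administration services",
    "algorithmic accountability and transparency in government decision making",
    "robotic process automation for public sector service delivery",
    "transparency, accountability and trust in algorithmic governance",
]
terms = ["machine learning", "automation", "public administration", "public sector",
         "accountability", "transparency", "algorithmic", "governance"]

# Term-by-term co-occurrence counts: how often two terms appear in the same abstract.
cooc = np.zeros((len(terms), len(terms)))
for text in abstracts:
    present = [i for i, t in enumerate(terms) if t in text]
    for i, j in combinations(present, 2):
        cooc[i, j] += 1
        cooc[j, i] += 1

# Turn counts into distances (more co-occurrence = closer) and cluster hierarchically.
dist = 1.0 / (1.0 + cooc)
np.fill_diagonal(dist, 0.0)
tree = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(tree, t=3, criterion="maxclust")

for cluster_id in sorted(set(labels)):
    print(cluster_id, [t for t, c in zip(terms, labels) if c == cluster_id])
```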

Why Deliberation and Voting Belong Together


Paper by Simone Chambers & Mark E. Warren: “The field of deliberative democracy now generally recognizes the co-dependence of deliberation and voting. The field tends to emphasize what deliberation accomplishes for vote-based decisions. In this paper, we reverse this now common view to ask: In what ways does voting benefit deliberation? We discuss seven ways voting can complement and sometimes enhance deliberation. First, voting furnishes deliberation with a feasible and fair closure mechanism. Second, the power to vote implies equal recognition and status, both morally and strategically, which is a condition of democratic deliberation. Third, voting politicizes deliberation by injecting the strategic features of politics into deliberation—effectively internalizing conflict into deliberative processes, without which they can become detached from their political environments. Fourth, anticipation of voting may induce authenticity by revealing preferences, as what one says will count. Fifth, voting preserves expressions of dissent, helping to push back against socially induced pressures for consensus. Sixth, voting defines the issues, such that deliberation is focused, and thus more likely to be effective. And, seventh, within contexts where votes are public (as in representative contexts), voting can induce accountability, particularly for one’s claims. We then use these points to discuss four general types of institutions—general elections, legislatures, minipublics, and minipublics embedded in referendum processes—that combine talking and voting, with the aim of identifying designs that do a better or worse job of capitalizing upon the strengths of each…(More)”.

Setting Democratic Ground Rules for AI: Civil Society Strategies


Report by Beth Kerley: “…analyzes priorities, challenges, and promising civil society strategies for advancing democratic approaches to governing artificial intelligence (AI). The report is based on conversations from a private Forum workshop in Buenos Aires, Argentina, that brought together Latin American and global researchers and civil society practitioners.

With recent leaps in the development of AI, we are experiencing a seismic shift in the balance of power between people and governments, posing new challenges to democratic principles such as privacy, transparency, and non-discrimination. We know that AI will shape the political world we inhabit, but how can we ensure that democratic norms and institutions shape the trajectory of AI?

Drawing on global civil society perspectives, this report surveys what stakeholders need to know about AI systems and the human relationships behind them. It delves into the obstacles, from misleading narratives to government opacity to gaps in technical expertise, that hinder democratic engagement on AI governance, and explores how new thinking, new institutions, and new collaborations can better equip societies to set democratic ground rules for AI technologies…(More)”.

Europe wants to get better at planning for the worst


Article by Sarah Anne Aarup: “The European Union is beset by doom and gloom — from wars on its doorstep to inflation and the climate crisis — not to mention political instability in the U.S. and rivalry with China.

All too often, the EU has been overtaken by events, which makes the task of getting better at planning for the worst all the more pressing. 

As European leaders fought political fires at their informal summit last week in Granada, unaware that Palestinian militants would launch their devastating raid on Israel a day later, they quietly started a debate on strategic foresight.

At this stage still very much a thought experiment, the concept of “open strategic autonomy” is being championed by host Spain, the current president of the Council of the EU. The idea reflects a shift in priorities to navigate an increasingly uncertain world, and a departure from the green and digital transitions that have dominated the agenda in recent years.

To the uninitiated, the concept of open strategic autonomy sounds like an oxymoron — that’s because it is.

After the hyper-globalized early 2000s, trust in liberalism started to erode. Then the Trump-era trade wars, COVID-19 pandemic and Russia’s invasion of Ukraine exposed Europe’s economic reliance on powerful nations that are either latent — or overt — strategic rivals.

“The United States and China are becoming more self-reliant, and some voices were saying that this is what we have to do,” an official with the Spanish presidency told POLITICO. “But that’s not a good idea for Europe.”

Instead, open strategic autonomy is about shielding the EU just enough to protect its economic security while remaining an international player. In other words, it means “cooperating multilaterally wherever we can, acting autonomously wherever we must.”

It’s a grudging acceptance that great power politics now dominate economics…

The open strategic autonomy push is about countering an inward turn that was all about cutting dependencies, such as the EU’s reliance on Russian energy, after President Vladimir Putin ordered the invasion of Ukraine.

“[We’re] missing a more balanced and forward-looking strategy” following the Versailles Declaration, the Spanish official said, referring to a first response by EU leaders to the Russian attack of February 24, 2022.

Spain delivered its contribution to the debate in the form of a thick paper drafted by its foresight office, in coordination with over 80 ministries across the EU…(More)”.

AI-tocracy


Article by Peter Dizikes: “It’s often believed that authoritarian governments resist technical innovation in a way that ultimately weakens them both politically and economically. But a more complicated story emerges from a new study on how China has embraced AI-driven facial recognition as a tool of repression. 

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI,” says coauthor Martin Beraja, an MIT economist. Not only has use of the technology apparently worked to suppress dissent, but it has spurred software development. The scholars call this mutually reinforcing situation an “AI-tocracy.” 

In fact, they found, firms that were granted a government contract for facial-recognition technologies produce about 49% more software products in the two years after gaining the contract than before. “We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Adding it all up, the case of China indicates how autocratic governments can potentially find their political power enhanced, rather than upended, when they harness technological advances—and even generate more economic growth than they would have otherwise…(More)”.

Citizens’ Assemblies Are Upgrading Democracy: Fair Algorithms Are Part of the Program


Article by Ariel Procaccia: “…Taken together, these assemblies have demonstrated an impressive capacity to uncover the will of the people and build consensus.

The effectiveness of citizens’ assemblies isn’t surprising. Have you ever noticed how politicians grow a spine the moment they decide not to run for reelection? Well, a citizens’ assembly is a bit like a legislature whose members make a pact barring them from seeking another term in office. The randomly selected members are not beholden to party machinations or outside interests; they are free to speak their mind and vote their conscience.

What’s more, unlike elected bodies, these assemblies are chosen to mirror the population, a property that political theorists refer to as descriptive representation. For example, a typical citizens’ assembly has a roughly equal number of men and women (some also ensure nonbinary participation), whereas the average proportion of seats held by women in national parliaments worldwide was 26 percent in 2021—a marked increase from 12 percent in 1997 but still far from gender balance. Descriptive representation, in turn, lends legitimacy to the assembly: citizens seem to find decisions more acceptable when they are made by people like themselves.

As attractive as descriptive representation is, there are practical obstacles to realizing it while adhering to the principle of random selection. Overcoming these hurdles has been a passion of mine for the past few years. Using tools from mathematics and computer science, my collaborators and I developed an algorithm for the selection of citizens’ assemblies that many practitioners around the world are using. Its story provides a glimpse into the future of democracy—and it begins a long time ago…(More)”.
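To make the selection problem concrete, here is a minimal sketch of quota-constrained random selection via rejection sampling, assuming an invented volunteer pool, panel size, and set of quotas. It is not the fairness-optimizing algorithm the article describes; it only illustrates how descriptive representation can be imposed as hard quotas while keeping the draw random.

```python
# Illustrative sketch only: quota-constrained random selection by rejection sampling.
# The pool, panel size, and quotas are invented; this is not the fairness-optimizing
# selection algorithm described in the article.
import random
from collections import Counter

random.seed(7)

# Hypothetical volunteer pool: each person has a gender and an age group.
pool = [
    {"id": i,
     "gender": random.choice(["woman", "man"]),
     "age": random.choice(["18-34", "35-59", "60+"])}
    for i in range(500)
]

PANEL_SIZE = 20
# Quotas approximating descriptive representation: (min, max) seats per category.
quotas = {
    ("gender", "woman"): (9, 11),
    ("gender", "man"): (9, 11),
    ("age", "18-34"): (5, 8),
    ("age", "35-59"): (6, 9),
    ("age", "60+"): (5, 8),
}

def satisfies_quotas(panel):
    """Check every (feature, value) count against its allowed (min, max) range."""
    for (feature, value), (low, high) in quotas.items():
        count = sum(1 for person in panel if person[feature] == value)
        if not low <= count <= high:
            return False
    return True

def draw_panel(pool, max_tries=100_000):
    """Draw uniformly random panels until one meets all quotas."""
    for _ in range(max_tries):
        panel = random.sample(pool, PANEL_SIZE)
        if satisfies_quotas(panel):
            return panel
    raise RuntimeError("no quota-satisfying panel found")

panel = draw_panel(pool)
print(Counter(person["gender"] for person in panel))
print(Counter(person["age"] for person in panel))
```

The algorithm the article refers to goes further, choosing among quota-satisfying panels in a way that keeps each volunteer’s chance of selection as equal as possible, rather than simply redrawing until the quotas happen to be met.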

How Americans View Data Privacy


Pew Research: “…Americans – particularly Republicans – have grown more concerned about how the government uses their data. The share who say they are worried about government use of people’s data has increased from 64% in 2019 to 71% today. That reflects rising concern among Republicans (from 63% to 77%), while Democrats’ concern has held steady. (Each group includes those who lean toward the respective party.)

The public increasingly says they don’t understand what companies are doing with their data. Some 67% say they understand little to nothing about what companies are doing with their personal data, up from 59%.

Most believe they have little to no control over what companies or the government do with their data. While these shares have ticked down compared with 2019, vast majorities feel this way about data collected by companies (73%) and the government (79%).

We’ve studied Americans’ views on data privacy for years. The topic remains in the national spotlight today, and it’s particularly relevant given the policy debates ranging from regulating AI to protecting kids on social media. But these are far from abstract concepts. They play out in the day-to-day lives of Americans in the passwords they choose, the privacy policies they agree to and the tactics they take – or not – to secure their personal information. We surveyed 5,101 U.S. adults using Pew Research Center’s American Trends Panel to give voice to people’s views and experiences on these topics.

In addition to the key findings covered on this page, the three chapters of this report provide more detail on: