Book by Dan Honig: “…argues that the performance of our governments can be transformed by managing bureaucrats for their empowerment rather than for compliance. Aimed at public sector workers, leaders, academics, and citizens alike, it contends that public sectors too often rely on a managerial approach that seeks to tightly monitor and control employees, and thus demotivates and repels the mission-motivated. The book suggests that better performance can in many cases come from a more empowerment-oriented managerial approach—one that allows autonomy, cultivates feelings of competence, and creates connection to peers and purpose, letting the mission-motivated thrive. Against conventional wisdom, the volume argues that compliance often thwarts, rather than enhances, public value—and that we can often get less corruption and malfeasance with less monitoring. It provides a handbook of strategies for managers to introduce empowerment-oriented practices into their agency, and describes what everyday citizens can do to support the empowerment of bureaucrats in their governments. Interspersed throughout the book are profiles of real-life Mission Driven Bureaucrats, who exemplify the dedication and motivation typical of many civil servants. Drawing on original empirical data from a number of countries and the prior work of scholars from around the globe, the volume argues that empowerment-oriented management, and how to cultivate, support, attract, and retain Mission Driven Bureaucrats, should have a larger place in our thinking and practice…(More)”.
Governance in silico: Experimental sandbox for policymaking over AI Agents
Paper by Denisa Reshef Kera, Eilat Navon and Galit Wellner: “The concept of ‘governance in silico’ summarizes and questions the various design and policy experiments with synthetic data and content in public policy, such as synthetic data simulations, AI agents, and digital twins. While it acknowledges the risks of AI-generated hallucinations, errors, and biases, often reflected in the parameters and weights of the ML models, it focuses on the prompts. Prompts enable stakeholder negotiation and representation of diverse agendas and perspectives that support experimental and inclusive policymaking. To explore the prompts’ engagement qualities, we conducted a pilot study on co-designing AI agents for negotiating contested aspects of the EU Artificial Intelligence Act (EU AI Act). The experiments highlight the value of an ‘exploratory sandbox’ approach, which fosters political agency through direct representation over AI agent simulations. We conclude that such an exploratory ‘governance in silico’ approach enhances public consultation and engagement and presents a valuable alternative to the frequently overstated promises of evidence-based policy…(More)”.
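The paper centers the role of prompts in giving stakeholders direct representation through AI agent simulations. Purely as an illustration of that idea, and not the authors' own materials, below is a minimal sketch of persona-prompted agents stating positions on a contested EU AI Act question; it assumes the `openai` Python package and an OpenAI-compatible chat API, and the personas, question, and model name are invented:

```python
# Illustrative sketch only: simulating stakeholder "AI agents" via persona prompts.
# Assumes the `openai` package and an OpenAI-compatible chat API; the personas,
# question, and model name are hypothetical, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each stakeholder agenda is encoded as a system prompt the agent must argue from.
PERSONAS = {
    "startup_founder": "You represent small AI startups. Argue for proportionate, "
                       "low-cost compliance duties under the EU AI Act.",
    "civil_rights_ngo": "You represent a digital rights NGO. Argue for strict limits "
                        "on biometric and emotion-recognition systems.",
    "regulator": "You represent a national market-surveillance authority. Focus on "
                 "enforceability and clear risk-tier definitions.",
}

QUESTION = "Should general-purpose AI models face the same obligations as high-risk systems?"

def agent_reply(persona_prompt: str, question: str) -> str:
    """Ask one persona-conditioned agent for its negotiating position."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for name, prompt in PERSONAS.items():
        print(f"--- {name} ---")
        print(agent_reply(prompt, QUESTION))
```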
Connecting the dots: AI is eating the web that enabled it
Article by Tom Wheeler: “The large language models (LLMs) of generative AI that scraped their training data from websites are now using that data to eliminate the need to go to many of those same websites. Respected digital commentator Casey Newton concluded, “the web is entering a state of managed decline.” The Washington Post headline was more dire: “Web publishers brace for carnage as Google adds AI answers.”…
Created by Sir Tim Berners-Lee in 1989, the World Wide Web redefined the nature of the internet into a user-friendly linkage of diverse information repositories. “The first decade of the web…was decentralized with a long-tail of content and options,” Berners-Lee wrote this year on the occasion of its 35th anniversary. Over the intervening decades, that vision of distributed sources of information has faced multiple challenges. The dilution of decentralization began with powerful centralized hubs such as Facebook and Google that directed user traffic. Now comes the ultimate disintegration of Berners-Lee’s vision as generative AI reduces traffic to websites by recasting their information.
The web’s open access to the world’s information trained the large language models (LLMs) of generative AI. Now, those generative AI models are coming for their progenitor.
The web allowed users to discover diverse sources of information from which to draw conclusions. AI cuts out the intellectual middleman to go directly to conclusions from a centralized source.
The AI paradigm of cutting out the middleman appears to have been further advanced by Apple’s recent announcement that it will incorporate OpenAI’s technology to enable its Siri app to provide ChatGPT-like answers. With this new deal, Apple becomes an AI-based disintermediator, not only eliminating the need to go to websites, but also potentially disintermediating the Google search engine, whose default placement Google has reportedly been paying Apple $20 billion annually to secure.
Studies from The Atlantic, the University of Toronto, and Gartner suggest the Pew research on website mortality could be just the beginning. Generative AI’s ability to deliver conclusions cannibalizes traffic to individual websites, threatening the raison d’être of all websites, especially those that are commercially supported…(More)”
Using AI to Inform Policymaking
Paper for the AI4Democracy series at The Center for the Governance of Change at IE University: “Good policymaking requires a multifaceted approach, incorporating diverse tools and processes to address the varied needs and expectations of constituents. The paper by Turan and McKenzie focuses on an LLM-based tool, “Talk to the City” (TttC), developed to facilitate collective decision-making by soliciting, analyzing, and organizing public opinion. This tool has been tested in three distinct applications:
1. Finding Shared Principles within Constituencies: Through large-scale citizen consultations, TttC helps identify common values and priorities.
2. Compiling Shared Experiences in Community Organizing: The tool aggregates and synthesizes the experiences of community members, providing a cohesive overview.
3. Action-Oriented Decision Making in Decentralized Governance: TttC supports decision-making processes in decentralized governance structures by providing actionable insights from diverse inputs.
CAPABILITIES AND BENEFITS OF LLM TOOLS
LLMs, when applied to democratic decision-making, offer significant advantages:
- Processing Large Volumes of Qualitative Inputs: LLMs can handle extensive qualitative data, summarizing discussions and identifying overarching themes with high accuracy.
- Producing Aggregate Descriptions in Natural Language: The ability to generate clear, comprehensible summaries from complex data makes these tools invaluable for communicating nuanced topics.
- Facilitating Understanding of Constituents’ Needs: By organizing public input, LLM tools help leaders gain a better understanding of their constituents’ needs and priorities.
CASE STUDIES AND TOOL EFFICACY
The paper presents case studies using TttC, demonstrating its effectiveness in improving collective deliberation and decision-making. Key functionalities include:
- Aggregating Responses and Clustering Ideas: TttC identifies common themes and divergences within a population’s opinions.
- Interactive Interface for Exploration: The tool provides an interactive platform for exploring the diversity of opinions at both individual and group scales, revealing complexity, common ground, and polarization…(More)”
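To make “aggregating responses and clustering ideas” concrete, here is a minimal sketch of the general approach rather than TttC's published pipeline: free-text responses are embedded, grouped into themes, and a representative quote is surfaced for each theme. It assumes scikit-learn is installed, and the sample responses and cluster count are invented:

```python
# Minimal sketch (not TttC's actual pipeline): cluster free-text citizen responses
# into rough "themes" and surface the most central response per theme.
# Assumes scikit-learn is installed; sample responses and cluster count are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import numpy as np

responses = [
    "We need more frequent buses in the outer neighborhoods.",
    "Public transport should run later at night.",
    "The city should plant more trees along main streets.",
    "More green spaces would make the downtown livable.",
    "Bus fares are too high for low-income residents.",
    "Parks near schools need better maintenance.",
]

# Embed responses as TF-IDF vectors (an LLM embedding model could be swapped in here).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Group responses into a small number of themes.
n_clusters = 2
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

# For each theme, report its size and the response closest to the cluster centroid.
for c in range(n_clusters):
    members = np.where(kmeans.labels_ == c)[0]
    distances = kmeans.transform(X[members])[:, c]
    representative = responses[members[np.argmin(distances)]]
    print(f"Theme {c} ({len(members)} responses): {representative}")
```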
Is Software Eating the World?
Paper by Sangmin Aum & Yongseok Shin: “When explaining the declining labor income share in advanced economies, the macro literature finds that the elasticity of substitution between capital and labor is greater than one. However, the vast majority of micro-level estimates show that capital and labor are complements (elasticity less than one). Using firm- and establishment-level data from Korea, we divide capital into equipment and software, as they may interact with labor in different ways. Our estimation shows that equipment and labor are complements (elasticity 0.6), consistent with other micro-level estimates, but software and labor are substitutes (1.6), a novel finding that helps reconcile the macro vs. micro-literature elasticity discord. As the quality of software improves, labor shares fall within firms because of factor substitution and endogenously rising markups. In addition, production reallocates toward firms that use software more intensively, as they become effectively more productive. Because in the data these firms have higher markups and lower labor shares, the reallocation further raises the aggregate markup and reduces the aggregate labor share. The rise of software accounts for two-thirds of the labor share decline in Korea between 1990 and 2018. The factor substitution and the markup channels are equally important. On the other hand, the falling equipment price plays a minor role, because the factor substitution and the markup channels offset each other…(More)”.
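For readers outside economics, the elasticity figures refer to the standard constant-elasticity-of-substitution (CES) setup. The textbook formulation below, which is not the paper's exact specification, shows what an elasticity above or below one means for how capital and labor combine:

```latex
% Textbook CES production function; not the paper's exact specification.
% Y combines capital K and labor L with substitution elasticity sigma.
\[
  Y = \left[ \alpha K^{\frac{\sigma - 1}{\sigma}} + (1 - \alpha) L^{\frac{\sigma - 1}{\sigma}} \right]^{\frac{\sigma}{\sigma - 1}},
  \qquad
  \sigma = \frac{d\,\ln(K/L)}{d\,\ln(MP_L / MP_K)} .
\]
% sigma > 1 (software vs. labor, estimated at 1.6): the factors are substitutes,
%   so cheaper or better capital pulls income share away from labor.
% sigma < 1 (equipment vs. labor, estimated at 0.6): the factors are complements,
%   so cheaper capital does not push labor's share down.
```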
The use of AI for improving energy security
Rand Report: “Electricity systems around the world are under pressure due to aging infrastructure, rising demand for electricity and the need to decarbonise energy supplies at pace. Artificial intelligence (AI) applications have potential to help address these pressures and increase overall energy security. For example, AI applications can reduce peak demand through demand response, improve the efficiency of wind farms and facilitate the integration of large numbers of electric vehicles into the power grid. However, the widespread deployment of AI applications could also come with heightened cybersecurity risks, the risk of unexplained or unexpected actions, or supplier dependency and vendor lock-in. The speed at which AI is developing means many of these opportunities and risks are not yet well understood.
The aim of this study was to provide insight into the state of AI applications for the power grid and the associated risks and opportunities. Researchers conducted a focused scan of the scientific literature to find examples of relevant AI applications in the United States, the European Union, China and the United Kingdom…(More)”.
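The report is a literature scan rather than a technical guide, but as a toy illustration of the demand-response idea mentioned above, the sketch below shifts flexible EV-charging load out of the evening peak and into the cheapest hours; all load figures and prices are invented:

```python
# Toy illustration of demand response (not drawn from the RAND report): shift flexible
# EV-charging load out of the evening peak toward the cheapest hours of the day.
# All hourly loads (MW) and prices are invented numbers for demonstration only.

base_load = [30, 28, 27, 27, 29, 35, 45, 55, 60, 58, 55, 52,
             50, 50, 52, 55, 62, 70, 75, 72, 65, 55, 45, 35]   # inflexible demand by hour
price = [10, 9, 9, 8, 8, 12, 18, 25, 28, 26, 24, 22,
         20, 20, 22, 25, 32, 40, 45, 42, 35, 25, 18, 12]       # price signal per hour
flexible_ev_load = 40.0   # MWh of EV charging that can be scheduled at any hour

def schedule_flexible_load(base, prices, flexible, n_cheap_hours=8):
    """Spread the flexible load evenly over the n cheapest hours of the day."""
    load = list(base)
    cheapest = sorted(range(len(base)), key=lambda h: prices[h])[:n_cheap_hours]
    for hour in cheapest:
        load[hour] += flexible / n_cheap_hours
    return load

# Compare a naive schedule (charging spread evenly over all hours) with a
# price-following schedule that concentrates charging in off-peak hours.
naive = [b + flexible_ev_load / len(base_load) for b in base_load]
smart = schedule_flexible_load(base_load, price, flexible_ev_load)

print(f"Peak load with evenly spread charging:   {max(naive):.1f} MW")
print(f"Peak load with price-following charging: {max(smart):.1f} MW")
```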
The Behavioral Scientists Working Toward a More Peaceful World
Interview by Heather Graci: “…Nation-level data doesn’t help us understand community-level conflict. Without understanding community-level conflict, it becomes much harder to design policies to prevent it.
Cikara: “So much of the data that we have is at the level of the nation, when our effects are all happening at very local levels. You see these reports that say, “In Germany, 14 percent of the population is immigrants.” It doesn’t matter at the national level, because they’re not distributed evenly across the geography. That means that some communities are going to be at greater risk for conflict than others. But that sort of local variation and sensitivity to it, at least heretofore, has really been missing from the conversation on the research side. Even when you’re in the same place, in the same country within the same state, the same canton, there can still be a ton of variation from neighborhood to neighborhood.
“The other thing that we know matters a lot is not just the diversity of these neighborhoods but the segregation of them. It turns out that these kinds of prejudices and violence are less likely to break out in those places where it’s both diverse and people are interdigitated with how they live. So it’s not just the numbers, it’s also the spatial organization.
“For example, in Singapore, because so much of the real estate is state-owned, they make it so that people who are coming from different countries can’t cluster together because they assign them to live separate from one another in order to prevent these sorts of enclaves. All these structural and meta-level organizational features have really, really important inputs for intergroup dynamics and psychology.”…(More)”.
Why policy failure is a prerequisite for innovation in the public sector
Blog by Philipp Trein and Thenia Vagionaki: “In our article, “Why policy failure is a prerequisite for innovation in the public sector,” we explore the relationship between policy failure and innovation in public governance. Drawing inspiration from the “Innovator’s Dilemma”—a theory from the management literature—we argue that the very nature of policymaking, characterized by voter myopia, blame avoidance by decisionmakers, and the complexity (ill-structuredness) of societal challenges, means that innovation tends to come only after existing policies have failed.
Our analysis implies that we need to be more critical of what the policy process can achieve in terms of public sector innovation. According to the “Innovator’s Dilemma”, decisionmakers’ cognitive limitations tend to produce misperceptions of problems and inaccurate assessments of risk. This implies that true innovation (non-trivial policy change) is unlikely to happen before an existing policy has failed visibly. Our perspective is not meant to paint a gloomy picture of public policymaking, however, but rather offers a more realistic interpretation of what public sector innovation can achieve. As a consequence, learning from experts in the policy process should be expected to correct failures in public sector problem-solving during the political process, rather than to raise expectations beyond what is possible.
The potential impact of our findings is profound. For practitioners and policymakers, this insight offers a new lens through which to evaluate the failure and success of public policies. Our work advocates a paradigm shift in how we perceive, manage, and learn from policy failures in the public sector, and in the expectations we hold for learning and the use of evidence in policymaking. By embracing the limitations of innovation in public policy, we can better manage expectations and structure the narrative regarding the capacity of public policy to address collective problems…(More)”.
The Character of Consent
Book by Meg Leta Jones about The History of Cookies and the Future of Technology Policy: “Consent pop-ups continually ask us to download cookies to our computers, but is this all-too-familiar form of privacy protection effective? No, Meg Leta Jones explains in The Character of Consent, rather than promote functionality, privacy, and decentralization, cookie technology has instead made the internet invasive, limited, and clunky. Good thing, then, that the cookie is set for retirement in 2024. In this eye-opening book, Jones tells the little-known story of this broken consent arrangement, tracing it back to the major transnational conflicts around digital consent over the last twenty-five years. What she finds is that the policy controversy is not, in fact, an information crisis—it’s an identity crisis.
Instead of asking how people consent, Jones asks who exactly is consenting and to what. Packed into those cookie pop-ups, she explains, are three distinct areas of law with three different characters who can consent. Within (mainly European) data protection law, the data subject consents. Within communication privacy law, the user consents. And within consumer protection law, the privacy consumer consents. These areas of law have very different histories, motivations, institutional structures, expertise, and strategies, so consent—and the characters who can consent—plays a unique role in those areas of law….(More)”.
Can Artificial Intelligence Bring Deliberation to the Masses?
Chapter by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation, on the one hand, and mass participation, on the other. Might artificial intelligence help bring quality deliberation to the masses? The answer is a qualified yes. The chapter first examines the conundrum in deliberative democracy around the trade-off between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a two-month-long structured exercise of collective deliberation. Building on the shortcomings of this process, the chapter then considers two different visions for an algorithm-powered form of mass deliberation—Mass Online Deliberation (MOD), on the one hand, and Many Rotating Mini-publics (MRMs), on the other—theorizing various ways artificial intelligence could play a role in them. To the extent that artificial intelligence makes the possibility of either vision more likely to come to fruition, it carries with it the promise of deliberation at the very large scale….(More)”