Paper by Jenni Owen: “Researchers often lament that government decision-makers do not generate or use research evidence. People in government often lament that researchers are not responsive to government’s needs. Yet there is increasing enthusiasm in government, research, and philanthropy sectors for developing, investing in, and sustaining government-research partnerships that focus on government’s use of evidence. There is, however, scant guidance about how to do so. To help fill the gap, this essay addresses (1) Why government-research partnerships matter; (2) Barriers to developing government-research partnerships; (3) Strategies for addressing the barriers; (4) The role of philanthropy in government-research partnerships. The momentum to develop, invest in, and sustain cross-sector partnerships that advance government’s use of evidence is exciting. It is especially encouraging that there are feasible and actionable strategies for doing so…(More)”.
Governance in silico: Experimental sandbox for policymaking over AI Agents
Paper by Denisa Reshef Kera, Eilat Navon, and Galit Well: “The concept of ‘governance in silico’ summarizes and questions the various design and policy experiments with synthetic data and content in public policy, such as synthetic data simulations, AI agents, and digital twins. While it acknowledges the risks of AI-generated hallucinations, errors, and biases, often reflected in the parameters and weights of the ML models, it focuses on the prompts. Prompts enable stakeholder negotiation and representation of diverse agendas and perspectives that support experimental and inclusive policymaking. To explore the prompts’ engagement qualities, we conducted a pilot study on co-designing AI agents for negotiating contested aspects of the EU Artificial Intelligence Act (EU AI Act). The experiments highlight the value of an ‘exploratory sandbox’ approach, which fosters political agency through direct representation over AI agent simulations. We conclude that such an exploratory ‘governance in silico’ approach enhances public consultation and engagement and presents a valuable alternative to the frequently overstated promises of evidence-based policy…(More)”.
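The sandbox idea is easy to picture in code. Below is a minimal, hypothetical sketch (not the authors' implementation) of stakeholder-specific prompts steering AI agents that negotiate a contested provision of the EU AI Act; `llm_complete` is a stand-in for whichever chat-model API one actually uses.

```python
# Minimal sketch of a 'governance in silico' sandbox: stakeholder agendas are
# encoded as prompts, and AI agents respond in turns on a contested provision.
# Hypothetical throughout; `llm_complete` is a placeholder, not a real API.

STAKEHOLDER_PROMPTS = {
    "regulator": "You represent an EU regulator. Prioritize fundamental rights "
                 "and enforceable risk classification under the EU AI Act.",
    "startup": "You represent a small AI startup. Prioritize low compliance "
               "costs and clear, proportionate obligations.",
    "civil_society": "You represent a civil-society group. Prioritize "
                     "transparency, redress, and bans on harmful uses.",
}

def llm_complete(system_prompt: str, context: str) -> str:
    """Placeholder for a language-model call (assumption, not a real client)."""
    raise NotImplementedError("wire up your preferred LLM client here")

def negotiate(topic: str, rounds: int = 3) -> list[tuple[str, str]]:
    """Let each stakeholder agent speak in turn, building a shared transcript."""
    transcript: list[tuple[str, str]] = []
    for _ in range(rounds):
        for role, prompt in STAKEHOLDER_PROMPTS.items():
            history = "\n".join(f"{r}: {msg}" for r, msg in transcript)
            reply = llm_complete(prompt, f"Topic: {topic}\n{history}")
            transcript.append((role, reply))
    return transcript

# negotiate("Scope of the 'high-risk' classification for biometric systems")
```

The point of such a sketch is that the political content lives in the prompts, which stakeholders can inspect and co-design directly, rather than in opaque model weights.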
Is Software Eating the World?
Paper by Sangmin Aum & Yongseok Shin: “When explaining the declining labor income share in advanced economies, the macro literature finds that the elasticity of substitution between capital and labor is greater than one. However, the vast majority of micro-level estimates show that capital and labor are complements (elasticity less than one). Using firm- and establishment-level data from Korea, we divide capital into equipment and software, as they may interact with labor in different ways. Our estimation shows that equipment and labor are complements (elasticity 0.6), consistent with other micro-level estimates, but software and labor are substitutes (1.6), a novel finding that helps reconcile the discord between macro- and micro-level elasticity estimates. As the quality of software improves, labor shares fall within firms because of factor substitution and endogenously rising markups. In addition, production reallocates toward firms that use software more intensively, as they become effectively more productive. Because in the data these firms have higher markups and lower labor shares, the reallocation further raises the aggregate markup and reduces the aggregate labor share. The rise of software accounts for two-thirds of the labor share decline in Korea between 1990 and 2018. The factor substitution and the markup channels are equally important. On the other hand, the falling equipment price plays a minor role, because the factor substitution and the markup channels offset each other…(More)”.
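The factor-substitution channel follows from a textbook CES setup. The sketch below is a standard illustration under simple competitive-pricing assumptions, not the authors' full model (which also features endogenous markups and reallocation):

```latex
% Textbook CES illustration (not the paper's full model). Output from software
% capital S and labor L, with elasticity of substitution \sigma:
\[
  Y = \bigl[\alpha S^{\rho} + (1-\alpha) L^{\rho}\bigr]^{1/\rho},
  \qquad \rho = \frac{\sigma - 1}{\sigma}.
\]
% Under competitive factor pricing, the labor share depends only on S/L:
\[
  s_L = \frac{(1-\alpha) L^{\rho}}{\alpha S^{\rho} + (1-\alpha) L^{\rho}}
      = \frac{1-\alpha}{\alpha (S/L)^{\rho} + (1-\alpha)}.
\]
% Hence s_L falls as S/L rises if and only if \rho > 0, i.e. \sigma > 1.
% With the paper's estimates, software deepening (\sigma \approx 1.6) lowers
% the labor share, while equipment deepening (\sigma \approx 0.6) raises it.
```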
Enrolling Citizens: A Primer on Archetypes of Democratic Engagement with AI
Paper by Wanheng Hu and Ranjit Singh: “In response to rapid advances in artificial intelligence, lawmakers, regulators, academics, and technologists alike are sifting through technical jargon and marketing hype as they take on the challenge of safeguarding citizens from the technology’s potential harms while maximizing their access to its benefits. A common feature of these efforts is including citizens throughout the stages of AI development and governance. Yet doing so is impossible without a clear vision of what citizens ideally should do. This primer takes up this imperative and asks: What approaches can ensure that citizens have meaningful involvement in the development of AI, and how do these approaches envision the role of a “good citizen”?
The primer highlights three major approaches to involving citizens in AI — AI literacy, AI governance, and participatory AI — each of them premised on the importance of enrolling citizens but envisioning different roles for citizens to play. While recognizing that it is largely impossible to come up with a universal standard for building AI in the public interest, and that all approaches will remain local and situated, this primer invites a critical reflection on the underlying assumptions about technology, democracy, and citizenship that ground how we think about the ethics and role of public(s) in large-scale sociotechnical change…(More)”.
Why policy failure is a prerequisite for innovation in the public sector
Blog by Philipp Trein and Thenia Vagionaki: “In our article entitled “Why policy failure is a prerequisite for innovation in the public sector,” we explore the relationship between policy failure and innovation within public governance. Drawing inspiration from the “Innovator’s Dilemma,” a theory from the management literature, we argue that the very nature of policymaking, characterized by the myopia of voters, blame avoidance by decision-makers, and the complexity (ill-structuredness) of societal challenges, has an inherent tendency to produce innovation only after existing policies have failed.
Our analysis implies that we need to be more critical of what the policy process can achieve in terms of public sector innovation. According to the “Innovator’s Dilemma,” cognitive limitations tend to lead decision-makers to misperceive problems and assess risks inaccurately. This implies that true innovation (non-trivial policy change) is unlikely to happen before an existing policy has failed visibly. Our perspective is not meant to paint a gloomy picture of public policymaking, however, but rather to offer a more realistic interpretation of what public sector innovation can achieve. Consequently, learning from experts in the policy process should be expected to correct failures in public sector problem-solving during the political process, rather than to raise expectations beyond what is possible.
The potential impact of our findings is profound. For practitioners and policymakers, this insight offers a new lens through which to evaluate the failure and success of public policies. Our work advocates a paradigm shift in how we perceive, manage, and learn from policy failures in the public sector, and in the expectations we hold for learning and the use of evidence in policymaking. By embracing the limitations of innovation in public policy, we can better manage expectations and structure the narrative regarding the capacity of public policy to address collective problems…(More)”.
How to optimize the systematic review process using AI tools
Paper by Nicholas Fabiano et al: “Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews are less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion and exclusion criteria, extracting essential data from studies, and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that authors must report all AI tools used at each stage, as part of the methods reporting, to ensure replicability…(More)”.
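As one concrete example of the screening stage, the sketch below shows how title/abstract screening against inclusion criteria could be wrapped around an AI call. It is a hypothetical illustration: `classify_with_llm` stands in for whatever tool an author actually uses, which, per the article's point, must then be reported in the methods.

```python
# Hypothetical sketch of AI-assisted title/abstract screening for a systematic
# review. `classify_with_llm` is a placeholder, not a specific tool's API.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str

INCLUSION_CRITERIA = (
    "Include only randomized controlled trials in adult populations "
    "that report depression outcomes."  # example criteria, not from the paper
)

def classify_with_llm(criteria: str, record: Record) -> bool:
    """Placeholder for an AI call returning include/exclude (assumption)."""
    raise NotImplementedError("wire up the screening tool you actually use")

def screen(records: list[Record]) -> tuple[list[Record], list[Record]]:
    """Split records into included and excluded sets for human verification."""
    included: list[Record] = []
    excluded: list[Record] = []
    for rec in records:
        (included if classify_with_llm(INCLUSION_CRITERIA, rec) else excluded).append(rec)
    return included, excluded
```

In line with the article's replicability advice, the tool, version, and prompt would be reported verbatim, and a sample of machine decisions verified by human reviewers.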
The Social Value of Hurricane Forecasts
Paper by Renato Molina & Ivan Rudik: “What is the impact and value of hurricane forecasts? We study this question using newly collected forecast data for major US hurricanes since 2005. We find that higher wind speed forecasts increase pre-landfall protective spending, but erroneous under-forecasts increase post-landfall damage and rebuilding expenditures. Our main contribution is a new theoretically-grounded approach for estimating the marginal value of forecast improvements. We find that the average annual improvement reduced total per-hurricane costs, inclusive of unobserved protective spending, by $700,000 per county. Improvements since 2007 reduced costs by 19%, averaging $5 billion per hurricane. This exceeds the annual budget for all federal weather forecasting…(More)”.
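The underlying logic can be stated as a stylized decision problem. The following is a hedged sketch of that logic under simple assumptions, not the paper's exact estimating framework:

```latex
% Stylized sketch (not the paper's exact model). A county observes a wind
% forecast f of realized wind w and chooses protective spending p:
\[
  \min_{p \ge 0} \; p + \mathbb{E}\bigl[ D(w, p) \mid f \bigr],
\]
% where damage D is increasing in w and decreasing in p. An under-forecast
% (f < w) induces too little protection and higher realized damage, matching
% the empirical pattern. If \sigma denotes forecast-error dispersion, the
% marginal value of a forecast improvement is the reduction in expected total
% cost, inclusive of protective spending:
\[
  V = -\frac{\partial}{\partial \sigma}\,
      \mathbb{E}\bigl[ p^{*}(f) + D\bigl(w, p^{*}(f)\bigr) \bigr].
\]
```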
Data Statements: From Technical Concept to Community Practice
Paper by Angelina McMillan-Major, Emily M. Bender, and Batya Friedman: “Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine learning models, and other technical systems have led to the creation of documentation toolkits to facilitate transparency, diagnosis, and inclusion. This work takes the next step: to catalyze community uptake, alongside toolkit improvement. Specifically, starting from one such proposed toolkit specialized for language datasets, data statements for natural language processing, we explore how to improve the toolkit in three senses: (1) the content of the toolkit itself, (2) engagement with professional practice, and (3) moving from a conceptual proposal to a tested schema that the intended community of use may readily adopt. To achieve these goals, we first conducted a workshop with natural language processing practitioners to identify gaps and limitations of the toolkit as well as to develop best practices for writing data statements, yielding an interim improved toolkit. Then we conducted an analytic comparison between the interim toolkit and another documentation toolkit, datasheets for datasets. Based on these two integrated processes, we present our revised Version 2 schema and best practices in a guide for writing data statements. Our findings more generally provide integrated processes for co-evolving both technology and practice to address ethical concerns within situated technical communities…(More)”.
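To make the idea of a documentation schema concrete, here is a hedged sketch of a data statement as a structured record. The fields loosely follow the schema elements originally proposed for NLP datasets (curation rationale, language variety, speaker and annotator demographics, speech situation, text characteristics); the field names and example values are illustrative, not the official Version 2 schema.

```python
# Illustrative structure for a data statement for an NLP dataset. Field names
# loosely follow the published schema elements; this is not the official
# Version 2 schema, and the example values are invented.
from dataclasses import dataclass

@dataclass
class DataStatement:
    curation_rationale: str        # why these texts were selected
    language_varieties: list[str]  # e.g. ["en-US, informal social media"]
    speaker_demographics: str      # who produced the language
    annotator_demographics: str    # who labeled it, and their training
    speech_situation: str          # time, place, modality, intended audience
    text_characteristics: str      # genre, topics, structure
    provenance: str = ""           # prior versions or source datasets

statement = DataStatement(
    curation_rationale="Posts sampled to study regional dialect variation.",
    language_varieties=["en-US, informal social media"],
    speaker_demographics="Self-identified US adults; details in appendix.",
    annotator_demographics="Three linguistics graduate students.",
    speech_situation="Public posts, 2019-2021, asynchronous audience.",
    text_characteristics="Short informal messages with hashtags and emoji.",
)
```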
Effects of Open Access. Literature study on empirical research 2010–2021
Paper by David Hopf, Sarah Dellmann, Christian Hauschke, and Marco Tullney: “Open access — the free availability of scholarly publications — intuitively offers many benefits. At the same time, some academics, university administrators, publishers, and political decision-makers express reservations. Many empirical studies on the effects of open access have been published in the last decade. This report provides an overview of the state of research from 2010 to 2021. The empirical results on the effects of open access help to determine the advantages and disadvantages of open access and serve as a knowledge base for academics, publishers, research-funding and research-performing institutions, and policy makers. This overview of current findings can inform decisions about open access and publishing strategies. In addition, this report identifies aspects of the impact of open access that are potentially highly relevant but have not yet been sufficiently studied…(More)”.
Artificial intelligence and the local government: A five-decade scientometric analysis on the evolution, state-of-the-art, and emerging trends
Paper by Tan Yigitcanlar et al: “In recent years, the rapid advancement of artificial intelligence (AI) technologies has significantly impacted various sectors, including public governance at the local level. However, there exists a limited understanding of the overarching narrative surrounding the adoption of AI in local governments and its future. Therefore, this study aims to provide a comprehensive overview of the evolution, current state-of-the-art, and emerging trends in the adoption of AI in local government. A comprehensive scientometric analysis was conducted on a dataset comprising 7112 relevant literature records retrieved from the Scopus database in October 2023, spanning the last five decades. The study findings revealed the following key insights: (a) exponential technological advancements over the last decades ushered in an era of AI adoption by local governments; (b) the primary purposes of AI adoption in local governments include decision support, automation, prediction, and service delivery; (c) the main areas of AI adoption in local governments encompass planning, analytics, security, surveillance, energy, and modelling; and (d) under-researched but critical research areas include ethics of and public participation in AI adoption in local governments. This study informs research, policy, and practice by offering a comprehensive understanding of the literature on AI applications in local governments, providing valuable insights for stakeholders and decision-makers…(More)”.
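The kind of trend analysis such scientometric studies perform can be illustrated in a few lines. This is a generic sketch over a hypothetical export of Scopus records (the file name and column names are assumptions), not the authors' pipeline:

```python
# Generic sketch of a scientometric trend count over a bibliographic export.
# Assumes a hypothetical CSV with 'year' and 'keywords' columns; this is not
# the authors' code or data layout.
import csv
from collections import Counter, defaultdict

def keyword_trends(path: str) -> dict[int, Counter]:
    """Count author keywords per publication year from a Scopus-style CSV."""
    trends: dict[int, Counter] = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            year = int(row["year"])
            for kw in row["keywords"].split(";"):
                trends[year][kw.strip().lower()] += 1
    return trends

# trends = keyword_trends("scopus_ai_local_government.csv")  # hypothetical file
# print(trends[2023].most_common(10))  # top themes in the most recent year
```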