Enrolling Citizens: A Primer on Archetypes of Democratic Engagement with AI

Paper by Wanheng Hu and Ranjit Singh: “In response to rapid advances in artificial intelligence, lawmakers, regulators, academics, and technologists alike are sifting through technical jargon and marketing hype as they take on the challenge of safeguarding citizens from the technology’s potential harms while maximizing their access to its benefits. A common feature of these efforts is including citizens throughout the stages of AI development and governance. Yet doing so is impossible without a clear vision of what citizens ideally should do. This primer takes up this imperative and asks: What approaches can ensure that citizens have meaningful involvement in the development of AI, and how do these approaches envision the role of a “good citizen”?

The primer highlights three major approaches to involving citizens in AI — AI literacy, AI governance, and participatory AI — each of them premised on the importance of enrolling citizens but envisioning different roles for citizens to play. While recognizing that it is largely impossible to come up with a universal standard for building AI in the public interest, and that all approaches will remain local and situated, this primer invites a critical reflection on the underlying assumptions about technology, democracy, and citizenship that ground how we think about the ethics and role of public(s) in large-scale sociotechnical change…(More)”.

Why policy failure is a prerequisite for innovation in the public sector

Blog by Philipp Trein and Thenia Vagionaki: “In our article entitled “Why policy failure is a prerequisite for innovation in the public sector,” we explore the relationship between policy failure and innovation within public governance. Drawing inspiration from the “Innovator’s Dilemma” — a theory from the management literature — we argue that the very nature of policymaking, characterized by the myopia of voters, blame avoidance by decision-makers, and the complexity (ill-structuredness) of societal challenges, has an inherent tendency to respond with innovation only after existing policies have failed.

Our analysis implies that we need to be more critical of what the policy process can achieve in terms of public sector innovation. According to the “Innovator’s Dilemma,” cognitive limitations tend to lead decision-makers to misperceive problems and assess risks inaccurately. This implies that true innovation (non-trivial policy change) is unlikely to happen before an existing policy has failed visibly. However, our perspective is not meant to paint a gloomy picture of public policymaking but rather to offer a more realistic interpretation of what public sector innovation can achieve. As a consequence, learning from experts in the policy process should be expected to correct failures in public sector problem-solving during the political process, rather than to raise expectations beyond what is possible.

The potential impact of our findings is profound. For practitioners and policymakers, this insight offers a new lens through which to evaluate the failure and success of public policies. Our work advocates a paradigm shift in how we perceive, manage, and learn from policy failures in the public sector, and in the expectations we hold for learning and the use of evidence in policymaking. By embracing the limitations of innovation in public policy, we can better manage expectations and structure the narrative regarding the capacity of public policy to address collective problems…(More)”.

How to optimize the systematic review process using AI tools

Paper by Nicholas Fabiano et al: “Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow for gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews are less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion and exclusion criteria, extracting essential data from studies, and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that, to ensure replicability, authors must report in their methods all AI tools used at each stage…(More)”.

The Social Value of Hurricane Forecasts

Paper by Renato Molina & Ivan Rudik: “What is the impact and value of hurricane forecasts? We study this question using newly collected forecast data for major US hurricanes since 2005. We find that higher wind speed forecasts increase pre-landfall protective spending, but that erroneous under-forecasts increase post-landfall damage and rebuilding expenditures. Our main contribution is a new theoretically grounded approach for estimating the marginal value of forecast improvements. We find that the average annual improvement reduced total per-hurricane costs, inclusive of unobserved protective spending, by $700,000 per county. Improvements since 2007 reduced costs by 19%, averaging $5 billion per hurricane. This exceeds the annual budget for all federal weather forecasting…(More)”.

Data Statements: From Technical Concept to Community Practice

Paper by Angelina McMillan-Major, Emily M. Bender, and Batya Friedman: “Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine learning models, and other technical systems have led to the creation of documentation toolkits to facilitate transparency, diagnosis, and inclusion. This work takes the next step: to catalyze community uptake, alongside toolkit improvement. Specifically, starting from one such proposed toolkit specialized for language datasets, data statements for natural language processing, we explore how to improve the toolkit in three senses: (1) the content of the toolkit itself, (2) engagement with professional practice, and (3) moving from a conceptual proposal to a tested schema that the intended community of use may readily adopt. To achieve these goals, we first conducted a workshop with natural language processing practitioners to identify gaps and limitations of the toolkit as well as to develop best practices for writing data statements, yielding an interim improved toolkit. Then we conducted an analytic comparison between the interim toolkit and another documentation toolkit, datasheets for datasets. Based on these two integrated processes, we present our revised Version 2 schema and best practices in a guide for writing data statements. Our findings more generally provide integrated processes for co-evolving both technology and practice to address ethical concerns within situated technical communities…(More)”.

Effects of Open Access. Literature study on empirical research 2010–2021

Paper by David Hopf, Sarah Dellmann, Christian Hauschke, and Marco Tullney: “Open access — the free availability of scholarly publications — intuitively offers many benefits. At the same time, some academics, university administrators, publishers, and political decision-makers express reservations. Many empirical studies on the effects of open access have been published in the last decade. This report provides an overview of the state of research from 2010 to 2021. The empirical results on the effects of open access help to determine the advantages and disadvantages of open access and serve as a knowledge base for academics, publishers, research funding and research performing institutions, and policy makers. This overview of current findings can inform decisions about open access and publishing strategies. In addition, this report identifies aspects of the impact of open access that are potentially highly relevant but have not yet been sufficiently studied…(More)”.

Artificial intelligence and the local government: A five-decade scientometric analysis on the evolution, state-of-the-art, and emerging trends

Paper by Tan Yigitcanlar et al: “In recent years, the rapid advancement of artificial intelligence (AI) technologies has significantly impacted various sectors, including public governance at the local level. However, there exists a limited understanding of the overarching narrative surrounding the adoption of AI in local governments and its future. Therefore, this study aims to provide a comprehensive overview of the evolution, current state-of-the-art, and emerging trends in the adoption of AI in local government. A comprehensive scientometric analysis was conducted on a dataset comprising 7112 relevant literature records retrieved from the Scopus database in October 2023, spanning the last five decades. The study findings revealed the following key insights: (a) exponential technological advancements over the last decades ushered in an era of AI adoption by local governments; (b) the primary purposes of AI adoption in local governments include decision support, automation, prediction, and service delivery; (c) the main areas of AI adoption in local governments encompass planning, analytics, security, surveillance, energy, and modelling; and (d) under-researched but critical research areas include the ethics of, and public participation in, AI adoption in local governments. This study informs research, policy, and practice by offering a comprehensive understanding of the literature on AI applications in local governments, providing valuable insights for stakeholders and decision-makers…(More)”.

Using ChatGPT to Facilitate Truly Informed Medical Consent

Paper by Fatima N. Mirza: “Informed consent is integral to the practice of medicine. Most informed consent documents are written at a reading level that surpasses the reading comprehension level of the average American. Large language models, a type of artificial intelligence (AI) with the ability to summarize and revise content, present a novel opportunity to make the language used in consent forms more accessible to the average American and thus improve the quality of informed consent. In this study, we present the experience of the largest health care system in the state of Rhode Island in implementing AI to improve the readability of informed consent documents, highlighting one tangible application for emerging AI in the clinical setting…(More)”.

Artificial Intelligence Applications for Social Science Research

Report by Megan Stubbs-Richardson et al: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, the AI tool had to be useful for: 1) literature reviews, summaries, or writing, 2) data collection, analysis, or visualizations, or 3) research dissemination. In the database, we provide a name, description, and links to each of the AI tools that were current at the time of publication on September 29, 2023. Supporting links were provided when an AI tool was found using other databases. To help users evaluate the potential usefulness of each tool, we documented information about costs, log-in requirements, and whether plug-ins or browser extensions are available for each tool. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when the AI tools were useful for text-based data, such as social media. This database includes 132 AI tools that may have use for literature reviews or writing; 146 tools that may have use for data collection, analyses, or visualizations; and 108 that may be used for dissemination efforts. While 170 of the AI tools within this database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”.

Designing for AI Transparency in Public Services: A User-Centred Study of Citizens’ Preferences

Paper by Stefan Schmager, Samrat Gupta, Ilias Pappas & Polyxeni Vassilakopoulou: “Enhancing transparency in AI-enabled public services has the potential to improve their adoption and service delivery. Hence, it is important to identify effective design strategies for AI transparency in public services. To this end, we conduct this empirical qualitative study, providing insights for the responsible deployment of AI in practice by public organizations. We design an interactive prototype for a Norwegian public welfare service organization which aims to use AI to support sick-leave-related services. Qualitative analysis of citizens’ data collected through a survey, think-aloud interactions with the prototype, and open-ended questions revealed three key themes related to: articulating information in written form, representing information in graphical form, and establishing the appropriate level of information detail for improving AI transparency in public service delivery. This study advances research pertaining to the design of public service portals and has implications for AI implementation in the public sector…(More)”.