Handbook on Public Policy and Artificial Intelligence


Book edited by Regine Paul, Emma Carmel and Jennifer Cobbe: “…explores the relationship between public policy and artificial intelligence (AI) technologies across a broad range of geographical, technical, political and policy contexts. It contributes to critical AI studies, focusing on the intersection of the norms, discourses, policies, practices and regulation that shape AI in the public sector.

Expert authors in the field discuss the creation and use of AI technologies, and how public authorities respond to their development, by bringing together emerging scholarly debates about AI technologies with longer-standing insights on public administration, policy, regulation and governance. Contributions in the Handbook mobilize diverse perspectives to critically examine techno-solutionist approaches to public policy and AI, dissect the politico-economic interests underlying AI promotion and analyse implications for sustainable development, fairness and equality. Ultimately, this Handbook questions whether regulatory concepts such as ethical, trustworthy or accountable AI safeguard a democratic future or contribute to a problematic de-politicization of the public sector…(More)”.


How to optimize the systematic review process using AI tools


Paper by Nicholas Fabiano et al: “Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews are less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion and exclusion criteria, extracting essential data from studies, and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that authors must report, in their methods, all AI tools used at each stage to ensure replicability…(More)”.
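
To picture the screening stage the authors describe, below is a minimal sketch of how a language model might be asked to apply inclusion and exclusion criteria to a title and abstract. Everything here is an assumption for illustration (the model name, prompt wording, and criteria are not from the paper), and any model output would still require human verification and full reporting in the methods.

```python
# Hypothetical sketch of LLM-assisted title/abstract screening; not from the paper.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative criteria; a real review would use its registered protocol.
CRITERIA = (
    "Include only randomized controlled trials with adult participants "
    "that report a depression outcome. Exclude everything else."
)

def screen(title: str, abstract: str) -> str:
    """Return the model's include/exclude recommendation with a short reason."""
    prompt = (
        f"Screening criteria:\n{CRITERIA}\n\n"
        f"Title: {title}\nAbstract: {abstract}\n\n"
        "Answer INCLUDE or EXCLUDE, then give a one-sentence justification."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; report whichever tool you actually use
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep output stable to aid replicability
    )
    return response.choices[0].message.content

# A human reviewer should verify every recommendation, and the tool, model
# version, and prompt should all be reported in the methods section.
```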

ChatGPT in Teaching and Learning: A Systematic Review


Paper by Duha Ali: “The increasing use of artificial intelligence (AI) in education has raised questions about the implications of ChatGPT for teaching and learning. A systematic literature review was conducted to answer these questions, analyzing 112 scholarly articles to identify the potential benefits and challenges related to ChatGPT use in educational settings. The selection process was thorough to ensure a comprehensive analysis of the current academic discourse on AI tools in education. Our research sheds light on the significant impact of ChatGPT on improving student engagement and accessibility, as well as on critical issues that need to be considered, including concerns about the quality and bias of generated responses, the risk of plagiarism, and the authenticity of educational content. The study summarizes the uses of ChatGPT in teaching and learning and addresses the identified benefits and challenges through targeted strategies. The authors outline recommendations to ensure that the integration of ChatGPT into educational frameworks enhances learning outcomes while safeguarding academic standards…(More)”.

Misuse versus Missed use — the Urgent Need for Chief Data Stewards in the Age of AI


Article by Stefaan Verhulst and Richard Benjamins: “In the rapidly evolving landscape of artificial intelligence (AI), the need for and importance of Chief AI Officers (CAIOs) are receiving increasing attention. One prominent example came in a recent memo on AI policy, issued by Shalanda Young, Director of the United States Office of Management and Budget. Among the most important and most prominently featured recommendations was a call, “as required by Executive Order 14110,” for all government agencies to appoint a CAIO within 60 days of the release of the memo.

In many ways, this call is an important development; not even the EU AI Act requires this of public agencies. CAIOs have an important role to play in ensuring the responsible use of AI in public services, with guardrails that help protect the public good. Yet while acknowledging the need for CAIOs to safeguard the responsible use of AI, we argue that the duty of administrations is not only to avoid negative impact but also to create positive impact. In this sense, much work remains to be done in defining the CAIO role and considering its specific functions. In pursuit of these tasks, we further argue, policymakers and other stakeholders might benefit from looking at the role of another emerging profession in the digital ecology: that of Chief Data Stewards (CDS), a role focused on creating such positive impact, for instance by helping to achieve the UN’s SDGs. Although the CDS position is itself somewhat in flux, we suggest that CDS can nonetheless provide a useful template for the functions and roles of CAIOs.

We start by explaining why CDS are relevant to the conversation over CAIOs: data and data governance are foundational to AI governance. We then discuss some particular functions and competencies of CDS, showing how these can be equally applied to the governance of AI. Among the most important (if high-level) of these competencies is an ability to proactively identify opportunities in data sharing, and to balance the risks and opportunities of our data age. We conclude by exploring why this competency, an ethos of positive data responsibility that avoids overly cautious risk aversion, is so important in the AI and data era…(More)”.

Data Statements: From Technical Concept to Community Practice


Paper by Angelina McMillan-Major, Emily M. Bender, and Batya Friedman: “Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine learning models, and other technical systems have led to the creation of documentation toolkits to facilitate transparency, diagnosis, and inclusion. This work takes the next step: to catalyze community uptake, alongside toolkit improvement. Specifically, starting from one such proposed toolkit specialized for language datasets, data statements for natural language processing, we explore how to improve the toolkit in three senses: (1) the content of the toolkit itself, (2) engagement with professional practice, and (3) moving from a conceptual proposal to a tested schema that the intended community of use may readily adopt. To achieve these goals, we first conducted a workshop with natural language processing practitioners to identify gaps and limitations of the toolkit as well as to develop best practices for writing data statements, yielding an interim improved toolkit. Then we conducted an analytic comparison between the interim toolkit and another documentation toolkit, datasheets for datasets. Based on these two integrated processes, we present our revised Version 2 schema and best practices in a guide for writing data statements. Our findings more generally provide integrated processes for co-evolving both technology and practice to address ethical concerns within situated technical communities…(More)”.
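
For readers who have not seen the toolkit, the skeleton below illustrates the kind of elements a data statement records. The field names follow the elements of the original data statements proposal (curation rationale, language variety, speaker and annotator demographics, speech situation, text characteristics); the paper’s revised Version 2 schema and guide are the authoritative reference, so treat this as an illustrative sketch only.

```python
# Illustrative skeleton of a data statement for a language dataset.
# Field names follow the original data statements proposal; consult the
# paper's Version 2 schema and best-practice guide for the real template.
from dataclasses import dataclass

@dataclass
class DataStatement:
    curation_rationale: str     # why and how these texts were selected
    language_variety: str       # e.g. a BCP-47 tag plus a prose description
    speaker_demographic: str    # who produced the language in the dataset
    annotator_demographic: str  # who annotated or labeled it
    speech_situation: str       # time, place, modality, intended audience
    text_characteristics: str   # genre, topics, structure

# Hypothetical example, invented for illustration.
example = DataStatement(
    curation_rationale="Forum posts sampled to study civic deliberation online.",
    language_variety="en-US; informal written English.",
    speaker_demographic="Unknown; self-selected adult forum users.",
    annotator_demographic="Three graduate students, native English speakers.",
    speech_situation="Public, asynchronous, written; collected 2022-2023.",
    text_characteristics="Short informal posts on local policy topics.",
)
```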

Green Light


Google Research: “Road transportation is responsible for a significant amount of global and urban greenhouse gas emissions. It is especially problematic at city intersections, where pollution can be 29 times higher than on open roads. At intersections, half of these emissions come from traffic accelerating after stopping. While some amount of stop-and-go traffic is unavoidable, part of it is preventable through the optimization of traffic light timing configurations. To improve traffic light timing today, cities must either install dedicated hardware or run manual vehicle counts; both approaches are costly and fail to capture all the necessary information.

Green Light uses AI and Google Maps driving trends, built on one of the most comprehensive models of global road networks, to model traffic patterns and provide intelligent recommendations that help city traffic engineers optimize traffic flow. Early numbers indicate a potential for up to a 30% reduction in stops and a 10% reduction in greenhouse gas emissions. By optimizing each intersection, and coordinating between adjacent intersections, we can create waves of green lights and help cities further reduce stop-and-go traffic. Green Light is now live at 70 intersections in 12 cities on four continents, from Haifa, Israel, to Bangalore, India, to Hamburg, Germany, and at these intersections we are able to save fuel and lower emissions for up to 30M car rides monthly. Green Light reflects Google Research’s commitment to using AI to address climate change and improve millions of lives in cities around the world…(More)”.
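
Google has not published Green Light’s underlying models, but the “waves of green lights” described above rest on a textbook coordination idea: offset each signal’s green start by the travel time from the upstream intersection, so a platoon moving at the corridor speed meets successive lights just as they turn green. The toy sketch below illustrates only that principle; the function and the numbers are illustrative assumptions, not Google’s method.

```python
# Toy illustration of "green wave" signal coordination; not Google's model.

def green_wave_offsets(distances_m: list[float], speed_mps: float) -> list[float]:
    """Green-start offsets (in seconds) for a corridor of signals.

    distances_m[i] is the distance from intersection i to intersection i + 1.
    A vehicle holding speed_mps then reaches each light as it turns green.
    """
    offsets = [0.0]
    for d in distances_m:
        offsets.append(offsets[-1] + d / speed_mps)
    return offsets

# Example: three 250 m blocks at 50 km/h (about 13.9 m/s)
print(green_wave_offsets([250, 250, 250], speed_mps=50 / 3.6))
# -> [0.0, 18.0, 36.0, 54.0]
```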

Artificial intelligence and the local government: A five-decade scientometric analysis on the evolution, state-of-the-art, and emerging trends


Paper by Tan Yigitcanlar et al: “In recent years, the rapid advancement of artificial intelligence (AI) technologies has significantly impacted various sectors, including public governance at the local level. However, there exists a limited understanding of the overarching narrative surrounding the adoption of AI in local governments and its future. Therefore, this study aims to provide a comprehensive overview of the evolution, current state-of-the-art, and emerging trends in the adoption of AI in local government. A comprehensive scientometric analysis was conducted on a dataset comprising 7112 relevant literature records retrieved from the Scopus database in October 2023, spanning over the last five decades. The study findings revealed the following key insights: (a) exponential technological advancements over the last decades ushered in an era of AI adoption by local governments; (b) the primary purposes of AI adoption in local governments include decision support, automation, prediction, and service delivery; (c) the main areas of AI adoption in local governments encompass planning, analytics, security, surveillance, energy, and modelling; and (d) under-researched but critical research areas include ethics of and public participation in AI adoption in local governments. This study informs research, policy, and practice by offering a comprehensive understanding of the literature on AI applications in local governments, providing valuable insights for stakeholders and decision-makers…(More)”.

Brazil hires OpenAI to cut costs of court battles


Article by Marcela Ayres and Bernardo Caram: “Brazil’s government is hiring OpenAI to expedite the screening and analysis of thousands of lawsuits using artificial intelligence (AI), trying to avoid costly court losses that have weighed on the federal budget.

The AI service will flag cases where the government needs to act on lawsuits before final decisions are issued, mapping trends and potential action areas for the solicitor general’s office (AGU).

AGU told Reuters that Microsoft would provide the artificial intelligence services from ChatGPT creator OpenAI through its Azure cloud-computing platform. It did not say how much Brazil will pay for the services.

Court-ordered debt payments have consumed a growing share of Brazil’s federal budget. The government estimated it would spend 70.7 billion reais ($13.2 billion) next year on judicial decisions where it can no longer appeal. The figure does not include small-value claims, which historically amount to around 30 billion reais annually.

The combined amount of over 100 billion reais represents a sharp increase from 37.3 billion reais in 2015. It is equivalent to about 1% of gross domestic product, or 15% more than the government expects to spend on unemployment insurance and wage bonuses to low-income workers next year.

AGU did not provide a reason for Brazil’s rising court costs…(More)”.

Artificial Intelligence Applications for Social Science Research


Report by Megan Stubbs-Richardson et al: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, the AI tool had to be useful for: 1) literature reviews, summaries, or writing; 2) data collection, analysis, or visualizations; or 3) research dissemination. In the database, we provide a name, description, and link for each AI tool, current as of publication on September 29, 2023. Supporting links were provided when an AI tool was found via other databases. To help users evaluate the potential usefulness of each tool, we documented information about costs, log-in requirements, and whether plug-ins or browser extensions are available for each tool. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when the AI tools were useful for text-based data, such as social media. This database includes 132 AI tools that may be useful for literature reviews or writing; 146 that may be useful for data collection, analyses, or visualizations; and 108 that may be used for dissemination efforts. While 170 of the AI tools within this database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”.

Designing for AI Transparency in Public Services: A User-Centred Study of Citizens’ Preferences


Paper by Stefan Schmager, Samrat Gupta, Ilias Pappas & Polyxeni Vassilakopoulou: “Enhancing transparency in AI-enabled public services has the potential to improve their adoption and service delivery. Hence, it is important to identify effective design strategies for AI transparency in public services. To this end, we conducted an empirical qualitative study that provides insights for the responsible deployment of AI in practice by public organizations. We designed an interactive prototype for a Norwegian public welfare service organization that aims to use AI to support sick leave-related services. Qualitative analysis of citizens’ data collected through a survey, think-aloud interactions with the prototype, and open-ended questions revealed three key themes: articulating information in written form, representing information in graphical form, and establishing the appropriate level of information detail to improve AI transparency in public service delivery. This study advances research on the design of public service portals and has implications for AI implementation in the public sector…(More)”.