How Will the State Think With the Assistance of ChatGPT? The Case of Customs as an Example of Generative Artificial Intelligence in Public Administrations


Paper by Thomas Cantens: “…discusses the implications of Generative Artificial Intelligence (GAI) in public administrations and the specific questions it raises compared to specialized and “numerical” AI, based on the example of Customs and the experience of the World Customs Organization in the field of AI and data strategy implementation in Member countries.

At the organizational level, the advantages of GAI include cost reduction through the internalization of tasks, uniformity and correctness of administrative language, access to broad knowledge, and potential paradigm shifts in fraud detection. At this level, the paper highlights three facts that distinguish GAI from specialized AI: i) so far, GAI has been less closely associated with decision-making processes in public administrations than specialized AI, ii) the risks usually associated with GAI are often similar to those previously associated with specialized AI, but, while certain risks remain pertinent, others lose significance due to the constraints imposed by the inherent limitations of GAI technology itself when implemented in public administrations, iii) the training data corpus for GAI becomes a strategic asset for public administrations, perhaps more than the algorithms themselves, which was not the case for specialized AI.

At the individual level, the paper emphasizes the “language-centric” nature of GAI in contrast to the “number-centric” AI systems implemented within public administrations until now. It discusses the risks of civil servants being replaced by, or becoming subservient to, machines by exploring the transformative impact of GAI on the intellectual production of the State. The paper argues for the development of critical vigilance and critical thinking as specific skills for civil servants, who are highly specialized and will have to think with the assistance of a machine that is eclectic by nature…(More)”.

Valuing Data: The Role of Satellite Data in Halting the Transmission of Polio in Nigeria


Article by Mariel Borowitz, Janet Zhou, Krystal Azelton & Isabelle-Yara Nassar: “There are more than 1,000 satellites in orbit right now collecting data about what’s happening on the Earth. These include government and commercial satellites that can improve our understanding of climate change; monitor droughts, floods, and forest fires; examine global agricultural output; identify productive locations for fishing or mining; and serve many other purposes. We know the data provided by these satellites is important, yet it is very difficult to determine the exact value that each of these systems provides. However, with only a vague sense of “value,” it is hard for policymakers to ensure they are making the right investments in Earth-observing satellites.

NASA’s Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES), carried out in collaboration with Resources for the Future, aimed to address this by analyzing specific use cases of satellite data to determine their monetary value. VALUABLES proposed a “value of information” approach focusing on cases in which satellite data informed a specific decision. Researchers could then compare the outcome of that decision with what would have occurred if no satellite data had been available. Our project, which was funded under the VALUABLES program, examined how satellite data contributed to efforts to halt the transmission of polio in Nigeria…(More)”.
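
To make the “value of information” logic concrete, here is a minimal sketch in Python. The framing of value as cost avoided and all figures are illustrative assumptions for this digest, not results or methods from the VALUABLES study.

```python
# A minimal sketch of the "value of information" comparison described above:
# the value of satellite data is the improvement in the decision outcome it
# enabled, relative to the counterfactual decision made without that data.
# All figures are invented for illustration, not VALUABLES results.

def value_of_information(cost_with_data: float, cost_without_data: float) -> float:
    """Value of information expressed as cost avoided: the counterfactual cost
    minus the cost of the decision actually informed by satellite data."""
    return cost_without_data - cost_with_data

# Hypothetical example: cost (in $ millions) of a vaccination campaign planned
# with and without satellite-derived settlement maps to target outreach.
print(value_of_information(cost_with_data=95.0, cost_without_data=120.0))  # 25.0
```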

Promoting Sustainable Data Use in State Programs


Toolkit by Chapin Hall: “…helps public sector agencies build the culture and infrastructure to apply data analysis routinely, effectively, and accurately—what we call “sustainable data use.” It is meant to serve as a hands-on resource, containing strategies and tools for agencies seeking to grow their analytic capacity.

Administrative data can be a rich source of information for human services agencies seeking to improve programs. But too often, data use in state agencies is temporary, dependent on funds and training from short-term resources such as pilot projects and grants. How can agencies instead move from data to knowledge to action routinely, creating a reinforcing cycle of evidence-building and program improvement?

Chapin Hall experts and experts at partner organizations set out to determine who achieves sustainable data use and how they go about doing so. Building on previous work and the results of a literature review, we identified domains that can significantly influence an agency’s ability to establish sustainable data practices. We then focused on eight state TANF agencies and three partner organizations with demonstrated successes in one or more of these domains, and we interviewed staff who work directly with data to learn more about what strategies they used to achieve success. We focused on what worked rather than what didn’t. From those interviews, we identified common themes, developed case studies, and generated tools to help agencies develop sustainable data practices…(More)”.

Unleashing possibilities, ignoring risks: Why we need tools to manage AI’s impact on jobs


Article by Katya Klinova and Anton Korinek: “…Predicting the effects of a new technology on labor demand is difficult and involves significant uncertainty. Some would argue that, given the uncertainty, we should let the “invisible hand” of the market decide our technological destiny. But we believe that the difficulty of answering the question “Who is going to benefit and who is going to lose out?” should not serve as an excuse for never posing the question in the first place. As we emphasized, the incentives for cutting labor costs are artificially inflated. Moreover, the invisible hand theorem does not hold for technological change. Therefore, a failure to investigate the distribution of benefits and costs of AI risks inviting a future with too many “so-so” uses of AI—uses that concentrate gains while distributing the costs. Although predictions about the downstream impacts of AI systems will always involve some uncertainty, they are nonetheless useful to spot applications of AI that pose the greatest risks to labor early on and to channel the potential of AI where society needs it the most.

In today’s society, the labor market serves as a primary mechanism for distributing income as well as for providing people with a sense of meaning, community, and purpose. It has been documented that job loss can lead to regional decline, a rise in “deaths of despair,” addiction and mental health problems. The path that we lay out aims to prevent abrupt job losses or declines in job quality on the national and global scale, providing an additional tool for managing the pace and shape of AI-driven labor market transformation.

Nonetheless, we do not want to rule out the possibility that humanity may eventually be much happier in a world where machines do a lot more economically valuable work. Even with our best efforts to manage the pace and shape of AI labor market disruption through regulation and worker-centric practices, we may still face a future with significantly reduced human labor demand. Should the demand for human labor decrease permanently with the advancement of AI, timely policy responses will be needed to address both the lost incomes and the lost sense of meaning and purpose. In the absence of significant efforts to distribute the gains from advanced AI more broadly, the possible devaluation of human labor would deeply impact income distribution and the sustainability of democratic institutions. While a jobless future is not guaranteed, its mere possibility and the resulting potential societal repercussions demand serious consideration. One promising proposal to consider is to create an insurance policy against a dramatic decrease in the demand for human labor that automatically kicks in if the share of income received by workers declines: for example, a “seed” Universal Basic Income that starts at a very small level, remains unchanged if workers continue to prosper, but automatically rises if there is large-scale worker displacement…(More)”.
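
As a rough picture of how such an automatic trigger could work, the sketch below ties a payment to the labor share of income. The baseline share, seed payment, and scaling factor are hypothetical parameters chosen for illustration, not part of the authors’ proposal.

```python
# A toy illustration of a "seed" Universal Basic Income: a small baseline
# payment that stays flat while workers prosper and rises automatically when
# the labor share of income falls below a reference level. All parameter
# values here are assumptions made for the example.

def seed_ubi(labor_share: float,
             baseline_share: float = 0.60,  # assumed reference labor share
             seed_payment: float = 10.0,    # assumed small monthly payment ($)
             scale: float = 5_000.0) -> float:
    """Return the monthly payment implied by the current labor share of income."""
    shortfall = max(0.0, baseline_share - labor_share)
    return seed_payment + scale * shortfall

print(seed_ubi(0.62))  # labor share above the baseline -> 10.0 (seed level only)
print(seed_ubi(0.50))  # a 10-point drop in the labor share -> 510.0
```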

Informing the Global Data Future: Benchmarking Data Governance Frameworks


Paper by Sara Marcucci, Natalia González Alarcón, Stefaan G. Verhulst and Elena Wüllhorst: “Data has become a critical trans-national and cross-border resource. Yet, the lack of a well-defined approach to using it poses challenges to harnessing its value. This article explores the increasing importance of global data governance due to the rapid growth of data, and the need for responsible data practices. The purpose of this paper is to compare approaches and identify patterns in the emergent data governance ecosystem within sectors close to the international development field, ultimately presenting key takeaways and reflections on when and why a global data governance framework may be needed. Overall, the paper provides information about the conditions under which a more holistic, coordinated transnational approach to data governance may be needed to responsibly manage the global flow of data. It does this by (a) considering conditions specified by the literature that may be conducive to global data governance, and (b) analyzing and comparing existing frameworks, specifically investigating six key elements: purpose, principles, anchoring documents, data description and lifecycle, processes, and practices. The article closes with a series of final recommendations, which include adopting a broader concept of data stewardship to reconcile data protection and promotion, focusing on responsible reuse of data to unlock socioeconomic value, harmonizing meanings to operationalize principles, incorporating global human rights frameworks to provide common North Stars, unifying key definitions of data, adopting a data lifecycle approach, incorporating participatory processes and collective agency, investing in new professions with specific roles, improving accountability through oversight and compliance mechanisms, and translating recommendations into practical tools…(More)”.

The Age of Prediction: Algorithms, AI, and the Shifting Shadows of Risk


Book by Igor Tulchinsky and Christopher E. Mason: “… about two powerful, and symbiotic, trends: the rapid development and use of artificial intelligence and big data to enhance prediction, as well as the often paradoxical effects of these better predictions on our understanding of risk and the ways we live. Beginning with dramatic advances in quantitative investing and precision medicine, this book explores how predictive technology is quietly reshaping our world in fundamental ways, from crime fighting and warfare to monitoring individual health and elections.

As prediction grows more robust, it also alters the nature of the accompanying risk, setting up unintended and unexpected consequences. The Age of Prediction details how predictive certainties can bring about complacency or even an increase in risks—genomic analysis might lead to unhealthier lifestyles or a GPS might encourage less attentive driving. With greater predictability also comes a degree of mystery, and the authors ask how narrower risks might affect markets, insurance, or risk tolerance generally. Can we ever reduce risk to zero? Should we even try? This book lays an intriguing groundwork for answering these fundamental questions and maps out the latest tools and technologies that power these projections into the future, sometimes using novel, cross-disciplinary tools to map out cancer growth, people’s medical risks, and stock dynamics…(More)”.

Do People Like Algorithms? A Research Strategy


Paper by Cass R. Sunstein and Lucia Reisch: “Do people like algorithms? In this study, intended as a promissory note and a description of a research strategy, we offer the following highly preliminary findings. (1) In a simple choice between a human being and an algorithm, across diverse settings and without information about the human being or the algorithm, people in our tested groups are about equally divided in their preference. (2) When people are given a very brief account of the data on which an algorithm relies, there is a large shift in favor of the algorithm over the human being. (3) When people are given a very brief account of the experience of the relevant human being, without an account of the data on which the relevant algorithm relies, there is a moderate shift in favor of the human being. (4) When people are given both (a) a very brief account of the experience of the relevant human being and (b) a very brief account of the data on which the relevant algorithm relies, there is a large shift in favor of the algorithm over the human being. One lesson is that in the tested groups, at least one-third of people seem to have a clear preference for either a human being or an algorithm – a preference that is unaffected by brief information that seems to favor one or the other. Another lesson is that a brief account of the data on which an algorithm relies does have a significant effect on a large percentage of the tested groups, whether or not people are also given positive information about the human alternative. Across the various surveys, we do not find persistent demographic differences, with one exception: men appear to like algorithms more than women do. These initial findings are meant as proof of concept, or more accurately as a suggestion of concept, intended to inform a series of larger and more systematic studies of whether and when people prefer to rely on algorithms or human beings, and also of international and demographic differences…(More)”.

The Legal Singularity


Book by Abdi Aidid and Benjamin Alarie: “…argue that the proliferation of artificial intelligence–enabled technology – and specifically the advent of legal prediction – is on the verge of radically reconfiguring the law, our institutions, and our society for the better.

Revealing the ways in which our legal institutions underperform and are expensive to administer, the book highlights the negative social consequences associated with our legal status quo. Given the infirmities of the current state of the law and our legal institutions, the silver lining is that there is ample room for improvement. With concerted action, technology can help us to ameliorate the problems of the law and improve our legal institutions. Inspired in part by the concept of the “technological singularity,” The Legal Singularity presents a future state in which technology facilitates the functional “completeness” of law, where the law is at once extraordinarily more complex in its specification than it is today, and yet operationally, the law is vastly more knowable, fairer, and clearer for its subjects. Aidid and Alarie describe the changes that will culminate in the legal singularity and explore the implications for the law and its institutions…(More)”.

Data can help decarbonize cities – let us explain


Article by Stephen Lorimer and Andrew Collinge: “The University of Birmingham, Alan Turing Institute and Centre for Net Zero are working together, using a tool developed by the Centre, called Faraday, to build a more detailed understanding of energy flows within the district and between it and the 8,000 neighbouring residents. Faraday is a generative AI model trained on one of the UK’s largest smart meter datasets. The model is helping to unlock a more granular view of energy sources and changing energy usage, providing the basis for modelling future energy consumption and local smart grid management.

The partners are investigating the role that trusted data aggregators can play if they can take raw data and desensitize it to a point where it can be shared without eroding consumer privacy or commercial advantage.
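
As a simplified picture of what that desensitization step can involve, the sketch below pools household smart-meter readings into coarse groups and suppresses groups that are too small to share safely. The field names and threshold are assumptions for illustration, not details of the Faraday pipeline or the partners’ actual process.

```python
# A generic sketch of one way an aggregator might "desensitize" raw readings:
# pool household-level smart-meter data into area-and-hour groups, then drop
# groups with too few households before sharing. Illustrative only.

from collections import defaultdict

MIN_HOUSEHOLDS = 5  # assumed suppression threshold for small groups

def aggregate_readings(readings):
    """readings: iterable of (area_code, hour, kwh) tuples at household level.
    Returns a shareable dict {(area_code, hour): (household_count, total_kwh)}."""
    groups = defaultdict(lambda: [0, 0.0])
    for area, hour, kwh in readings:
        groups[(area, hour)][0] += 1      # count households in the group
        groups[(area, hour)][1] += kwh    # sum consumption for the group
    return {key: (count, total) for key, (count, total) in groups.items()
            if count >= MIN_HOUSEHOLDS}   # suppress groups small enough to identify
```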

Data is central to both initiatives and all cities seeking a renewable energy transition. But there are issues to address, such as common data standards, governance and data competency frameworks (especially across the built environment supply chain)…

Building the governance, standards and culture that delivers confidence in energy data exchange is essential to maximizing the potential of carbon reduction technologies. This framework will ultimately support efficient supply chains and coordinate market activity. There are lessons from the Open Banking initiative, which provided the framework for traditional financial institutions, fintech and regulators to deliver innovation in financial products and services with carefully shared consumer data.

In the energy domain, data sharing offers numerous advantages. It helps overcome barriers in the product supply chain, from materials to low-carbon technologies (heat pumps, smart thermostats, electric vehicle chargers, etc.). Free and Open-Source Software (FOSS) providers can use data to support installers and property owners.

Data interoperability allows third-party products and services to communicate with any end-user device through open or proprietary Internet of Things gateway platforms such as Tuya or IFTTT. A growing bank of post-installation data on the operation of buildings (such as energy efficiency and air quality) will boost confidence in the future quality of retrofits and make for easier decisions on planning approval and grid connections. Finally, data is increasingly considered key in securing the financing and private sector investment crucial to the net zero effort.

None of the above is easy. Organizational and technical complexity can slow progress, but cities must be at the forefront of efforts to coordinate the energy data ecosystem and make the case for “data for decarbonization.”…(More)”.

Health Data Sharing to Support Better Outcomes: Building a Foundation of Stakeholder Trust


A Special Publication from the National Academy of Medicine: “The effective use of data is foundational to the concept of a learning health system—one that leverages and shares data to learn from every patient experience, and feeds the results back to clinicians, patients and families, and health care executives to transform health, health care, and health equity. More than ever, the American health care system is in a position to harness new technologies and new data sources to improve individual and population health.

Learning health systems are driven by multiple stakeholders—patients, clinicians and clinical teams, health care organizations, academic institutions, government, industry, and payers. Each stakeholder group has its own sources of data, its own priorities, and its own goals and needs with respect to sharing that data. However, in America’s current health system, these stakeholders operate in silos without a clear understanding of the motivations and priorities of other groups. The three stakeholder working groups that served as the authors of this Special Publication identified many cultural, ethical, regulatory, and financial barriers to greater data sharing, linkage, and use. What emerged was the foundational role of trust in achieving the full vision of a learning health system.

This Special Publication outlines a number of potentially valuable policy changes and actions that will help drive toward effective, efficient, and ethical data sharing, including more compelling and widespread communication efforts to improve awareness, understanding, and participation in data sharing. Achieving the vision of a learning health system will require eliminating the artificial boundaries that exist today among patient care, health system improvement, and research. Breaking down these barriers will require an unrelenting commitment across multiple stakeholders toward a shared goal of better, more equitable health.

We can improve together by sharing and using data in ways that produce trust and respect. Patients and families deserve nothing less…(More)”.