AI and the Future of Government: Unexpected Effects and Critical Challenges


Policy Brief by Tiago C. Peixoto, Otaviano Canuto, and Luke Jordan: “Based on observable facts, this policy paper explores some of the less-acknowledged yet critically important ways in which artificial intelligence (AI) may affect the public sector and its role. Our focus is on those areas where AI’s influence might currently be understated, but where it has substantial implications for future government policies and actions.

We identify four main areas of impact that could redefine the public sector role, require new answers from it, or both. These areas are the emergence of a new language-based digital divide, job displacement in public administration, disruptions in revenue mobilization, and declining government responsiveness.

This discussion not only identifies critical areas but also underscores the importance of transcending conventional approaches in tackling them. As we examine these challenges, we shed light on their significance, seeking to inform policymakers and stakeholders about the nuanced ways in which AI may quietly, yet profoundly, alter the public sector landscape…(More)”.

AI for Good: Applications in Sustainability, Humanitarian Action, and Health


Book by Juan M. Lavista Ferres and William B. Weeks: “…an insightful and fascinating discussion of how one of the world’s most recognizable software companies is tackling intractable social problems with the power of artificial intelligence (AI). In the book, you’ll learn about how climate change, illness and disease, and challenges to fundamental human rights are all being fought using replicable methods and reusable AI code.

The authors also provide:

  • Easy-to-follow, non-technical explanations of what AI is and how it works
  • Examinations of how healthcare is being improved, climate change is being addressed, and humanitarian aid is being facilitated around the world with AI
  • Discussions of the future of AI in the realm of social benefit organizations and efforts

An essential guide to impactful social change with artificial intelligence, AI for Good is a must-read resource for technical and non-technical professionals interested in AI’s social potential, as well as policymakers, regulators, NGO professionals, and non-profit volunteers…(More)”.

The Cambridge Handbook of Facial Recognition in the Modern State


Book edited by Rita Matulionyte and Monika Zalnieriute: “In situations ranging from border control to policing and welfare, governments are using automated facial recognition technology (FRT) to collect taxes, prevent crime, police cities and control immigration. FRT involves the processing of a person’s facial image, usually for identification, categorisation or counting. This ambitious handbook brings together a diverse group of legal, computer, communications, and social and political science scholars to shed light on how FRT has been developed, used by public authorities, and regulated in different jurisdictions across five continents. Informed by their experiences working on FRT across the globe, chapter authors analyse the increasing deployment of FRT in public and private life. The collection argues for the passage of new laws, rules, frameworks, and approaches to prevent harms of FRT in the modern state and advances the debate on scrutiny of power and accountability of public authorities which use FRT…(More)”.

Third Millennium Thinking: Creating Sense in a World of Nonsense


Book by Saul Perlmutter, John Campbell and Robert MacCoun: “In our deluge of information, it’s getting harder and harder to distinguish the revelatory from the contradictory. How do we make health decisions in the face of conflicting medical advice? Does the research cited in that article even show what the authors claim? How can we navigate the next Thanksgiving discussion with our in-laws, who follow completely different experts on the topic of climate change?

In Third Millennium Thinking, a physicist, a psychologist, and a philosopher introduce readers to the tools and frameworks that scientists have developed to keep from fooling themselves, to understand the world, and to make decisions. We can all borrow these trust-building techniques to tackle problems both big and small.

Readers will learn: 

  • How to achieve a ground-level understanding of the facts of the modern world
  • How to chart a course through a profusion of possibilities  
  • How to work together to take on the challenges we face today
  • And much more

Using provocative thought exercises, jargon-free language, and vivid illustrations drawn from history, daily life, and scientists’ insider stories, Third Millennium Thinking offers a novel approach for readers to make sense of the nonsense…(More)”

EBP+: Integrating science into policy evaluation using Evidential Pluralism


Article by Joe Jones, Alexandra Trofimov, Michael Wilde & Jon Williamson: “…While the need to integrate scientific evidence in policymaking is clear, there isn’t a universally accepted framework for doing so in practice. Orthodox evidence-based approaches take Randomised Controlled Trials (RCTs) as the gold standard of evidence. Others argue that social policy issues require theory-based methods to understand the complexities of policy interventions. These divisions may only further decrease trust in science at this critical time.

EBP+ offers a broader framework within which both orthodox and theory-based methods can sit. EBP+ also provides a systematic account of how to integrate and evaluate these different types of evidence. EBP+ can offer consistency and objectivity in policy evaluation, and could yield a unified approach that increases public trust in scientifically-informed policy…

EBP+ is motivated by Evidential Pluralism, a philosophical theory of causal enquiry that has been developed over the last 15 years. Evidential Pluralism encompasses two key claims. The first, object pluralism, says that establishing that A is a cause of B (e.g., that a policy intervention causes a specific outcome) requires establishing both that A and B are appropriately correlated and that there is some mechanism which links the two and which can account for the extent of the correlation. The second claim, study pluralism, maintains that assessing whether A is a cause of B requires assessing both association studies (studies that repeatedly measure A and B, together with potential confounders, to measure their association) and mechanistic studies (studies of features of the mechanisms linking A to B), where available…(More)”.

A diagrammatic representation of Evidential Pluralism (© Jon Williamson)

Look Again: The Power of Noticing What Was Always There


Book by Tali Sharot and Cass R. Sunstein: “Have you ever noticed that what is thrilling on Monday tends to become boring on Friday? Even exciting relationships, stimulating jobs, and breathtaking works of art lose their sparkle after a while. People stop noticing what is most wonderful in their own lives. They also stop noticing what is terrible. They get used to dirty air. They stay in abusive relationships. People grow to accept authoritarianism and take foolish risks. They become unconcerned by their own misconduct, blind to inequality, and more liable to believe misinformation than ever before.

But what if we could find a way to see everything anew? What if you could regain sensitivity, not only to the great things in your life, but also to the terrible things you stopped noticing and so don’t try to change?

Now, neuroscience professor Tali Sharot and Harvard law professor (and presidential advisor) Cass R. Sunstein investigate why we stop noticing both the great and not-so-great things around us and how to “dishabituate” at the office, in the bedroom, at the store, on social media, and in the voting booth. This groundbreaking work, based on decades of research in the psychological and biological sciences, illuminates how we can reignite the sparks of joy, innovate, and recognize where improvements urgently need to be made. The key to this disruption—to seeing, feeling, and noticing again—is change. By temporarily changing your environment, changing the rules, changing the people you interact with—or even just stepping back and imagining change—you regain sensitivity, allowing you to more clearly identify the bad and more deeply appreciate the good…(More)”.

Designing Digital Voting Systems for Citizens


Paper by Joshua C. Yang et al: “Participatory Budgeting (PB) has evolved into a key democratic instrument for resource allocation in cities. Enabled by digital platforms, cities now have the opportunity to let citizens directly propose and vote on urban projects, using different voting input and aggregation rules. However, the choices cities make in terms of the rules of their PB have often not been informed by academic studies on voter behaviour and preferences. Therefore, this work presents the results of behavioural experiments where participants were asked to vote in a fictional PB setting. We identified approaches to designing PB voting that minimise cognitive load and enhance the perceived fairness and legitimacy of the digital process from the citizens’ perspective. In our study, participants preferred voting input formats that are more expressive (like rankings and distributing points) over simpler formats (like approval voting). Participants also indicated a desire for the budget to be fairly distributed across city districts and project categories. Participants found the Method of Equal Shares voting rule to be fairer than the conventional Greedy voting rule. These findings offer actionable insights for digital governance, contributing to the development of fairer and more transparent digital systems and collective decision-making processes for citizens…(More)”.
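The two voting rules the participants compared can be sketched in a few lines. Below is a minimal, illustrative Python sketch for approval ballots (the project names, costs, and approval profile are invented for the example, not taken from the paper): the Greedy rule funds projects in order of approval count until the budget runs out, while the Method of Equal Shares gives each voter an equal slice of the budget and funds a project only when its supporters' remaining shares can jointly cover its cost.

```python
def greedy(voters, projects, budget):
    """Greedy rule: fund projects by descending approval count while budget lasts."""
    scores = {p: sum(p in approved for approved in voters.values()) for p in projects}
    funded, left = [], budget
    for p in sorted(projects, key=lambda p: -scores[p]):
        if projects[p] <= left:
            funded.append(p)
            left -= projects[p]
    return funded


def equal_shares(voters, projects, budget):
    """Method of Equal Shares: each voter starts with budget/n; a project is
    funded only if its supporters' remaining shares cover its cost, with the
    cost split as equally as those shares allow."""
    share = {v: budget / len(voters) for v in voters}
    funded, remaining = [], dict(projects)
    while remaining:
        best, best_rho = None, None
        for p, cost in remaining.items():
            supporters = sorted((v for v in voters if p in voters[v]),
                                key=lambda v: share[v])
            if sum(share[v] for v in supporters) < cost:
                continue  # supporters cannot afford this project right now
            # smallest per-voter payment rho with sum(min(share[v], rho)) == cost
            paid = 0.0
            for i, v in enumerate(supporters):
                rest = len(supporters) - i
                if paid + share[v] * rest >= cost:
                    rho = (cost - paid) / rest
                    break
                paid += share[v]  # poorer supporters pay their whole share
            if best_rho is None or rho < best_rho:
                best, best_rho = p, rho
        if best is None:
            break  # nothing affordable remains
        for v in voters:
            if best in voters[v]:
                share[v] -= min(share[v], best_rho)
        funded.append(best)
        del remaining[best]
    return funded


# Invented example: 10 voters, budget 100. A popular big-ticket project
# competes with three cheaper district projects.
voters = {f"v{i}": set() for i in range(10)}
for i in range(6):
    voters[f"v{i}"].add("stadium")   # 6 approvals, cost 100
for i in range(4):
    voters[f"v{i}"].add("parkA")     # v0-v3
for i in range(4, 8):
    voters[f"v{i}"].add("parkB")     # v4-v7
for i in range(6, 10):
    voters[f"v{i}"].add("parkC")     # v6-v9
projects = {"stadium": 100, "parkA": 30, "parkB": 30, "parkC": 30}

print(greedy(voters, projects, 100))        # the stadium absorbs the whole budget
print(equal_shares(voters, projects, 100))  # parkA and parkB are funded instead
```

The sketch illustrates the fairness intuition reported in the study: Greedy lets a plurality spend the entire budget, while Equal Shares spreads funding across groups of supporters. Note that Equal Shares can leave budget unspent; deployed versions typically pair it with a completion step.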

AI Accountability Policy Report


Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….


The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.

Graphic showing the AI Accountability Chain model

A.I.-Generated Garbage Is Polluting Our Culture


Article by Eric Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.

How Public Polling Has Changed in the 21st Century


Report by Pew Research: “The 2016 and 2020 presidential elections left many Americans wondering whether polling was broken and what, if anything, pollsters might do about it. A new Pew Research Center study finds that most national pollsters have changed their approach since 2016, and in some cases dramatically. Most (61%) of the pollsters who conducted and publicly released national surveys in both 2016 and 2022 used methods in 2022 that differed from what they used in 2016. The study also finds the use of multiple methods increasing. Last year 17% of national pollsters used at least three different methods to sample or interview people (sometimes in the same survey), up from 2% in 2016….(More)”.