The New Artificial Intelligentsia


Essay by Ruha Benjamin: “In the Fall of 2016, I gave a talk at the Institute for Advanced Study in Princeton titled “Are Robots Racist?” Headlines such as “Can Computers Be Racist? The Human-Like Bias of Algorithms,” “Artificial Intelligence’s White Guy Problem,” and “Is an Algorithm Any Less Racist Than a Human?” had captured my attention in the months before. What better venue to discuss the growing concerns about emerging technologies, I thought, than an institution established during the early rise of fascism in Europe, which once housed intellectual giants like J. Robert Oppenheimer and Albert Einstein, and prides itself on “protecting and promoting independent inquiry.”

My initial remarks focused on how emerging technologies reflect and reproduce social inequities, using specific examples of what some termed “algorithmic discrimination” and “machine bias.” A lively discussion ensued. The most memorable exchange was with a mathematician who politely acknowledged the importance of the issues I raised but then assured me that “as AI advances, it will eventually show us how to address these problems.” Struck by his earnest faith in technology as a force for good, I wanted to sputter, “But what about those already being harmed by the deployment of experimental AI in healthcare, education, criminal justice, and more—are they expected to wait for a mythical future where sentient systems act as sage stewards of humanity?”

Fast-forward almost 10 years, and we are living in the imagination of AI evangelists racing to build artificial general intelligence (AGI), even as they warn of its potential to destroy us. This gospel of love and fear insists on “aligning” AI with human values to rein in these digital deities. OpenAI, the company behind ChatGPT, echoed the sentiment of my IAS colleague: “We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.” They envision a time when, eventually, “our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now. They will work together with humans to ensure that their own successors are more aligned with humans.” For many, this is not reassuring…(More)”.

G7 Toolkit for Artificial Intelligence in the Public Sector


Toolkit by OECD: “…a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies. AI can help improve the efficiency of internal operations, the effectiveness of policymaking, the responsiveness of public services, and overall transparency and accountability. Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions…(More)”

How elderly dementia patients are unwittingly fueling political campaigns


Article by Blake Ellis, et al: “The 80-year-old communications engineer from Texas had saved for decades, driving around in an old car and buying clothes from thrift stores so he’d have enough money to enjoy his retirement years.

But as dementia robbed him of his reasoning abilities, he began making online political donations over and over again — eventually telling his son he believed he was part of a network of political operatives communicating with key Republican leaders. In less than two years, the man became one of the country’s largest grassroots supporters of the Republican Party, ultimately giving away nearly half a million dollars to former President Donald Trump and other candidates. Now, the savings account he spent his whole life building is practically empty.

The story of this unlikely political benefactor is one of many playing out across the country.

More than 1,000 reports filed with government agencies and consumer advocacy groups reviewed by CNN, along with an analysis of campaign finance data and interviews with dozens of contributors and their family members, show how deceptive political fundraisers have victimized hundreds of elderly Americans and misled those battling dementia or other cognitive impairments into giving away millions of dollars — far more than they ever intended. Some unintentionally joined the ranks of the top grassroots political donors in the country as they tapped into retirement savings and went into debt, contributing six-figure sums through thousands of transactions…(More)”.

Contractual Freedom and Fairness in EU Data Sharing Agreements


Paper by Thomas Margoni and Alain M. Strowel: “This chapter analyzes the evolving landscape of EU data-sharing agreements, particularly focusing on the balance between contractual freedom and fairness in the context of non-personal data. The discussion highlights the complexities introduced by recent EU legislation, such as the Data Act, Data Governance Act, and Open Data Directive, which collectively aim to regulate data markets and enhance data sharing. The chapter emphasizes how these laws impose obligations that limit contractual freedom to ensure fairness, particularly in business-to-business (B2B) and Internet of Things (IoT) data transactions. It also explores the tension between private ordering and public governance, suggesting that the EU’s approach marks a shift from property-based models to governance-based models in data regulation. This chapter underscores the significant impact these regulations will have on data contracts and the broader EU data economy…(More)”.

AI can help humans find common ground in democratic deliberation


Paper by Michael Henry Tessler et al: “We asked whether an AI system based on large language models (LLMs) could successfully capture the underlying shared perspectives of a group of human discussants by writing a “group statement” that the discussants would collectively endorse. Inspired by Jürgen Habermas’s theory of communicative action, we designed the “Habermas Machine” to iteratively generate group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings. Through successive rounds of human data collection, we used supervised fine-tuning and reward modeling to progressively enhance the Habermas Machine’s ability to capture shared perspectives. To evaluate the efficacy of AI-mediated deliberation, we conducted a series of experiments with over 5000 participants from the United Kingdom. These experiments investigated the impact of AI mediation on finding common ground, how the views of discussants changed across the process, the balance between minority and majority perspectives in group statements, and potential biases present in those statements. Lastly, we used the Habermas Machine for a virtual citizens’ assembly, assessing its ability to support deliberation on controversial issues within a demographically representative sample of UK residents…(More)”.
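
For readers who want to see the shape of the method, below is a schematic, runnable sketch (in Python) of the iterative mediation loop the excerpt describes: draft several candidate group statements from individual opinions, rank them with a reward model trained to predict group approval, then revise using participants’ critiques. This is not the paper’s code; the generator, reward model, and critique-collection functions are hypothetical toy placeholders.

```python
# Schematic sketch of an AI-mediated deliberation loop, loosely following the
# description above. All three helpers are toy placeholders standing in for the
# fine-tuned LLM generator, the learned reward model, and real participant critiques.
import random
from typing import List


def generate_candidates(opinions: List[str], critiques: List[str], n: int) -> List[str]:
    # Placeholder: a real system would prompt a fine-tuned LLM with opinions and critiques.
    drafts = []
    for i in range(n):
        order = random.sample(opinions, k=len(opinions))
        drafts.append(f"Draft {i + 1}: the group broadly agrees that " + "; ".join(order) + ".")
    return drafts


def predicted_approval(statement: str, opinions: List[str]) -> float:
    # Placeholder reward model: score by how many opinions the statement mentions.
    return sum(opinion in statement for opinion in opinions)


def collect_critiques(statement: str, opinions: List[str]) -> List[str]:
    # Placeholder: real critiques would come from participants reading the current statement.
    return [f"Reflect this view more clearly: {opinion}" for opinion in opinions]


def mediate(opinions: List[str], rounds: int = 2, n_candidates: int = 4) -> str:
    critiques: List[str] = []
    statement = ""
    for _ in range(rounds):
        candidates = generate_candidates(opinions, critiques, n_candidates)
        # Keep the candidate the reward model predicts the group will approve of most.
        statement = max(candidates, key=lambda s: predicted_approval(s, opinions))
        critiques = collect_critiques(statement, opinions)
    return statement


print(mediate(["expand bus routes", "keep fares affordable", "protect cycle lanes"]))
```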

A shared destiny for public sector data


Blog post by Shona Nicol: “As a data professional, it can sometimes feel hard to get others interested in data. Perhaps like many in this profession, I can often express the importance and value of data for good in an overly technical way. However, when our biggest challenges in Scotland include eradicating child poverty, growing the economy and tackling the climate emergency, I would argue that we should all take an interest in data because it’s going to be foundational in helping us solve these problems.

Data is already intrinsic to shaping our society and how services are delivered. And public sector data is a vital component in making sure that services for the people of Scotland are being delivered efficiently and effectively. Despite an ever-growing awareness of the transformative power of data to improve the design and delivery of services, feedback from public sector staff shows that they can face difficulties when trying to influence colleagues and senior leaders around the need to invest in data.

A vision gap

In the Scottish Government’s data maturity programme and more widely, we regularly hear about the challenges data professionals encounter when trying to enact change. This community tells us that a long-term vision for public sector data for Scotland could help them by providing the context for what they are trying to achieve locally.

Earlier this year we started to scope how we might do this. We recognised that organisations are already working to deliver local and national strategies and policies that relate to data, so any vision had to be able to sit alongside those, be meaningful in different settings, agnostic of technology and relevant to any public sector organisation. We wanted to offer opportunities for alignment, not enforce an instruction manual…(More)”.

Statistical Significance—and Why It Matters for Parenting


Blog by Emily Oster: “…When we say an effect is “statistically significant at the 5% level,” what this means is that there is less than a 5% chance that we’d see an effect of this size if the true effect were zero. (The “5% level” is a common cutoff, but things can be significant at the 1% or 10% level also.) 

The natural follow-up question is: Why would any effect we see occur by chance? The answer lies in the fact that data is “noisy”: it comes with error. To see this a bit more, we can think about what would happen if we studied a setting where we know our true effect is zero. 

My fake study 

Imagine the following (fake) study. Participants are randomly assigned to eat a package of either blue or green M&Ms, and then they flip a (fair) coin and you see if it is heads. Your analysis will compare the number of heads that people flip after eating blue versus green M&Ms and report whether this is “statistically significant at the 5% level.”…(More)”.
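
To make the “noise” point concrete, here is a minimal simulation sketch (in Python, with group sizes and the number of repetitions chosen only for illustration) of the fake study: the M&M colour has no effect on the coin, yet roughly 5% of repeated studies will still come out “statistically significant at the 5% level” purely by chance.

```python
# Simulate the fake M&M study many times. The true effect is zero by construction,
# so any "significant" result is a false positive driven by noise alone.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=0)
n_per_group = 100      # participants assigned to each M&M colour (illustrative)
n_studies = 10_000     # how many times we rerun the whole study

false_positives = 0
for _ in range(n_studies):
    blue_heads = rng.binomial(1, 0.5, n_per_group)    # coin flips after blue M&Ms
    green_heads = rng.binomial(1, 0.5, n_per_group)   # coin flips after green M&Ms
    _, p_value = ttest_ind(blue_heads, green_heads)   # compare mean heads rates
    if p_value < 0.05:                                # "significant at the 5% level"
        false_positives += 1

print(f"Share of runs declared significant: {false_positives / n_studies:.3f}")
# Expect something close to 0.05: noise alone produces "significance" about 5% of the time.
```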

It is about time! Exploring the clashing timeframes of politics and public policy experiments


Paper by Ringa Raudla, Külli Sarapuu, Johanna Vallistu, and Nastassia Harbuzova: “Although existing studies on experimental policymaking have acknowledged the importance of the political setting in which policy experiments take place, we lack systematic knowledge on how various political dimensions affect experimental policymaking. In this article, we address a specific gap in the existing understanding of the politics of experimentation: how political timeframes influence experimental policymaking. Drawing on theoretical discussions on experimental policymaking, public policy, electoral politics, and mediatization of politics, we outline expectations about how electoral and problem cycles may influence the timing, design, and learning from policy experiments. We argue that electoral timeframes are likely to discourage politicians from undertaking large-scale policy experiments, and that if politicians do decide to launch experiments, they will prefer shorter designs. The electoral cycle may lead politicians to draw overly hasty conclusions or ignore the experiment’s results altogether. We expect problem cycles to shorten politicians’ time horizons further as there is pressure to solve problems quickly. We probe the plausibility of our theoretical expectations using interview data from two different country contexts: Estonia and Finland…(More)”.

What AI Can Do for Your Country


Article by Jylana L. Sheats: “…Although most discussions of artificial intelligence focus on its impacts on business and research, AI is also poised to transform government in the United States and beyond. AI-guided disaster response is just one piece of the picture. The U.S. Department of Health and Human Services has an experimental AI program to diagnose COVID-19 and flu cases by analyzing the sound of patients coughing into their smartphones. The Department of Justice uses AI algorithms to help prioritize which tips in the FBI’s Threat Intake Processing System to act on first. Other proposals, still at the concept stage, aim to extend the applications of AI to improve the efficiency and effectiveness of nearly every aspect of public services. 

These early applications illustrate the potential for AI to make government operations more effective and responsive. They also reveal the looming challenges. The federal government will have to recruit, train, and retain skilled workers capable of managing the new technology, competing with the private sector for top talent. The government also faces the daunting task of ensuring the ethical and equitable use of AI. Relying on algorithms to direct disaster relief or to flag high-priority crimes raises immediate concerns: What if biases built into the AI overlook some of the groups that most need assistance, or unfairly target certain populations? As AI becomes embedded into more government operations, the opportunities for misuse and unintended consequences will only expand…(More)”.

Ensuring citizens’ assemblies land


Article by Graham Smith: “…the evidence shows that while the recommendations of assemblies are well considered and could help shape more robust policy, too often they fail to land. Why is this?

The simple answer is that so much time, resources and energy are spent on organising the assembly itself – ensuring the best possible experience for citizens – that the relationship with the local authority and its decision-making processes is neglected.

First, the question asked of the assembly does not always relate to a specific set of decisions about to be made by an authority. Is the relevant policy process open and ready for input? On a number of occasions assemblies have taken place just after a new policy or strategy has been agreed. Disastrous timing.

This does not mean assemblies should only be run when they are tied to a particular decision-making process. Sometimes it is important to open up a policy area with a broad question. And sometimes it makes sense to empower citizens to set the agenda and focus on the issues they find most compelling.

The second element is the failure of authorities to prepare to receive recommendations from citizens.

In one case, the first a public official knew of an assembly was when its recommendations landed on their desk. They were not received in the best spirit.

Too often assemblies are commissioned by enthusiastic politicians and public officials who have not done the necessary work to ensure their colleagues are willing to give a considered response to the citizens’ recommendations. Too often an assembly will be organised by a department or ministry where the results require others in the authority to respond – but those other politicians and officials feel no connection to the process.

And too often, an assembly ends, and it is not clear who within the public authority has the responsibility to take the recommendations forward to ensure they are given a fair hearing across the authority.

For citizens’ assemblies to be effective requires political and administrative work well beyond just organising the assembly. If this is not done, it is not only a waste of resources, but it can do serious damage to democracy and trust as those citizens who have invested their time and energy into the process become disillusioned.

Those authorities where citizens’ assemblies have had meaningful impacts are those that have not only invested in the assembly, but also in preparing the authority to receive the recommendations. Often this has meant continuing support and resourcing for assembly members after the process. They are the best advocates for their work…(More)”.