A.I. Is Starting to Wear Down Democracy


Article by Steven Lee Myers and Stuart A. Thompson: “Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and, for the first time, officials and experts said, begun to have an impact on election results.

Free and easy to use, A.I. tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.

The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal.

In Romania, a Russian influence operation using A.I. tainted the first round of last year’s presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which A.I. played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.

Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania’s capital, Bucharest, said there was no question that the technology was already “being used for obviously malevolent purposes” to manipulate voters.

“These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,” she said. “What can compete with this?”

In the unusually concentrated wave of elections that took place in 2024, A.I. was used in more than 80 percent, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland.

It documented 215 instances of A.I. in elections that year, based on government statements, research and news reports. Already this year, A.I. has played a role in at least nine more major elections, from Canada to Australia…(More)”.

AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums


Article by Emanuel Maiberg: “The report, titled “Are AI Bots Knocking Cultural Heritage Offline?” was written by Weinberg of the GLAM-E Lab, a joint initiative between the Centre for Science, Culture and the Law at the University of Exeter and the Engelberg Center on Innovation Law & Policy at NYU Law, which works with smaller cultural institutions and community organizations to build open access capacity and expertise. GLAM is an acronym for galleries, libraries, archives, and museums. The report is based on a survey of 43 institutions with open online resources and collections in Europe, North America, and Oceania. Respondents also shared data and analytics, and some followed up with individual interviews. The data is anonymized so that institutions could share information more freely, and to prevent AI bot operators from undermining their countermeasures.

Of the 43 respondents, 39 said they had experienced a recent increase in traffic. Twenty-seven of those 39 attributed the increase in traffic to AI training data bots, with an additional seven saying the AI bots could be contributing to the increase. 

“Multiple respondents compared the behavior of the swarming bots to more traditional online behavior such as Distributed Denial of Service (DDoS) attacks designed to maliciously drive unsustainable levels of traffic to a server, effectively taking it offline,” the report said. “Like a DDoS incident, the swarms quickly overwhelm the collections, knocking servers offline and forcing administrators to scramble to implement countermeasures. As one respondent noted, ‘If they wanted us dead, we’d be dead.’”…(More)”

Robodebt: When automation fails


Article by Don Moynihan: “From 2016 to 2020, the Australian government operated an automated debt assessment and recovery system, known as “Robodebt,” to recover fraudulent or overpaid welfare benefits. The goal was to save $4.77 billion through debt recovery and reduced public service costs. However, the algorithm and policies at the heart of Robodebt caused wildly inaccurate assessments, and administrative burdens that disproportionately impacted those with the least resources. After a federal court ruled the policy unlawful, the government was forced to terminate Robodebt and agree to a $1.8 billion settlement.

Robodebt is important because it is an example of a costly failure with automation. By automation, I mean the use of data to create digital defaults for decisions. This could involve the use of AI, or it could mean the use of algorithms reading administrative data. Cases like Robodebt serve as canaries in the coalmine for policymakers interested in using AI or algorithms as a means to downsize public services on the hazy notion that automation will pick up the slack. But I think they are missing the very real risks involved.

To be clear, the lesson is not “all automation is bad.” Indeed, it offers real benefits in potentially reducing administrative costs and hassles and increasing access to public services (e.g., automated or “ex parte” renewals for Medicaid, which Republicans are considering limiting in their new budget bill). It is this promise that makes automation so attractive to policymakers. But it is also the case that automation can be used to deny access to services, and to put people into digital cages that are burdensome to escape from. This is why we need to learn from cases where it has been deployed.

The experience of Robodebt underlines the dangers of using citizens as lab rats to adopt AI on a broad scale before it has been proven to work. Alongside the parallel collapse of the Dutch government childcare system, Robodebt provides an extraordinarily rich text to understand how automated decision processes can go wrong.

I recently wrote about Robodebt (with co-authors Morten Hybschmann, Kathryn Gimborys, Scott Loudin, and Will McClellan), both in the journal Perspectives on Public Management and Governance and as a teaching case study at the Better Government Lab...(More)”.

The Next Wave of Innovation Districts


Article by Bruce Katz and Julie Wagner: “A next wave of innovation districts is gaining momentum given the structural changes underway in the global economy. The examples cited above telegraph where existing innovation districts are headed and explain why new districts are forming. The districts highlighted and many others are responding to fast-changing and highly volatile macro forces and the need to de-risk, decarbonize, and diversify talent.

The next wave of innovation districts is distinctive for multiple reasons.

  • The sectors leveraging this innovation geography extend well beyond the traditional focus on life sciences to include advanced manufacturing for military and civilian purposes.
  • The deeper emphasis on decarbonization is driving the use of basic and applied R&D to invent new clean technology products and solutions as well as organizing energy generation and distribution within the districts themselves to meet crucial carbon targets.
  • The stronger emphasis on the diversification of talent includes the upskilling of workers for new production activities and a broader set of systems to drive inclusive innovation to address long-standing inequities.
  • The districts are attracting a broader group of stakeholders, including manufacturing companies, utilities, university industrial design and engineering departments and hard tech startups.
  • The districts ultimately are looking to engage a wider base of investors given the disparate resources and traditions of capitalization that support defense tech, clean tech, med tech and other favored forms of innovation.

Some regions or states are also seeking ways to connect a constellation of districts and other economic hubs to harness the imperative to innovate accentuated by these and other macro forces. The state of South Australia is one such example. It has prioritized several innovation hubs across the region to foster South Australia’s knowledge and innovation ecosystem, as well as to identify emerging economic clusters in globally competitive industry sectors to advance the broader economy…(More)”.

The Meanings of Voting for Citizens: A Scientific Challenge, a Portrait, and Implications


Book by Carolina Plescia: “On election day, citizens typically place a mark beside a party or candidate on a ballot paper. The right to cast this mark has been a historic conquest and today, voting is among the most frequent political acts citizens perform. But what does that mark mean to them? This book explores the diverse conceptualizations of voting among citizens in 13 countries across Europe, Africa, the Americas, and Oceania. This book presents empirical evidence based on nearly a million words about voting from over 25,000 people through an open-ended survey and both qualitative and quantitative methods. The book’s innovative approach includes conceptual, theoretical, and empirical advancements and provides a comprehensive understanding of what voting means to citizens and how these meanings influence political engagement. This book challenges assumptions about universal views on democracy and reveals how meanings of voting vary among individuals and across both liberal democracies and electoral autocracies. The book also examines the implications of these meanings for political behaviour and election reforms. The Meanings of Voting for Citizens is a critical reference for scholars of public opinion, behaviour, and democratization, as well as a valuable resource for undergraduate and graduate courses in comparative political behaviour, empirical methods, and survey research. Practitioners working on election reforms will find it particularly relevant via its insights into how citizens’ meanings of voting impact the effectiveness of electoral reforms…(More)”.

The Overlooked Importance of Data Reuse in AI Infrastructure


Essay by Oxford Insights and The Data Tank: “Employing data stewards and embedding responsible data reuse principles in the programme or ecosystem and within participating organisations is one of the pathways forward. Data stewards are proactive agents responsible for catalysing collaboration, tackling these challenges and embedding data reuse practices in their organisations. 

The role of Chief Data Officer for government agencies has become more common in recent years, and we suggest the same needs to happen with the role of the Chief Data Steward. Chief Data Officers are mostly focused on internal data management and have a technical focus. With the changes in the data governance landscape, this profession needs to be reimagined and iterated. Embedded in both the demand and supply sides of data, data stewards are proactive agents empowered to create public value by reusing data and data expertise. They are tasked with identifying opportunities for productive cross-sectoral collaboration, and with proactively requesting or enabling functional access to data, insights, and expertise.

One exception comes from New Zealand. The UN has released a report on the role of data stewards and National Statistical Offices (NSOs) in the new data ecosystem. This report provides many use cases that can be adopted by governments seeking to establish such a role. In New Zealand, there is an appointed Government Chief Data Steward, who is in charge of setting the strategic direction for government data management, with a particular focus on data reuse.

Data stewards can play an important role in organisations leading data reuse programmes, where they would be responsible for responding to the participation challenges introduced above.

A Data Steward’s role includes attracting participation for data reuse programmes by:

  • Demonstrating and communicating the value proposition of data reuse and collaborations, by engaging in partnerships and steering data reuse and sharing among data commons, cooperatives, or collaborative infrastructures. 
  • Developing responsible data lifecycle governance, and communicating insights to raise awareness and build trust among stakeholders.

A Data Steward’s role includes maintaining and scaling participation for data reuse programmes by:

  • Maintaining trust by engaging with wider stakeholders and establishing clear engagement methodologies. For example, by securing a social licence, data stewards ensure the principle of digital self-determination is embedded in data reuse processes.
  • Fostering sustainable partnerships and collaborations around data, via developing business cases for data sharing and reuse, and measuring impact to build the societal case for data collaboration; and
  • Innovating in the sector by turning data into decision intelligence to ensure that insights derived from data are more effectively integrated into decision-making processes…(More)”.

Democratic Resilience: Moving from Theoretical Frameworks to a Practical Measurement Agenda


Paper by Nicholas Biddle, Alexander Fischer, Simon D. Angus, Selen Ercan, Max Grömping, and Matthew Gray: “Global indices and media narratives indicate a decline in democratic institutions, values, and practices. Simultaneously, democratic innovators are experimenting with new ways to strengthen democracy at local and national levels. Both suggest democracies are not static; they evolve as society, technology, and the environment change.

This paper examines democracy as a resilient system, emphasizing the role of applied analysis in shaping effective policy and programs, particularly in Australia. Grounded in adaptive processes, democratic resilience is the capacity of a democracy to identify problems and collectively respond to changing conditions, balancing institutional stability with transformative change. The paper outlines the ambition of a national network of scholars, civil society leaders, and policymakers to equip democratic innovators with practical insights and foresight underpinning new ideas. These insights are essential for strengthening public institutions, public narratives, and community programs.

We review the current literature on resilient democracies and highlight a critical gap: current measurement efforts focus heavily on composite indices—especially trust—while neglecting dynamic flows and causal drivers. They capture descriptive features and identify weaknesses, but offer little diagnostic evidence about what strengthens democracies. This is reflected in the lack of cross-sector, networked, living evidence systems to track what works and why across the intersecting dynamics of democratic practices. To address this, we propose a practical agenda centred on three core strengthening flows of democratic resilience: trusted institutions, credible information, and social inclusion.

The paper reviews six key data sources and several analytic methods for continuously monitoring democratic institutions, diagnosing causal drivers, and building an adaptive evidence system to inform innovation and reform. By integrating resilience frameworks and policy analysis, we demonstrate how real-time monitoring and analysis can enable innovation, experimentation and cross-sector ingenuity.

This article presents a practical research agenda connecting a national network of scholars and civil society leaders. We suggest this agenda be problem-driven, facilitated by participatory approaches to asking and prioritising the questions that matter most. We propose a connected approach to collectively posing these questions, expanding data sources, and fostering applied ideation between communities, civil society, government, and academia—ensuring democracy remains resilient in an evolving global and national context…(More)”.

Policymaking assessment framework


Guide by the Susan McKinnon Foundation: “This assessment tool supports the measurement of the quality of policymaking processes – both existing and planned – across sectors. It provides a flexible framework for rating public policy processes using information available in the public domain. The framework’s objective is to simplify the path towards best-practice, evidence-informed policy.

It is intended to accommodate the complexity of policymaking processes and reflect the realities and context within which policymaking is undertaken. The criteria can be tailored for different policy problems and policy types and applied across sectors and levels of government.

The framework is structured around five key domains:

  1. understanding the problem
  2. engagement with stakeholders and partners
  3. outcomes focus
  4. evidence for the solution, and
  5. design and communication…(More)”.

Must NLP be Extractive?


Paper by Steven Bird: “How do we roll out language technologies across a world with 7,000 languages? In one story, we scale the successes of NLP further into ‘low-resource’ languages, doing ever more with less. However, this approach does not recognise the fact that – beyond the 500 institutional languages – the remaining languages are oral vernaculars. These speech communities interact with the outside world using a ‘contact language’. I argue that contact languages are the appropriate target for technologies like speech recognition and machine translation, and that the 6,500 oral vernaculars should be approached differently. I share stories from an Indigenous community where local people reshaped an extractive agenda to align with their relational agenda. I describe the emerging paradigm of Relational NLP and explain how it opens the way to non-extractive methods and to solutions that enhance human agency…(More)”

Social licence for health data


Evidence Brief by NSW Government: “Social licence, otherwise referred to as social licence to operate, refers to approval or consensus from members of society or the community for users – whether public or private enterprises or individuals – to use their health data as desired or as accepted under certain conditions. Social licence is a dynamic and fluid concept and is subject to change over time, often influenced by societal and contextual factors.
A social licence is usually indicated through ongoing engagement and negotiation with the public and is not a contract with strict terms and conditions. It is, rather, a moral and ethical responsibility assumed by data users based on trust and legitimacy. It supplements the techno-legal mechanisms that regulate the use of data.
For example, through public engagement, certain values and principles can emerge as pertinent to public support for using their data. Similarly, the public may view certain activities relating to their data use as acceptable and beneficial, implying their permission for certain activities or use-case scenarios. Internationally, although not always explicitly referred to as a social licence, the most common approach to establishing public trust and support and identifying common ground or agreement on acceptable practices for the use of data is through public engagement. Engagement methods and mechanisms for gaining public perspectives vary across countries (Table 1).
− Canada – Health Data Research Network Canada reports on social licence for uses of health data, based on deliberative discussions with 20 experienced public and patient advisors. The output is a list of agreements and disagreements on what uses and users of health data have social licence.
− New Zealand – In 2022, the Ministry of Health commissioned a survey on public perceptions of the use of personal health information. This report identified conditions under which the public supports the re-use of their data…(More)”.