Frontier AI: double-edged sword for public sector


Article by Zeynep Engin: “The power of the latest AI technologies, often referred to as ‘frontier AI’, lies in their ability to automate decision-making by harnessing complex statistical insights from vast amounts of unstructured data, using models that surpass human understanding. The introduction of ChatGPT in late 2022 marked a new era for these technologies, making advanced AI models accessible to a wide range of users, a development poised to permanently reshape how our societies function.

From a public policy perspective, this capacity offers the optimistic potential to enable personalised services at scale, potentially revolutionising healthcare, education, local services, democratic processes, and justice, tailoring them to everyone’s unique needs in a digitally connected society. The ambition is to achieve better outcomes than humanity has managed so far without AI assistance. There is certainly a vast opportunity for improvement, given the current state of global inequity, environmental degradation, polarised societies, and other chronic challenges facing humanity.

However, it is crucial to temper this optimism by recognising the significant risks. On their current trajectories, these technologies are already starting to undermine hard-won democratic gains and civil rights. Integrating AI into public policy and decision-making processes risks exacerbating existing inequalities and unfairness, potentially leading to new, uncontrollable forms of discrimination at unprecedented speed and scale. The environmental impacts, both direct and indirect, could be catastrophic, while the rise of AI-powered personalised misinformation and behavioural manipulation is contributing to increasingly polarised societies.

Steering the direction of AI to be in the public interest requires a deeper understanding of its characteristics and behaviour. To imagine and design new approaches to public policy and decision-making, we first need a comprehensive understanding of what this remarkable technology offers and its potential implications…(More)”.

Policies must be justified by their wellbeing-to-cost ratio


Article by Richard Layard: “…What is its value for money — that is, how much wellbeing does it deliver per (net) pound it costs the government? This benefit/cost ratio (or BCR) should be central to every discussion.

The science exists to produce these numbers and, if the British government were to require them of the spending departments, it would be setting an example of rational government to the whole world.

Such a move would, of course, lead to major changes in priorities. At the London School of Economics we have been calculating the benefits and costs of policies across a whole range of government departments.

In our latest report on value for money, the best policies are those that save the government more money than they cost — for example by getting people back to work. Classic examples of this are treatments for mental health. The NHS Talking Therapies programme now treats 750,000 people a year for anxiety disorders and depression. Half of them recover and the service demonstrably pays for itself. It needs to expand.

But we also need a parallel service for those addicted to alcohol, drugs and gambling. These individuals are more difficult to treat — but the savings if they recover are greater. Again, it will pay for itself. And so will the improved therapy service for children and young people that Labour has promised.

However, most spending policies do cost more than they save. For these it is crucial to measure the benefit/cost ratio, converting the wellbeing benefit into its monetary equivalent. For example, we can evaluate the wellbeing gain to a community of having more police and consequently less crime. Once this is converted into money, we calculate that the benefit/cost ratio is 12:1 — very high…(More)”.
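The ratio Layard describes is simple arithmetic: monetise the wellbeing gain, net off any savings the policy generates for government, and divide. A minimal sketch (the function name and all figures are illustrative assumptions, not numbers from the LSE report):

```python
def benefit_cost_ratio(wellbeing_benefit_gbp: float,
                       gross_cost_gbp: float,
                       savings_gbp: float) -> float:
    """BCR = monetised wellbeing benefit / net cost to government."""
    net_cost = gross_cost_gbp - savings_gbp
    if net_cost <= 0:
        # The policy saves more than it costs (as claimed for
        # NHS Talking Therapies): it pays for itself outright.
        return float("inf")
    return wellbeing_benefit_gbp / net_cost

# Hypothetical policing example: a £120m monetised wellbeing gain
# from a £12m programme that recoups £2m in reduced crime costs.
print(benefit_cost_ratio(120_000_000, 12_000_000, 2_000_000))  # 12.0
```

The interesting design point is the net-cost denominator: a policy whose savings exceed its gross cost has no well-defined finite ratio, which is why Layard treats "pays for itself" policies as a separate, automatically justified category.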

Children and Young People’s Participation in Climate Assemblies


Guide by KNOCA: “This guide draws on the experiences and advice of children, young people and adults involved in citizens’ assemblies that have taken place at national, city and community levels across nine countries, highlighting that:

  • Involving children and young people can enrich the intergenerational legitimacy and impact of climate assemblies: adult assembly members are reminded of their responsibilities to younger and future generations, and children and young people feel listened to, valued and taken seriously.
  • Involving children and young people has significant potential to strengthen the future of democracy and climate governance by enhancing democratic and climate literacy within education systems.
  • Children and young people can and should be involved in climate assemblies in different ways. Most importantly, they should be involved from the very beginning of the process to ensure it reflects their own ideas.
  • There are practical, ethical and design factors to consider when working with children and young people which can often be positively navigated by taking a child rights-based approach to the conceptualisation, design and delivery of climate assemblies…(More)”.

The Imperial Origins of Big Data


Blog and book by Asheesh Kapur Siddique: “We live in a moment of massive transformation in the nature of information. In 2020, according to one report, users of the Internet created 64.2 zettabytes of data, a quantity greater than the “number of detectable stars in the cosmos,” a colossal increase whose origins can be traced to the emergence of the World Wide Web in 1993.1 Facilitated by technologies like satellites, smartphones, and artificial intelligence, the scale and speed of data creation seem like they may only balloon over the rest of our lifetimes—and with it, the problem of how to govern ourselves in relation to the inequalities and opportunities that the explosion of data creates.

But while much about our era of big data is indeed revolutionary, the political questions that it raises—How should information be used? Who should control it? And how should it be preserved?—are ones with which societies have long grappled. These questions attained a particular importance in Europe from the eleventh century due to a technological change no less significant than the ones we are witnessing today: the introduction of paper into Europe. Initially invented in China, paper travelled to Europe via the conduit of Islam around the eleventh century after the Moors conquered Spain. Over the twelfth, thirteenth, and fourteenth centuries, paper emerged as the fundamental substrate which politicians, merchants, and scholars relied on to record and circulate information in governance, commerce, and learning. At the same time, governing institutions sought to preserve and control the spread of written information through the creation of archives: repositories where they collected, organized, and stored documents.

The expansion of European polities overseas from the late fifteenth century onward saw governments massively scale up their use of paper—and confront the challenge of controlling its dissemination across thousands of miles of ocean and land. These pressures were felt particularly acutely in what eventually became the largest empire in world history, the British empire. As people from the British Isles from the early seventeenth century fought, traded, and settled their way to power in the Atlantic world and South Asia, administrators faced the problem of how to govern both their emigrating subjects and the non-British peoples with whom they interacted. This meant collecting information about their behavior through the technology of paper. Just as we struggle to organize, search, and control our email inboxes, text messages, and app notifications, so too did these early moderns confront the attendant challenges of developing practices of collection and storage to manage the resulting information overload. And despite the best efforts of states and companies to control information, it constantly escaped their grasp, falling into the hands of their opponents and rivals who deployed it to challenge and contest ruling powers.

The history of the early modern information state offers no simple or straightforward answers to the questions that data raises for us today. But it does remind us of a crucial truth, all too readily obscured by the deluge of popular narratives glorifying technological innovation: that questions of data are inherently questions about politics—about who gets to collect, control, and use information, and the ends to which information should be put. We should resist any effort to insulate data governance from democratic processes—and having an informed perspective on the politics of data requires that we attend not just to its present, but also to its past…(More)”.

Even laypeople use legalese


Paper by Eric Martínez, Francis Mollica and Edward Gibson: “Whereas principles of communicative efficiency and legal doctrine dictate that laws be comprehensible to the common world, empirical evidence suggests legal documents are largely incomprehensible to lawyers and laypeople alike. Here, a corpus analysis (n = 59 million words) first replicated and extended prior work revealing laws to contain strikingly higher rates of complex syntactic structures relative to six baseline genres of English. Next, two preregistered text generation experiments (n = 286) tested two leading hypotheses regarding how these complex structures enter into legal documents in the first place. In line with the magic spell hypothesis, we found people tasked with writing official laws wrote in a more convoluted manner than when tasked with writing unofficial legal texts of equivalent conceptual complexity. Contrary to the copy-and-edit hypothesis, we did not find evidence that people editing a legal document wrote in a more convoluted manner than when writing the same document from scratch. From a cognitive perspective, these results suggest law to be a rare exception to the general tendency in human language toward communicative efficiency. In particular, these findings indicate law’s complexity to be derived from its performativity, whereby low-frequency structures may be inserted to signal law’s authoritative, world-state-altering nature, at the cost of increased processing demands on readers. From a law and policy perspective, these results suggest that the tension between the ubiquity and impenetrability of the law is not an inherent one, and that laws can be simplified without a loss or distortion of communicative content…(More)”.

Policy for responsible use of AI in government


Policy by the Australian Government: “The Policy for the responsible use of AI in government ensures that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations. The policy:

  • provides a unified approach for government to engage with AI confidently, safely and responsibly, and realise its benefits
  • aims to strengthen public trust in government’s use of AI by providing enhanced transparency, governance and risk assurance
  • aims to embed a forward-leaning, adaptive approach for government’s use of AI that is designed to evolve and develop over time…(More)”.

Policy Fit for the Future


Primer by the Australian Government: “The Futures Primer is part of the “Policy Fit for the Future” project, building Australian Public Service capability to use futures techniques in policymaking through horizon scanning, visioning and scenario planning. These tools help anticipate and navigate future risks and opportunities.

The tools and advice can be adapted to any policy challenge, and reflect the views of global experts in futures and strategic foresight, both within and outside the APS…(More)”.

The Power of Volunteers: Remote Mapping Gaza and Strategies in Conflict Areas


Blog by Jessica Pechmann: “…In Gaza, increased conflict since October 2023 has caused a prolonged humanitarian crisis. Understanding the impact of the conflict on buildings has been challenging, since pre-existing datasets from artificial intelligence and machine learning (AI/ML) models and OSM were not accurate enough to create a full building footprint baseline. The area’s buildings were too dense, and information on the ground was impossible to collect safely. In these hard-to-reach areas, HOT’s remote and crowdsourced mapping methodology was a good fit for collecting detailed information visible on aerial imagery.

In February 2024, after consultation with humanitarian and UN actors working in Gaza, HOT decided to create a pre-conflict dataset of all building footprints in the area in OSM. HOT’s community of OpenStreetMap volunteers did all the data work, coordinating through HOT’s Tasking Manager. The volunteers made meticulous edits to add missing data and to improve existing data. Due to protection and data quality concerns, only expert volunteer teams were assigned to map and validate the area. As in other areas that are hard to reach due to conflict, HOT balanced the data needs with responsible data practices based on the context.

Comparing AI/ML with human-verified OSM building datasets in conflict zones

AI/ML is becoming an increasingly common and quick way to obtain building footprints across large areas. Sources for automated building footprints range from worldwide datasets by Microsoft or Google to smaller-scale open community-managed tools such as HOT’s new application, fAIr.

Now that HOT volunteers have completely updated and validated all OSM buildings in visible imagery pre-conflict, OSM has 18% more individual buildings in the Gaza Strip than Microsoft’s ML buildings dataset (estimated 330,079 buildings vs 280,112 buildings). However, in contexts where there has not been a coordinated update effort in OSM, the numbers may differ. For example, in Sudan, where there has not been a large organized editing campaign, there are just under 1,500,000 buildings in OSM, compared to over 5,820,000 buildings in Microsoft’s ML data. It is important to note that the ML datasets have not been human-verified and their accuracy is not known. Google Open Buildings has over 26 million building features in Sudan, but on visual inspection, many of these features are noise in the data that the model incorrectly identified as buildings in the uninhabited desert…(More)”.
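The dataset comparisons above are straightforward percentage arithmetic. A short sketch reproducing the 18% figure from the quoted counts (the helper function is ours, not HOT's, and the Sudan counts are the approximate figures given above):

```python
def pct_difference(count_a: int, count_b: int) -> int:
    """Percent more (or fewer, if negative) features in dataset A than in B."""
    return round(100 * (count_a - count_b) / count_b)

osm_gaza, ms_gaza = 330_079, 280_112        # validated OSM vs Microsoft ML, Gaza
osm_sudan, ms_sudan = 1_500_000, 5_820_000  # approximate counts, Sudan

print(pct_difference(osm_gaza, ms_gaza))    # 18  -> OSM has ~18% more
print(pct_difference(osm_sudan, ms_sudan))  # -74 -> OSM has ~74% fewer
```

The sign flip between the two regions is the blog's point in miniature: raw footprint counts only track reality where a coordinated human validation effort has happened.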

Under which conditions can civic monitoring be admitted as a source of evidence in courts?


Blog by Anna Berti Suman: “The ‘Sensing for Justice’ (SensJus) research project – running between 2020 and 2023 – explored how people use monitoring technologies or just their senses to gather evidence of environmental issues and claim environmental justice in a variety of fora. Among the other research lines, we looked at successful and failed cases of civic-gathered data introduced in courts. The guiding question was: what are the enabling factors and/or barriers for the introduction of civic evidence in environmental litigation?

Civic environmental monitoring is the use by ordinary people of monitoring devices (e.g., a sensor) or their bare senses (e.g., smell, hearing) to detect environmental issues. It can be regarded as a form of reaction to environmental injustices, a form of political contestation through data and even as a form of collective care. The practice is growing fast, especially thanks to the widespread availability of audio and video-recording devices in the hands of diverse publics, but also due to the increase in public literacy and concern about environmental matters.

Civic monitoring can be a powerful source of evidence for law enforcement, especially when it sheds light on official informational gaps associated with the shortages of public agencies’ resources to detect environmental wrongdoings. Both legal scholars and practitioners as well as civil society organizations and institutional actors should look at the practice and its potential applications with attention.

Among the cases explored for the SensJus project, the Formosa case, Texas, United States, stands out as it sets a key precedent: issued in June 2019, the landmark ruling found a Taiwanese petrochemical company liable for violating the US Clean Water Act, mostly on the basis of citizen-collected evidence involving volunteer observations of plastic contamination over years. The contamination could not be proven through existing data held by competent authorities because the company never filed any record of pollution. Our analysis of the case highlights some key determinants of the case’s success…(More)”.

Data Protection Law and Emotion


Book by Damian Clifford: “Data protection law is often positioned as a regulatory solution to the risks posed by computational systems. Despite the widespread adoption of data protection laws, however, there are those who remain sceptical as to their capacity to engender change. Much of this criticism focuses on our role as ‘data subjects’. It has been demonstrated repeatedly that we lack the capacity to act in our own best interests and, what is more, that our decisions have negative impacts on others. Our decision-making limitations seem to be the inevitable by-product of the technological, social, and economic reality. Data protection law bakes in these limitations by providing frameworks for notions such as consent and subjective control rights and by relying on those who process our data to do so fairly.

Despite these valid concerns, Data Protection Law and Emotion argues that the (in)effectiveness of these laws is often more difficult to discern than the critical literature would suggest, while also emphasizing the importance of the conceptual value of subjective control. These points are explored (and indeed, exposed) by investigating data protection law through the lens of the insights provided by law and emotion scholarship and demonstrating the role emotions play in our decision-making. The book uses the development of Emotional Artificial Intelligence, a particularly controversial technology, as a case study to analyse these issues.

Original and insightful, Data Protection Law and Emotion offers a unique contribution to a contentious debate that will appeal to students and academics in data protection and privacy, policymakers, practitioners, and regulators…(More)”.