The Ethics of Automated Warfare and Artificial Intelligence


Essay series introduced by Bessma Momani, Aaron Shull and Jean-François Bélanger: “…begins with a piece written by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner looks at the issue of deterrence and provides an account of the various ways AI may impact our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how different countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation — as those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner’s essay emphasizes that differences in states’ technological capability are large, and this will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.

Looking to the non-state use of drones as an example, the transfer of weapon technology from nation-states to non-state actors can help us understand how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the field by asymmetrically advantaging conventionally weaker forces. The increased diffusion of drone technology increases the likelihood that future wars will also be drone wars, whether these drones are autonomous systems or not. This technology, in the hands of non-state actors, implies that future Western missions against, say, insurgent or guerrilla forces will be more difficult.

Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to alter adversaries’ very data sets — whether by targeting centralized biometric facilities or individuals’ DNA sequences in genomic analysis databases, or by injecting fallacious data into the satellite imagery used for situational awareness. Drawing on the implications of international humanitarian law (IHL), Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation that falls below the threshold of armed conflict. She evaluates the challenges associated with adversarial data manipulation, given that there is no internationally agreed-upon definition of what constitutes cyberattacks or cyber hostilities within IHL.

In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects. This makes it more difficult to hold an individual or state accountable for unlawful harmful acts. But in addition to this added complexity surrounding legal accountability, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic and yet remain technically within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.

Some assert that the deployment of autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and apparent capacity to behave in unexpected ways pose new risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, suggesting that there need to be regulations mandating a certain level of human control over weapon systems. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored…(More)”.

Govtech against corruption: What are the integrity dividends of government digitalization?


Paper by Carlos Santiso: “Does digitalization reduce corruption? What are the integrity benefits of government digitalization? While the correlation between digitalization and corruption is well established, there is less actionable evidence on the integrity dividends of specific digitalization reforms on different types of corruption and the policy channels through which they operate. These linkages are especially relevant in high corruption risk environments. This article unbundles the integrity dividends of digital reforms undertaken by governments around the world, accelerated by the pandemic. It analyzes the rise of data-driven integrity analytics as promising tools in the anticorruption space deployed by tech-savvy integrity actors. It also assesses the broader integrity benefits of the digitalization of government services and the automation of bureaucratic processes, which contribute to reducing bribe solicitation risks by front-office bureaucrats. It analyzes in particular the impact of digitalization on social transfers. It argues that government digitalization can be an implicit yet effective anticorruption strategy, with subtler yet deeper effects, but there need to be greater synergies between digital reforms and anticorruption strategies….(More)”.

OECD Good Practice Principles for Public Service Design and Delivery in the Digital Age


OECD Report: “The digital age provides great opportunities to transform how public services are designed and delivered. The OECD Good Practice Principles for Public Service Design and Delivery in the Digital Age provide a clear, actionable and comprehensive set of objectives for the high-quality digital transformation of public services. Reflecting insights gathered from across OECD member countries, these nine principles are arranged under three pillars of “Build accessible, ethical and equitable public services that prioritise user needs, rather than government needs”; “Deliver with impact, at scale and with pace”; and “Be accountable and transparent in the design and delivery of public services to reinforce and strengthen public trust”. The principles are advisory rather than prescriptive, allowing for local interpretation and implementation. They should also be considered in conjunction with wider OECD work to equip governments to harness the potential of digital technology and data to improve outcomes for all…(More)”.

People watching: Abstractions and orthodoxies of monitoring


Paper by Victoria Wang and John V. Tucker: “Our society has an insatiable appetite for data. Much of the data is collected to monitor the activities of people, e.g., for discovering the purchasing behaviour of customers, observing the users of apps, managing the performance of personnel, and conforming to regulations and laws. Although monitoring practices are ubiquitous, monitoring as a general concept has received little analytical attention. We explore: (i) the nature of monitoring facilitated by software; (ii) the structure of monitoring processes; and (iii) the classification of monitoring systems. We propose an abstract definition of monitoring as a theoretical tool to analyse, document, and compare disparate monitoring applications. For us, monitoring is simply the systematic collection of data about the behaviour of people and objects. We then extend this concept with mechanisms for detecting events that require interventions and changes in behaviour, and describe five types of monitoring…(More)”.
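As a rough illustration of this abstract definition (systematic collection of behavioural data, extended with mechanisms that flag events requiring intervention), here is a minimal Python sketch. It is our own hypothetical rendering, not taken from the paper; the class names, attributes and the login example are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Observation:
    subject: str    # the person or object being monitored
    attribute: str  # the behaviour being recorded (e.g., "logins", "purchases")
    value: Any      # the recorded value

@dataclass
class Monitor:
    """Systematic collection of behavioural data, extended with event detection."""
    records: list = field(default_factory=list)
    triggers: list = field(default_factory=list)  # predicates that signal an intervention

    def collect(self, obs: Observation) -> None:
        self.records.append(obs)                  # the monitoring core: systematic collection
        for needs_intervention in self.triggers:  # the extension: detect events
            if needs_intervention(obs):
                self.intervene(obs)

    def intervene(self, obs: Observation) -> None:
        # Placeholder: a real system might alert a manager or suspend an account.
        print(f"Intervention required for {obs.subject}: {obs.attribute}={obs.value}")

# Hypothetical usage: flag any user who logs in more than 100 times in a day.
m = Monitor(triggers=[lambda o: o.attribute == "logins" and o.value > 100])
m.collect(Observation("user42", "logins", 150))
```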

How many yottabytes in a quettabyte? Extreme numbers get new names


Article by Elizabeth Gibney: “By the 2030s, the world will generate around a yottabyte of data per year — that’s 10²⁴ bytes, or the amount that would fit on DVDs stacked all the way to Mars. Now, the booming growth of the data sphere has prompted the governors of the metric system to agree on new prefixes beyond that magnitude, to describe the outrageously big and small.

Representatives from governments worldwide, meeting at the General Conference on Weights and Measures (CGPM) outside Paris on 18 November, voted to introduce four new prefixes to the International System of Units (SI) with immediate effect. The prefixes ronna and quetta represent 10²⁷ and 10³⁰, and ronto and quecto signify 10⁻²⁷ and 10⁻³⁰. Earth weighs around one ronnagram, and an electron’s mass is about one quectogram.

This is the first update to the prefix system since 1991, when the organization added zetta (10²¹), zepto (10⁻²¹), yotta (10²⁴) and yocto (10⁻²⁴). In that case, metrologists were adapting to fit the needs of chemists, who wanted a way to express SI units on the scale of Avogadro’s number — the 6 × 10²³ units in a mole, a measure of the quantity of substances. The more familiar prefixes peta and exa were added in 1975 (see ‘Extreme figures’).

Extreme figures

Advances in scientific fields have led to an increasing need for prefixes to describe very large and very small numbers.

Factor   Name     Symbol   Adopted
10³⁰     quetta   Q        2022
10²⁷     ronna    R        2022
10²⁴     yotta    Y        1991
10²¹     zetta    Z        1991
10¹⁸     exa      E        1975
10¹⁵     peta     P        1975
10⁻¹⁵    femto    f        1964
10⁻¹⁸    atto     a        1964
10⁻²¹    zepto    z        1991
10⁻²⁴    yocto    y        1991
10⁻²⁷    ronto    r        2022
10⁻³⁰    quecto   q        2022

Prefixes are agreed at the General Conference on Weights and Measures.

Today, the driver is data science, says Richard Brown, a metrologist at the UK National Physical Laboratory in Teddington. He has been working on plans to introduce the latest prefixes for five years, and presented the proposal to the CGPM on 17 November. With the annual volume of data generated globally having already hit zettabytes, informal suggestions for 1027 — including ‘hella’ and ‘bronto’ — were starting to take hold, he says. Google’s unit converter, for example, already tells users that 1,000 yottabytes is 1 hellabyte, and at least one UK government website quotes brontobyte as the correct term….(More)”
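To answer the headline question concretely, here is a minimal Python sketch (ours, not the article’s) that converts between the SI prefixes listed in the table above. The exponent values mirror the CGPM-adopted prefixes; the dictionary and function names are purely illustrative.

```python
# Selected SI prefixes and their base-10 exponents, as adopted by the CGPM.
SI_PREFIXES = {
    "peta": 15, "exa": 18, "zetta": 21, "yotta": 24, "ronna": 27, "quetta": 30,
    "femto": -15, "atto": -18, "zepto": -21, "yocto": -24, "ronto": -27, "quecto": -30,
}

def convert(value, from_prefix, to_prefix):
    """Express a quantity given with one SI prefix in terms of another."""
    shift = SI_PREFIXES[from_prefix] - SI_PREFIXES[to_prefix]
    return value * 10 ** shift

# The headline question: how many yottabytes in a quettabyte?
print(convert(1, "quetta", "yotta"))  # 1000000 -- one million yottabytes
```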

AI Localism in Practice: Examining How Cities Govern AI


Report by Sara Marcucci, Uma Kalkar, and Stefaan Verhulst: “…serves as a primer for policymakers and practitioners to learn about current governance practices and inspire their own work in the field. In this report, we present the fundamentals of AI governance, the value proposition of such initiatives, and their application in cities worldwide to identify themes among city- and state-led governance actions. We close with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow increasingly ‘smart’, which include: 

  • Principles provide a North Star for governance;
  • Public engagement provides a social license;
  • AI literacy enables meaningful engagement;
  • Tap into local expertise;
  • Innovate in how transparency is provided;
  • Establish new means for accountability and oversight;
  • Signal boundaries through binding laws and policies;
  • Use procurement to shape responsible AI markets;
  • Establish data collaboratives to tackle asymmetries; and
  • Make good governance strategic.

Considered together, we look to use our understanding of governance practices, local AI governance examples, and the ten overarching lessons to create an incipient framework for implementing and assessing AI localism initiatives in cities around the world….(More)”

Measuring the environmental impacts of artificial intelligence compute and applications


OECD Paper: “Artificial intelligence (AI) systems can use massive computational resources, raising sustainability concerns. This report aims to improve understanding of the environmental impacts of AI, and help measure and decrease AI’s negative effects while enabling it to accelerate action for the good of the planet. It distinguishes between the direct environmental impacts of developing, using and disposing of AI systems and related equipment, and the indirect costs and benefits of using AI applications. It recommends establishing measurement standards, expanding data collection, identifying AI-specific impacts, looking beyond operational energy use and emissions, and improving transparency and equity to help policy makers make AI part of the solution to sustainability challenges…(More)”.

Marine Data Sharing: Challenges, Technology Drivers and Quality Attributes


Paper by Keila Lima et al.: “Many companies have been adopting data-driven applications in which products and services are centered around data analysis to approach new segments of the marketplace. Data ecosystems arise from deliberate data sharing among organizations. However, this migration to the new data-sharing paradigm has not come as far in the marine domain. Nevertheless, making better use of ocean data might be crucial for humankind in the future, for food production and minerals, and to ensure the ocean’s health….We investigate the state of the art regarding data sharing in the marine domain with a focus on aspects that impact the speed of establishing a data ecosystem for the ocean. We conducted an exploratory case study based on focus groups and workshops to understand the sharing of data in this context. Results: We identified the main challenges of current systems that need to be addressed with respect to data sharing. Additionally, aspects related to the establishment of a data ecosystem were elicited and analyzed in terms of benefits, conflicts, and solutions…(More)”.

What is PeaceTech?


Report by Behruz Davletov, Uma Kalkar, Marine Ragnet, and Stefaan Verhulst: “From sensors to detect explosives to geographic data for disaster relief to artificial intelligence verifying misleading online content, data and technology are essential assets for peace efforts. Indeed, the ongoing Russia-Ukraine war is a direct example of how data, data science, and technology as a whole have been mobilized to assist and monitor conflict responses and support peacebuilding.

Yet understanding of the ways in which technology can be applied for peace, the kinds of peace promotion it can serve, and the associated risks remains muddled. Thus, a framework for the governance of these peace technologies—#PeaceTech—is needed at an international and transnational level to guide the responsible and purposeful use of technology and data to strengthen peace and justice initiatives.

Today, The GovLab is proud to announce the release of the “PeaceTech Topic Map: A Research Base for an Emerging Field,” an overview of the key themes and challenges of technologies used by and created for peace efforts…(More)”.

The Future of Self-Governing, Thriving Democracies


Book by Brigitte Geissel: “This book offers a new approach for the future of democracy, advocating that citizens be given the power to deliberate and to decide how to govern themselves.

Innovatively building on and integrating components of representative, deliberative and participatory theories of democracy with empirical findings, the book provides practices and procedures that support communities of all sizes to develop their own visions of democracy. It revitalizes and reinfuses the ‘democratic spirit’ going back to the roots of democracy as an endeavor by, with and for the people, and should inspire us in our search for the democracy we want to live in.

This book is of key interest to scholars and students in democracy, democratic innovations, deliberation, civic education and governance and further for policy-makers, civil society groups and activists. It encourages us to reshape democracy based on citizens’ perspectives, aspirations and preferences…(More)”.