The Ethics of Automated Warfare and Artificial Intelligence


Essay series introduced by Bessma Momani, Aaron Shull and Jean-François Bélanger: “…begins with a piece written by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner looks at the issue of deterrence and provides an account of the various ways AI may impact our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how different countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation, since those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner’s essay emphasizes that large differences in states’ technological capabilities will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.

Looking to the future of non-state drone use as an example, the transfer of weapon technology from nation-states to non-state actors can help us understand how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the playing field by asymmetrically advantaging conventionally weaker forces. The increasing diffusion of drone technology makes it more likely that future wars will also be drone wars, whether those drones are autonomous systems or not. In the hands of non-state actors, this technology implies that future Western missions against, say, insurgent or guerrilla forces will be more difficult.

Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to undermine adversaries by altering their very data sets: targeting centralized biometric facilities, tampering with individuals’ DNA sequences in genomic analysis databases, or injecting fallacious data into the satellite imagery used for situational awareness. Drawing on the implications of international humanitarian law (IHL), Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation that falls below the threshold of armed conflict. She evaluates the challenges associated with adversarial data manipulation, given that there is no internationally agreed-upon definition of what constitutes cyberattacks or cyber hostilities within IHL.

In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects. This makes it more difficult to hold an individual or state accountable for unlawful harmful acts. But beyond this added complexity surrounding legal accountability, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic yet remain technically within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.

Some assert that deploying autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and apparent capacity to behave in unexpected ways pose new and unpredictable risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, and argues that regulations mandating a certain level of human control over weapon systems are needed. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored…(More)”.

The Socio-Legal Lab: An Experiential Approach to Research on Law in Action


Guide by Siddharth Peter de Souza and Lisa Hahn: “…interactive workbook for socio-legal research projects. It employs the idea of a “lab” as a space for interactive and experiential learning. As an introductory book, it addresses researchers of all levels who are beginning to explore interdisciplinary research on law and are looking for guidance on how to do so. Likewise, the book can be used by teachers and peer groups to experiment with teaching and thinking about law in action through lab-based learning…

The book covers themes and questions that may arise during a socio-legal research project. It starts by examining what research and interdisciplinarity mean and the forms in which they can be practiced. After an overview of the research process, we will discuss how research in action is often unpredictable and messy. Thus, the practical and ethical challenges of doing research will be discussed alongside processes of knowledge production and the assumptions we hold as researchers.

Conducting a socio-legal research project further requires an overview of the theoretical landscape. We will introduce general debates about the nature, functions, and effects of law in society. Further, common dichotomies in socio-legal research, such as “law” and “the social” or “qualitative” and “quantitative” research, will be explored, along with suggestions for how to bridge them.

Turning to the application side of socio-legal research, the book delves deeper into questions of data on law and society: where to collect it and how to handle it reflexively. It discusses different methods of qualitative socio-legal research and offers ways in which they can be experienced through exercises and simulations. In the research process, generating results is followed by publishing and communicating them. We will explore different ways to ensure the reach and impact of one’s research by communicating results through journals, blogs or social media. Finally, the book also discusses academia as a social space and the value of creating and using networks and peer groups for mutual support.

Overall, the workbook is designed to accompany and inspire researchers on their way through a socio-legal research project, empowering them to think more creatively about their methods while at the same time demystifying them…(More)”.

OECD Good Practice Principles for Public Service Design and Delivery in the Digital Age


OECD Report: “The digital age provides great opportunities to transform how public services are designed and delivered. The OECD Good Practice Principles for Public Service Design and Delivery in the Digital Age provide a clear, actionable and comprehensive set of objectives for the high-quality digital transformation of public services. Reflecting insights gathered from across OECD member countries, these nine principles are arranged under three pillars: “Build accessible, ethical and equitable public services that prioritise user needs, rather than government needs”; “Deliver with impact, at scale and with pace”; and “Be accountable and transparent in the design and delivery of public services to reinforce and strengthen public trust”. The principles are advisory rather than prescriptive, allowing for local interpretation and implementation. They should also be considered in conjunction with wider OECD work to equip governments to harness the potential of digital technology and data to improve outcomes for all…(More)”.

Machine Learning in Public Policy: The Perils and the Promise of Interpretability


Report by Evan D. Peet, Brian G. Vegetabile, Matthew Cefalu, Joseph D. Pane, and Cheryl L. Damberg: “Machine learning (ML) can have a significant impact on public policy by modeling complex relationships and augmenting human decisionmaking. However, overconfidence in results and incorrectly interpreted algorithms can lead to peril, such as the perpetuation of structural inequities. In this Perspective, the authors give an overview of ML and discuss the importance of its interpretability. In addition, they offer the following recommendations, which will help policymakers develop trustworthy, transparent, and accountable information that leads to more-objective and more-equitable policy decisions: (1) improve data through coordinated investments; (2) approach ML expecting interpretability, and be critical; and (3) leverage interpretable ML to understand policy values and predict policy impacts…(More)”.
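
To make the interpretability recommendation concrete, the sketch below (illustrative only, not taken from the report) contrasts an inherently interpretable model, whose coefficients can be read directly, with a black-box model that needs a post hoc explanation; all feature names and data are invented:

```python
# Hypothetical sketch: interpretable vs. black-box models on synthetic
# "policy" data. Feature names and relationships are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(50, 15, n)      # household income (thousands, assumed)
distance = rng.normal(10, 4, n)     # distance to service centre (km, assumed)
prior_use = rng.integers(0, 2, n)   # prior program participation (0/1)
X = np.column_stack([income, distance, prior_use])
names = ["income", "distance_km", "prior_use"]

# Synthetic enrolment outcome, driven mostly by income and prior use.
logit = -0.08 * income + 0.02 * distance + 1.5 * prior_use + 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable model: each coefficient is a direct, auditable effect size.
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
for name, coef in zip(names, lr.coef_[0]):
    print(f"{name:12s} coefficient: {coef:+.3f}")

# Black-box model: accuracy may be similar, but explaining *why* requires
# an added layer such as permutation importance, which is easier to misread.
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, mean in zip(names, imp.importances_mean):
    print(f"{name:12s} permutation importance: {mean:.3f}")
```

The gap between the two outputs illustrates the report’s caution: post hoc explanations of opaque models are a step removed from the model itself, and overconfident readings of them are exactly the peril the authors warn against.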

A Data Capability Framework for the not-for-profit sector


Report by Anthony McCosker, Frances Shaw, Xiaofang Yao and Kath Albury: “As community services rapidly digitise, they are generating more data than ever before. These transformations are leading to innovation in data analysis and enthusiasm about the potential for data-driven decision making. However, the increased use of personal data and automated systems raises ethical issues, including how to gain and maintain community trust, and introduces challenges in building knowledge, skills and capability.

Despite optimism across the not-for-profit (NFP) sector about the use of data analysis and automation to improve services and social impact, we are already seeing a growing data divide. Private sector companies have for some time invested heavily in data science and machine learning. However, many in the NFP sector are unsure how to meet the demands of these digital and data transformations. With limited resources, small, medium and large organisations alike face challenges in building their data capability and channelling it toward improved social outcomes. Working with marginalised clients, collecting sensitive personal information, and tackling seemingly intractable cycles of disadvantage, the sector needs a data capability revolution.

This short guide sets out a Data Capability Framework developed with and for the NFP sector and explains how it can be used to raise the bar in the use of data for impact and innovation. It conceptualises the core dimensions of data capability that need to be addressed. These dimensions can be tailored to meet an organisation’s specific strategic goals, impact and outcomes.

The Framework distils the challenges and successes of organisations we have worked with. It represents both the factors that underpin effective data capability and the pathways to achieving it. In other words, as technologies and data science techniques continue to change, data capability is both an outcome to aspire to and a dynamic, ongoing process of experimentation and adaptation…(More)”.

AI Audit-Washing and Accountability


Report by Ellen P. Goodman and Julia Tréhu: “…finds that auditing could be a robust means for holding AI systems accountable, but today’s auditing regimes are not yet adequate to the job. The report assesses the effectiveness of various auditing regimes and proposes guidelines for creating trustworthy auditing systems.

Various government and private entities rely on or have proposed audits as a way of ensuring AI systems meet legal, ethical and other standards. This report finds that audits can in fact provide an agile co-regulatory approach—one that relies on both governments and private entities—to ensure societal accountability for algorithmic systems through private oversight.

But the “algorithmic audit” remains ill-defined and inexact, whether concerning social media platforms or AI systems generally. The risk is significant that inadequate audits will obscure problems with algorithmic systems. A poorly designed or executed audit is at best meaningless and at worst excuses the very harms it claims to mitigate.

Inadequate audits, or those without clear standards, provide false assurance of compliance with norms and laws, “audit-washing” problematic or illegal practices. As with green-washing and ethics-washing before, audit-washing lets the audited entity claim credit without doing the work.

The paper identifies the core specifications needed for algorithmic audits to be a reliable AI accountability mechanism (see the sketch after this list):

  • “Who” conducts the audit — clearly defined qualifications, conditions for data access, and guardrails for internal audits;
  • “What” is the type and scope of the audit — including its position within a larger sociotechnical system;
  • “Why” the audit is being conducted — whether for narrow legal standards or broader ethical goals, a distinction essential for audit comparison, along with potential costs; and
  • “How” the audit standards are determined — an important baseline for the development of audit certification mechanisms and a guard against audit-washing.
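
As an illustration only, and not a schema proposed by the report, the four specifications might be recorded as structured audit metadata so that audits become comparable and certifiable; every field name and value below is hypothetical:

```python
# Hypothetical sketch: the "who/what/why/how" of an algorithmic audit as
# structured metadata. Field names are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class AlgorithmicAuditRecord:
    # Who: auditor identity, qualifications, and independence guardrails
    auditor: str
    qualifications: list[str]
    internal_audit: bool               # internal audits need extra guardrails
    data_access_terms: str
    # What: type and scope, including the wider sociotechnical system
    system_audited: str
    scope: str
    # Why: narrow legal compliance vs. broader ethical goals, plus cost
    purpose: str
    estimated_cost_usd: float
    # How: which standards were applied and who set them, a baseline for
    # certification and a guard against audit-washing
    standards_applied: list[str]
    standards_origin: str

example = AlgorithmicAuditRecord(
    auditor="Independent Audit Co.",
    qualifications=["ML evaluation", "bias testing"],
    internal_audit=False,
    data_access_terms="read-only access to model outputs and documentation",
    system_audited="resume-screening model v2.1",
    scope="disparate impact across protected groups",
    purpose="compliance with a local hiring-algorithm law",
    estimated_cost_usd=50_000.0,
    standards_applied=["statistical parity checks", "documentation review"],
    standards_origin="regulator-issued guidance",
)
print(f"{example.system_audited}: audited by {example.auditor}")
```

Making such fields explicit and public is one way to resist audit-washing: an audit that cannot say who performed it, to what standard, and why offers little real assurance.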

Algorithmic audits have the potential to increase the reliability and innovation of technology in the twenty-first century, much as financial audits transformed the way businesses operated in the twentieth century. They will take different forms, either within a sector or across sectors, especially for systems that pose the highest risk. Ensuring that AI is accountable and trusted is key to ensuring that democracies remain centers of innovation while shaping technology to democratic values…(More)”

AI Localism in Practice: Examining How Cities Govern AI


Report by Sara Marcucci, Uma Kalkar, and Stefaan Verhulst: “…serves as a primer for policymakers and practitioners to learn about current governance practices and inspire their own work in the field. In this report, we present the fundamentals of AI governance, the value proposition of such initiatives, and their application in cities worldwide to identify themes among city- and state-led governance actions. We close with ten lessons on AI localism for policymakers, data and AI experts, and the informed public to keep in mind as cities grow ever ‘smarter’, which include:

  • Principles provide a North Star for governance;
  • Public engagement provides a social license;
  • AI literacy enables meaningful engagement;
  • Tap into local expertise;
  • Innovate in how transparency is provided;
  • Establish new means for accountability and oversight;
  • Signal boundaries through binding laws and policies;
  • Use procurement to shape responsible AI markets;
  • Establish data collaboratives to tackle asymmetries; and
  • Make good governance strategic.

Taking these together, we use our understanding of governance practices, local AI governance examples, and the ten overarching lessons to create an incipient framework for implementing and assessing AI localism initiatives in cities around the world….(More)”

Digital rights and principles: a digital transformation for EU citizens


Press Release: “The Commission welcomes the agreement reached yesterday with the Parliament and the Council on the European declaration on digital rights and principles. The declaration, proposed in January, establishes a clear reference point for the kind of human-centred digital transformation that the EU promotes and defends, at home and abroad.

It builds on key EU values and freedoms and will benefit all individuals and businesses. The declaration will also provide a guide for policymakers and companies when dealing with new technologies. The declaration focuses on six key areas: putting people at the centre of the digital transformation; solidarity and inclusion; freedom of choice; participation in digital life; safety and security; and sustainability…(More)” See also: European Digital Rights and Principles

Measuring the environmental impacts of artificial intelligence compute and applications


OECD Paper: “Artificial intelligence (AI) systems can use massive computational resources, raising sustainability concerns. This report aims to improve understanding of the environmental impacts of AI, and to help measure and decrease AI’s negative effects while enabling it to accelerate action for the good of the planet. It distinguishes between the direct environmental impacts of developing, using and disposing of AI systems and related equipment, and the indirect costs and benefits of using AI applications. It recommends establishing measurement standards, expanding data collection, identifying AI-specific impacts, looking beyond operational energy use and emissions, and improving transparency and equity to help policymakers make AI part of the solution to sustainability challenges…(More)”.
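
As a rough, hypothetical illustration of the direct operational impacts the paper wants measured, the emissions of a single training run are often estimated from hardware power draw, run time, data-centre overhead and grid carbon intensity; every figure below is an assumption for demonstration, not an OECD number:

```python
# Back-of-the-envelope sketch (not an OECD methodology): operational CO2
# emissions of a hypothetical AI training run. All inputs are assumptions.
gpu_count = 64          # accelerators used
gpu_power_kw = 0.4      # average draw per accelerator, in kW
hours = 24 * 14         # a two-week training run
pue = 1.4               # data-centre power usage effectiveness (overhead)
grid_intensity = 0.45   # kg CO2-eq per kWh of local grid electricity

energy_kwh = gpu_count * gpu_power_kw * hours * pue
emissions_kg = energy_kwh * grid_intensity
print(f"Energy: {energy_kwh:,.0f} kWh; emissions: {emissions_kg:,.0f} kg CO2-eq")
```

Even this simple accounting covers only operational energy use; it omits the embodied impacts of manufacturing and disposing of the equipment, which is one reason the paper urges looking beyond operational energy and emissions.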

What is PeaceTech?


Report by Behruz Davletov, Uma Kalkar, Marine Ragnet, and Stefaan Verhulst: “From sensors that detect explosives to geographic data for disaster relief to artificial intelligence that verifies misleading online content, data and technology are essential assets for peace efforts. Indeed, the ongoing Russia-Ukraine war is a direct example of how data, data science, and technology as a whole have been mobilized to assist and monitor conflict responses and support peacebuilding.

Yet our understanding of the ways in which technology can be applied for peace, the kinds of peace promotion it can serve, and its associated risks remains muddled. Thus, a framework for the governance of these peace technologies—#PeaceTech—is needed at an international and transnational level to guide the responsible and purposeful use of technology and data to strengthen peace and justice initiatives.

Today, The GovLab is proud to announce the release of the “PeaceTech Topic Map: A Research Base for an Emerging Field,” an overview of the key themes and challenges of technologies used by and created for peace efforts…(More)”.