The Ethics of Automated Warfare and Artificial Intelligence


Essay series introduced by Bessma Momani, Aaron Shull and Jean-François Bélanger: “…begins with a piece written by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner looks at the issue of deterrence and provides an account of the various ways AI may impact our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how different countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation — as those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner’s essay emphasizes that differences in states’ technological capability are large, and this will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.

The transfer of weapon technology from nation-states to non-state actors, illustrated by the non-state use of drones, can help us understand how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the playing field by asymmetrically advantaging conventionally weaker forces. The increased diffusion of drone technology makes it more likely that future wars will also be drone wars, whether the drones involved are autonomous systems or not. In the hands of non-state actors, this technology implies that future Western missions against, say, insurgent or guerrilla forces will be more difficult.

Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to undermine adversaries by altering their very data sets — whether by targeting centralized biometric facilities or individuals’ DNA sequences in genomic analysis databases, or by injecting fallacious data into the satellite imagery used for situational awareness. Drawing on international humanitarian law (IHL), Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation, one that falls below the threshold of armed conflict. She evaluates the challenges associated with adversarial data manipulation, given that there is no internationally agreed-upon definition of what constitutes cyberattacks or cyber hostilities within IHL.

In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects. This makes it more difficult to hold an individual or state accountable for unlawful harmful acts. But beyond this added complexity surrounding legal accountability, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic yet remain technically within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.

Some assert that deploying autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and its apparent capacity to behave in unexpected ways pose new risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, suggesting that regulations are needed to mandate a certain level of human control over weapon systems. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored…(More)”.

Behavioral Economics and the Energy Crisis in Europe


Blog by Carlos Scartascini: “European nations, stunned by Russia’s aggression, have mostly rallied in support of Ukraine, sending weapons and welcoming millions of refugees. But European citizens are paying dearly for it. Apart from the costs in direct assistance, the energy conflict with Russia had sent prices of gas soaring to eight times their 10-year average by the end of September and helped push inflation to around 10%. With a partial embargo of Russian oil going into effect in December and cold weather coming, many Europeans now fear an icy, bitter and poorer winter of 2023.

European governments hope to take the edge off by enacting price regulations, providing energy subsidies for households and, crucially, curbing energy demand. Germany’s government, for example, capped heating in public offices and buildings at 19 degrees Celsius (66.2 degrees Fahrenheit). France has introduced a raft of voluntary measures, ranging from asking public officials to travel by train rather than car, to suggesting that municipalities swap old lamps for LEDs, to designing incentives to get people to car share…

As we know from years of experiments at the IDB in using behavioral economics to achieve policy goals, however, rules and recommendations are not enough. Trust in fellow citizens and in the government is also crucial when calling for a shared sacrifice. That means not appealing to fear, which can lead to deeper divisions in society, energy hoarding, resignation and indifference. Rather, it means appealing to social norms of morality and community.

In using behavioral economics to boost tax compliance in Argentina, for example, we found that sending messages that revealed how fellow citizens were paying their taxes significantly improved tax collection. Revealing how the government was using tax funds to improve people’s lives provided an additional boost to the effort. Posters and television ads in Europe showing people wearing sweaters, turning down their thermostats, insulating their homes and putting up solar panels might similarly instill a sense of common purpose. And signals that governments are trying to relieve hardship might help instill in citizens the need for sacrifice…(More)”.

How Technology Companies Are Shaping the Ukraine Conflict


Article by Abishur Prakash: “Earlier this year, Meta, the company that owns Facebook and Instagram, announced that people could create posts calling for violence against Russia on its social media platforms. This was unprecedented. One of the world’s largest technology firms very publicly picked sides in a geopolitical conflict. Russia was now not just fighting a country but also multinational companies with financial stakes in the outcome. In response, Russia announced a ban on Instagram within its borders. The fallout was significant. The ban, which eventually included Facebook, cost Meta close to $2 billion.

Through the war in Ukraine, technology companies are showing how their decisions can affect geopolitics, which is a massive shift from the past. Until now, technology companies have either been dragged into conflicts because of how customers were using their services (e.g., people putting their houses in the West Bank on Airbnb) or have followed the foreign policy of governments (e.g., SpaceX supplying Internet to Iran after the United States removed some sanctions)…(More)”.

Democratised and declassified: the era of social media war is here


Essay by David V. Gioe & Ken Stolworthy: “In October 1962, Adlai Stevenson, US ambassador to the United Nations, grilled Soviet Ambassador Valerian Zorin about whether the Soviet Union had deployed nuclear-capable missiles to Cuba. While Zorin waffled (and didn’t know in any case), Stevenson went in for the kill: ‘I am prepared to wait for an answer until Hell freezes over… I am also prepared to present the evidence in this room.’ Stevenson then theatrically revealed several poster-sized photographs from a US U-2 spy plane, showing Soviet missile bases in Cuba, directly contradicting Soviet claims to the contrary. It was the first time that (formerly classified) imagery intelligence (IMINT) had been marshalled as evidence to publicly refute another state in high-stakes diplomacy, but it also revealed the capabilities of US intelligence collection to a stunned audience. 

During the Cuban missile crisis — and indeed until the end of the Cold War — such exquisite airborne and satellite collection was exclusively the purview of the US, UK and USSR. The world (and the world of intelligence) has come a long way in the past 60 years. By the time President Putin launched his ‘special military operation’ in Ukraine in late February 2022, IMINT and geospatial intelligence (GEOINT) were already highly democratised. Commercial satellite imagery providers such as Maxar, and platforms such as Google Earth, make high-resolution images available free of charge. Thanks to such ubiquitous imagery online, anyone could see – in remarkable clarity – that the Russian military was massing on Ukraine’s border. Geolocation-stamped photos and user-generated videos uploaded to social media platforms, such as Telegram or TikTok, enabled further refinement of – and confidence in – the view of Russian military activity. And continued citizen collection showed changes in Russian positions over time without waiting for another satellite to pass over the area. Of course, such a show of force was not guaranteed to presage an invasion, but there was no hiding the composition and scale of the build-up.

Once the Russians actually invaded, there was another key development – the democratisation of near real-time battlefield awareness. In a digitally connected context, everyone can be a sensor or intelligence collector, wittingly or unwittingly. This dispersed, crowd-sourced collection against the Russian campaign was based on the huge number of people taking pictures of Russian military equipment and formations in Ukraine and posting them online. These average citizens likely had no idea what exactly they were snapping a picture of, but established military experts on the internet did. Sometimes within minutes, internet platforms such as Twitter had thread after thread identifying what the pictures showed and what they revealed, providing what intelligence professionals call Russian ‘order of battle’…(More)”.

Big Tech Goes to War


Article by Christine H. Fox and Emelia S. Probasco: “Even before he made a bid to buy Twitter, Elon Musk was an avid user of the site. It is a reason Ukraine’s Minister of Digital Transformation Mykhailo Fedorov took to the social media platform to prod the SpaceX CEO to activate Starlink, a SpaceX division that provides satellite internet, to help his country in the aftermath of Russia’s invasion. “While you try to colonize Mars—Russia try [sic] to occupy Ukraine!” Fedorov wrote on February 26. “We ask you to provide Ukraine with Starlink stations.”

“Starlink service is now active in Ukraine,” Musk tweeted that same day. This was a coup for Ukraine: it facilitated Ukrainian communications in the conflict. Starlink later helped fend off Russian jamming attacks against its service to Ukraine with a quick and relatively simple code update. Now, however, Musk has gone back and forth on whether the company will continue funding the Starlink satellite service that has kept Ukraine and its military online during the war.

The tensions and uncertainty Musk is injecting into the war effort demonstrate the challenges that can emerge when companies play a key role in military conflict. Technology companies ranging from Microsoft to Silicon Valley start-ups have provided cyberdefense, surveillance, and reconnaissance services — not by direction of a government contract or even as part of a government plan, but through the independent decision-making of individual companies. These companies’ efforts have rightly garnered respect and recognition; their involvement, after all, was often pro bono and could have provoked Russian attacks on their networks, or even their people, in retaliation…(More)”.

How Do You Prove a Secret?


Essay by Sheon Han: “Imagine you had some useful knowledge — maybe a secret recipe, or the key to a cipher. Could you prove to a friend that you had that knowledge, without revealing anything about it? Computer scientists proved over 30 years ago that you could, if you used what’s called a zero-knowledge proof.

For a simple way to understand this idea, let’s suppose you want to show your friend that you know how to get through a maze, without divulging any details about the path. You could simply traverse the maze within a time limit, while your friend was forbidden from watching. (The time limit is necessary because given enough time, anyone can eventually find their way out through trial and error.) Your friend would know you could do it, but they wouldn’t know how.

Zero-knowledge proofs are helpful to cryptographers, who work with secret information, but also to researchers of computational complexity, which deals with classifying the difficulty of different problems. “A lot of modern cryptography relies on complexity assumptions — on the assumption that certain problems are hard to solve, so there has always been some connections between the two worlds,” said Claude Crépeau, a computer scientist at McGill University. “But [these] proofs have created a whole world of connection.”

Zero-knowledge proofs belong to a category known as interactive proofs, so to learn how the former work, it helps to understand the latter. First described in a 1985 paper by the computer scientists Shafi Goldwasser, Silvio Micali and Charles Rackoff, interactive proofs work like an interrogation: Over a series of messages, one party (the prover) tries to convince the other (the verifier) that a given statement is true. An interactive proof must satisfy two properties. First, a true statement will always eventually convince an honest verifier. Second, if the given statement is false, no prover — even one pretending to possess certain knowledge — can convince the verifier, except with negligibly small probability…(More)”
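The commit-challenge-response shape of an interactive proof can be sketched with the classic Schnorr identification protocol, one well-known zero-knowledge-style construction (not the specific protocols from the 1985 paper). The toy parameters below are chosen for readability; real deployments use cryptographically large groups.

```python
import random

# Schnorr identification protocol: an interactive proof of knowledge.
# Toy parameters for illustration only.
P = 23          # public prime modulus
Q = 11          # order of the subgroup generated by G (Q divides P - 1)
G = 2           # generator of that subgroup

def schnorr_round(secret_x, public_y):
    """One round: the prover convinces the verifier it knows x with
    y = G^x mod P, without revealing anything about x itself."""
    # Prover commits to a fresh random nonce.
    r = random.randrange(Q)
    commitment = pow(G, r, P)
    # Verifier issues a random challenge.
    challenge = random.randrange(Q)
    # Prover responds; the nonce r blinds the secret x.
    response = (r + challenge * secret_x) % Q
    # Verifier checks G^s == t * y^c (mod P).
    return pow(G, response, P) == (commitment * pow(public_y, challenge, P)) % P

x = 7                 # the prover's secret
y = pow(G, x, P)      # the public value derived from it

# Repeating the round drives a dishonest prover's success
# probability toward zero, matching the second property above.
assert all(schnorr_round(x, y) for _ in range(20))
print("verifier convinced")
```

A cheating prover who does not know `x` can only guess the challenge in advance, so each extra round multiplies its chance of slipping through by roughly 1/Q, mirroring the "negligibly small probability" in the definition.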

The European Union-U.S. Data Privacy Framework


White House Fact Sheet: “Today, President Biden signed an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (E.O.) directing the steps that the United States will take to implement the U.S. commitments under the European Union-U.S. Data Privacy Framework (EU-U.S. DPF) announced by President Biden and European Commission President von der Leyen in March of 2022. 

Transatlantic data flows are critical to enabling the $7.1 trillion EU-U.S. economic relationship.  The EU-U.S. DPF will restore an important legal basis for transatlantic data flows by addressing concerns that the Court of Justice of the European Union raised in striking down the prior EU-U.S. Privacy Shield framework as a valid data transfer mechanism under EU law. 

The Executive Order bolsters an already rigorous array of privacy and civil liberties safeguards for U.S. signals intelligence activities. It also creates an independent and binding mechanism enabling individuals in qualifying states and regional economic integration organizations, as designated under the E.O., to seek redress if they believe their personal data was collected through U.S. signals intelligence in a manner that violated applicable U.S. law.

U.S. and EU companies large and small across all sectors of the economy rely upon cross-border data flows to participate in the digital economy and expand economic opportunities. The EU-U.S. DPF represents the culmination of a joint effort by the United States and the European Commission to restore trust and stability to transatlantic data flows and reflects the strength of the enduring EU-U.S. relationship based on our shared values…(More)”.

How one group of ‘fellas’ is winning the meme war in support of Ukraine


Article by Suzanne Smalley: “The North Atlantic Fella Organization, or NAFO, has arrived.

Ukraine’s Defense Ministry celebrated the group on Twitter for waging a “fierce fight” against Kremlin trolls. And Rep. Adam Kinzinger, R-Ill., tweeted that he was “self-declaring as a proud member of #NAFO” and “the #fellas shall prevail.”

The brainchild of former Marine Matt Moores, NAFO launched in May and quickly blew up on Twitter. It has become something of a movement, drawing support from military and cybersecurity circles that circulate its memes backing Ukraine in its war against Russia.

“The power of what we’re doing is that instead of trying to come in and point-by-point refute, and argue about what’s true and what isn’t, it’s coming and saying, ‘Hey, that’s dumb,’” Moores said during a panel on Wednesday at the Center for Strategic and International Studies in Washington. “And the moment somebody’s replying to a cartoon dog online, you’ve lost if you work for the government of Russia.”

Memes have figured heavily in the information war following the Russian invasion. The Ukrainian government has proven eager to highlight memes on agency websites and officials have been known to personally thank online communities that spread anti-Russian memes. The NAFO meme shared by the defense ministry in August showed a Shiba Inu dog in a military uniform appearing to celebrate a missile launch.

The Shiba Inu has long been a motif in internet culture. According to Vice’s Motherboard, the use of Shiba Inu to represent a “fella” waging online war against the Russians dates to at least May when an artist started rewarding fellas who donated money to the Georgian Legion by creating customized fella art for online use…(More)”.

Applications of an Analytic Framework on Using Public Opinion Data for Solving Intelligence Problems


Report by the National Academies of Sciences, Engineering, and Medicine: “Measuring and analyzing public opinion comes with tremendous challenges, as evidenced by recent struggles to predict election outcomes and to anticipate mass mobilizations. The National Academies of Sciences, Engineering, and Medicine publication Measurement and Analysis of Public Opinion: An Analytic Framework presents in-depth information from experts on how to collect and glean insights from public opinion data, particularly in conditions where contextual issues call for applying caveats to those data. The Analytic Framework is designed specifically to help intelligence community analysts apply insights from the social and behavioral sciences on state-of-the-art approaches to analyze public attitudes in non-Western populations. Sponsored by the intelligence community, the National Academies’ Board on Behavioral, Cognitive, and Sensory Sciences hosted a 2-day hybrid workshop on March 8–9, 2022, to present the Analytic Framework and to demonstrate its application across a series of hypothetical scenarios that might arise for an intelligence analyst tasked with summarizing public attitudes to inform a policy decision. Workshop participants explored cutting-edge methods for using large-scale data as well as cultural and ethical considerations for the collection and use of public opinion data. This publication summarizes the presentations and discussions of the workshop…(More)”.

Supporting peace negotiations in the Yemen war through machine learning


Paper by Miguel Arana-Catania, Felix-Anselm van Lier and Rob Procter: “Today’s conflicts are becoming increasingly complex, fluid, and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace-making, or the identification of key conflict issues and their interdependence. International peace efforts appear ill-equipped to successfully address these challenges. While technology is already being experimented with and used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study contributes to emerging research on the use of state-of-the-art machine learning technologies and techniques in conflict mediation processes. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine learning can effectively support mediating teams by providing them with tools for knowledge management, extraction and conflict analysis. Apart from illustrating the potential of machine learning tools in conflict mediation, the article also emphasizes the importance of an interdisciplinary and participatory co-creation methodology for developing context-sensitive and targeted tools and for ensuring meaningful and responsible implementation…(More)”.
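A minimal sketch can convey the kind of knowledge extraction the authors describe: surfacing the issues that distinguish each negotiating party from transcript text. The transcript, party names, and TF-IDF weighting below are illustrative assumptions, not the paper’s actual data or method.

```python
import math
import re
from collections import Counter

# Hypothetical mini-transcript; speakers and wording are invented
# for illustration, not drawn from the Yemen negotiation data.
transcript = {
    "Party A": "ceasefire first then prisoner exchange and port access",
    "Party B": "port access and fuel revenue before any ceasefire talks",
    "Mediator": "agenda covers ceasefire prisoner exchange and revenue",
}

def tokenize(text):
    """Lowercase and split into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

# One term-frequency table per speaker.
docs = {who: Counter(tokenize(text)) for who, text in transcript.items()}
n_docs = len(docs)

def tfidf(term, counts):
    """TF-IDF: terms used by every party (e.g. 'ceasefire') score zero,
    while terms distinctive to one party score high."""
    df = sum(1 for c in docs.values() if term in c)
    return counts[term] * math.log((1 + n_docs) / (1 + df))

# Rank each party's most distinctive terms -- a crude proxy for the
# issue positions a mediating team would want to track over time.
for who, counts in docs.items():
    top = sorted(counts, key=lambda t: tfidf(t, counts), reverse=True)[:3]
    print(who, "->", top)
```

In practice the paper’s tooling would operate on far larger transcripts and modern NLP models rather than bag-of-words scores, but the underlying goal is the same: turning unstructured dialogue into a structured view of parties and issues.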