Open-source intelligence is piercing the fog of war in Ukraine


The Economist: “…The rise of open-source intelligence, OSINT to insiders, has transformed the way that people receive news. In the run-up to war, commercial satellite imagery and video footage of Russian convoys on TikTok, a social-media site, allowed journalists and researchers to corroborate Western claims that Russia was preparing an invasion. OSINT even predicted its onset. Jeffrey Lewis of the Middlebury Institute in California used Google Maps’ road-traffic reports to identify a tell-tale jam on the Russian side of the border at 3:15am on February 24th. “Someone’s on the move”, he tweeted. Less than three hours later Vladimir Putin launched his war.

Satellite imagery still plays a role in tracking the war. During the Kherson offensive, synthetic-aperture radar (SAR) satellites, which can see at night and through clouds, showed Russia building pontoon bridges over the Dnieper river before its retreat from Kherson, boats appearing and disappearing as troops escaped east and, later, Russia’s army building new defensive positions along the M14 highway on the river’s left bank. And when Ukrainian drones struck two air bases deep inside Russia on December 5th, high-resolution satellite images showed the extent of the damage…(More)”.

Big Data and the Law of War


Essay by Paul Stephan: “Big data looms large in today’s world. Much of the tech sector regards the building up of large sets of searchable data as part (sometimes the greater part) of its business model. Surveillance-oriented states, of which China is the foremost example, use big data to guide and bolster monitoring of their own people as well as potential foreign threats. Many other states are not far behind in the surveillance arms race, notwithstanding the attempts of the European Union to put its metaphorical finger in the dike. Finally, ChatGPT has revived popular interest in artificial intelligence (AI) as a cultural, economic, and social phenomenon; AI uses big data to optimize the training and algorithm design on which it depends.

If big data is growing in significance, might it join territory, people, and property as objects of international conflict, including armed conflict? So far it has not been front and center in Russia’s invasion of Ukraine, the war that currently consumes much of our attention. But future conflicts could certainly feature attacks on big data. China and Taiwan, for example, both have sophisticated technological infrastructures that encompass big data and AI capabilities. The risk that they might find themselves at war in the near future is larger than anyone would like. What, then, might the law of war have to say about big data? More generally, if existing law does not meet our needs, how might new international law address the issue?

In a recent essay, part of an edited volume on “The Future Law of Armed Conflict,” I argue that big data is a resource and therefore a potential target in an armed conflict. I address two issues: Under the law governing the legality of war (jus ad bellum), what kinds of attacks on big data might justify an armed response, touching off a bilateral (or multilateral) armed conflict (a war)? And within an existing armed conflict, what are the rules (jus in bello, also known as international humanitarian law, or IHL) governing such attacks?

The distinction is meaningful. If cyber operations rise to the level of an armed attack, then the targeted state has, according to Article 51 of the U.N. Charter, an “inherent right” to respond with armed force. Moreover, the target need not confine its response to a symmetrical cyber operation. Once attacked, a state may use all forms of armed force in response, albeit subject to the restrictions imposed by IHL. If the state regards, say, a takedown of its financial system as an armed attack, it may respond with missiles…(More)”.

Open Secrets: Ukraine and the Next Intelligence Revolution


Article by Amy Zegart: “Russia’s invasion of Ukraine has been a watershed moment for the world of intelligence. For weeks before the shelling began, Washington publicly released a relentless stream of remarkably detailed findings about everything from Russian troop movements to false-flag attacks the Kremlin would use to justify the invasion. 

This disclosure strategy was new: spy agencies are accustomed to concealing intelligence, not revealing it. But it was very effective. By getting the truth out before Russian lies took hold, the United States was able to rally allies and quickly coordinate hard-hitting sanctions. Intelligence disclosures set Russian President Vladimir Putin on his back foot, wondering who and what in his government had been penetrated so deeply by U.S. agencies, and made it more difficult for other countries to hide behind Putin’s lies and side with Russia.

The disclosures were just the beginning. The war has ushered in a new era of intelligence sharing between Ukraine, the United States, and other allies and partners, which has helped counter false Russian narratives, defend digital systems from cyberattacks, and assist Ukrainian forces in striking Russian targets on the battlefield. And it has brought to light a profound new reality: intelligence isn’t just for government spy agencies anymore…

The explosion of open-source information online, commercial satellite capabilities, and the rise of AI are enabling all sorts of individuals and private organizations to collect, analyze, and disseminate intelligence.

In the past several years, for instance, the amateur investigators of Bellingcat—a volunteer organization that describes itself as “an intelligence agency for the people”—have made all kinds of discoveries. Bellingcat identified the Russian hit team that tried to assassinate former Russian intelligence officer Sergei Skripal in the United Kingdom and located supporters of the Islamic State (also known as ISIS) in Europe. It also proved that Russians were behind the shootdown of Malaysia Airlines flight 17 over Ukraine.

Bellingcat is not the only civilian intelligence initiative. When the Iranian government claimed in 2020 that a small fire had broken out in an industrial shed, two U.S. researchers working independently and using nothing more than their computers and the Internet proved within hours that Tehran was lying…(More)”.

How the algorithm tipped the balance in Ukraine


David Ignatius at The Washington Post: “Two Ukrainian military officers peer at a laptop computer operated by a Ukrainian technician using software provided by the American technology company Palantir. On the screen are detailed digital maps of the battlefield at Bakhmut in eastern Ukraine, overlaid with other targeting intelligence — most of it obtained from commercial satellites.

As we lean closer, we can see jagged trenches on the Bakhmut front, where Russian and Ukrainian forces are separated by a few hundred yards in one of the bloodiest battles of the war. A click of the computer mouse displays thermal images of Russian and Ukrainian artillery fire; another click shows a Russian tank marked with a “Z,” seen through a picket fence, an image uploaded by a Ukrainian spy on the ground.

If this were a working combat operations center, rather than a demonstration for a visiting journalist, the Ukrainian officers could use a targeting program to select a missile, artillery piece or armed drone to attack the Russian positions displayed on the screen. Then drones could confirm the strike, and a damage assessment would be fed back into the system.

This is the “wizard war” in the Ukraine conflict — a secret digital campaign that has never been reported before in detail — and it’s a big reason David is beating Goliath here. The Ukrainians are fusing their courageous fighting spirit with the most advanced intelligence and battle-management software ever seen in combat.

“Tenacity, will and harnessing the latest technology give the Ukrainians a decisive advantage,” Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, told me last week. “We are witnessing the ways wars will be fought, and won, for years to come.”

I think Milley is right about the transformational effect of technology on the Ukraine battlefield. And for me, here’s the bottom line: With these systems aiding brave Ukrainian troops, the Russians probably cannot win this war…(More)” See also Part 2.

How data restrictions erode internet freedom


Article by Tom Okman: “Countries across the world – small, large, powerful and weak – are accelerating efforts to control and restrict private data. According to the Information Technology and Innovation Foundation, the number of laws, regulations and policies that restrict or require data to be stored in a specific country more than doubled between 2017 and 2021, rising from 67 to 144.

Some of these laws may be driven by benevolent intentions. After all, citizens will support stopping the spread of online disinformation, hate, and extremism, or curbing systemic cyber-snooping. Cyber-libertarian John Perry Barlow’s call for the government to “leave us alone” in cyberspace rings hollow in this context.

Government internet oversight is on the rise. Image: Information Technology and Innovation Foundation

But some digital policies may prove to be repressive for companies and citizens alike. They extend the justifiable concern over the dominance of large tech companies to other areas of the digital realm.

These “digital iron curtains” can take many forms. What they have in common is that they seek to silo the internet (or parts of it) and private data into national boxes. This risks dividing the internet, reducing its connective potential, and infringing basic digital freedoms…(More)”.

The Ethics of Automated Warfare and Artificial Intelligence


Essay series introduced by Bessma Momani, Aaron Shull and Jean-François Bélanger: “…begins with a piece written by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner looks at the issue of deterrence and provides an account of the various ways AI may impact our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how different countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation — as those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner’s essay emphasizes that differences in states’ technological capability are large, and this will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.

Taking the future of non-state drone use as an example, the transfer of weapon technology from nation-states to non-state actors can help us understand how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the field by asymmetrically advantaging conventionally weaker forces. The increasing diffusion of drone technology raises the likelihood that future wars will also be drone wars, whether these drones are autonomous systems or not. In the hands of non-state actors, this technology implies that future Western missions against, say, insurgent or guerilla forces will be more difficult.

Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to undermine adversaries by altering their very data sets — whether by targeting centralized biometric facilities or individuals’ DNA sequences in genomic analysis databases, or by injecting fallacious data into satellite imagery used for situational awareness. Drawing on the implications of international humanitarian law (IHL), Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation that falls below the threshold of armed conflict. She evaluates the challenges associated with adversarial data manipulation, given that there is no internationally agreed-upon definition of what constitutes cyberattacks or cyber hostilities within IHL.

In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects. This makes it more difficult to hold an individual or state accountable for unlawful harmful acts. But in addition to this added complexity surrounding legal accountability, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic yet remain technically within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.

Some assert that the deployment of autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and apparent capacity to behave in unexpected ways pose new and unpredictable risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, suggesting that there need to be regulations mandating a certain level of human control over weapon systems. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored…(More)”.

Behavioral Economics and the Energy Crisis in Europe


Blog by Carlos Scartascini: “European nations, stunned by Russia’s aggression, have mostly rallied in support of Ukraine, sending weapons and welcoming millions of refugees. But European citizens are paying dearly for it. Apart from the costs in direct assistance, the energy conflict with Russia had sent prices of gas soaring to eight times their 10-year average by the end of September and helped push inflation to around 10%. With a partial embargo of Russian oil going into effect in December and cold weather coming, many Europeans now fear an icy, bitter and poorer winter of 2023.

European governments hope to take the edge off by enacting price regulations, providing energy subsidies for households, and, crucially, curbing energy demand. Germany’s government, for example, capped heating in public offices and buildings at 19 degrees Celsius (66.2 Fahrenheit). France has introduced a raft of voluntary measures, ranging from asking public officials to travel by train rather than car and suggesting that municipalities swap old lamps for LEDs, to designing incentives to get people to car share…

As we know from years of experiments at the IDB in using behavioral economics to achieve policy goals, however, rules and recommendations are not enough. Trust in fellow citizens and in the government is also crucial when calling for a shared sacrifice. That means not appealing to fear, which can lead to deeper divisions in society, energy hoarding, resignation and indifference. Rather, it means appealing to social norms of morality and community.

In using behavioral economics to boost tax compliance in Argentina, for example, we found that sending messages that revealed how fellow citizens were paying their taxes significantly improved tax collection. Revealing how the government was using tax funds to improve people’s lives provided an additional boost to the effort. Posters and television ads in Europe showing people wearing sweaters, turning down their thermostats, insulating their homes and putting up solar panels might similarly instill a sense of common purpose. And signals that governments are trying to relieve hardship might help instill in citizens the need for sacrifice…(More)”.

How Technology Companies Are Shaping the Ukraine Conflict


Article by Abishur Prakash: “Earlier this year, Meta, the company that owns Facebook and Instagram, announced that people could create posts calling for violence against Russia on its social media platforms. This was unprecedented. One of the world’s largest technology firms very publicly picked sides in a geopolitical conflict. Russia was now not just fighting a country but also multinational companies with financial stakes in the outcome. In response, Russia announced a ban on Instagram within its borders. The fallout was significant. The ban, which eventually included Facebook, cost Meta close to $2 billion.

Through the war in Ukraine, technology companies are showing how their decisions can affect geopolitics, which is a massive shift from the past. Technology companies have either been dragged into conflicts because of how customers were using their services (e.g., people putting their houses in the West Bank on Airbnb) or followed the foreign policy of governments (e.g., SpaceX supplying Internet to Iran after the United States removed some sanctions)…(More)”.

Democratised and declassified: the era of social media war is here


Essay by David V. Gioe & Ken Stolworthy: “In October 1962, Adlai Stevenson, US ambassador to the United Nations, grilled Soviet Ambassador Valerian Zorin about whether the Soviet Union had deployed nuclear-capable missiles to Cuba. While Zorin waffled (and didn’t know in any case), Stevenson went in for the kill: ‘I am prepared to wait for an answer until Hell freezes over… I am also prepared to present the evidence in this room.’ Stevenson then theatrically revealed several poster-sized photographs from a US U-2 spy plane, showing Soviet missile bases in Cuba, directly contradicting Soviet claims to the contrary. It was the first time that (formerly classified) imagery intelligence (IMINT) had been marshalled as evidence to publicly refute another state in high-stakes diplomacy, but it also revealed the capabilities of US intelligence collection to a stunned audience. 

During the Cuban missile crisis — and indeed until the end of the Cold War — such exquisite airborne and satellite collection was exclusively the purview of the US, UK and USSR. The world (and the world of intelligence) has come a long way in the past 60 years. By the time President Putin launched his ‘special military operation’ in Ukraine in late February 2022, IMINT and geospatial intelligence (GEOINT) were already highly democratised. Commercial satellite companies, such as Maxar or Google Earth, provide high-resolution images free of charge. Thanks to such ubiquitous imagery online, anyone could see – in remarkable clarity – that the Russian military was massing on Ukraine’s border. Geolocation-stamped photos and user-generated videos uploaded to social media platforms, such as Telegram or TikTok, enabled further refinement of – and confidence in – the view of Russian military activity. And continued citizen collection showed a change in Russian positions over time without waiting for another satellite to pass over the area. Of course, such a show of force was not guaranteed to presage an invasion, but there was no hiding the composition and scale of the build-up.

Once the Russians actually invaded, there was another key development – the democratisation of near real-time battlefield awareness. In a digitally connected context, everyone can be a sensor or intelligence collector, wittingly or unwittingly. This dispersed and crowd-sourced collection against the Russian campaign was based on the huge number of people taking pictures of Russian military equipment and formations in Ukraine and posting them online. These average citizens likely had no idea what exactly they were snapping a picture of, but established military experts on the internet did. Sometimes within minutes, internet platforms such as Twitter had threads and threads explaining what the pictures were and what they revealed, providing what intelligence professionals call Russian ‘order of battle’…(More)”.

Big Tech Goes to War


Article by Christine H. Fox and Emelia S. Probasco: “Even before he made a bid to buy Twitter, Elon Musk was an avid user of the site. It is a reason Ukraine’s Minister of Digital Transformation Mykhailo Fedorov took to the social media platform to prod the SpaceX CEO to activate Starlink, a SpaceX division that provides satellite internet, to help his country in the aftermath of Russia’s invasion. “While you try to colonize Mars—Russia try [sic] to occupy Ukraine!” Fedorov wrote on February 26. “We ask you to provide Ukraine with Starlink stations.”

“Starlink service is now active in Ukraine,” Musk tweeted that same day. This was a coup for Ukraine: it facilitated Ukrainian communications in the conflict. Starlink later helped fend off Russian jamming attacks against its service to Ukraine with a quick and relatively simple code update. Now, however, Musk has gone back and forth on whether the company will continue funding the Starlink satellite service that has kept Ukraine and its military online during the war.

The tensions and uncertainty Musk is injecting into the war effort demonstrate the challenges that can emerge when companies play a key role in military conflict. Technology companies ranging from Microsoft to Silicon Valley start-ups have provided cyberdefense, surveillance, and reconnaissance services—not by direction of a government contract or even as a part of a government plan but instead through the independent decision-making of individual companies. These companies’ efforts have rightly garnered respect and recognition; their involvement, after all, was often pro bono and could have provoked Russian attacks on their networks, or even their people, in retaliation…(More)”.