Five Enablers for a New Phase of Behavioral Science


Article by Michael Hallsworth: “Over recent weeks I’ve been sharing parts of a ‘manifesto’ that tries to give a coherent vision for the future of applied behavioral science. Stepping back, if I had to identify a theme that comes through the various proposals, it would be the need for self-reflective practice.

Behavioral science has seen a tremendous amount of growth and interest over the last decade, largely focused on expanding its uses and methods. My sense is it’s ready for a new phase of maturity. That maturity involves behavioral scientists reflecting on the various ways that their actions are shaped by structural, institutional, environmental, economic, and historical factors.

I’m definitely not exempt from this need for self-reflection. There are times when I’ve focused on a cognitive bias when I should have been spending more time exploring the context and motivations for a decision instead. Sometimes I’ve homed in on a narrow slice of a problem that we can measure, even if that means setting aside wider systemic effects and challenges. Once I spent a long time trying to apply the language of heuristics and biases to explain why people were failing to use the urgent care alternatives to hospital emergency departments, before realizing that their behavior was completely reasonable.

The manifesto critiques things like this, but it doesn’t have all the answers. Because it tries to both cover a lot of ground and go into detail, many of the hard knots of implementation go unpicked. The truth is that writing reports and setting goals is the easy part. Turning those goals into practice is much tougher; as behavioral scientists know, there is often a gap between intention and action.

Right now, I and others don’t always realize the ambitions set out in the manifesto. Changing that is going to take time and effort, and it will involve the discomfort of disrupting familiar practices. Some have made public commitments in this direction; my organization is working on upgrading its practices in line with proposals around making predictions prior to implementation, strengthening randomized controlled trials (RCTs) to cope with complexity, and enabling people to use behavioral science, among others.

But changes by individual actors will not be enough. The big issue is that several of the proposals require coordination. For example, one of the key ideas is the need for more multisite studies that are well coordinated and have clear goals. Another prioritizes developing international professional networks to support projects in low- and middle-income countries…(More)”.

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered chatbot behind Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.

How the Digital Transformation Changed Geopolitics


Paper by Dan Ciuriak: “In the late 2000s, a set of connected technological developments – introduction of the iPhone, deep learning through stacked neural nets, and application of GPUs to neural nets – resulted in the generation of truly astronomical amounts of data and provided the tools to exploit it. As the world emerged from the Great Financial Crisis of 2008-2009, data was decisively transformed from a mostly valueless by-product – “data exhaust” – to the “new oil”, the essential capital asset of the data-driven economy, and the “new plutonium” when deployed in social and political applications. This economy featured steep economies of scale, powerful economies of scope, network externalities in many applications, and pervasive information asymmetry. Strategic commercial policies at the firm and national levels were incentivized by the newfound scope to capture economic rents, destabilizing the rules-based system for trade and investment. At the same time, the new disruptive general-purpose technologies built on the nexus of Big Data, machine learning and artificial intelligence reconfigured geopolitical rivalry in several ways: by shifting great power rivalry onto new and critical grounds on which none had a decisive established advantage; by creating new vulnerabilities to information warfare in societies, especially open societies; and by enhancing the tools for social manipulation and the promotion of political personality cults. Machine learning, which essentially industrialized the very process of learning, drove an acceleration in the pace of innovation, which precipitated industrial policies driven by the desire to capture first mover advantage and by the fear of falling behind.

These developments provide a unifying framework to understand the progressive unravelling of the US-led global system as the decade unfolded, despite the fact that all the major innovations that drove the transition were within the US sphere and the US enjoyed first mover advantages. This is in stark contrast to the previous major economic transition to the knowledge-based economy, in which US leadership on the key innovations extended its dominance for decades and indeed powered its rise to its unipolar moment. The world did not respond well to the changed technological and economic conditions and hence we are at war: hot war, cold war, technological war, trade war, social war, and internecine political war. This paper focuses on the role of technological and economic conditions in shaping geopolitics, which is critical to understand if we are to respond to the current world disorder and to prepare to handle the coming transition in technological and economic conditions to yet another new economic era based on machine knowledge capital…(More)”.

Innovation Power: Why Technology Will Define the Future of Geopolitics


Essay by Eric Schmidt: “When Russian forces marched on Kyiv in February 2022, few thought Ukraine could survive. Russia had more than twice as many soldiers as Ukraine. Its military budget was more than ten times as large. The U.S. intelligence community estimated that Kyiv would fall within one to two weeks at most.

Outgunned and outmanned, Ukraine turned to one area in which it held an advantage over the enemy: technology. Shortly after the invasion, the Ukrainian government uploaded all its critical data to the cloud, so that it could safeguard information and keep functioning even if Russian missiles turned its ministerial offices into rubble. The country’s Ministry of Digital Transformation, which Ukrainian President Volodymyr Zelensky had established just two years earlier, repurposed its e-government mobile app, Diia, for open-source intelligence collection, so that citizens could upload photos and videos of enemy military units. With their communications infrastructure in jeopardy, the Ukrainians turned to Starlink satellites and ground stations provided by SpaceX to stay connected. When Russia sent Iranian-made drones across the border, Ukraine acquired its own drones specially designed to intercept their attacks—while its military learned how to use unfamiliar weapons supplied by Western allies. In the cat-and-mouse game of innovation, Ukraine simply proved nimbler. And so what Russia had imagined would be a quick and easy invasion has turned out to be anything but.

Ukraine’s success can be credited in part to the resolve of the Ukrainian people, the weakness of the Russian military, and the strength of Western support. But it also owes much to the defining new force of international politics: innovation power. Innovation power is the ability to invent, adopt, and adapt new technologies. It contributes to both hard and soft power. High-tech weapons systems increase military might, new platforms and the standards that govern them provide economic leverage, and cutting-edge research and technologies enhance global appeal. There is a long tradition of states harnessing innovation to project power abroad, but what has changed is the self-perpetuating nature of scientific advances. Developments in artificial intelligence in particular not only unlock new areas of scientific discovery; they also speed up that very process. Artificial intelligence supercharges the ability of scientists and engineers to discover ever more powerful technologies, fostering advances in artificial intelligence itself as well as in other fields—and reshaping the world in the process…(More)”.

Open-source intelligence is piercing the fog of war in Ukraine


The Economist: “…The rise of open-source intelligence (OSINT to insiders) has transformed the way that people receive news. In the run-up to war, commercial satellite imagery and video footage of Russian convoys on TikTok, a social-media site, allowed journalists and researchers to corroborate Western claims that Russia was preparing an invasion. OSINT even predicted its onset. Jeffrey Lewis of the Middlebury Institute in California used Google Maps’ road-traffic reports to identify a tell-tale jam on the Russian side of the border at 3:15am on February 24th. “Someone’s on the move”, he tweeted. Less than three hours later Vladimir Putin launched his war.
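
Lewis’s inference is, at bottom, simple anomaly detection: heavy congestion at 3 a.m. on a normally quiet border road is an extreme outlier against that road’s own history. Below is a minimal sketch of that logic, with synthetic numbers and a hypothetical congestion score; Google Maps publishes no such feed, so this illustrates the reasoning rather than his actual method.

```python
# Toy anomaly detection on a road segment's night-time congestion.
# All numbers are synthetic; the "congestion score" is a hypothetical
# 0-100 measure, not a real Google Maps quantity.
from statistics import mean, stdev

# Hypothetical 3 a.m. congestion scores for one border road over 14 nights.
baseline_3am = [4, 6, 3, 5, 7, 4, 2, 5, 6, 3, 4, 5, 6, 4]

def is_anomalous(observation: float, history: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag an observation more than z_threshold standard deviations
    above its historical mean."""
    mu, sigma = mean(history), stdev(history)
    return (observation - mu) / sigma > z_threshold

# A score of 85 at 3:15 a.m. stands far outside the baseline.
print(is_anomalous(85, baseline_3am))  # True -> "Someone's on the move"
```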

Satellite imagery still plays a role in tracking the war. During the Kherson offensive, synthetic-aperture radar (SAR) satellites, which can see at night and through clouds, showed Russia building pontoon bridges over the Dnieper river before its retreat from Kherson, boats appearing and disappearing as troops escaped east and, later, Russia’s army building new defensive positions along the M14 highway on the river’s left bank. And when Ukrainian drones struck two air bases deep inside Russia on December 5th, high-resolution satellite images showed the extent of the damage…(More)”.

Big Data and the Law of War


Essay by Paul Stephan: “Big data looms large in today’s world. Much of the tech sector regards the building up of large sets of searchable data as part (sometimes the greater part) of its business model. Surveillance-oriented states, of which China is the foremost example, use big data to guide and bolster monitoring of their own people as well as potential foreign threats. Many other states are not far behind in the surveillance arms race, notwithstanding the attempts of the European Union to put its metaphorical finger in the dike. Finally, ChatGPT has revived popular interest in artificial intelligence (AI) as a cultural, economic, and social phenomenon; AI uses big data to optimize the training and algorithm design on which it depends.

If big data is growing in significance, might it join territory, people, and property as objects of international conflict, including armed conflict? So far it has not been front and center in Russia’s invasion of Ukraine, the war that currently consumes much of our attention. But future conflicts could certainly feature attacks on big data. China and Taiwan, for example, both have sophisticated technological infrastructures that encompass big data and AI capabilities. The risk that they might find themselves at war in the near future is larger than anyone would like. What, then, might the law of war have to say about big data? More generally, if existing law does not meet our needs,  how might new international law address the issue?

In a recent essay, part of an edited volume on “The Future Law of Armed Conflict,” I argue that big data is a resource and therefore a potential target in an armed conflict. I address two issues: Under the law governing the legality of war (jus ad bellum), what kinds of attacks on big data might justify an armed response, touching off a bilateral (or multilateral) armed conflict (a war)? And within an existing armed conflict, what are the rules (jus in bello, also known as international humanitarian law, or IHL) governing such attacks?

The distinction is meaningful. If cyber operations rise to the level of an armed attack, then the targeted state has, according to Article 51 of the U.N. Charter, an “inherent right” to respond with armed force. Moreover, the target need not confine its response to a symmetrical cyber operation. Once attacked, a state may use all forms of armed force in response, albeit subject to the restrictions imposed by IHL. If the state regards, say, a takedown of its financial system as an armed attack, it may respond with missiles…(More)”.

Open Secrets: Ukraine and the Next Intelligence Revolution


Article by Amy Zegart: “Russia’s invasion of Ukraine has been a watershed moment for the world of intelligence. For weeks before the shelling began, Washington publicly released a relentless stream of remarkably detailed findings about everything from Russian troop movements to false-flag attacks the Kremlin would use to justify the invasion. 

This disclosure strategy was new: spy agencies are accustomed to concealing intelligence, not revealing it. But it was very effective. By getting the truth out before Russian lies took hold, the United States was able to rally allies and quickly coordinate hard-hitting sanctions. Intelligence disclosures put Russian President Vladimir Putin on the back foot, wondering who and what in his government had been penetrated so deeply by U.S. agencies, and made it more difficult for other countries to hide behind Putin’s lies and side with Russia.

The disclosures were just the beginning. The war has ushered in a new era of intelligence sharing between Ukraine, the United States, and other allies and partners, which has helped counter false Russian narratives, defend digital systems from cyberattacks, and assist Ukrainian forces in striking Russian targets on the battlefield. And it has brought to light a profound new reality: intelligence isn’t just for government spy agencies anymore…

The explosion of open-source information online, commercial satellite capabilities, and the rise of AI are enabling all sorts of individuals and private organizations to collect, analyze, and disseminate intelligence.

In the past several years, for instance, the amateur investigators of Bellingcat—a volunteer organization that describes itself as “an intelligence agency for the people”—have made all kinds of discoveries. Bellingcat identified the Russian hit team that tried to assassinate former Russian intelligence officer Sergei Skripal in the United Kingdom and located supporters of the Islamic State (also known as ISIS) in Europe. It also proved that Russians were behind the shootdown of Malaysia Airlines flight 17 over Ukraine.

Bellingcat is not the only civilian intelligence initiative. When the Iranian government claimed in 2020 that a small fire had broken out in an industrial shed, two U.S. researchers working independently and using nothing more than their computers and the Internet proved within hours that Tehran was lying….(More)”.

How the algorithm tipped the balance in Ukraine


David Ignatius at The Washington Post: “Two Ukrainian military officers peer at a laptop computer operated by a Ukrainian technician using software provided by the American technology company Palantir. On the screen are detailed digital maps of the battlefield at Bakhmut in eastern Ukraine, overlaid with other targeting intelligence — most of it obtained from commercial satellites.

As we lean closer, we can see jagged trenches on the Bakhmut front, where Russian and Ukrainian forces are separated by a few hundred yards in one of the bloodiest battles of the war. A click of the computer mouse displays thermal images of Russian and Ukrainian artillery fire; another click shows a Russian tank marked with a “Z,” seen through a picket fence, an image uploaded by a Ukrainian spy on the ground.

If this were a working combat operations center, rather than a demonstration for a visiting journalist, the Ukrainian officers could use a targeting program to select a missile, artillery piece or armed drone to attack the Russian positions displayed on the screen. Then drones could confirm the strike, and a damage assessment would be fed back into the system.
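
Stripped to its essentials, the cycle described here (fuse multi-source intelligence into one picture, select a weapon, strike, confirm by drone, feed the damage assessment back) is a closed feedback loop. The sketch below is a deliberately simplified illustration of such a loop; the types, threshold, and selection rule are invented for this example and reflect nothing of Palantir’s actual software or data model.

```python
# Hypothetical sketch of a closed targeting loop: fused intelligence in,
# asset selection, strike, and a battle-damage assessment fed back into
# the common picture. Entirely illustrative; not Palantir's software.
from dataclasses import dataclass

@dataclass
class Target:
    grid: str          # map coordinate of the fused detection
    source: str        # e.g. "commercial satellite + ground report"
    confidence: float  # 0.0-1.0, rising with corroborating sensors

@dataclass
class Asset:
    name: str
    range_km: float
    available: bool

def select_asset(assets: list[Asset], distance_km: float) -> Asset | None:
    """Pick the first available asset that can reach the target."""
    for asset in assets:
        if asset.available and asset.range_km >= distance_km:
            return asset
    return None

def engagement_cycle(target: Target, assets: list[Asset], distance_km: float) -> str:
    if target.confidence < 0.8:  # require multi-source corroboration first
        return "hold: task another sensor to confirm the target"
    asset = select_asset(assets, distance_km)
    if asset is None:
        return "hold: no asset in range"
    # After the strike, a drone confirms and the assessment updates the picture.
    return f"engage with {asset.name}; await drone damage assessment"

assets = [Asset("armed drone", 40.0, True), Asset("artillery battery", 25.0, True)]
tank = Target(grid="37U DQ 1234 5678", source="ground report + satellite", confidence=0.9)
print(engagement_cycle(tank, assets, distance_km=30.0))
```

The design point is the final step: each confirmed strike updates the same picture that produced the target, which is what makes the loop fast.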

This is the “wizard war” in the Ukraine conflict — a secret digital campaign that has never been reported before in detail — and it’s a big reason David is beating Goliath here. The Ukrainians are fusing their courageous fighting spirit with the most advanced intelligence and battle-management software ever seen in combat.

“Tenacity, will and harnessing the latest technology give the Ukrainians a decisive advantage,” Gen. Mark A. Milley, chairman of the Joint Chiefs of Staff, told me last week. “We are witnessing the ways wars will be fought, and won, for years to come.”

I think Milley is right about the transformational effect of technology on the Ukraine battlefield. And for me, here’s the bottom line: With these systems aiding brave Ukrainian troops, the Russians probably cannot win this war…(More)” See also Part 2.

How data restrictions erode internet freedom


Article by Tom Okman: “Countries across the world – small, large, powerful and weak – are accelerating efforts to control and restrict private data. According to the Information Technology and Innovation Foundation, the number of laws, regulations and policies that restrict or require data to be stored in a specific country more than doubled between 2017 and 2021, rising from 67 to 144.

Some of these laws may be driven by benevolent intentions. After all, citizens will support stopping the spread of online disinformation, hate, and extremism or systemic cyber-snooping. Cyber-libertarian John Perry Barlow’s call for the government to “leave us alone” in cyberspace rings hollow in this context.

[Chart: Government internet oversight is on the rise. Image: Information Technology and Innovation Foundation]

But some digital policies may prove to be repressive for companies and citizens alike. They extend the justifiable concern over the dominance of large tech companies to other areas of the digital realm.

These “digital iron curtains” can take many forms. What they have in common is that they seek to silo the internet (or parts of it) and private data into national boxes. This risks dividing the internet, reducing its connective potential, and infringing basic digital freedoms…(More)”.

The Ethics of Automated Warfare and Artificial Intelligence


Essay series introduced by Bessma Momani, Aaron Shull and Jean-François Bélanger: “…begins with a piece written by Alex Wilner titled “AI and the Future of Deterrence: Promises and Pitfalls.” Wilner looks at the issue of deterrence and provides an account of the various ways AI may impact our understanding and framing of deterrence theory and its practice in the coming decades. He discusses how different countries have expressed diverging views over the degree of AI autonomy that should be permitted in a conflict situation — as those more willing to cut humans out of the decision-making loop could gain a strategic advantage. Wilner’s essay emphasizes that differences in states’ technological capability are large, and this will hinder interoperability among allies, while diverging views on regulation and ethical standards make global governance efforts even more challenging.

Looking to the future of non-state drone use as an example, the transfer of weapons technology from nation-states to non-state actors can help us understand how next-generation technologies may also slip into the hands of unsavoury characters such as terrorists, criminal gangs or militant groups. The effectiveness of Ukrainian drone strikes against the much larger Russian army should serve as a warning to Western militaries, suggests James Rogers in his essay “The Third Drone Age: Visions Out to 2040.” This is a technology that can level the field by asymmetrically advantaging conventionally weaker forces. The increased diffusion of drone technology enhances the likelihood that future wars will also be drone wars, whether these drones are autonomous systems or not. This technology, in the hands of non-state actors, implies that future Western missions against, say, insurgent or guerrilla forces will be more difficult.

Data is the fuel that powers AI and the broader digital transformation of war. In her essay “Civilian Data in Cyber Conflict: Legal and Geostrategic Considerations,” Eleonore Pauwels discusses how offensive cyber operations aim to undermine adversaries by altering their very data sets — whether through targeting centralized biometric facilities or individuals’ DNA sequences in genomic analysis databases, or injecting fallacious data into satellite imagery used in situational awareness. Drawing on the implications of international humanitarian law (IHL), Pauwels argues that adversarial data manipulation constitutes another form of “grey zone” operation that falls below the threshold of armed conflict. She evaluates the challenges associated with adversarial data manipulation, given that there is no internationally agreed-upon definition of what constitutes cyberattacks or cyber hostilities within IHL.

In “AI and the Actual International Humanitarian Law Accountability Gap,” Rebecca Crootof argues that technologies can complicate legal analysis by introducing geographic, temporal and agency distance between a human’s decision and its effects. This makes it more difficult to hold an individual or state accountable for unlawful harmful acts. Beyond this complexity surrounding legal accountability, however, novel military technologies are bringing an existing accountability gap in IHL into sharper focus: the relative lack of legal accountability for unintended civilian harm. These unintentional acts can be catastrophic, yet technically remain within the confines of international law, which highlights the need for new accountability mechanisms to better protect civilians.

Some assert that the deployment of autonomous weapon systems can strengthen compliance with IHL by limiting the kinetic devastation of collateral damage, but AI’s fragility and apparent capacity to behave in unexpected ways pose new and unpredictable risks. In “Autonomous Weapons: The False Promise of Civilian Protection,” Branka Marijan opines that AI will likely not surpass human judgment for many decades, if ever, suggesting that there need to be regulations mandating a certain level of human control over weapon systems. The export of weapon systems to states willing to deploy them on a looser chain-of-command leash should be monitored…(More)”.