AI-enabled Peacekeeping Tech for the Digital Age


Springwise: “There are countless organisations and government agencies working to resolve conflicts around the globe, but they often lack the tools to know if they are making the right decisions. Project Didi is developing those technological tools – helping peacemakers plan appropriately and understand the impact of their actions in real time.

Project Didi Co-founder and CCO Gabe Freund explained to Springwise that the project uses machine learning, big data, and AI to analyse conflicts and “establish a new standard for best practice when it comes to decision-making in the world of peacebuilding.”

In essence, the company is attempting to analyse the many factors that are involved in conflict in order to identify a ‘ripe moment’ when both parties will be willing to negotiate for peace. The tools can track the impact and effect of all actors across a conflict. This allows them to identify and create connections between organisations and people who are doing similar work, amplifying their effects…(More)” See also: Project Didi (Kluz Prize)

Lethal AI weapons are here: how can we control them?


Article by David Adam: “The development of lethal autonomous weapons (LAWs), including AI-equipped drones, is on the rise. The US Department of Defense, for example, has earmarked US$1 billion so far for its Replicator programme, which aims to build a fleet of small, weaponized autonomous vehicles. Experimental submarines, tanks and ships that use AI to pilot themselves and shoot have already been built. Commercially available drones can use AI image recognition to zero in on targets and blow them up. LAWs do not need AI to operate, but the technology adds speed, specificity and the ability to evade defences. Some observers fear a future in which swarms of cheap AI drones could be dispatched by any faction to take out a specific person, using facial recognition.

Warfare is a relatively simple application for AI. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project,” says Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons. He helped to produce a viral 2017 video called Slaughterbots that highlighted the possible risks.

The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage — such as civilian casualties and damage to residential areas — and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm…(More)”

Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging


Article by Peter Swire and Samm Sacks: “A new executive order issued today contains multiple provisions, most notably limiting bulk sales of personal data to “countries of concern.” The order has admirable national security goals but may well prove ineffective, and could even be counterproductive. There are serious questions about both the substance and the messaging of the order.

The new order combines two attractive targets for policy action. First, in this era of bipartisan concern about China, the new order would regulate transactions specifically with “countries of concern,” notably China, but also others such as Iran and North Korea. A key rationale for the order is to prevent China from amassing sensitive information about Americans, for use in tracking and potentially manipulating military personnel, government officials, or anyone else of interest to the Chinese regime. 

Second, the order targets bulk sales, to countries of concern, of sensitive personal information by data brokers, such as genomic, biometric, and precise geolocation data. The large and growing data broker industry has come under well-deserved bipartisan scrutiny for privacy risks. Congress has held hearings and considered bills to regulate such brokers. California has created a data broker registry and last fall passed the Delete Act to enable individuals to require deletion of their personal data. In January, the Federal Trade Commission issued an order prohibiting data broker Outlogic from sharing or selling sensitive geolocation data, finding that the company had acted without customer consent, in an unfair and deceptive manner. In light of these bipartisan concerns, a new order targeting both China and data brokers has a nearly irresistible political logic.

Accurate assessment of the new order, however, requires an understanding of this order as part of a much bigger departure from the traditional U.S. support for free and open flows of data across borders. Recently, in part for national security reasons, the U.S. has withdrawn its traditional support in the World Trade Organization (WTO) for free and open data flows, and the Department of Commerce has announced a proposed rule, in the name of national security, that would regulate U.S.-based cloud providers when selling to foreign countries, including for purposes of training artificial intelligence (AI) models. We are concerned that these initiatives may not sufficiently account for the national security advantages of the long-standing U.S. position and may have negative effects on the U.S. economy.

Despite the attractiveness of the regulatory targets—data brokers and countries of concern—U.S. policymakers should be cautious as they implement this order and the other current policy changes. As discussed below, there are some possible privacy advances as data brokers have to become more careful in their sales of data, but a better path would be to ensure broader privacy and cybersecurity safeguards to better protect data and critical infrastructure systems from sophisticated cyberattacks from China and elsewhere…(More)”.

Ukrainians Are Using an App to Return Home


Article by Yuliya Panfil and Allison Price: “Two years into Russia’s invasion of Ukraine, the human toll continues to mount. At least 11 million people have been displaced by heavy bombing, drone strikes, and combat, and well over a million homes have been damaged or destroyed. But just miles from the front lines of what is a conventional land invasion, something decidedly unconventional has been deployed to help restore Ukrainian communities.

Thousands of families whose homes have been hit by Russian shelling are using their smartphones to file compensation claims, access government funds, and begin to rebuild their homes. This innovation is part of eRecovery, the world’s first-ever example of a government compensation program for damaged or destroyed homes rolled out digitally, at scale, in the midst of a war. It’s one of the ways in which Ukraine’s tech-savvy government and populace have leaned into digital solutions to help counter Russian aggression with resilience and a speedier approach to reconstruction and recovery.

According to Ukraine’s Housing, Land and Property Technical Working Group, since its launch last summer, eRecovery has processed more than 83,000 compensation claims for damaged or destroyed property and paid out more than 45,000. In addition, more than half a million Ukrainians have taken the first step in the compensation process by filing a property damage report through Ukraine’s e-government platform, Diia. eRecovery’s potential to transform the way governments get people back into their homes following a war, natural disaster, or other calamity is hard to overstate…(More)”.

Can AI mediate conflict better than humans?


Article by Virginia Pietromarchi: “Diplomats whizzing around the globe. Hush-hush meetings, often never made public. For centuries, the art of conflict mediation has relied on nuanced human skills: from elements as simple as how to make eye contact and listen carefully to detecting shifts in emotions and subtle signals from opponents.

Now, a growing set of entrepreneurs and experts are pitching a dramatic new set of tools into the world of dispute resolution – relying increasingly on artificial intelligence (AI).

“Groundbreaking technological advancements are revolutionising the frontier of peace and mediation,” said Sama al-Hamdani, programme director of Hala Systems, a private company using AI and data analysis to gather unencrypted intelligence in conflict zones, among other war-related tasks.

“We are witnessing an era where AI transforms mediators into powerhouses of efficiency and insight,” al-Hamdani said.

The researcher is one of thousands of speakers participating in the Web Summit in Doha, Qatar, where digital conflict mediation is on the agenda. The four-day summit started on February 26 and concludes on Thursday, February 29.

Already, say experts, digital solutions have proven effective in complex diplomacy. At the peak of the COVID-19 restrictions, mediators were not able to travel for in-person meetings with their interlocutors.

The solution? Use remote communication software Skype to facilitate negotiations, as then-United States envoy Zalmay Khalilzad did for the Qatar-brokered talks between the US and the Taliban in 2020.

For generations, power brokers would gather behind closed doors to make decisions affecting people far and wide. Digital technologies can now allow the process to be more inclusive.

This is what Stephanie Williams, special representative of the United Nations’ chief in Libya, did in 2021 when she used a hybrid model integrating personal and digital interactions as she led mediation efforts to establish a roadmap towards elections. That strategy helped her speak to people living in areas deemed too dangerous to travel to. The UN estimates that Williams managed to reach one million Libyans.

However, practitioners are now growing interested in the use of technology beyond online consultations…(More)”

Unlocking Technology for Peacebuilding: The Munich Security Conference’s Role in Empowering a Peacetech Movement


Article by Stefaan Verhulst and Artur Kluz: “This week’s annual Munich Security Conference is taking place amid a turbulent backdrop. The so-called “peace dividend” that followed the end of the Cold War has long since faded. From Ukraine to Sudan to the Middle East, we are living in an era marked by increasingly unstable geopolitics and renewed–and new forms of–violent conflict. Recently, the Uppsala Conflict Data Program, which has tracked armed conflict since 1945, identified 2023 as the worst year on record since the end of the Cold War. As the Foreword to the Munich Security Report, issued alongside the Conference, notes: “Unfortunately, this year’s report reflects a downward trend in world politics, marked by an increase in geopolitical tensions and economic uncertainty.”

As we enter deeper into this violent era, it is worth considering the role of technology. It is perhaps no coincidence that a moment of growing peril and division coincides with the increasing penetration of technologies such as smartphones and social media, or with the emergence of new technologies such as artificial intelligence (AI) and virtual reality. In addition, the actions of satellite operators and cross-border digital payment networks have been thrust into the limelight, with their roles in enabling or precipitating conflict attracting increasing scrutiny. Today, it appears increasingly clear that transnational tech actors–and technology itself–are playing a more significant role in geopolitical conflict than ever before. As the Munich Security Report notes, “Technology has gone from being a driver of global prosperity to being a central means of geopolitical competition.”

It doesn’t have to be this way. While much attention is paid to technology’s negative capabilities, this article argues that technology can also play a more positive role, through the contributions of what is sometimes referred to as Peacetech. Peacetech is an emerging field, encompassing technologies as varied as early warning systems, AI-driven predictions, and citizen journalism platforms. Broadly, its aims can be described as preventing conflict, mediating disputes, mitigating human suffering, and protecting human dignity and universal human rights. In the words of the United Nations Institute for Disarmament Research (UNIDIR), “Peacetech aims to leverage technology to drive peace while also developing strategies to prevent technology from being used to enable violence.”

This article is intended as a call to those attending the Munich Security Conference to prioritize Peacetech at a global geopolitical forum for peacebuilding. Highlighting recent concerns over the role of technology in conflict, with a particular emphasis on the destructive potential of AI and satellite systems, we argue instead for technology’s positive potential in promoting peace and mitigating conflict. In particular, we suggest the need for a realignment in how policy and other stakeholders approach and fund technology, to foster its peaceful rather than destructive potential. This realignment would bring out the best in technology; it would harness technology toward the greater public good at a time of rising geopolitical uncertainty and instability…(More)”.

Gaza and the Future of Information Warfare


Article by P. W. Singer and Emerson T. Brooking: “The Israel-Hamas war began in the early hours of Saturday, October 7, when Hamas militants and their affiliates stole over the Gazan-Israeli border by tunnel, truck, and hang glider, killed 1,200 people, and abducted over 200 more. Within minutes, graphic imagery and bombastic propaganda began to flood social media platforms. Each shocking video or post from the ground drew new pairs of eyes, sparked horrified reactions around the world, and created demand for more. A second front in the war had been opened online, transforming physical battles covering a few square miles into a globe-spanning information conflict.

In the days that followed, Israel launched its own bloody retaliation against Hamas; its bombardment of cities in the Gaza Strip killed more than 10,000 Palestinians in the first month. With a ground invasion in late October, Israeli forces began to take control of Gazan territory. The virtual battle lines, meanwhile, only became more firmly entrenched. Digital partisans clashed across Facebook, Instagram, X, TikTok, YouTube, Telegram, and other social media platforms, each side battling to be the only one heard and believed, unshakably committed to the righteousness of its own cause.

The physical and digital battlefields are now merged. In modern war, smartphones and cameras transmit accounts of nearly every military action across the global information space. The debates they spur, in turn, affect the real world. They shape public opinion, provide vast amounts of intelligence to actors around the world, and even influence diplomatic and military operational decisions at both the strategic and tactical levels. In our 2018 book, we dubbed this phenomenon “LikeWar,” defined as a political and military competition for command of attention. If cyberwar is the hacking of online networks, LikeWar is the hacking of the people on them, using their likes and shares to make a preferred narrative go viral…(More)”.

The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis


Article by David Gilbert: “…The application of artificial intelligence technologies to conflict situations has been around since at least 1996, with machine learning being used to predict where conflicts may occur. The use of AI in this area has expanded in the intervening years, being used to improve logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they could use artificial intelligence to dig deeper and find the root causes of conflicts.
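To make the idea of "machine learning being used to predict where conflicts may occur" concrete, here is a minimal, hypothetical sketch of an early-warning risk score: a hand-rolled logistic model over region-level features. The feature names and weights are illustrative assumptions, not any system described in the article.

```python
import math

def conflict_risk(features, weights, bias):
    """Logistic score: a probability-like estimate of conflict onset."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature vector for one region-month:
# [protest event rate, log prior-year fatalities, economic shock index]
weights = [0.8, 1.2, 0.5]   # illustrative values, not fitted to real data
bias = -3.0                 # baseline: conflict onset is rare

calm = conflict_risk([0.2, 0.1, 0.0], weights, bias)   # quiet region
tense = conflict_risk([2.5, 1.8, 1.0], weights, bias)  # escalating region

# The quiet region scores low; the escalating one scores high.
```

Real early-warning systems fit such weights to historical event data and use far richer features, but the core shape — features in, calibrated risk out — is the same.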

Their idea for an AI program that models the belief systems that drive human behavior first began when Lane moved to Northern Ireland a decade ago to study whether computational modeling and cognition could be used to understand issues around religious violence.

In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that make people motivated to fight and die for a particular cause, he could accurately predict what was going to happen next.

“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.

The result of their work was a study published in 2018 in the Journal of Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.
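The study's finding — agents are peaceful by default and turn violent only when an out-group threat crosses an identity-salience threshold — can be sketched as a toy agent-based simulation. This is an illustrative reconstruction of the general technique, not Lane and Shults's actual model; the threshold and parameters are assumptions.

```python
import random

class Agent:
    def __init__(self, salience):
        self.salience = salience  # how central religious identity is (0..1)
        self.violent = False

    def react(self, threat):
        # Peaceful by default; violence only when the perceived threat to
        # core identity principles exceeds a salience-dependent threshold.
        self.violent = threat * self.salience > 0.5

def simulate(threat, n=1000, seed=0):
    """Return the fraction of agents who turn violent at a given threat level."""
    rng = random.Random(seed)
    agents = [Agent(rng.random()) for _ in range(n)]
    for a in agents:
        a.react(threat)
    return sum(a.violent for a in agents) / n

low = simulate(0.2)   # no credible out-group threat: nobody mobilizes
high = simulate(0.9)  # core identity under threat: high-salience agents mobilize
```

Even this crude version reproduces the qualitative result: violence is not a constant disposition but a threshold response to identity threat, which is what makes such models useful for anticipating escalation.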

A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.

The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDELT, a project that monitors “the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research…(More)”.

PeaceTech: Digital Transformation to End Wars


Book by Christine Bell: “Why are we willing to believe that technology can bring about war… but not peace?

PeaceTech: Digital Transformation to End Wars is the world’s first book dealing with the use of technological innovation to support peace and transition processes. Through an interwoven narrative of personal stories that capture the complexity of real-time peace negotiation, Bell maps the fast-paced developments of PeaceTech, and the ethical and practical challenges involved.

Bell locates PeaceTech within the wider digital revolution that is also transforming the conduct of war. She lays bare the ‘double disruption’ of peace processes: through digital transformation, and through changing conflict patterns that make processes more difficult to mount. Against this backdrop, can digital peacebuilding be a force for good? Or do the risks outweigh the benefits?

PeaceTech provides a 12-Step Manifesto laying out the types of practice and commitment needed for successful use of digital tools to support peace processes. This open access book will be an invaluable primer for business tech entrepreneurs, peacebuilders, the tech community, and students of international relations, informatics, comparative politics, ethics, and law; and indeed for those simply curious about peace process innovation in the contemporary world…(More)”.

Governing the Digital Future


Report by the New America Foundation: “…The first part of this analysis was focused on five issue areas in digital technology that are driving conflict, human rights violations, and socioeconomic displacement: (1) AI and algorithmic decision-making, (2) digital access and divides, (3) data protection and data sovereignty, (4) digital identity and surveillance, and (5) transnational cybercrime...

From our dialogues, consultations, and analysis, a fundamental conclusion emerged: An over-concentration of power and severe power asymmetries are causing conflict, harm, and governance dysfunction in the digital domain. Whereas the internet began as a distributed enterprise that connected and empowered individuals worldwide, extreme concentrations of political, economic, and social power now characterize the digital domain. Power imbalances are especially acute between developing and wealthy nations, as a handful of rich-world tech companies and nation-states control the terms and trajectory of digitization…

On a more practical level, a few takeaways and first principles stood out as in need of urgent attention:

  1. We have a critical opportunity to get ahead of possible harms that will stem from AI; science- and citizen-centric fora like the Pugwash Conferences on Science and World Affairs offer a model for refocusing the digital governance ecosystem beyond the myopic logic of national sovereignty.
  2. Amid digital divides and increasing government control over the internet, multilateral and multi-stakeholder agencies should invest in fail-safes, alternative or redundant means of access, that can shift the stewardship of connectivity away from concentrated power centers.
  3. Regional standards that respect diverse local circumstances can help generate global cooperation on challenges such as cybercrime.
  4. To reduce global conflict in digital surveillance, democracies should practice what they preach and ban commercial spyware outright.
  5. Redistributing the value from big data can diminish corporate power and empower individuals…(More)”