Paper by Robert J. Lempert: “Seventy-five years into the Great Acceleration—a period marked by unprecedented growth in human activity and its effects on the planet—some type of societal transformation is inevitable. Successfully navigating these tumultuous times requires scientific, evidence-based information as an input into society’s value-laden decisions at all levels and scales. The methods and tools most commonly used to bring such expert knowledge to policy discussions employ predictions of the future, which under the existing conditions of complexity and deep uncertainty can often undermine trust and hinder good decisions. How, then, should experts best inform society’s attempts to navigate when both experts and decisionmakers are sure to be surprised? Decision Making under Deep Uncertainty (DMDU) offers an answer to this question. With its focus on model pluralism, learning, and robust solutions coproduced in a participatory process of deliberation with analysis, DMDU can repair the fractured conversations among policy experts, decisionmakers, and the public. In this paper, the author explores how DMDU can reshape policy analysis to better align with the demands of a rapidly evolving world and offers insights into the roles and opportunities for experts to inform societal debates and actions toward more-desirable futures…(More)”.
For sale: Data on US servicemembers — and lots of it
Article by Alfred Ng: “Active-duty members of the U.S. military are vulnerable to having their personal information collected, packaged and sold to overseas companies without any vetting, according to a new report funded by the U.S. Military Academy at West Point.
The report highlights a significant American security risk, according to military officials, lawmakers and the experts who conducted the research, who say the data available on servicemembers exposes them to blackmail based on their jobs and habits.
It also casts a spotlight on the practices of data brokers, a set of firms that specialize in scraping and packaging people’s digital records such as health conditions and credit ratings.
“It’s really a case of being able to target people based on specific vulnerabilities,” said Maj. Jessica Dawson, a research scientist at the Army Cyber Institute at West Point who initiated the study.
Data brokers gather government files, publicly available information and financial records into packages they can sell to marketers and other interested companies. As the practice has grown into a $214 billion industry, it has raised privacy concerns and come under scrutiny from lawmakers in Congress and state capitals.
Worried it could also present a risk to national security, the U.S. Military Academy at West Point funded the study from Duke University to see how servicemembers’ information might be packaged and sold.
Posing as buyers in the U.S. and Singapore, Duke researchers contacted multiple data-broker firms that listed datasets about active-duty servicemembers for sale. Three agreed and sold datasets to the researchers, while two declined, saying the requests came from companies that didn’t meet their verification standards.
In total, the datasets contained information on nearly 30,000 active-duty military personnel. The researchers also purchased a dataset on an additional 5,000 friends and family members of military personnel…(More)”
AI models could help negotiators secure peace deals
The Economist: “In a messy age of grinding wars and multiplying tariffs, negotiators are as busy as the stakes are high. Alliances are shifting and political leaders are adjusting—if not reversing—positions. The resulting tumult is giving even seasoned negotiators trouble keeping up with their superiors back home. Artificial-intelligence (AI) models may be able to lend a hand.
Some such models are already under development. One of the most advanced projects, dubbed Strategic Headwinds, aims to help Western diplomats in talks on Ukraine. Work began during the Biden administration in America, with officials on the White House’s National Security Council (NSC) offering guidance to the Centre for Strategic and International Studies (CSIS), a think-tank in Washington that runs the project. With peace talks under way, CSIS has speeded up its effort. Other outfits are doing similar work.
The CSIS programme is led by a unit called the Futures Lab. This team developed an AI language model using software from Scale AI, a firm based in San Francisco, and unique training data. The lab designed a tabletop strategy game called “Hetman’s Shadow” in which Russia, Ukraine and their allies hammer out deals. Data from 45 experts who played the game were fed into the model. So were media analyses of issues at stake in the Russia-Ukraine war, as well as answers provided by specialists to a questionnaire about the relative values of potential negotiation trade-offs. A database of 374 peace agreements and ceasefires was also poured in.
Thus was born, in late February, the first iteration of the Ukraine-Russia Peace Agreement Simulator. Users enter preferences for outcomes grouped under four rubrics: territory and sovereignty; security arrangements; justice and accountability; and economic conditions. The AI model then cranks out a draft agreement. The software also scores, on a scale of one to ten, the likelihood that each of its components would be satisfactory, negotiable or unacceptable to Russia, Ukraine, America and Europe. The model was provided to government negotiators from those last three territories, but a limited “dashboard” version of the software can be run online by interested members of the public…(More)”.
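The article describes only the simulator’s inputs and outputs, but that contract is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of how preferences under the four rubrics might map to per-party component labels — the rubric keys, field names, 1-to-10 cutoffs, and every identifier are assumptions made for illustration, not CSIS’s actual design.

```python
from dataclasses import dataclass

# The four rubrics and four parties named in the article; everything
# else below (thresholds, field names) is an illustrative guess.
RUBRICS = [
    "territory_and_sovereignty",
    "security_arrangements",
    "justice_and_accountability",
    "economic_conditions",
]
PARTIES = ["Russia", "Ukraine", "America", "Europe"]

@dataclass
class DraftComponent:
    rubric: str
    text: str
    scores: dict  # party -> 1-10 likelihood score, as in the article

def label_component(component: DraftComponent) -> dict:
    """Bucket each party's 1-10 score into the three labels the
    simulator reports: satisfactory, negotiable, or unacceptable.
    The cutoffs here are invented for illustration."""
    labels = {}
    for party, score in component.scores.items():
        if score >= 7:
            labels[party] = "satisfactory"
        elif score >= 4:
            labels[party] = "negotiable"
        else:
            labels[party] = "unacceptable"
    return labels

# One hypothetical component of a draft agreement
component = DraftComponent(
    rubric="security_arrangements",
    text="International monitoring mission along the line of contact",
    scores={"Russia": 4, "Ukraine": 7, "America": 8, "Europe": 8},
)
print(label_component(component))
# {'Russia': 'negotiable', 'Ukraine': 'satisfactory',
#  'America': 'satisfactory', 'Europe': 'satisfactory'}
```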
The Age of AI in the Life Sciences: Benefits and Biosecurity Considerations
Report by the National Academies of Sciences, Engineering, and Medicine: “Artificial intelligence (AI) applications in the life sciences have the potential to enable advances in biological discovery and design at a faster pace and efficiency than is possible with classical experimental approaches alone. At the same time, AI-enabled biological tools developed for beneficial applications could potentially be misused for harmful purposes. Although the creation of biological weapons is not a new concept or risk, the potential for AI-enabled biological tools to affect this risk has raised concerns during the past decade.
This report, as requested by the Department of Defense, assesses how AI-enabled biological tools could uniquely impact biosecurity risk, and how advancements in such tools could also be used to mitigate these risks. The Age of AI in the Life Sciences reviews the capabilities of AI-enabled biological tools and can be used in conjunction with the 2018 National Academies report, Biodefense in the Age of Synthetic Biology, which sets out a framework for identifying the different risk factors associated with synthetic biology capabilities…(More)”
Government reform starts with data, evidence
Article by Kshemendra Paul: “It’s time to strengthen the use of data, evidence and transparency to stop driving with mud on the windshield and to steer the government toward improving management of its programs and operations.
Existing Government Accountability Office and agency inspectors general reports identify thousands of specific evidence-based recommendations to improve efficiency, economy and effectiveness, and reduce fraud, waste and abuse. Many of these recommendations aim at program design and requirements, highlighting specific instances of overlap, redundancy and duplication. Others describe inadequate internal controls to balance program integrity with the experience of the customer, contractor or grantee. While progress is being reported in part due to stronger partnerships with IGs, much remains to be done. Indeed, GAO’s 2023 High Risk List, which it has produced going back to 1990, shows surprisingly slow progress of efforts to reduce risk to government programs and operations.
Here are a few examples:
- GAO estimates recent annual fraud of between $233 billion and $521 billion, or about 3% to 7% of federal spending. By contrast, identified fraud in high-risk Recovery Act spending was held under 1% using data, transparency and partnerships with Offices of Inspectors General.
- GAO and IGs have collectively identified hundreds of billions in potential cost savings or improvements not yet addressed by federal agencies.
- GAO has recently described shortcomings with the government’s efforts to build evidence. While federal policymakers need good information to inform their decisions, the Commission on Evidence-Based Policymaking previously said, “too little evidence is produced to meet this need.”
One of the main reasons for agency sluggishness is the lack of agency and governmentwide use of synchronized, authoritative and shared data to support how the government manages itself.
For example, the Energy Department IG found that, “[t]he department often lacks the data necessary to make critical decisions, evaluate and effectively manage risks, or gain visibility into program results.” It is past time for the government to commit itself to move away from its widespread use of data calls, the error-prone, costly and manual aggregation of data used to support policy analysis and decision-making. Efforts to embrace data-informed approaches to manage government programs and operations are stymied by lack of basic agency and governmentwide data hygiene. While bright pockets exist, management gaps, as DOE OIG stated, “create blind spots in the universe of data that, if captured, could be used to more efficiently identify, track and respond to risks…”
The proposed approach starts with current agency operating models, then drives into management process integration to tackle root causes of dysfunction from the bottom up. It recognizes that inefficiency, fraud and other challenges are diffused, deeply embedded and have non-obvious interrelationships within the federal complex…(More)”
Protecting civilians in a data-driven and digitalized battlespace: Towards a minimum basic technology infrastructure
Paper by Ann Fitz-Gerald and Jenn Hennebry: “This article examines the realities of modern-day warfare, including a rising trend in hybrid threats and irregular warfare that employ emerging technologies supported by digital and data-driven processes. The way these technologies are applied widens the battlefield and leads to a greater number of civilians being caught up in conflict. Humanitarian groups mandated to protect civilians have adapted their approaches to the use of new emerging technologies. However, the lack of international consensus on the use of data, the public and private nature of the actors involved in conflict, the transnational aspects of the widened battlefield, and the heightened security risks in the conflict space pose enormous challenges for the protection-of-civilians agenda. Given the dual-use nature of emerging technologies, the challenges associated with regulation, and the need for those affected by conflict to demonstrate digital media literacy and resilience, this paper proposes the development of guidance for a “minimum basic technology infrastructure” supported by technology, regulation, and public awareness and education…(More)”.
What is ‘sovereign AI’ and why is the concept so appealing (and fraught)?
Article by John Letzing: “Denmark unveiled its own artificial intelligence supercomputer last month, funded by the proceeds of wildly popular Danish weight-loss drugs like Ozempic. It’s now one of several sovereign AI initiatives underway, which one CEO believes can “codify” a country’s culture, history, and collective intelligence – and become “the bedrock of modern economies.”
That particular CEO, Jensen Huang, happens to run a company selling the sort of chips needed to pursue sovereign AI – that is, to construct a domestic vintage of the technology, informed by troves of homegrown data and powered by the computing infrastructure necessary to turn that data into a strategic reserve of intellect…
It’s not surprising that countries are forging expansive plans to put their own stamp on AI. But big-ticket supercomputers and other costly resources aren’t feasible everywhere.
Training a large language model has gotten a lot more expensive lately; the funds required for the necessary hardware, energy, and staff may soon top $1 billion. Meanwhile, geopolitical friction over access to the advanced chips necessary for powerful AI systems could further warp the global playing field.
Even for countries with abundant resources and access, there are “sovereignty traps” to consider. Governments pushing ahead on sovereign AI could risk undermining global cooperation meant to ensure the technology is put to use in transparent and equitable ways. That might make it a lot less safe for everyone.
An example: a place using AI systems trained on a local set of values for its security may readily flag behaviour out of sync with those values as a threat…(More)”.
Information Technology for Peace and Security
Book edited by Christian Reuter: “Technological and scientific progress, especially the rapid development of information technology (IT) and artificial intelligence (AI), plays a crucial role in questions of peace and security. This textbook, extended and updated in its second edition, addresses the significance and potential of IT, as well as the challenges it poses, with regard to peace and security.
It introduces the reader to the concepts of peace, conflict, and security research, focusing especially on natural, technical and computer science perspectives. In the following sections, it sheds light on cyber conflicts, war and peace, cyber arms control, cyber attribution, infrastructures, artificial intelligence, as well as ICT in peace and conflict…(More)”.
How Artificial Intelligence Can Support Peace
Essay by Adam Zable, Marine Ragnet, Roshni Singh, Hannah Chafetz, Andrew J. Zahuranec, and Stefaan G. Verhulst: “In what follows we provide a series of case studies of how AI can be used to promote peace, leveraging what we learned at the Kluz Prize for PeaceTech and the NYU PREP and Becera events. These case studies and applications of AI are limited to what was included in these initiatives and are not fully comprehensive. With these examples of the role of technology before, during, and after a conflict, we hope to broaden the discussion around the potential positive uses of AI in the context of today’s global challenges.
The table above summarizes how AI may be harnessed throughout the conflict cycle, along with the supporting examples from the Kluz Prize for PeaceTech and the NYU PREP and Becera events.
(1) The Use of AI Before a Conflict
AI can support conflict prevention by predicting emerging tensions and supporting mediation efforts. In recent years, AI-driven early warning systems have been used to identify patterns that precede violence, allowing for timely interventions.
For instance, the Violence & Impacts Early-Warning System (VIEWS), developed by a research consortium at Uppsala University in Sweden and the Peace Research Institute Oslo (PRIO) in Norway, employs AI and machine learning algorithms to analyze large datasets, including conflict history, political events, and socio-economic indicators—supporting negative peace and peacebuilding efforts. These algorithms are trained to recognize patterns that precede violent conflict, using both supervised and unsupervised learning methods to make predictions about the likelihood and severity of conflicts up to three years in advance. The system also uses predictive analytics to identify potential hotspots, where specific factors—such as spikes in political unrest or economic instability—suggest a higher risk of conflict…(More)”.
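To make the supervised side of such early-warning systems concrete, here is a toy Python sketch that trains a classifier on synthetic country-month features and outputs probabilistic risk scores that could be ranked into hotspots. The feature names, the synthetic data, and the model choice are all assumptions for illustration — this is not the actual VIEWS pipeline, which combines many models and far richer data.

```python
# Toy illustration of a supervised early-warning model: predict whether
# a country-month will see conflict onset within a fixed horizon from
# lagged structural and event features. All features and labels here
# are synthetic and invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical lagged features per country-month
X = np.column_stack([
    rng.poisson(2.0, n),      # protest_events_last_3m
    rng.normal(0.0, 1.0, n),  # gdp_growth_zscore
    rng.integers(0, 2, n),    # conflict_history_flag
    rng.normal(0.0, 1.0, n),  # political_instability_index
])
# Synthetic label: conflict onset within the prediction horizon,
# generated from an assumed logistic relationship
logits = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 1.5 * X[:, 2] + 0.7 * X[:, 3] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The useful output is a probabilistic risk score per country-month,
# not a point prediction: analysts rank these to flag potential hotspots
risk = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, risk):.2f}")
```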
Deliberative Technology: Designing AI and Computational Democracy for Peacebuilding in Highly-Polarized Contexts
Report by Lisa Schirch: “This is a report on an international workshop for 45 peacebuilders, co-hosted by Toda Peace Institute and the University of Notre Dame’s Kroc Institute for International Peace Studies in June 2024. Emphasizing citizen participation and collective intelligence, the workshop explored the intersection of digital democracy and algorithmic technologies designed to enhance democratic processes. Central to the discussions were deliberative technologies, a new class of tools that facilitate collective discussion and decision-making by incorporating both qualitative and quantitative inputs, supported by bridging algorithms and AI. The workshop provided a comprehensive overview of how these innovative approaches and technologies can contribute to more inclusive and effective democratic processes, particularly in contexts marked by polarization and conflict…(More)”