AI-assisted diplomatic decision-making during crises—Challenges and opportunities


Article by Neeti Pokhriyal and Till Koebe: “Recent academic works have demonstrated the efficacy of employing or integrating “non-traditional” data (e.g., social media, satellite imagery, etc.) for situational awareness tasks…

Despite these successes, we identify four critical challenges unique to the area of diplomacy that need to be considered within the growing AI and diplomacy community going forward:

1. First, decisions during crises are almost always taken using limited or incomplete information. There may be deliberate misuse and obfuscation of data/signals among the different parties involved. At the start of a crisis, information is usually limited and potentially biased, especially along socioeconomic and rural-urban lines, as crises are known to exacerbate the vulnerabilities already existing in the populations. This requires AI tools to quantify and visualize calibrated uncertainty in their outputs in an appropriate manner (see the sketch after this list).

2. Second, in many cases, human lives and livelihoods are at stake. Therefore, any forecast, reasoning, or recommendation provided by AI assistance needs to be explainable and transparent for authorized users, but also secure against unauthorized access as diplomatic information is often highly sensitive. The question of accountability in case of misleading AI assistance needs to be addressed beforehand.

3. Third, in complex situations with high stakes but limited information, cultural differences and value-laden judgment driven by personal experiences play a central role in diplomatic decision-making. This calls for the use of learning techniques that can incorporate domain knowledge and experience.

4. Fourth, diplomatic interests during crises are often multifaceted, resulting in deep mistrust in and strategic misuse of information. Social media data, when used for consular tasks, has been shown to be susceptible to various dis-/misinformation campaigns, some by the public, others by state actors for strategic manipulation…(More)”
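The first challenge above hinges on calibrated uncertainty. As a minimal, hypothetical sketch (the synthetic data and helper names are ours, not the authors'), calibration can be checked by asking whether a forecast's nominal prediction intervals actually cover observed outcomes at the stated rate:

```python
import numpy as np

def empirical_coverage(samples: np.ndarray, observed: np.ndarray, level: float = 0.9) -> float:
    """Share of observations falling inside the central `level` prediction interval.

    `samples` has shape (n_events, n_draws): per-event forecast draws, e.g. from an
    ensemble or a Bayesian posterior. A well-calibrated 90% interval should cover
    roughly 90% of observed outcomes; large deviations signal over- or under-confidence.
    """
    alpha = (1.0 - level) / 2.0
    lo = np.quantile(samples, alpha, axis=1)
    hi = np.quantile(samples, 1.0 - alpha, axis=1)
    return float(np.mean((observed >= lo) & (observed <= hi)))

# Toy example with purely synthetic forecasts (illustrative only):
rng = np.random.default_rng(0)
truth = rng.normal(size=200)                                      # observed outcomes
draws = truth[:, None] + rng.normal(scale=1.0, size=(200, 500))   # forecast draws per event
print(f"90% interval coverage: {empirical_coverage(draws, truth, 0.9):.2f}")
```

Reporting interval bounds alongside such a coverage check is one way an AI tool could expose "calibrated uncertainty" to a decision-maker rather than a bare point forecast.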

Global Trends in Government Innovation 2023


OECD Report: “In the face of what has increasingly been referred to as an ongoing “permacrisis”, governments must cope with and respond to emerging threats while already grappling with longstanding issues such as climate change, digital disruption and low levels of trust. Despite compounding challenges, governments have been able to adapt and innovate to transform their societies and economies, and to transform themselves and how they design and deliver policies and services. Indeed, recent crises have served to catalyse innovation, and innovation has emerged as a much-needed driver of stability that can generate public value in difficult times.

In this context, understanding new approaches and spreading successful ideas has never been more important. In seeking to do our part to promote this, OPSI and the United Arab Emirates (UAE) Mohammed Bin Rashid Centre for Government Innovation (MBRCGI) have worked in partnership for nearly seven years to surface leading edge public sector innovation trends and to tell the stories of innovators around the world who are working to challenge existing norms and embed new ways of doing things.

Today, we are excited to jointly launch our report Global Trends in Government Innovation 2023, the preliminary report of which was launched at the World Government Summit (WGS), which brings together over 4 000 participants from more than 190 countries to discuss innovative ways to solve the challenges facing humanity…(More)”.

International Data Governance – Pathways to Progress


Press Release: “In May 2023, the United Nations System Chief Executives Board for Coordination endorsed International Data Governance – Pathways to Progress, developed through the High-level Committee on Programmes (HLCP), which approved the paper at its 45th session in March 2023. International Data Governance – Pathways to Progress and its addenda were developed by the HLCP Working Group on International Data Governance…(More)”. (See Annex 1: Mapping and Comparing Data Governance Frameworks).

Let’s Randomize America! 


Article by Dalton Conley: “…As our society has become less random, it has become more unequal. Many people know that inequality has been rising steadily over time, but a less-remarked-on development is that there’s been a parallel geographic shift, with high- and low-income people moving into separate, ever more distinct communities…As a sociologist, I study inequality and what can be done about it. It is, to say the least, a difficult problem to solve…I’ve come to believe that lotteries could help to crack this nut and make our society fairer and more equal. We can’t randomly assign where people live, of course. And we can’t integrate neighborhoods by fiat, either. We learned that lesson in the nineteen-seventies, when counties tried busing schoolchildren across town. Those programs aimed to create more racially and economically integrated schools; they resulted in the withdrawal of affluent students from urban public-school systems, and set off a political backlash that can still be felt today…

As a political tool, lotteries have come and gone throughout history. Sortition—the selection of political officials by lot—was first practiced in Athens in the sixth century B.C.E., and later reappeared in Renaissance city-states such as Florence, Venice, and Lombardy, and in Switzerland and elsewhere. In recent years, citizens’ councils—randomly chosen groups of individuals who meet to hammer out a particular issue, such as climate policy—have been tried in Canada, France, Iceland, Ireland, and the U.K. Some political theorists, such as Hélène Landemore, Jane Mansbridge, and the Belgian writer David Van Reybrouck, have argued that randomly selected decision-makers who don’t have to campaign are less likely to be corrupt or self-interested than those who must run for office; people chosen at random are also unlikely to be typically privileged, power-hungry politicians. The wisdom of the crowd improves when the crowd is more diverse…(More)”.

Machines of mind: The case for an AI-powered productivity boom


Report by Martin Neil Baily, Erik Brynjolfsson, and Anton Korinek: “Large language models such as ChatGPT are emerging as powerful tools that not only make workers more productive but also increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. As a general-purpose technology, AI will impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work. However, official statistics will only partially capture the boost in productivity because the output of knowledge workers is difficult to measure. The rapid advances can have great benefits but may also lead to significant risks, so it is crucial to ensure that we steer progress in a direction that benefits all of society…(More)”.

Data portability and interoperability: A primer on two policy tools for regulation of digitized industries


Article by Sukhi Gulati-Gilbert and Robert Seamans: “…In this article we describe two other tools, data portability and interoperability, that may be particularly useful in technology-enabled sectors. Data portability allows users to move data from one company to another, helping to reduce switching costs and providing rival firms with access to valuable customer data. Interoperability allows two or more technical systems to exchange data interactively. Due to its interactive nature, interoperability can help prevent lock-in to a specific platform by allowing users to connect across platforms. Data portability and interoperability share some similarities; in addition to potential pro-competitive benefits, the tools promote values of openness, transparency, and consumer choice.

After providing an overview of these topics, we describe the tradeoffs involved in implementing data portability and interoperability. While these policy tools offer considerable promise, in practice there are many challenges in determining how to fund and design an implementation that is secure, intuitive, and accomplishes the intended result. These challenges require that policymakers think carefully about the initial implementation of data portability and interoperability. Finally, to better show how data portability and interoperability can increase competition in an industry, we discuss how they could be applied in the banking and social media sectors. These are just two examples of how data portability and interoperability policy could be applied to many different industries facing increased digitization. Our definitions and examples should be helpful to those interested in understanding the tradeoffs involved in using these tools to promote competition and innovation in the U.S. economy…(More)” See also: Data to Go: The Value of Data Portability as a Means to Data Liquidity.
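To make the distinction concrete, here is a minimal, hypothetical sketch (the schema and names are illustrative, not drawn from the article) of a portability hand-off: the exporting platform serializes a user's data into a common, machine-readable format that a rival service can ingest.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical common schema that the exporting and importing platforms
# both agree on -- interoperability in its simplest, static form.
@dataclass
class PortableProfile:
    user_id: str
    display_name: str
    contacts: list[str]   # references to other users, by stable ID
    posts: list[dict]     # e.g. {"created_at": ..., "text": ...}

def export_profile(profile: PortableProfile) -> str:
    """Platform A: serialize the user's data for transfer."""
    return json.dumps(asdict(profile), indent=2)

def import_profile(payload: str) -> PortableProfile:
    """Platform B: parse and ingest the transferred data."""
    return PortableProfile(**json.loads(payload))

# Round trip: the user takes their data from one service to another.
original = PortableProfile("u-123", "Ada", ["u-456"], [{"created_at": "2023-05-01", "text": "hello"}])
moved = import_profile(export_profile(original))
assert moved == original
```

Interoperability, by contrast, would have the two systems exchange such payloads continuously over an agreed, interactive API rather than through a one-off export.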

Evidence Gap Maps as Critical Information Communication Devices for Evidence-based Public Policy


Paper by Esteban Villa-Turek et al.: “The public policy cycle increasingly requires the use of evidence by policy makers. Evidence Gap Maps (EGMs) are a relatively new methodology that helps identify, process, and visualize the vast number of studies representing a rich source of evidence for better policy making. This document performs a methodological review of EGMs and presents the development of a working integrated system that automates several critical steps of EGM creation by means of applied computational and statistical methods. Above all, the proposed system encompasses all major steps of EGM creation in one place, namely determination of inclusion criteria, processing of information, analysis, and user-friendly communication of synthesized relevant evidence. This tool represents a critical milestone in the effort to implement cutting-edge computational methods in usable systems. The contribution of the document is two-fold. First, it presents the critical importance of EGMs in the public policy cycle; second, it justifies and explains the development of a usable tool that encompasses the methodological phases of EGM creation while automating the most time-consuming stages of the process. The overarching goal is better and faster communication of information to relevant actors like policy makers, thus promoting well-being through better and more efficient interventions based on more evidence-driven policy making…(More)”.
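As a rough, hypothetical illustration of the kind of pipeline the authors describe (the field names, inclusion criteria, and records below are ours, not from the paper), the core of an EGM is a cross-tabulation of screened studies by intervention and outcome, where empty cells mark evidence gaps:

```python
from collections import Counter

# Toy study records; a real system would ingest these from bibliographic databases.
studies = [
    {"intervention": "cash transfer", "outcome": "school enrolment", "year": 2019, "design": "RCT"},
    {"intervention": "cash transfer", "outcome": "nutrition",        "year": 2021, "design": "RCT"},
    {"intervention": "school meals",  "outcome": "school enrolment", "year": 2017, "design": "quasi-experimental"},
]

# Step 1: apply inclusion criteria (here: study design and publication year).
included = [s for s in studies if s["design"] == "RCT" and s["year"] >= 2015]

# Step 2: cross-tabulate interventions x outcomes to expose evidence gaps.
cells = Counter((s["intervention"], s["outcome"]) for s in included)

interventions = sorted({s["intervention"] for s in studies})
outcomes = sorted({s["outcome"] for s in studies})
print("intervention \\ outcome: " + ", ".join(outcomes))
for i in interventions:
    counts = [str(cells.get((i, o), 0)) for o in outcomes]
    print(f"  {i:<14} " + "  ".join(counts))   # zeros are the gaps in the evidence gap map
```

A production system would pull study records automatically, apply the inclusion criteria at scale, and render the grid as an interactive visualization rather than a console printout.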

Regulating Cross-Border Data Flows


Book by Bryan Mercurio and Ronald Yu: “Data is now one of the world’s most valuable resources, if not the most valuable. The adoption of data-driven applications across economic sectors has made data and the flow of data so pervasive that it has become integral to everything we as members of society do – from conducting our finances to operating businesses to powering the apps we use every day. For this reason, governing cross-border data flows is inherently difficult, given the ubiquity and value of data and the impact government policies can have on national competitiveness, business attractiveness, and personal rights. The challenge for governments is to address in a coherent manner the broad range of data-related issues in the context of a global data-driven economy.

This book engages with the unexplored topic of why and how governments should develop a coherent and consistent strategic framework regulating cross-border data flows. The objective is to fill a very significant gap in the legal and policy setting by considering multiple perspectives in order to assist in the development of a jurisdiction’s coherent and strategic policy framework…(More)”.

If good data is key to decarbonization, more than half of Asia’s economies are being locked out of progress, this report says


Blog by Ewan Thomson: “If measuring something is the first step towards understanding it, and understanding something is necessary to be able to improve it, then good data is the key to unlocking positive change. This is particularly true in the energy sector as it seeks to decarbonize.

But some countries have a data problem, according to energy think tank Ember and climate solutions enabler Subak’s Asia Data Transparency Report 2023, and this lack of open and reliable power-generation data is holding back the speed of the clean power transition in the region.

Asia is responsible for around 80% of global coal consumption, making it a big contributor to carbon emissions. Progress is being made on reducing these emissions, but without reliable data on power generation, measuring the rate of this progress will be challenging.

These charts show how different Asian economies are faring on data transparency on power generation and what can be done to improve both the quality and quantity of the data.

[Infographic: number of economies by overall transparency score. Over half of Asian countries lack reliable data in their power sectors, Ember says. Image: Ember]

There are major data gaps in 24 out of the 39 Asian economies covered in the Ember research. This means it is unclear whether the energy needs of the nearly 700 million people in these 24 economies are being met with renewables or fossil fuels…(More)”.

AI Is Tearing Wikipedia Apart


Article by Claire Woodcock: “As generative artificial intelligence continues to permeate all aspects of culture, the people who steward Wikipedia are divided on how best to proceed. 

During a recent community call, it became apparent that there is a community split over whether or not to use large language models to generate content. While some people expressed that tools like OpenAI’s ChatGPT could help with generating and summarizing articles, others remained wary. 

The concern is that machine-generated content has to be balanced with a lot of human review and would overwhelm lesser-known wikis with bad content. While AI generators are useful for writing believable, human-like text, they are also prone to including erroneous information, and even citing sources and academic papers which don’t exist. This often results in text summaries which seem accurate, but on closer inspection are revealed to be completely fabricated.

“The risk for Wikipedia is people could be lowering the quality by throwing in stuff that they haven’t checked,” Bruckman added. “I don’t think there’s anything wrong with using it as a first draft, but every point has to be verified.” 

The Wikimedia Foundation, the nonprofit organization behind the website, is looking into building tools to make it easier for volunteers to identify bot-generated content. Meanwhile, Wikipedia is working to draft a policy that lays out the limits to how volunteers can use large language models to create content.

The current draft policy notes that anyone unfamiliar with the risks of large language models should avoid using them to create Wikipedia content, because it can open the Wikimedia Foundation up to libel suits and copyright violations—both of which the nonprofit gets protections from but the Wikipedia volunteers do not. These large language models also contain implicit biases, which often result in content skewed against marginalized and underrepresented groups of people.

The community is also divided on whether large language models should be allowed to train on Wikipedia content. While open access is a cornerstone of Wikipedia’s design principles, some worry the unrestricted scraping of internet data allows AI companies like OpenAI to exploit the open web to create closed commercial datasets for their models. This is especially a problem if the Wikipedia content itself is AI-generated, creating a feedback loop of potentially biased information, if left unchecked…(More)”.