How AI Could Revolutionize Diplomacy


Article by Andrew Moore: “More than a year into Russia’s war of aggression against Ukraine, there are few signs the conflict will end anytime soon. Ukraine’s success on the battlefield has been powered by the innovative use of new technologies, from aerial drones to open-source artificial intelligence (AI) systems. Yet ultimately, the war in Ukraine—like any other war—will end with negotiations. And although the conflict has spurred new approaches to warfare, diplomatic methods remain stuck in the 19th century.

Yet not even diplomacy—one of the world’s oldest professions—can resist the tide of innovation. New approaches could come from global movements, such as the Peace Treaty Initiative, to reimagine incentives to peacemaking. But much of the change will come from adopting and adapting new technologies.

With advances in areas such as artificial intelligence, quantum computing, the internet of things, and distributed ledger technology, today’s emerging technologies will offer new tools and techniques for peacemaking that could impact every step of the process—from the earliest days of negotiations all the way to monitoring and enforcing agreements…(More)”.

Responding to the coronavirus disease-2019 pandemic with innovative data use: The role of data challenges


Paper by Jamie Danemayer, Andrew Young, Siobhan Green, Lydia Ezenwa and Michael Klein: “Innovative, responsible data use is a critical need in the global response to the coronavirus disease-2019 (COVID-19) pandemic. Yet potentially impactful data are often unavailable to those who could utilize them, particularly in data-poor settings, posing a serious barrier to effective pandemic mitigation. Data challenges, a public call-to-action for innovative data use projects, can identify and address these specific barriers. To understand gaps and progress relevant to effective data use in this context, this study thematically analyses three sets of qualitative data focused on/based in low- and middle-income countries: (a) a survey of innovators responding to a data challenge, (b) a survey of organizers of data challenges, and (c) a focus group discussion with professionals using COVID-19 data for evidence-based decision-making. Data quality and accessibility and human resources/institutional capacity were frequently reported limitations to effective data use among innovators. New fit-for-purpose tools and the expansion of partnerships were the most frequently noted areas of progress. Discussion participants identified that building the capacity of external/national actors to understand the needs of local communities can address a lack of partnerships while de-siloing information. A synthesis of themes demonstrated that the gaps, progress, and needs commonly identified by these groups are relevant beyond COVID-19, highlighting the importance of a healthy data ecosystem to address emerging threats. This is supported by data holders prioritizing the availability and accessibility of their data without causing harm; funders and policymakers committed to integrating innovations with existing physical, data, and policy infrastructure; and innovators designing sustainable, multi-use solutions based on principles of good data governance…(More)”.

The Normative Challenges of AI in Outer Space: Law, Ethics, and the Realignment of Terrestrial Standards


Paper by Ugo Pagallo, Eleonora Bassi & Massimo Durante: “The paper examines the open problems that experts of space law shall increasingly address over the next few years, according to four different sets of legal issues. Such differentiation sheds light on what is old and what is new with today’s troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and how they affect pillars of the law, whether on Earth or in space missions. The paper insists on a further class of legal issues that AI systems raise, however, only in outer space. We shall never overlook the constraints of a hazardous and hostile environment, such as on a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored or in its infancy in this kind of research, namely, the fourfold ways in which the uniqueness of AI and that of outer space impact both ethical and legal standards. Such standards shall provide for thresholds of evaluation according to which courts and legislators evaluate the pros and cons of technology. Our claim is that a new generation of sui generis standards of space law, stricter or more flexible standards for AI systems in outer space, down to the “principle of equality” between human standards and robotic standards, will follow as a result of this twofold uniqueness of AI and of outer space…(More)”.

Protecting the integrity of survey research


Paper by Kathleen Hall Jamieson et al.: “Although polling is not irredeemably broken, changes in technology and society create challenges that, if not addressed well, can threaten the quality of election polls and other important surveys on topics such as the economy. This essay describes some of these challenges and recommends remediations to protect the integrity of all kinds of survey research, including election polls. These 12 recommendations specify ways that survey researchers, and those who use polls and other public-oriented surveys, can increase the accuracy and trustworthiness of their data and analyses. Many of these recommendations align practice with the scientific norms of transparency, clarity, and self-correction. The transparency recommendations focus on improving disclosure of factors that affect the nature and quality of survey data. The clarity recommendations call for more precise use of terms such as “representative sample” and clear description of survey attributes that can affect accuracy. The recommendation about correcting the record urges the creation of a publicly available, professionally curated archive of identified technical problems and their remedies. The paper also calls for the development of better benchmarks and for additional research on the effects of panel conditioning. Finally, the authors suggest ways to help people who want to use or learn from survey research understand the strengths and limitations of surveys and distinguish legitimate and problematic uses of these methods…(More)”.

The Incredible Challenge of Counting Every Global Birth and Death


Jeneen Interlandi at The New York Times: “…The world’s wealthiest nations are awash in so much personal data that data theft has become a lucrative business and its protection a common concern. From such a vantage point, it can be difficult to even fathom the opposite — a lack of any identifying information at all — let alone grapple with its implications. But the undercounting of human lives is pervasive, data scientists say. The resulting ills are numerous and consequential, and recent history is littered with missed opportunities to solve the problem.

More than two decades ago, 147 nations rallied around the Millennium Development Goals, the United Nations’ bold new plan for halving extreme poverty, curbing childhood mortality and conquering infectious diseases like malaria and H.I.V. The health goals became the subject of countless international summits and steady news coverage, ultimately spurring billions of dollars in investment from the world’s wealthiest nations, including the United States. But a fierce debate quickly ensued. Critics said that health officials at the United Nations and elsewhere had almost no idea what the baseline conditions were in many of the countries they were trying to help. They could not say whether maternal mortality was increasing or decreasing, or how many people were being infected with malaria, or how fast tuberculosis was spreading. In a 2004 paper, the World Health Organization’s former director of evidence, Chris Murray, and other researchers described the agency’s estimates as “serial guessing.” Without that baseline data, progress toward any given goal — to halve hunger, for example — could not be measured…(More)”.

Advancing Technology for Democracy


The White House: “The first wave of the digital revolution promised that new technologies would support democracy and human rights. The second saw an authoritarian counterrevolution. Now, the United States and other democracies are working together to ensure that the third wave of the digital revolution leads to a technological ecosystem characterized by resilience, integrity, openness, trust and security, and that reinforces democratic principles and human rights.

Together, we are organizing and mobilizing to ensure that technologies work for, not against, democratic principles, institutions, and societies. In so doing, we will continue to engage the private sector, including by holding technology platforms accountable when they do not take action to counter the harms they cause, and by encouraging them to live up to democratic principles and shared values…

Key deliverables announced or highlighted at the second Summit for Democracy include:

  • National Strategy to Advance Privacy-Preserving Data Sharing and Analytics. OSTP released a National Strategy to Advance Privacy-Preserving Data Sharing and Analytics, a roadmap for harnessing privacy-enhancing technologies, coupled with strong governance, to enable data sharing and analytics in a way that benefits individuals and society, while mitigating privacy risks and harms and upholding democratic principles.
  • National Objectives for Digital Assets Research and Development. OSTP also released a set of National Objectives for Digital Assets Research and Development, which outline its priorities for the responsible research and development (R&D) of digital assets. These objectives will help developers of digital assets better reinforce democratic principles and protect consumers by default.
  • Launch of Trustworthy and Responsible AI Resource Center for Risk Management. NIST announced a new Resource Center, designed as a one-stop-shop website for foundational content, technical documents, and toolkits to enable responsible use of AI. Government, industry, and academic stakeholders can access resources such as a repository for AI standards, measurement methods and metrics, and data sets. The website is designed to facilitate implementation of, and international alignment with, the AI Risk Management Framework. The Framework articulates the key building blocks of trustworthy AI and offers guidance for addressing them.
  • International Grand Challenges on Democracy-Affirming Technologies. Announced at the first Summit, the United States and the United Kingdom carried out their joint Privacy Enhancing Technology Prize Challenges. IE University, in partnership with the U.S. Department of State, hosted the Tech4Democracy Global Entrepreneurship Challenge. The winners, selected from around the world, were featured at the second Summit…(More)”.

Data is power — it’s time we act like it


Article by Danil Mikhailov: “Almost 82% of NGOs in low- and middle-income countries cite a lack of funding as their biggest barrier to adopting digital tools for social impact. What’s more, data.org’s 2023 data for social impact, or DSI, report, Accelerate Aspirations: Moving Together to Achieve Systems Change, found that when it comes to financial support, funders overlook the power of advanced data strategies to address longer-term systemic solutions — instead focusing on short-term, project-based outcomes.

That’s a real problem as we look to deploy powerful, data-driven interventions to solve some of today’s biggest crises — from shifting demographics to rising inequality to pandemics to our global climate emergency. Given the urgent challenges our world faces, pilots, one-offs, and under-resourced program interventions are no longer acceptable.

It’s time we — as funders, academics, and purpose-driven data practitioners — acknowledge that data is power. And how do we truly harness that power? We must look toward innovative, diverse, equitable, and collaborative funding and partnership models to meet the incredible potential of data for social impact or risk the success of systems-level solutions that lead to long-term impact…(More)”.

Law, AI, and Human Rights


Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will be familiar now to many people in the UK: colloquially known as the ‘post office’ or ‘horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to utilise AI to identify overpayment in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second example was labelled “a shameful chapter” in government administration in Australia and led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians, and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.

Both examples show that where technology and AI go wrong, the scale of the injustice can result in unprecedented impacts across societies…(More)”.

When Concerned People Produce Environmental Information: A Need to Re-Think Existing Legal Frameworks and Governance Models?


Paper by Anna Berti Suman, Mara Balestrini, Muki Haklay, and Sven Schade: “When faced with an environmental problem, locals are often among the first to act. Citizen science is increasingly one of the forms of participation in which people take action to help solve environmental problems that concern them. This implies, for example, using methods and instruments with scientific validity to collect and analyse data and evidence to understand the problem and its causes. Can the contribution of environmental data by citizens be articulated as a right? In this article, we explore these forms of productive engagement with a local matter of concern, focussing on their potential to challenge traditional allocations of responsibilities. Taking mostly the perspective of the European legal context, we identify an existing gap between the right to obtain environmental information, granted at present by the Aarhus Convention, and “a right to contribute information” and have that information considered by appointed institutions. We also explore what would be required to effectively practise this right in terms of legal and governance processes, capacities, and infrastructures, and we propose a flexible framework to implement it. Situated at the intersection of legal and governance studies, this article builds on existing literature on environmental citizen science, and on its interplay with law and governance. Our methodological approach combines literature review with legal analysis of the relevant conventions and national rules. We conclude by reflecting on the implications of our analysis, and on the benefits of this legal innovation, potentially fostering data altruism and an active citizenship, and shielding ordinary people against possible legal risks…(More)”.

China’s fake science industry: how ‘paper mills’ threaten progress


Article by Eleanor Olcott, Clive Cookson and Alan Smith at the Financial Times: “…Over the past two decades, Chinese researchers have become some of the world’s most prolific publishers of scientific papers. The Institute for Scientific Information, a US-based research analysis organisation, calculated that China produced 3.7mn papers in 2021 — 23 per cent of global output — just behind the 4.4mn total from the US.

At the same time, China has been climbing the rankings for the number of times its papers are cited by other authors, a metric used to judge output quality. Last year, China surpassed the US for the first time in the number of most cited papers, according to Japan’s National Institute of Science and Technology Policy, although that figure was flattered by multiple references to Chinese research that first sequenced the Covid-19 virus genome.

The soaring output has sparked concern in western capitals. Chinese advances in high-profile fields such as quantum technology, genomics and space science, as well as Beijing’s surprise hypersonic missile test two years ago, have amplified the view that China is marching towards its goal of achieving global hegemony in science and technology.

That concern is a part of a wider breakdown of trust in some quarters between western institutions and Chinese ones, with some universities introducing background checks on Chinese academics amid fears of intellectual property theft.

But experts say that China’s impressive output masks systemic inefficiencies and an underbelly of low-quality and fraudulent research. Academics complain about the crushing pressure to publish to gain prized positions at research universities…(More)”.