The Normative Challenges of AI in Outer Space: Law, Ethics, and the Realignment of Terrestrial Standards


Paper by Ugo Pagallo, Eleonora Bassi & Massimo Durante: “The paper examines the open problems that experts of space law shall increasingly address over the next few years, according to four different sets of legal issues. Such differentiation sheds light on what is old and what is new with today’s troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and how they affect pillars of the law, whether on Earth or in space missions. The paper insists on a further class of legal issues that AI systems raise, however, only in outer space. We shall never overlook the constraints of a hazardous and hostile environment, such as on a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored or in its infancy in this kind of research, namely, the fourfold ways in which the uniqueness of AI and that of outer space impact both ethical and legal standards. Such standards shall provide for thresholds of evaluation according to which courts and legislators evaluate the pros and cons of technology. Our claim is that a new generation of sui generis standards of space law, stricter or more flexible standards for AI systems in outer space, down to the “principle of equality” between human standards and robotic standards, will follow as a result of this twofold uniqueness of AI and of outer space…(More)”.

Protecting the integrity of survey research


Paper by Jamieson, Kathleen Hall, et al: “Although polling is not irredeemably broken, changes in technology and society create challenges that, if not addressed well, can threaten the quality of election polls and other important surveys on topics such as the economy. This essay describes some of these challenges and recommends remediations to protect the integrity of all kinds of survey research, including election polls. These 12 recommendations specify ways that survey researchers, and those who use polls and other public-oriented surveys, can increase the accuracy and trustworthiness of their data and analyses. Many of these recommendations align practice with the scientific norms of transparency, clarity, and self-correction. The transparency recommendations focus on improving disclosure of factors that affect the nature and quality of survey data. The clarity recommendations call for more precise use of terms such as “representative sample” and clear description of survey attributes that can affect accuracy. The recommendation about correcting the record urges the creation of a publicly available, professionally curated archive of identified technical problems and their remedies. The paper also calls for development of better benchmarks and for additional research on the effects of panel conditioning. Finally, the authors suggest ways to help people who want to use or learn from survey research understand the strengths and limitations of surveys and distinguish legitimate and problematic uses of these methods…(More)”.

The Incredible Challenge of Counting Every Global Birth and Death


Jeneen Interlandi at The New York Times: “…The world’s wealthiest nations are awash in so much personal data that data theft has become a lucrative business and its protection a common concern. From such a vantage point, it can be difficult to even fathom the opposite — a lack of any identifying information at all — let alone grapple with its implications. But the undercounting of human lives is pervasive, data scientists say. The resulting ills are numerous and consequential, and recent history is littered with missed opportunities to solve the problem.

More than two decades ago, 147 nations rallied around the Millennium Development Goals, the United Nations’ bold new plan for halving extreme poverty, curbing childhood mortality and conquering infectious diseases like malaria and H.I.V. The health goals became the subject of countless international summits and steady news coverage, ultimately spurring billions of dollars in investment from the world’s wealthiest nations, including the United States. But a fierce debate quickly ensued. Critics said that health officials at the United Nations and elsewhere had almost no idea what the baseline conditions were in many of the countries they were trying to help. They could not say whether maternal mortality was increasing or decreasing, or how many people were being infected with malaria, or how fast tuberculosis was spreading. In a 2004 paper, the World Health Organization’s former director of evidence, Chris Murray, and other researchers described the agency’s estimates as “serial guessing.” Without that baseline data, progress toward any given goal — to halve hunger, for example — could not be measured…(More)”.

Advancing Technology for Democracy


The White House: “The first wave of the digital revolution promised that new technologies would support democracy and human rights. The second saw an authoritarian counterrevolution. Now, the United States and other democracies are working together to ensure that the third wave of the digital revolution leads to a technological ecosystem characterized by resilience, integrity, openness, trust and security, and that reinforces democratic principles and human rights.

Together, we are organizing and mobilizing to ensure that technologies work for, not against, democratic principles, institutions, and societies.  In so doing, we will continue to engage the private sector, including by holding technology platforms accountable when they do not take action to counter the harms they cause, and by encouraging them to live up to democratic principles and shared values…

Key deliverables announced or highlighted at the second Summit for Democracy include:

  • National Strategy to Advance Privacy-Preserving Data Sharing and Analytics. OSTP released a National Strategy to Advance Privacy-Preserving Data Sharing and Analytics, a roadmap for harnessing privacy-enhancing technologies, coupled with strong governance, to enable data sharing and analytics in a way that benefits individuals and society, while mitigating privacy risks and harms and upholding democratic principles.  
  • National Objectives for Digital Assets Research and Development. OSTP also released a set of National Objectives for Digital Assets Research and Development, which outline its priorities for the responsible research and development (R&D) of digital assets. These objectives will help developers of digital assets better reinforce democratic principles and protect consumers by default.
  • Launch of Trustworthy and Responsible AI Resource Center for Risk Management. NIST announced a new Resource Center, which is designed as a one-stop-shop website for foundational content, technical documents, and toolkits to enable responsible use of AI. Government, industry, and academic stakeholders can access resources such as a repository for AI standards, measurement methods and metrics, and data sets. The website is designed to facilitate implementation of, and international alignment with, the AI Risk Management Framework. The Framework articulates the key building blocks of trustworthy AI and offers guidance for addressing them.
  • International Grand Challenges on Democracy-Affirming Technologies. Announced at the first Summit, the United States and the United Kingdom carried out their joint Privacy Enhancing Technology Prize Challenges. IE University, in partnership with the U.S. Department of State, hosted the Tech4Democracy Global Entrepreneurship Challenge. The winners, selected from around the world, were featured at the second Summit….(More)”.

Data is power — it’s time we act like it


Article by Danil Mikhailov: “Almost 82% of NGOs in low- and middle-income countries cite a lack of funding as their biggest barrier to adopting digital tools for social impact. What’s more, data.org’s 2023 data for social impact, or DSI, report, Accelerate Aspirations: Moving Together to Achieve Systems Change, found that when it comes to financial support, funders overlook the power of advanced data strategies to address longer-term systemic solutions — instead focusing on short-term, project-based outcomes.

That’s a real problem as we look to deploy powerful, data-driven interventions to solve some of today’s biggest crises — from shifting demographics to rising inequality to pandemics to our global climate emergency. Given the urgent challenges our world faces, pilots, one-offs, and underresourced program interventions are no longer acceptable.

It’s time we — as funders, academics, and purpose-driven data practitioners — acknowledge that data is power. And how do we truly harness that power? We must look toward innovative, diverse, equitable, and collaborative funding and partnership models to meet the incredible potential of data for social impact or risk the success of systems-level solutions that lead to long-term impact…(More)”.

Law, AI, and Human Rights


Article by John Croker: “Technology has been at the heart of two injustices that courts have labelled significant miscarriages of justice. The first example will be familiar now to many people in the UK: colloquially known as the ‘Post Office’ or ‘Horizon’ scandal. The second is from Australia, where the Commonwealth Government sought to utilise AI to identify overpayment in the welfare system through what is colloquially known as the ‘Robodebt System’. The first example resulted in the most widespread miscarriage of justice in the UK legal system’s history. The second example was labelled “a shameful chapter” in government administration in Australia and led to the government unlawfully asserting debts amounting to $1.763 billion against 433,000 Australians, and is now the subject of a Royal Commission seeking to identify how public policy failures could have been made on such a significant scale.

Both examples show that where technology and AI go wrong, the scale of the injustice can result in unprecedented impacts across societies….(More)”.

When Concerned People Produce Environmental Information: A Need to Re-Think Existing Legal Frameworks and Governance Models?


Paper by Anna Berti Suman, Mara Balestrini, Muki Haklay, and Sven Schade: “When faced with an environmental problem, locals are often among the first to act. Citizen science is increasingly one of the forms of participation in which people take action to help solve environmental problems that concern them. This implies, for example, using methods and instruments with scientific validity to collect and analyse data and evidence to understand the problem and its causes. Can the contribution of environmental data by citizens be articulated as a right? In this article, we explore these forms of productive engagement with a local matter of concern, focussing on their potential to challenge traditional allocations of responsibilities. Taking mostly the perspective of the European legal context, we identify an existing gap between the right to obtain environmental information, granted at present by the Aarhus Convention, and “a right to contribute information” and have that information considered by appointed institutions. We also explore what would be required to effectively practise this right in terms of legal and governance processes, capacities, and infrastructures, and we propose a flexible framework to implement it. Situated at the intersection of legal and governance studies, this article builds on existing literature on environmental citizen science, and on its interplay with law and governance. Our methodological approach combines literature review with legal analysis of the relevant conventions and national rules. We conclude by reflecting on the implications of our analysis, and on the benefits of this legal innovation, potentially fostering data altruism and an active citizenship, and shielding ordinary people against possible legal risks…(More)”.

China’s fake science industry: how ‘paper mills’ threaten progress


Article by Eleanor Olcott, Clive Cookson and Alan Smith at the Financial Times: “…Over the past two decades, Chinese researchers have become some of the world’s most prolific publishers of scientific papers. The Institute for Scientific Information, a US-based research analysis organisation, calculated that China produced 3.7mn papers in 2021 — 23 per cent of global output — just behind the 4.4mn total from the US.

At the same time, China has been climbing the ranks of the number of times a paper is cited by other authors, a metric used to judge output quality. Last year, China surpassed the US for the first time in the number of most cited papers, according to Japan’s National Institute of Science and Technology Policy, although that figure was flattered by multiple references to Chinese research that first sequenced the Covid-19 virus genome.

The soaring output has sparked concern in western capitals. Chinese advances in high-profile fields such as quantum technology, genomics and space science, as well as Beijing’s surprise hypersonic missile test two years ago, have amplified the view that China is marching towards its goal of achieving global hegemony in science and technology.

That concern is a part of a wider breakdown of trust in some quarters between western institutions and Chinese ones, with some universities introducing background checks on Chinese academics amid fears of intellectual property theft.

But experts say that China’s impressive output masks systemic inefficiencies and an underbelly of low-quality and fraudulent research. Academics complain about the crushing pressure to publish to gain prized positions at research universities…(More)”.

What We Gain from More Behavioral Science in the Global South


Article by Pauline Kabitsis and Lydia Trupe: “In recent years, the field has been critiqued for applying behavioral science at the margins, settling for small but statistically significant effect sizes. Critics have argued that by focusing our efforts on nudging individuals to increase their 401(k) contributions or to reduce their so-called carbon footprint, we have ignored the systemic drivers of important challenges, such as fundamental flaws in the financial system and corporate responsibility for climate change. As Michael Hallsworth points out, however, the field may not be willfully ignoring these deeper challenges, but rather investing in areas of change that are likely easier to move, measure, and secure funding for.

It’s been our experience working in the Global South that nudge-based solutions can provide short-term gains within current systems, but for lasting impact a focus beyond individual-level change is required. This is because the challenges in the Global South typically involve fundamental problems, like enabling women’s reproductive choice, combatting intimate partner violence and improving food security among the world’s most vulnerable populations.

Our work at Common Thread focuses on improving behaviors related to health, like encouraging those persistently left behind to get vaccinated, and enabling Ukrainian refugees in Poland to access health and welfare services. We use a behavioral model that considers not just the individual biases that impact people’s behaviors, but the structural, social, interpersonal, and even historical context that triggers these biases and inhibits health seeking behaviors…(More)”.

China Data Flows and Power in the Era of Chinese Big Tech


Paper by W. Gregory Voss and Emmanuel Pernot-Leplay: “Personal data are of great economic interest today, and their possession and control are the object of geopolitics, leading to their regulation by means that vary depending on the strategic objectives of the jurisdiction considered. This study fills a gap in the literature in this area by analyzing holistically the regulation of personal data flows both into and from China, the world’s second largest economy. In doing so, it focuses on laws and regulations of three major power blocs: the United States, the European Union, and China, seen within the framework of geopolitics, and considering the rise of Chinese big tech.

First, this study analyzes ways that the United States—the champion of the free-flow of data that has helped feed the success of the Silicon Valley system—has in specific cases prevented data flows to China on grounds of individual data protection and national security. The danger of this approach and alternate protection through potential U.S. federal data privacy legislation are evoked. Second, the cross-border data flow restriction of the European Union’s General Data Protection Regulation (GDPR) is studied in the context of data exports to China, including where the data transit via the United States prior to their transfer to China. Next, after review of the conditions for a European Commission adequacy determination and an examination of recent data privacy legislation in China, the authors provide a preliminary negative assessment of the potential for such a determination for China, where government access is an important part of the picture. Difficult points are highlighted for investigation by data exporters to China, when relying on EU transfer mechanisms, following the Schrems II jurisprudence.

Finally, recent Chinese regulations establishing requirements for the export of data are studied. In this exercise, light is shed on compliance requirements for companies under Chinese law, provisions of Chinese data transfer regulations that are similar to those of the GDPR, and aspects that show China’s own approach to restrictions on data transfers, such as an emphasis on national security protection. This study concludes with the observation that restrictions for data flows both into and out of China will continue and potentially be amplified, and economic actors will need to prepare themselves to navigate the relevant regulations examined in this study….(More)”.