Addressing ethical gaps in ‘Technology for Good’: Foregrounding care and capabilities


Paper by Alison B. Powell et al.: “This paper identifies and addresses persistent gaps in the consideration of ethical practice in ‘technology for good’ development contexts. Its main contribution is to model an integrative approach using multiple ethical frameworks to analyse and understand the everyday nature of ethical practice, including in professional practice among ‘technology for good’ start-ups. The paper identifies inherent paradoxes in the ‘technology for good’ sector as well as ethical gaps related to (1) the sometimes-misplaced assignment of virtuousness to an individual; (2) difficulties in understanding social constraints on ethical action; and (3) the often unaccounted-for mismatch between ethical intentions and outcomes in everyday practice, including in professional work associated with an ‘ethical turn’ in technology. These gaps persist even in contexts where ethics are foregrounded as matters of concern. To address the gaps, the paper suggests systemic, rather than individualized, considerations of care and capability applied to innovation settings, in combination with considerations of virtue and consequence. This paper advocates for addressing these challenges holistically in order to generate renewed capacity for change at a systemic level…(More)”.

Democratic self-government and the algocratic shortcut: the democratic harms in algorithmic governance of society


Paper by Nardine Alnemr: “Algorithms are used to calculate and govern varying aspects of public life for efficient use of the vast data available about citizens. On the assumption that algorithms are neutral and efficient in data-based decision-making, they are used in areas such as criminal justice and welfare. This has ramifications for the ideal of democratic self-government, as algorithmic decisions are made without democratic deliberation, scrutiny or justification. In the book Democracy without Shortcuts, Cristina Lafont argued against “shortcutting” democratic self-government. Lafont’s critique of shortcuts problematises taken-for-granted practices in democracies that bypass citizen inclusion and equality in authoring decisions governing public life. In this article, I extend Lafont’s argument to another shortcut: the algocratic shortcut. The democratic harms attributable to the algocratic shortcut include diminishing the role of voice in politics and reducing opportunities for civic engagement. I define the algocratic shortcut and discuss its democratic harms, its relation to other shortcuts to democracy, and the limitations of using shortcuts to remedy algocratic harms. Finally, I reflect on remedy through “aspirational deliberation”…(More)”.

When is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis


Paper by Francesca Palmiotto: “This paper addresses the pressing issues surrounding the use of automated systems in public decision-making, with a specific focus on the field of migration, asylum, and mobility. Drawing on empirical research conducted for the AFAR project, the paper examines the potential and limitations of the General Data Protection Regulation and the proposed Artificial Intelligence Act in effectively addressing the challenges posed by automated decision-making (ADM). The paper argues that the current legal definitions and categorizations of ADM fail to capture the complexity and diversity of real-life applications, where automated systems assist human decision-makers rather than replace them entirely. This discrepancy between the legal framework and practical implementation highlights the need for a fundamental rights approach to legal protection in the automation age. To bridge the gap between ADM in law and practice, the paper proposes a taxonomy that provides theoretical clarity and enables a comprehensive understanding of ADM in public decision-making. This taxonomy not only enhances our understanding of ADM but also identifies the fundamental rights at stake for individuals and the sector-specific legislation applicable to ADM. The paper finally calls for empirical observations and input from experts in other areas of public law to enrich and refine the proposed taxonomy, thus ensuring clearer conceptual frameworks to safeguard individuals in our increasingly algorithmic society…(More)”.

The growing energy footprint of artificial intelligence


Paper by Alex de Vries: “Throughout 2022 and 2023, artificial intelligence (AI) has witnessed a period of rapid expansion and extensive, large-scale application. Prominent tech companies such as Alphabet and Microsoft significantly increased their support for AI in 2023, influenced by the successful launch of OpenAI’s ChatGPT, a conversational generative AI chatbot that reached 100 million users in an unprecedented 2 months. In response, Microsoft and Alphabet introduced their own chatbots, Bing Chat and Bard, respectively.

This accelerated development raises concerns about the electricity consumption and potential environmental impact of AI and data centers. In recent years, data center electricity consumption has accounted for a relatively stable 1% of global electricity use, excluding cryptocurrency mining. Between 2010 and 2018, global data center electricity consumption may have increased by only 6%.

There is increasing apprehension that the computational resources necessary to develop and maintain AI models and applications could cause a surge in data centers’ contribution to global electricity consumption.

This commentary explores initial research on AI electricity consumption and assesses the potential implications of widespread AI technology adoption on global data center electricity use. The piece discusses both pessimistic and optimistic scenarios and concludes with a cautionary note against embracing either extreme…(More)”.
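To make the scale of these figures concrete, here is a back-of-envelope sketch in Python. Only the roughly 1% data-center share quoted above comes from the commentary; the global consumption total and the scenario growth rates are our assumptions for illustration, not figures from the paper.

```python
# Back-of-envelope scenario sketch for data-center electricity use.
# Only the ~1% share of global electricity is taken from the text above;
# the global total and the AI growth fractions are illustrative assumptions.

GLOBAL_ELECTRICITY_TWH = 25_000   # assumed annual global consumption (TWh)
DATA_CENTER_SHARE = 0.01          # ~1% share cited in the commentary

baseline_twh = GLOBAL_ELECTRICITY_TWH * DATA_CENTER_SHARE

# Hypothetical added AI load, expressed as a fraction of the current
# data-center baseline (our assumptions, not the paper's scenarios).
scenarios = {"optimistic": 0.10, "moderate": 0.50, "pessimistic": 1.00}

for name, ai_growth in scenarios.items():
    total = baseline_twh * (1 + ai_growth)
    print(f"{name:>11}: {total:,.0f} TWh/yr "
          f"({total / GLOBAL_ELECTRICITY_TWH:.1%} of global use)")
```

Even under the assumed pessimistic case, the arithmetic shows why the commentary cautions against extrapolating from either extreme: the outcome hinges almost entirely on the growth fraction one plugs in.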

The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition


Paper by Ludovico Giacomo Conti & Peter Seele: “The recent proliferation of AI scandals led private and public organisations to implement new ethics guidelines, introduce AI ethics boards, and list ethical principles. Nevertheless, some of these efforts remained a façade not backed by any substantive action. Such behaviour made the public question the legitimacy of the AI industry and prompted scholars to accuse the sector of ethicswashing, machinewashing, and ethics trivialisation—criticisms that spilt over to institutional AI ethics boards. To counter this widespread issue, contributions in the literature have proposed fixes that do not consider its systemic character and are based on top-down, expert-centric governance. To fill this gap, we propose to make use of qualified informed lotteries: a two-step model that transposes the documented benefits of the ancient practice of sortition into the selection of AI ethics boards’ members and combines them with the advantages of a stakeholder-driven, participative, and deliberative bottom-up process typical of Citizens’ Assemblies. The model increases the legitimacy of, and public participation in, the decision-making process and its deliverables, curbs the industry’s over-influence and lobbying, and diminishes the instrumentalisation of ethics boards. We suggest that this sortition-based approach may provide a sound base for both public and private organisations in smart societies for constructing a decentralised, bottom-up, participative digital democracy…(More)”.
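The mechanics of such a lottery are easy to sketch. The toy Python below assumes a two-step pipeline loosely matching the abstract's description: first filter to candidates who completed an information step, then draw at random within each stakeholder group. The field names, groups, and quotas are our illustrative assumptions, not details from the paper.

```python
import random
from collections import defaultdict

def qualified_informed_lottery(pool, seats_per_group, seed=None):
    """Illustrative two-step sortition: (1) keep only candidates who
    completed an information/qualification step, (2) draw at random
    within each stakeholder group so the board mirrors the pool."""
    rng = random.Random(seed)
    qualified = [p for p in pool if p["completed_briefing"]]   # step 1
    by_group = defaultdict(list)
    for person in qualified:
        by_group[person["group"]].append(person)
    board = []
    for group, seats in seats_per_group.items():               # step 2
        board.extend(rng.sample(by_group[group], seats))
    return board

# Hypothetical candidate pool spanning three stakeholder groups.
pool = [
    {"name": f"person_{i}", "group": g, "completed_briefing": i % 4 != 0}
    for i, g in enumerate(["users", "workers", "civil_society"] * 10)
]
board = qualified_informed_lottery(
    pool, {"users": 2, "workers": 2, "civil_society": 2}, seed=42
)
print([m["name"] for m in board])
```

The stratified draw in step 2 is what distinguishes this from a simple lottery: it keeps any single stakeholder group, including industry, from dominating the board by sheer numbers in the pool.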

Four Questions to Guide Decision-Making for Data Sharing and Integration


Paper by the Actionable Intelligence for Social Policy Center: “This paper presents a Four Question Framework to guide data integration partners in building a strong governance and legal foundation to support ethical data use. While this framework was developed based on work in the United States that routinely integrates public data, it is meant to be a simple, digestible tool that can be adapted to any context. The framework was developed through a series of public deliberation workgroups and 15 years of field experience working with a diversity of data integration efforts across the United States.
The Four Questions – Is this legal? Is this ethical? Is this a good idea? How do we know (and who decides)? – should be considered within an established data governance framework and alongside core partners to determine whether and how to move forward when building an Integrated Data System (IDS) and also at each stage of a specific data project. We discuss these questions in depth, with a particular focus on the role of governance in establishing legal and ethical data use. In addition, we provide example data governance structures from two IDS sites and hypothetical scenarios that illustrate key considerations for the Four Question Framework.
A robust governance process is essential for determining whether data sharing and integration is legal, ethical, and a good idea within the local context. This process is iterative and as relational as it is technical, which means authentic collaboration across partners should be prioritized at each stage of a data use project. The Four Questions serve as a guide for determining whether to undertake data sharing and integration and should be regularly revisited throughout the life of a project…(More)”.
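The framework is procedural rather than computational, but a team could still record its answers in a lightweight structure and revisit them at each project stage, as the paper recommends. The Python sketch below is one illustrative way to do so; the stage names, field names, and gating rule are ours, not the Center's.

```python
from dataclasses import dataclass, field

# The four questions quoted above, used as a recurring checklist.
FOUR_QUESTIONS = (
    "Is this legal?",
    "Is this ethical?",
    "Is this a good idea?",
    "How do we know (and who decides)?",
)

@dataclass
class StageReview:
    stage: str                                   # e.g. "intake", "linkage", "analysis"
    answers: dict = field(default_factory=dict)  # question -> (decision, rationale)

    def record(self, question, decision, rationale):
        assert question in FOUR_QUESTIONS
        self.answers[question] = (decision, rationale)

    def may_proceed(self):
        # Proceed only once every question has an affirmative, documented answer.
        return all(
            q in self.answers and self.answers[q][0] == "yes"
            for q in FOUR_QUESTIONS
        )

review = StageReview(stage="record linkage")
review.record("Is this legal?", "yes", "covered by existing data-sharing agreement")
print(review.may_proceed())  # False until all four questions are answered "yes"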

The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice


Paper by Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang: “Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a participatory approach to AI design and development, it remains challenging to assess the extent to which any participatory approach grants substantive agency to stakeholders. This article thus aims to ground what we dub the “participatory turn” in AI design by synthesizing existing theoretical literature on participation and through empirical investigation and critique of its current practices. Specifically, we derive a conceptual framework through synthesis of literature across technology design, political theory, and the social sciences that researchers and practitioners can leverage to evaluate approaches to participation in AI design. Additionally, we articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners. We use these empirical findings to understand the current state of participatory practice and subsequently provide guidance to better align participatory goals and methods in a way that accounts for practical constraints…(More)”.

Data Dysphoria: The Governance Challenge Posed by Large Learning Models


Paper by Susan Ariel Aaronson: “Only 8 months have passed since ChatGPT and the large learning model underpinning it took the world by storm. This article focuses on the data supply chain—the data collected and then utilized to train large language models—and the governance challenges it presents to policymakers. These challenges include:

• How web scraping may affect individuals and firms which hold copyrights;
• How web scraping may affect individuals and groups who are supposed to be protected under privacy and personal data protection laws;
• How web scraping revealed the lack of protections for content creators and content providers on open-access websites; and
• How the debate over open- and closed-source LLMs reveals the lack of clear and universal rules to ensure the quality and validity of datasets. As the US National Institute of Standards and Technology explained, many LLMs depend on “large-scale datasets, which can lead to data quality and validity concerns. The difficulty of finding the ‘right’ data may lead AI actors to select datasets based more on accessibility and availability than on suitability… Such decisions could contribute to an environment where the data used in processes is not fully representative of the populations or phenomena that are being modeled, introducing downstream risks” – in short, problems of quality and validity…(More)”.
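The representativeness concern NIST raises lends itself to simple first-pass checks. The Python sketch below flags groups whose share of a training corpus drifts from a reference population share; the group labels, shares, and tolerance are hypothetical, and real dataset audits are of course far more involved.

```python
from collections import Counter

def representativeness_gaps(records, reference_shares, tolerance=0.05):
    """Flag groups whose share in a corpus deviates from a reference
    population share by more than `tolerance`. A crude illustration of
    the quality/validity checks discussed above, not a full audit."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical corpus and census-style reference shares.
corpus = [{"group": "urban"}] * 80 + [{"group": "rural"}] * 20
print(representativeness_gaps(corpus, {"urban": 0.55, "rural": 0.45}))
# -> {'urban': (0.8, 0.55), 'rural': (0.2, 0.45)}
```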

Disaster preparedness: Will a “norm nudge” sink or swim?


Article by Jantsje Mol: “In these times of unprecedented climate change, one critical question persists: how do we motivate homeowners to protect their homes and loved ones from the ever-looming threat of flooding? This question led to a captivating behavioral science study, born from a research visit to the Wharton Risk Management and Decision Processes Center in 2019 (currently the Wharton Climate Center). Co-founded and co-directed by the late Howard Kunreuther, the Center has been at the forefront of understanding and mitigating the impact of natural disasters. In this study, we explored the potential of social norms to boost flood preparedness among homeowners. While the results may not align with initial expectations, they shed light on the complexities of human behavior, the significance of meticulous testing, and the enduring legacy of a visionary scholar.

The Power of Social Norms

Before we delve into the results, let’s take a moment to understand what social norms are and why they matter. Social norms dictate what is considered acceptable or expected in a given community. A popular behavioral intervention based on social norms is the norm-nudge: reading information about what others do (say, the energy-saving behavior of neighbors or the tax compliance rates of fellow citizens) may nudge one’s own behavior closer to theirs. Norm-nudges are cheap, easy to implement, and less prone to political resistance than traditional interventions such as taxes, but they might be ineffective or even backfire. Norm-nudges have been applied to health, finance, and the environment, but not yet to the context of natural disaster risk reduction…(More)”.

Can Google Trends predict asylum-seekers’ destination choices?


Paper by Haodong Qi & Tuba Bircan: “Google Trends (GT) collates the volumes of search keywords over time and by geographical location. Such data could, in theory, provide insights into people’s ex ante intentions to migrate, and hence be useful for predictive analysis of future migration. Empirically, however, the predictive power of GT is sensitive: it may vary depending on geographical context, the search keywords selected for analysis, and Google’s market share and its users’ characteristics and search behavior, among other factors. Unlike most previous studies attempting to demonstrate the benefit of using GT for forecasting migration flows, this article addresses a critical but less discussed issue: when GT cannot enhance the performance of migration models. Using EUROSTAT statistics on first-time asylum applications and a set of push-pull indicators gathered from various data sources, we train three classes of gravity models that are commonly used in the migration literature, and examine how the inclusion of GT may affect the models’ ability to predict refugees’ destination choices. The results suggest that the effects of including GT are highly contingent on the complexity of different models. Specifically, GT can only improve the performance of relatively simple models, but not of those augmented by flow Fixed-Effects or by Auto-Regressive effects. These findings call for a more comprehensive analysis of the strengths and limitations of using GT, as well as other digital trace data, in the context of modeling and forecasting migration. It is our hope that this nuanced perspective can spur further innovations in the field, and ultimately bring us closer to a comprehensive modeling framework of human migration…(More)”.
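To make the comparison concrete, the Python sketch below fits a simple Poisson gravity model with and without a GT-style covariate and compares out-of-sample error. It mirrors the paper's setup only loosely: the data are synthetic rather than EUROSTAT series, and the augmented Fixed-Effects and Auto-Regressive variants are omitted, so this illustrates only the "relatively simple model" case where GT can help.

```python
# Minimal gravity-model sketch on synthetic data (not the paper's data).
# Requires numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
log_pop_o = rng.normal(15, 1, n)    # log origin population
log_pop_d = rng.normal(15, 1, n)    # log destination population
log_dist = rng.normal(7, 0.5, n)    # log origin-destination distance
gt_index = rng.uniform(0, 100, n)   # search-volume index for destination keywords

# Synthetic flows with a modest GT signal baked in.
mu = np.exp(0.8 * log_pop_o + 0.7 * log_pop_d - 1.2 * log_dist
            + 0.01 * gt_index - 10)
flows = rng.poisson(mu)

def fit_and_score(X_cols, train=slice(0, 400), test=slice(400, None)):
    """Fit a Poisson gravity model on the training rows and return
    the out-of-sample mean absolute error on the test rows."""
    X = sm.add_constant(np.column_stack(X_cols))
    model = sm.GLM(flows[train], X[train], family=sm.families.Poisson()).fit()
    pred = model.predict(X[test])
    return np.mean(np.abs(pred - flows[test]))

base = fit_and_score([log_pop_o, log_pop_d, log_dist])
with_gt = fit_and_score([log_pop_o, log_pop_d, log_dist, gt_index])
print(f"MAE without GT: {base:.1f}  |  with GT: {with_gt:.1f}")
```

Because the synthetic flows include a GT effect by construction, the covariate improves the fit here; the paper's point is that this gain can vanish once richer model structure (Fixed-Effects, Auto-Regressive terms) already absorbs the signal.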