What does it mean to be good? The normative and metaethical problem with ‘AI for good’


Article by Tom Stenson: “Using AI for good is an imperative for its development and regulation, but what exactly does it mean? This article contends that ‘AI for good’ is a powerful normative concept and is problematic for the ethics of AI because it oversimplifies complex philosophical questions in defining good and assumes a level of moral knowledge and certainty that may not be justified. ‘AI for good’ expresses a value judgement on what AI should be and its role in society, thereby functioning as a normative concept in AI ethics. As a moral statement, ‘AI for good’ makes two claims implicit: i) that we know what a good outcome is, and ii) that we know the process by which to achieve it. By examining these two claims, this article will articulate the thesis that ‘AI for good’ should be examined as a normative and metaethical problem for AI ethics. Furthermore, it argues that we need to pay more attention to our relationship with normativity and how it guides what we believe the ‘work’ of ethical AI should be…(More)”.

Scraping the demos. Digitalization, web scraping and the democratic project


Paper by Lena Ulbricht: “Scientific, political and bureaucratic elites use epistemic practices like “big data analysis” and “web scraping” to create representations of the citizenry and to legitimize policymaking. I develop the concept of “demos scraping” for these practices of gaining information about citizens (the “demos”) through automated analysis of digital trace data which are re-purposed for political means. This article critically engages with the discourse advocating demos scraping and provides a conceptual analysis of its democratic implications. It engages with advocates’ promise that demos scraping will reduce the gap between political elites and citizens, and highlights how demos scraping is presented as a superior means of accessing the “will of the people” and of increasing democratic legitimacy. This leads me to critically discuss the implications of demos scraping for political representation and participation. In its current form, demos scraping is technocratic and de-politicizing; and the larger political and economic context in which it takes place makes it unlikely that it will reduce the gap between elites and citizens. From the analytic perspective of a post-democratic turn, demos scraping is an attempt by late-modern and digitalized societies to address the democratic paradox of increasing citizen expectations coupled with a deep legitimation crisis…(More)”.

Participation in the Age of Foundation Models


Paper by Harini Suresh et al: “Growing interest and investment in the capabilities of foundation models have positioned such systems to impact a wide array of services, from banking to healthcare. Alongside these opportunities is the risk that these systems reify existing power imbalances and cause disproportionate harm to historically marginalized groups. The larger scale and domain-agnostic manner in which these models operate further heightens the stakes: any errors or harms are liable to reoccur across use cases. In AI & ML more broadly, participatory approaches hold promise to lend agency and decision-making power to marginalized stakeholders, leading to systems that better advance justice through equitable and distributed governance. But existing approaches in participatory AI/ML are typically grounded in a specific application and set of relevant stakeholders, and it is not straightforward how to apply these lessons to the context of foundation models. Our paper aims to fill this gap.
First, we examine existing attempts at incorporating participation into foundation models. We highlight the tension between participation and scale, demonstrating that it is intractable for impacted communities to meaningfully shape a foundation model that is intended to be universally applicable. In response, we develop a blueprint for participatory foundation models that identifies more local, application-oriented opportunities for meaningful participation. In addition to the “foundation” layer, our framework proposes the “subfloor” layer, in which stakeholders develop shared technical infrastructure, norms and governance for a grounded domain such as clinical care, journalism, or finance, and the “surface” (or application) layer, in which affected communities shape the use of a foundation model for a specific downstream task. The intermediate “subfloor” layer scopes the range of potential harms to consider, and affords communities more concrete avenues for deliberation and intervention. At the same time, it avoids duplicative effort by scaling input across relevant use cases. Through three case studies in clinical care, financial services, and journalism, we illustrate how this multi-layer model can create more meaningful opportunities for participation than solely intervening at the foundation layer…(More)”.

“The Death of Wikipedia?” — Exploring the Impact of ChatGPT on Wikipedia Engagement


Paper by Neal Reeves, Wenjie Yin, Elena Simperl: “Wikipedia is one of the most popular websites in the world, serving as a major source of information and learning resource for millions of users worldwide. While motivations for its usage vary, prior research suggests shallow information gathering — looking up facts and information or answering questions — dominates over more in-depth usage. On the 30th of November 2022, ChatGPT was released to the public and has quickly become a popular source of information, serving as an effective question-answering and knowledge-gathering resource. Early indications have suggested that it may be drawing users away from traditional question-answering services such as Stack Overflow, raising the question of how it may have impacted Wikipedia. In this paper, we explore Wikipedia user metrics across four areas: page views, unique visitor numbers, edit counts and editor numbers within twelve language instances of Wikipedia. We perform pairwise comparisons of these metrics before and after the release of ChatGPT and implement a panel regression model to observe and quantify longer-term trends. We find no evidence of a fall in engagement across any of the four metrics, instead observing that page views and visitor numbers increased in the period following ChatGPT’s launch. However, we observe a lower increase in languages where ChatGPT was available than in languages where it was not, which may suggest ChatGPT’s availability limited growth in those languages. Our results contribute to the understanding of how emerging generative AI tools are disrupting the Web ecosystem…(More)”. See also: Are we entering a Data Winter? On the urgent need to preserve data access for the public interest.

Towards a pan-EU Freedom of Information Act? Harmonizing Access to Information in the EU through the internal market competence


Paper by Alberto Alemanno and Sébastien Fassiaux: “This paper examines whether – and on what basis – the EU may harmonise the right of access to information across the Union. It does so by examining the available legal bases established by relevant international obligations, such as those stemming from the Council of Europe, and by EU primary law. It demonstrates that neither the Council of Europe – through the European Convention on Human Rights and the more recent Tromsø Convention – nor the EU – through Article 41 of the EU Charter of Fundamental Rights – requires the EU to enact minimum standards of access to information. That Charter provision, combined with Articles 10 and 11 TEU, instead requires only the EU institutions – not the EU Member States – to ensure public access to documents, including legislative texts and meeting minutes. Regulation 1049/2001 was adopted on such a legal basis (originally Art. 255 TEC) and should be revised accordingly. The paper demonstrates that the most promising legal basis enabling the EU to proceed towards the harmonisation of access to information within the EU is offered by Article 114 TFEU. It argues that the harmonisation of the conditions governing access to information across Member States would facilitate cross-border activities and trade, thus enhancing the internal market. Moreover, this would ensure equal access to information for all EU citizens and residents, irrespective of their location within the EU. Therefore, the question is not whether but how the EU may – under Article 114 TFEU – act to harmonise access to information. While the EU enjoys wide legislative discretion under Article 114(1) TFEU, that discretion is not absolute but is subject to limits derived from fundamental rights and from principles such as proportionality, equality, and subsidiarity. Hence the need to design a type of harmonisation capable of preserving existing national FOIAs while strengthening the weakest ones.
The only type of harmonisation fit for purpose would therefore be minimal, as opposed to maximal, merely defining the minimum conditions required of each Member State’s national legislation governing access to information…(More)”.

Determinants of behaviour and their efficacy as targets of behavioural change interventions


Paper by Dolores Albarracín, Bita Fayaz-Farkhad & Javier A. Granados Samayoa: “Unprecedented social, environmental, political and economic challenges — such as pandemics and epidemics, environmental degradation and community violence — require taking stock of how to promote behaviours that benefit individuals and society at large. In this Review, we synthesize multidisciplinary meta-analyses of the individual and social-structural determinants of behaviour (for example, beliefs and norms, respectively) and the efficacy of behavioural change interventions that target them. We find that, across domains, interventions designed to change individual determinants can be ordered by increasing impact as those targeting knowledge, general skills, general attitudes, beliefs, emotions, behavioural skills, behavioural attitudes and habits. Interventions designed to change social-structural determinants can be ordered by increasing impact as legal and administrative sanctions; programmes that increase institutional trustworthiness; interventions to change injunctive norms; monitors and reminders; descriptive norm interventions; material incentives; social support provision; and policies that increase access to a particular behaviour. We find similar patterns for health and environmental behavioural change specifically. Thus, policymakers should focus on interventions that enable individuals to circumvent obstacles to enacting desirable behaviours rather than targeting salient but ineffective determinants of behaviour such as knowledge and beliefs…(More)”.

Artificial intelligence, the common good, and the democratic deficit in AI governance


Paper by Mark Coeckelbergh: “There is a broad consensus that artificial intelligence should contribute to the common good, but it is not clear what is meant by that. This paper discusses this issue and uses it as a lens for analysing what it calls the “democracy deficit” in current AI governance, which includes a tendency to deny the inherently political character of the issue and to take a technocratic shortcut. It indicates what we may agree on and what is and should be up to (further) deliberation when it comes to AI ethics and AI governance. Inspired by the republican tradition in political theory, it also argues for a more active role of citizens and (end-)users: not only as participants in deliberation but also in ensuring, creatively and communicatively, that AI contributes to the common good…(More)”.

Dynamic Collective Action and the Power of Large Numbers


Paper by Marco Battaglini & Thomas R. Palfrey: “Collective action is a dynamic process in which individuals in a group assess over time the benefits and costs of participating toward the success of a collective goal. Early participation improves the expectation of success and thus stimulates the subsequent participation of other individuals who might otherwise be unwilling to engage. On the other hand, a slow start can depress expectations and lead to failure for the group. Individuals have an incentive to procrastinate, not only in the hope of free riding, but also in order to observe the flow of participation by others, which allows them to better gauge whether their own participation will be useful or simply wasted. How do these phenomena affect the probability of success for a group? As the size of the group increases, will a “power of large numbers” prevail, producing successful outcomes, or will a “curse of large numbers” lead to failure? In this paper, we address these questions by studying a dynamic collective action problem in which n individuals can achieve a collective goal if a share of them takes a costly action (e.g., participate in a protest, join a picket line, or sign an environmental agreement). Individuals have privately known participation costs and decide over time if and when to participate. We characterize the equilibria of this game and show that under general conditions the eventual success of collective action is necessarily probabilistic. The process starts for sure, and hence there is always a positive probability of success; however, the process “gets stuck” with positive probability, in the sense that participation stops short of the goal. Equilibrium outcomes have a simple characterization in large populations: welfare converges to either full efficiency or zero as n→∞, depending on a precise condition on the rate at which the share required for success converges to zero. Whether or not success is achievable, delays are always irrelevant: in the limit, success is achieved either instantly or never…(More)”.

Predicting hotspots of unsheltered homelessness using geospatial administrative data and volunteered geographic information


Paper by Jessie Chien, Benjamin F. Henwood, Patricia St. Clair, Stephanie Kwack, and Randall Kuhn: “Unsheltered homelessness is an increasingly prevalent phenomenon in major cities that is associated with adverse health and mortality outcomes. This creates a need for spatial estimates of population denominators for resource allocation and epidemiological studies. Gaps in the timeliness, coverage, and spatial specificity of official Point-in-Time Counts of unsheltered homelessness suggest a role for geospatial data from alternative sources to provide interim, neighborhood-level estimates of counts and trends. We use citizen-generated data from homeless-related 311 requests, provider-based administrative data from homeless street outreach cases, and expert reports of unsheltered counts to predict counts and emerging hotspots of unsheltered homelessness in census tracts across the City of Los Angeles for 2019 and 2020. Our study shows that alternative data sources can contribute timely insights into the state of unsheltered homelessness throughout the year and inform the delivery of interventions to this vulnerable population…(More)”.

Data governance for the ecological transition: An infrastructure perspective


Article by Charlotte Ducuing: “This article uses infrastructure studies to provide a critical analysis of the European Union’s (EU) ambition to regulate data for the ecological transition. The EU’s regulatory project implicitly qualifies data as an infrastructure for a better economy and society. However, current EU law does not draw all the logical consequences derived from this qualification of data as infrastructure, which is one main reason why EU data legislation for the ecological transition may not deliver on its high political expectations. The ecological transition does not play a significant normative role in EU data legislation and is largely overlooked in the data governance literature. By drawing inferences from the qualification of data as an infrastructure more consistently, the article opens avenues for data governance that centre the ecological transition as a normative goal…(More)”.