The Case for Local and Regional Public Engagement in Governing Artificial Intelligence


Article by Stefaan Verhulst and Claudia Chwalisz: “As the Paris AI Action Summit approaches, the world’s attention will once again turn to the urgent questions surrounding how we govern artificial intelligence responsibly. Discussions will inevitably include calls for global coordination and participation, exemplified by several proposals for a Global Citizens’ Assembly on AI. While such initiatives aim to foster inclusivity, the reality is that meaningful deliberation and actionable outcomes often emerge most effectively at the local and regional levels.

Building on earlier reflections in “AI Globalism and AI Localism,” we argue that to govern AI for public benefit, we must prioritize building public engagement capacity closer to the communities where AI systems are deployed. Localized engagement not only ensures relevance to specific cultural, social, and economic contexts but also equips communities with the agency to shape both policy and product development in ways that reflect their needs and values.

While a Global Citizens’ Assembly sounds like a great idea on the surface, there is no public authority with teeth or enforcement mechanisms at that level of governance. The Paris Summit represents an opportunity to rethink existing AI governance frameworks, reorienting them toward an approach that is grounded in lived, local realities and mutually respectful processes of co-creation. Toward that end, we elaborate below on proposals for: local and regional AI assemblies; AI citizens’ assemblies for EU policy; capacity-building programs; and localized data governance models…(More)”.

Reimagining data for Open Source AI: A call to action


Report by Open Source Initiative: “Artificial intelligence (AI) is changing the world at a remarkable pace, with Open Source AI playing a pivotal role in shaping its trajectory. Yet, as AI advances, a fundamental challenge emerges: How do we create a data ecosystem that is not only robust but also equitable and sustainable?

The Open Source Initiative (OSI) and Open Future have taken a significant step toward addressing this challenge by releasing a white paper: “Data Governance in Open Source AI: Enabling Responsible and Systematic Access.” This document is the culmination of a global co-design process, enriched by insights from a vibrant two-day workshop held in Paris in October 2024….

The white paper offers a blueprint for a data ecosystem rooted in fairness, inclusivity and sustainability. It calls for two transformative shifts:

  1. From Open Data to Data Commons: Moving beyond the notion of unrestricted data to a model that balances openness with the rights and needs of all stakeholders.
  2. Broadening the stakeholder universe: Creating collaborative frameworks that unite communities, stewards and creators in equitable data-sharing practices.

To bring these shifts to life, the white paper delves into six critical focus areas:

  • Data preparation
  • Preference signaling and licensing
  • Data stewards and custodians
  • Environmental sustainability
  • Reciprocity and compensation
  • Policy interventions…(More)”

To Bot or Not to Bot? How AI Companions Are Reshaping Human Services and Connection


Essay by Julia Freeland Fisher: “Last year, a Harvard study on chatbots drew a startling conclusion: AI companions significantly reduce loneliness. The researchers found that “synthetic conversation partners,” or bots engineered to be caring and friendly, curbed loneliness on par with interacting with a fellow human. The study was silent, however, on the irony behind these findings: synthetic interaction is not a real, lasting connection. Should the price of curing loneliness really be more isolation?

Missing that subtext is emblematic of our times. Near-term upsides often overshadow long-term consequences. Even with important lessons learned about the harms of social media and big tech over the past two decades, optimism about AI’s potential is soaring today, at least in some circles.

Bots present an especially tempting fix to long-standing capacity constraints across education, health care, and other social services. AI coaches, tutors, navigators, caseworkers, and assistants could overcome the very real challenges—like cost, recruitment, training, and retention—that have made access to vital forms of high-quality human support perennially hard to scale.

But scaling bots that simulate human support presents new risks. What happens if, across a wide range of “human” services, we trade access to more services for fewer human connections?…(More)”.

Towards Best Practices for Open Datasets for LLM Training


Paper by Stefan Baack et al: “Many AI companies are training their large language models (LLMs) on data without the permission of the copyright owners. The permissibility of doing so varies by jurisdiction: in jurisdictions such as the EU and Japan, this is allowed under certain restrictions, while in the United States, the legal landscape is more ambiguous. Regardless of the legal status, concerns from creative producers have led to several high-profile copyright lawsuits, and the threat of litigation is commonly cited as a reason for the recent trend towards minimizing the information shared about training datasets by both corporate and public interest actors. This trend in limiting data information causes harm by hindering transparency, accountability, and innovation in the broader ecosystem by denying researchers, auditors, and impacted individuals access to the information needed to understand AI models.

While this could be mitigated by training language models on open access and public domain data, at the time of writing, there are no such models (trained at a meaningful scale) due to the substantial technical and sociological challenges in assembling the necessary corpus. These challenges include incomplete and unreliable metadata, the cost and complexity of digitizing physical records, and the diverse set of legal and technical skills required to ensure relevance and responsibility in a quickly changing landscape. Building towards a future where AI systems can be trained on openly licensed data that is responsibly curated and governed requires collaboration across legal, technical, and policy domains, along with investments in metadata standards, digitization, and fostering a culture of openness…(More)”.

Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models


Article by Yaqub Chaudhary and Jonnie Penn: “The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as a competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.

This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want” (Williams, 2018, p. 122). We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from arXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate)…(More)”.

Nearly all Americans use AI, though most dislike it, poll shows


Axios: “The vast majority of Americans use products that involve AI, but their views of the technology remain overwhelmingly negative, according to a Gallup-Telescope survey published Wednesday.

Why it matters: The rapid advancement of generative AI threatens to have far-reaching consequences for Americans’ everyday lives, including reshaping the job market, impacting elections, and affecting the health care industry.

The big picture: An estimated 99% of Americans used at least one AI-enabled product in the past week, but nearly two-thirds didn’t realize they were doing so, according to the poll’s findings.

  • These products included navigation apps, personal virtual assistants, weather forecasting apps, streaming services, shopping websites and social media platforms.
  • Ellyn Maese, a senior research consultant at Gallup, told Axios that the disconnect is because there is “a lot of confusion when it comes to what is just a computer program versus what is truly AI and intelligent.”

Zoom in: Despite its prevalent use, Americans’ views of AI remain overwhelmingly bleak, the survey found.

  • 72% of those surveyed had a “somewhat” or “very” negative opinion of how AI would impact the spread of false information, while 64% said the same about how it affects social connections.
  • The only area where a majority of Americans (61%) had a positive view of AI’s impact was regarding how it might help medical diagnosis and treatment…

State of play: The survey found that 68% of Americans believe the government and businesses equally bear responsibility for addressing the spread of false information related to AI.

  • 63% said the same about personal data privacy violations.
  • Majorities of those surveyed felt the same about combatting the unauthorized use of individuals’ likenesses (62%) and AI’s impact on job losses (52%).
  • In fact, the only area where Americans felt differently was when it came to national security threats; 62% of those surveyed said the government bore primary responsibility for reducing such threats…(More).”

Governing artificial intelligence means governing data: (re)setting the agenda for data justice


Paper by Linnet Taylor, Siddharth Peter de Souza, Aaron Martin, and Joan López Solano: “The field of data justice has been evolving to take into account the role of data in powering the field of artificial intelligence (AI). In this paper we review the main conceptual bases for governing data and AI: the market-based approach, the personal–non-personal data distinction and strategic sovereignty. We then analyse how these are being operationalised into practical models for governance, including public data trusts, data cooperatives, personal data sovereignty, data collaboratives, data commons approaches and indigenous data sovereignty. We interrogate these models’ potential for just governance based on four benchmarks which we propose as a reformulation of the Data Justice governance agenda identified by Taylor in her 2017 framework. Re-situating data justice at the intersection of data and AI, these benchmarks focus on preserving and strengthening public infrastructures and public goods; inclusiveness; contestability and accountability; and global responsibility. We demonstrate how they can be used to test whether a governance approach will succeed in redistributing power, engaging with public concerns and creating a plural politics of AI…(More)”.

Artificial Intelligence Narratives


A Global Voices Report: “…Framing AI systems as intelligent is further complicated and intertwined with neighboring narratives. In the US, AI narratives often revolve around opposing themes such as hope and fear, bridging two strong emotions: existential fears and economic aspirations. In either case, they propose that the technology is powerful. These narratives contribute to the hype surrounding AI tools and their potential impact on society. Some examples include:

Many of these framings often present AI as an unstoppable and accelerating force. While this narrative can generate excitement and investment in AI research, it can also contribute to a sense of technological determinism and a lack of critical engagement with the consequences of widespread AI adoption. Counter-narratives are many and expand on the motifs of surveillance, erosions of trust, bias, job impacts, exploitation of labor, high-risk uses, the concentration of power, and environmental impacts, among others.

These narrative frames, combined with the metaphorical language and imagery used to describe AI, contribute to the confusion and lack of public knowledge about the technology. By positioning AI as a transformative, inevitable, and necessary tool for national success, these narratives can shape public opinion and policy decisions, often in ways that prioritize rapid adoption and commercialization…(More)”

AI for Social Good


Essay by Iqbal Dhaliwal: “Artificial intelligence (AI) has the potential to transform our lives. Like the internet, it’s a general-purpose technology that spans sectors, is widely accessible, has a low marginal cost of adding users, and is constantly improving. Tech companies are rapidly deploying more capable AI models that are seeping into our personal lives and work.

AI is also swiftly penetrating the social sector. Governments, social enterprises, and NGOs are infusing AI into programs, while public treasuries and donors are working hard to understand where to invest. For example, AI is being deployed to improve health diagnostics, map flood-prone areas for better relief targeting, grade students’ essays to free up teachers’ time for student interaction, assist governments in detecting tax fraud, and enable agricultural extension workers to customize advice.

But the social sector is also rife with examples over the past two decades of technologies touted as silver bullets that fell short of expectations, including One Laptop Per Child, SMS reminders to take medication, and smokeless stoves to reduce indoor air pollution. To avoid a similar fate, AI-infused programs must incorporate insights from years of evidence generated by rigorous impact evaluations and be scaled in an informed way through concurrent evaluations.

Specifically, implementers of such programs must pay attention to three elements. First, they must use research insights on where AI is likely to have the greatest social impact. Decades of research using randomized controlled trials and other exacting empirical work provide us with insights across sectors on where and how AI can play the most effective role in social programs.

Second, they must incorporate research lessons on how to effectively infuse AI into existing social programs. We have decades of research on when and why technologies succeed or fail in the social sector that can help guide AI adopters (governments, social enterprises, NGOs), tech companies, and donors to avoid pitfalls and design effective programs that work in the field.

Third, we must promote the rigorous evaluation of AI in the social sector so that we disseminate trustworthy information about what works and what does not. We must motivate adopters, tech companies, and donors to conduct independent, rigorous, concurrent impact evaluations of promising AI applications across social sectors (including impact on workers themselves); draw insights emerging across multiple studies; and disseminate those insights widely so that the benefits of AI can be maximized and its harms understood and minimized. Taking these steps can also help build trust in AI among social sector players and program participants more broadly…(More)”.

Which Health Facilities Have Been Impacted by L.A.-Area Fires? AI May Paint a Clearer Picture


Article by Andrew Schroeder: “One of the most important factors for humanitarian responders in these types of large-scale disaster situations is to understand the effects on the formal health system, upon which most people, and vulnerable communities in particular, rely in their neighborhoods. Evaluation of the impact of disasters on individual structures, including critical infrastructure such as health facilities, is traditionally a relatively slow and manually arduous process, involving extensive ground truth visitation by teams of assessment professionals.

Speeding up this process without losing accuracy, while potentially improving the safety and efficiency of assessment teams, is among the more important analytical efforts Direct Relief can undertake for response and recovery efforts. Manual assessments can now be effectively paired with AI-based analysis of satellite imagery to do just that…

With the advent of geospatial AI models trained on disaster damage impacts, ground assessment is not the only tool available to response agencies and others seeking to understand how much damage has occurred and the degree to which that damage may affect essential services for communities. The work of the Oregon State University team of experts in remote sensing-based post-disaster damage detection, led by Jamon Van Den Hoek and Corey Scher, was featured in the Financial Times on January 9.

Their modeling, based on Sentinel-1 satellite imagery, identified 21,757 structures overall, of which 11,124 were determined to have some level of damage. The Oregon State model does not distinguish between different levels of damage, and therefore cannot answer certain questions that manual inspections can, but the coverage area and the speed of detection are much greater…(More)”.
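
For readers curious about the underlying mechanics, the sketch below illustrates the general idea behind SAR-based change detection for rapid damage screening: comparing pre- and post-event Sentinel-1 backscatter and flagging pixels whose signal changed sharply. It is a minimal, hypothetical example; the file names, the log-ratio threshold, and the simple pixel-flagging rule are assumptions for illustration, not the Oregon State team’s published pipeline.

```python
# Illustrative sketch only: a simplified SAR change-detection workflow of the
# kind used for rapid damage screening. File names, the log-ratio threshold,
# and the flagging rule are hypothetical assumptions.
import numpy as np
import rasterio


def backscatter_change(pre_path: str, post_path: str) -> np.ndarray:
    """Return the per-pixel log-ratio (in dB) of post- vs. pre-event SAR amplitude."""
    with rasterio.open(pre_path) as pre, rasterio.open(post_path) as post:
        pre_amp = pre.read(1).astype("float32")
        post_amp = post.read(1).astype("float32")
    # A small epsilon avoids division by zero over no-data or radar-shadow pixels.
    eps = 1e-6
    return 10.0 * np.log10((post_amp + eps) / (pre_amp + eps))


def flag_changed_pixels(log_ratio: np.ndarray, threshold_db: float = 3.0) -> np.ndarray:
    """Flag pixels whose backscatter changed by more than an assumed threshold."""
    return np.abs(log_ratio) > threshold_db


if __name__ == "__main__":
    # Hypothetical pre- and post-fire Sentinel-1 scenes, co-registered to the same grid.
    change = backscatter_change("s1_pre_event.tif", "s1_post_event.tif")
    changed = flag_changed_pixels(change)
    print(f"{int(changed.sum())} pixels exceed the change threshold")
```

In practice, workflows of this kind also apply speckle filtering and intersect flagged pixels with building-footprint data before reporting structure-level counts such as those cited above.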