The AI Policy Playbook


Playbook by AI Policymaker Network & Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH: “It moves away from talking about AI ethics in abstract terms and instead focuses on building policies that work right away in emerging economies and respond to immediate development priorities. The Playbook emphasises that a one-size-fits-all solution doesn’t work. Rather, it illustrates shared challenges—like limited research capacity, fragmented data ecosystems, and compounding AI risks—while spotlighting national innovations and success stories. From drafting AI strategies to engaging communities and safeguarding rights, it lays out a roadmap grounded in local realities….What can you expect to find in the AI Policy Playbook:

  1. Policymaker Interviews
    Real-world insights from policymakers to understand their challenges and best practices.
  2. Policy Process Analysis
    Key elements from existing policies to extract effective strategies for AI governance, as well as policy mapping.
  3. Case Studies
    Examples of successes and lessons learnt from various countries to provide practical guidance.
  4. Recommendations
    Concrete solutions and recommendations from actors in the field to improve the policy development process, including quick tips for implementation and handling challenges.

What distinguishes this initiative is its commitment to peer learning and co-creation. The Africa-Asia AI Policymaker Network comprises over 30 high-level government partners who anchor the Playbook in real-world policy contexts. This ensures that the frameworks are not only theoretically sound but politically and socially implementable…(More)”

Hamburg Declaration on Responsible AI


Declaration by the United Nations Development Programme (UNDP), in partnership with the German Federal Ministry for Economic Cooperation and Development (BMZ): “We are at a crossroads. Despite the progress made in recent years, we need renewed commitment and engagement to advance toward and achieve the Sustainable Development Goals (SDGs). Digital technologies, such as Artificial Intelligence (AI), can play a significant role in this regard. AI presents opportunities and risks in a world of rapid social, political, economic, ecological, and technological shifts. If developed and deployed responsibly, AI can drive sustainable development and benefit society, the economy, and the planet. Yet, without safeguards throughout the AI value chain, it may widen inequalities within and between countries and contribute to direct harm through inappropriate, illegal, or deliberate misuse. It can also contribute to human rights violations, fuel disinformation, homogenize creative and cultural expression, and harm the environment. These risks are likely to disproportionately affect low-income countries, vulnerable groups, and future generations. Geopolitical competition and market dependencies further amplify these risks…(More)”.

Silicon Valley Is at an Inflection Point


Article by Karen Hao: “…In the decade that I have observed Silicon Valley — first as an engineer, then as a journalist — I’ve watched the industry shift to a new paradigm. Tech companies have long reaped the benefits of a friendly U.S. government, but the Trump administration has made clear that it will now grant new firepower to the industry’s ambitions. The Stargate announcement was just one signal. Another was the Republican tax bill that the House passed last week, which would prohibit states from regulating A.I. for the next 10 years.

The leading A.I. giants are no longer merely multinational corporations; they are growing into modern-day empires. With the full support of the federal government, soon they will be able to reshape most spheres of society as they please, from the political to the economic to the production of science…(More)”.

Collective Bargaining in the Information Economy Can Address AI-Driven Power Concentration


Position paper by Nicholas Vincent, Matthew Prewitt and Hanlin Li: “…argues that there is an urgent need to restructure markets for the information that goes into AI systems. Specifically, producers of information goods (such as journalists, researchers, and creative professionals) need to be able to collectively bargain with AI product builders in order to receive reasonable terms and a sustainable return on the informational value they contribute. We argue that without increased market coordination or collective bargaining on the side of these primary information producers, AI will exacerbate a large-scale “information market failure” that will lead not only to undesirable concentration of capital, but also to a potential “ecological collapse” in the informational commons. On the other hand, collective bargaining in the information economy can create market frictions and aligned incentives necessary for a pro-social, sustainable AI future. We provide concrete actions that can be taken to support a coalition-based approach to achieve this goal. For example, researchers and developers can establish technical mechanisms such as federated data management tools and explainable data value estimations, to inform and facilitate collective bargaining in the information economy. Additionally, regulatory and policy interventions may be introduced to support trusted data intermediary organizations representing guilds or syndicates of information producers…(More)”.
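As a rough illustration of the “explainable data value estimations” the authors mention, the sketch below scores each producer’s contribution by how much held-out model accuracy drops when their data is removed (a leave-one-out valuation). The data, the producer split, and the choice of method are all illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for an "information market": five producers each
# contribute a slice of the training data for a downstream model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_val, y_val = X[150:], y[150:]                  # held-out evaluation set
producers = np.array_split(np.arange(150), 5)    # five producers' data slices

def held_out_accuracy(train_idx):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    return accuracy_score(y_val, model.predict(X_val))

baseline = held_out_accuracy(np.arange(150))
for k, idx in enumerate(producers):
    rest = np.setdiff1d(np.arange(150), idx)
    # A producer's "value" is the accuracy lost when their data is withheld.
    print(f"producer {k}: leave-one-out value = {baseline - held_out_accuracy(rest):+.3f}")
```

Transparent per-producer scores of this kind are one way a guild or data intermediary could ground its bargaining position in measurable contribution.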

Some signs of AI model collapse begin to reveal themselves


Article by Steven J. Vaughan-Nichols: “I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google.

Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I’ve noticed that AI-enabled search, too, has been getting crappier.

In particular, I’m finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission’s (SEC) mandated annual business financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they’re never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get… interesting.

This isn’t just Perplexity. I’ve done the exact same searches on all the major AI search bots, and they all give me “questionable” results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and “irreversible defects” in performance. The final result? A Nature 2024 paper stated, “The model becomes poisoned with its own projection of reality.”

Model collapse is the result of three different factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns. Next, there is the loss of tail data: In this, rare events are erased from training data, and eventually, entire concepts are blurred. Finally, feedback loops reinforce narrow patterns, creating repetitive text or biased recommendations…(More)”.
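The dynamic is easy to reproduce in miniature. The toy simulation below (our illustration, not from the article) repeatedly fits a Gaussian to the previous generation’s outputs and trains the next generation on samples from that fit; estimation error compounds across generations and the fitted spread typically drifts toward zero, which is the loss of tail data in its simplest form.

```python
import numpy as np

# Toy model collapse: each generation is trained only on the previous
# generation's outputs. Small samples make the compounding visible.
rng = np.random.default_rng(42)

n = 50
data = rng.normal(0.0, 1.0, size=n)          # generation 0: real data

for gen in range(1, 201):
    mu, sigma = data.mean(), data.std()      # "train" on current data
    data = rng.normal(mu, sigma, size=n)     # next generation's training set
    if gen % 50 == 0:
        print(f"generation {gen:3d}: fitted sigma = {sigma:.3f}")

# The fitted sigma typically shrinks well below the true value of 1.0:
# rare (tail) events are resampled less often each round, so successive
# fits underestimate the spread and the distribution narrows.
```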

Ethical implications related to processing of personal data and artificial intelligence in humanitarian crises: a scoping review


Paper by Tino Kreutzer et al: “Humanitarian organizations are rapidly expanding their use of data in the pursuit of operational gains in effectiveness and efficiency. Ethical risks, particularly from artificial intelligence (AI) data processing, are increasingly recognized yet inadequately addressed by current humanitarian data protection guidelines. This study reports on a scoping review that maps the range of ethical issues that have been raised in the academic literature regarding data processing of people affected by humanitarian crises….

We identified 16,200 unique records and retained 218 relevant studies. Nearly one in three (n = 66) discussed technologies related to AI. Seventeen studies included an author from a lower-middle-income country, while four included an author from a low-income country. We identified 22 ethical issues, which were then grouped along the four ethical value categories of autonomy, beneficence, non-maleficence, and justice. Slightly over half of included studies (n = 113) identified ethical issues based on real-world examples. The most-cited ethical issue (n = 134) was a concern for privacy in cases where personal or sensitive data might be inadvertently shared with third parties. Aside from AI, the technologies most frequently discussed in these studies included social media, crowdsourcing, and mapping tools.

Studies highlight significant concerns that data processing in humanitarian contexts can cause additional harm, may not provide direct benefits, may limit affected populations’ autonomy, and can lead to the unfair distribution of scarce resources. The increase in AI tool deployment for humanitarian assistance amplifies these concerns. Urgent development of specific, comprehensive guidelines, training, and auditing methods is required to address these ethical challenges. Moreover, empirical research from low- and middle-income countries, disproportionately affected by humanitarian crises, is vital to ensure inclusive and diverse perspectives. This research should focus on the ethical implications of both emerging AI systems and established humanitarian data management practices…(More)”.

Two Paths for A.I.


Essay by Joshua Rothman: “Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He’d become convinced that the company wasn’t prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in “alignment,” he told me—the suite of techniques used to insure that A.I. acts in accordance with human commands and values—were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn’t control.

Kokotajlo, who had transitioned from a graduate program in philosophy to a career in A.I., explained how he’d educated himself so that he could understand the field. While at OpenAI, part of his job had been to track progress in A.I. so that he could construct timelines predicting when various thresholds of intelligence might be crossed. At one point, after the technology advanced unexpectedly, he’d had to shift his timelines up by decades. In 2021, he’d written a scenario about A.I. titled “What 2026 Looks Like.” Much of what he’d predicted had come to pass before the titular year. He’d concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared.

Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” In it, Kapoor and Narayanan, who study technology’s integration with society, advanced views that were diametrically opposed to Kokotajlo’s. They argued that many timelines of A.I.’s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world’s inherent complexity, even powerful A.I. would change it only slowly. They cited many cases in which A.I. systems had been called upon to deliver important judgments—about medical diagnoses, or hiring—and had made rookie mistakes that indicated a fundamental disconnect from reality. The newest systems, they maintained, suffered from the same flaw.

Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published “AI 2027,” a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which “superintelligent” A.I. systems either dominate or exterminate the human race by 2030. It’s meant to be taken seriously, as a warning about what might really happen.

Meanwhile, Kapoor and Narayanan, in a new paper titled “AI as Normal Technology,” insist that practical obstacles of all kinds—from regulations and professional standards to the simple difficulty of doing physical things in the real world—will slow A.I.’s deployment and limit its transformational potential. While conceding that A.I. may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain “normal”—that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision—for the foreseeable future. “AI is often analogized to nuclear weapons,” they argue. But “the right analogy is nuclear power,” which has remained mostly manageable and, if anything, may be underutilized for safety reasons.

The Agentic State: How Agentic AI Will Revamp 10 Functional Layers of Public Administration


Whitepaper by the Global Government Technology Centre Berlin: “…explores how agentic AI will transform ten functional layers of government and public administration. The Agentic State signifies a fundamental shift in governance, where AI systems can perceive, reason, and act with minimal human intervention to deliver public value. Its impact on key functional layers of government will be as follows…(More)”.
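To make the perceive-reason-act framing concrete, here is a minimal agent loop in the shape the whitepaper names; the service-request scenario and every name in it are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal perceive-reason-act loop (hypothetical scenario): an agent
# triages citizen service requests and escalates anything it cannot
# decide, keeping a human in the loop.
@dataclass
class Observation:
    request: str        # e.g., a citizen's permit application
    complete: bool      # whether required documents are attached

def perceive(queue: list) -> Optional[Observation]:
    return queue.pop(0) if queue else None

def reason(obs: Observation) -> str:
    # A trivial rule stands in for whatever AI model does the reasoning.
    return "approve" if obs.complete else "escalate_to_human"

def act(decision: str, obs: Observation) -> None:
    print(f"{obs.request!r} -> {decision}")

queue = [Observation("parking permit", True), Observation("zoning waiver", False)]
obs = perceive(queue)
while obs is not None:
    act(reason(obs), obs)
    obs = perceive(queue)
```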

Unlock Your City’s Hidden Solutions


Article by Andreas Pawelke, Basma Albanna and Damiano Cerrone: “Cities around the world face urgent challenges — from climate change impacts to rapid urbanization and infrastructure strain. Municipal leaders struggle with limited budgets, competing priorities, and pressure to show quick results, making traditional approaches to urban transformation increasingly difficult to implement.

Every city, however, has hidden success stories — neighborhoods, initiatives, or communities that are achieving remarkable results despite facing similar challenges as their peers.

These “positive deviants” often remain unrecognized and underutilized, yet they contain the seeds of solutions that are already adapted to local contexts and constraints.

Data-Powered Positive Deviance (DPPD) combines urban data, advanced analytics, and community engagement to systematically uncover these bright spots and amplify their impact. This new approach offers a pathway to urban transformation that is not only evidence-based but also cost-effective and deeply rooted in local realities.

DPPD is particularly valuable in resource-constrained environments, where expensive external solutions often fail to take hold. By starting with what’s already working, cities can make strategic investments that build on existing strengths rather than starting from scratch. Leveraging AI tools that improve community engagement makes the approach even more powerful, enabling cities to envision potential futures and engage citizens in meaningful co-creation…(More)”
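At its analytical core, finding positive deviants means flagging units that outperform what their context predicts. The sketch below (our illustration; the article does not specify a method) fits a simple model of the expected outcome from context features and ranks neighborhoods by how far they exceed it. All names and numbers are made up.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical neighborhood data: context features thought to explain an
# outcome (say, a recycling rate), plus the observed outcome itself.
features = np.array([
    [0.62, 14.1],   # [median_income_norm, pop_density_k]
    [0.55, 18.3],
    [0.71, 9.8],
    [0.48, 21.0],
    [0.60, 15.5],
])
outcome = np.array([0.31, 0.29, 0.42, 0.58, 0.33])

# Expected outcome given context; large positive residuals mark
# neighborhoods doing better than their circumstances predict.
model = LinearRegression().fit(features, outcome)
residuals = outcome - model.predict(features)

print("candidate positive deviants (largest positive residual first):")
for i in np.argsort(residuals)[::-1][:2]:
    print(f"  neighborhood {i}: residual {residuals[i]:+.3f}")
```

In practice such candidates are starting points for the community-engagement half of DPPD: fieldwork to learn what the bright spots actually do differently.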

AI in Urban Life


Book by Patricia McKenna: “In exploring artificial intelligence (AI) in urban life, this book brings together and extends thinking on how human-AI interactions are continuously evolving. Through such interactions, people are aided on the one hand, while becoming more aware of their own capabilities and potentials on the other hand, pertaining, for example, to creativity, human sensing, and collaboration.

It is the particular focus of research questions developed in relation to awareness, smart cities, autonomy, privacy, transparency, theory, methods, practices, and collective intelligence, along with the wide range of perspectives and opportunities offered, that set this work apart from others. Conceptual frameworks are formulated for each of these areas to guide explorations and understandings in this work and going forward. A synthesis is provided in the final chapter for perspectives, challenges and opportunities, and conceptual frameworks for urban life in an era of AI, opening the way for evolving research and practice directions…(More)”.