Generative Collective Intelligence
Paper by Thomas P. Kehler, Scott E. Page, Alex Pentland, Martin Reeves, and John Seely Brown: “We propose a new framework for human-AI collaboration that amplifies the distinct capabilities
of both. This framework, which we call Generative Collective Intelligence (GCI), shifts AI to the
group/social level and employs AI in dual roles: as interactive agents and as technology that
accumulates, organizes, and leverages knowledge. By creating a cognitive bridge between
human reasoning and AI models, GCI can overcome limitations of purely algorithmic
approaches to problem-solving and decision-making. The framework demonstrates how AI can
be reframed as a social and cultural technology that enables groups to solve complex problems
through structured collaboration that transcends traditional communication barriers. We describe
the mathematical foundations of GCI based on comparative judgment and minimum regret
principles, and illustrate its applications across domains including climate adaptation, healthcare
transformation, and civic participation. By combining human creativity with AI’s computational
capabilities, GCI offers a promising approach to addressing complex societal challenges that
neither humans nor machines can solve alone…(More)”.
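The excerpt does not reproduce the paper's formalism, but the two named principles have standard formulations. As a rough sketch (not the authors' actual model), comparative judgment is often fit with a Bradley–Terry model over pairwise preferences, and a minimum-regret rule can then select among options under uncertainty; all data and names below are hypothetical:

```python
# Illustrative sketch only: a standard Bradley-Terry model for comparative
# judgment plus a minimax-regret selection rule. This is NOT the GCI paper's
# formulation; data and names are hypothetical.
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit latent quality scores from a pairwise win-count matrix.

    wins[i, j] = number of judges who preferred option i over option j.
    Uses the classic MM (minorization-maximization) update.
    """
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p[i] = wins[i].sum() / den
        p /= p.sum()  # normalize so scores are comparable across iterations
    return p

def minimax_regret(payoffs):
    """Pick the row (option) whose worst-case regret across columns
    (scenarios) is smallest."""
    regret = payoffs.max(axis=0) - payoffs   # regret relative to the best option per scenario
    return int(regret.max(axis=1).argmin())

# Toy data: 4 options judged pairwise by a group of participants.
wins = np.array([[0, 8, 6, 7],
                 [2, 0, 5, 4],
                 [4, 5, 0, 6],
                 [3, 6, 4, 0]])
print("group scores:", np.round(bradley_terry(wins), 3))

# Toy payoff matrix: 3 options (rows) under 3 future scenarios (columns).
payoffs = np.array([[3., 5., 1.],
                    [4., 2., 3.],
                    [2., 4., 4.]])
print("minimax-regret choice:", minimax_regret(payoffs))
```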
Leveraging Citizen Data to Improve Public Services and Measure Progress Toward Sustainable Development Goal 16
Paper by Dilek Fraisl: “This paper presents the results of a pilot study conducted in Ghana that utilized citizen data approaches for monitoring a governance indicator within the SDG framework, focusing on indicator 16.6.2 on citizen satisfaction with public services. This indicator is a crucial measure of governance quality, as emphasized by the UN Sustainable Development Goals (SDGs) through target 16.6 (“Develop effective, accountable, and transparent institutions at all levels”). Indicator 16.6.2 specifically measures satisfaction, via a survey, with key public services, including health, education, and other government services such as government-issued identification documents. However, with only five years remaining to achieve the SDGs, the lack of data continues to pose a significant challenge in monitoring progress toward this target, particularly regarding the experiences of marginalized populations. Our findings suggest that well-designed citizen data initiatives can effectively capture the experiences of marginalized individuals and communities. Additionally, they can serve as valuable supplements to official statistics, providing crucial data on population groups typically underrepresented in traditional surveys…(More)”.
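Indicator 16.6.2 is, at bottom, a survey-derived share of satisfied respondents. Below is a minimal sketch of how such an estimate might be computed from citizen-generated responses; field names, weights, and the confidence-interval shortcut are hypothetical, and this is not the study's actual pipeline:

```python
# Minimal sketch (not the study's pipeline): estimating a satisfaction share
# for SDG indicator 16.6.2 from weighted citizen-generated survey responses.
import math

responses = [
    # (satisfied_with_service: bool, survey_weight: float) -- invented data
    (True, 1.2), (False, 0.8), (True, 1.0), (True, 0.9), (False, 1.1),
]

w_total = sum(w for _, w in responses)
w_sat = sum(w for ok, w in responses if ok)
p = w_sat / w_total                      # weighted satisfaction share

# Rough normal-approximation 95% CI using Kish's effective sample size.
n_eff = w_total ** 2 / sum(w * w for _, w in responses)
se = math.sqrt(p * (1 - p) / n_eff)
print(f"satisfied: {p:.1%} (95% CI +/- {1.96 * se:.1%}, n_eff = {n_eff:.1f})")
```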
Ethical implications related to processing of personal data and artificial intelligence in humanitarian crises: a scoping review
Paper by Tino Kreutzer et al: “Humanitarian organizations are rapidly expanding their use of data in the pursuit of operational gains in effectiveness and efficiency. Ethical risks, particularly from artificial intelligence (AI) data processing, are increasingly recognized yet inadequately addressed by current humanitarian data protection guidelines. This study reports on a scoping review that maps the range of ethical issues that have been raised in the academic literature regarding data processing of people affected by humanitarian crises….
We identified 16,200 unique records and retained 218 relevant studies. Nearly one in three (n = 66) discussed technologies related to AI. Seventeen studies included an author from a lower-middle-income country, while four included an author from a low-income country. We identified 22 ethical issues, which were then grouped along the four ethical value categories of autonomy, beneficence, non-maleficence, and justice. Slightly over half of included studies (n = 113) identified ethical issues based on real-world examples. The most-cited ethical issue (n = 134) was a concern for privacy in cases where personal or sensitive data might be inadvertently shared with third parties. Aside from AI, the technologies most frequently discussed in these studies included social media, crowdsourcing, and mapping tools.
Studies highlight significant concerns that data processing in humanitarian contexts can cause additional harm, may not provide direct benefits, may limit affected populations’ autonomy, and can lead to the unfair distribution of scarce resources. The increase in AI tool deployment for humanitarian assistance amplifies these concerns. Urgent development of specific, comprehensive guidelines, training, and auditing methods is required to address these ethical challenges. Moreover, empirical research from low- and middle-income countries, disproportionately affected by humanitarian crises, is vital to ensure inclusive and diverse perspectives. This research should focus on the ethical implications of both emerging AI systems and established humanitarian data management practices…(More)”.
Engagement Integrity: Ensuring Legitimacy at a Time of AI-Augmented Participation
Article by Stefaan G. Verhulst: “As participatory practices are increasingly tech-enabled, ensuring engagement integrity is becoming more urgent. While considerable scholarly and policy attention has been paid to information integrity (OECD, 2024; Gillwald et al., 2024; Wardle & Derakhshan, 2017; Ghosh & Scott, 2018), including concerns about disinformation, misinformation, and computational propaganda, the integrity of engagement itself — how to ensure that collective decision-making is not manipulated through technology — remains comparatively under-theorized and under-protected. I define engagement integrity as the procedural fairness and resistance to manipulation of tech-enabled deliberative and participatory processes.
My definition differs from prior discussions of engagement integrity, which mainly emphasized ethical standards when scientists engage with the public (e.g., in advisory roles, communication, or co-research). The concept is particularly salient in light of recent innovations that aim to lower the transaction costs of engagement using artificial intelligence (AI) (Verhulst, 2018). From AI-facilitated citizen assemblies (Simon et al., 2023) to natural language processing (NLP)-enhanced policy proposal platforms (Grobbink & Peach, 2020) to automated analysis of unstructured direct democracy proposals (Grobbink & Peach, 2020) to large-scale deliberative polls augmented with agentic AI (Mulgan, 2022), these developments promise to enhance inclusion, scalability, and sense-making. However, they also create new attack surfaces and vectors of influence that could undermine legitimacy.
This concern is not speculative…(More)”.
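To make "attack surfaces" concrete: one simple integrity signal a platform might compute is clustering of near-duplicate submissions, a common fingerprint of coordinated campaigns. A minimal sketch, assuming TF-IDF cosine similarity and a hypothetical threshold; real integrity pipelines would combine many signals (rate limits, account provenance, human review), and none of this is drawn from the article itself:

```python
# Illustrative sketch only: flagging near-duplicate submissions on a
# participation platform as one of many possible engagement-integrity
# signals. Threshold and data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submissions = [
    "Fund more bike lanes on Main Street.",
    "Please fund additional bike lanes on Main Street.",
    "We need more bike lanes on Main Street, please fund them.",
    "Invest in after-school programs for teens.",
]

vecs = TfidfVectorizer().fit_transform(submissions)
sim = cosine_similarity(vecs)

THRESHOLD = 0.8  # hypothetical cut-off for "suspiciously similar"
for i in range(len(submissions)):
    for j in range(i + 1, len(submissions)):
        if sim[i, j] >= THRESHOLD:
            print(f"review pair ({i}, {j}): similarity {sim[i, j]:.2f}")
```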
The Global Data Barometer 2nd edition: A Shared Compass for Navigating the Data Landscape
Report by the Global Data Barometer: “Across the globe, we’re at a turning point. From artificial intelligence and digital governance to public transparency and service delivery, data is now a fundamental force shaping how our societies function and who they serve. It holds tremendous promise to drive inclusive growth, foster accountability, and support urgent action on global challenges. And yet, access to high-quality, usable data is becoming increasingly constrained.
Some, like Verhulst (2024), have begun calling this moment a “data winter,” a period marked by shrinking openness, rising inequality in access, and growing fragmentation in how data is governed and used. This trend poses a risk not just to innovation but to the democratic values that underpin trust, participation, and accountability.
In this complex landscape, evidence matters more than ever. That is why we are proud to launch the Second Edition of the Global Data Barometer (GDB), a collaborative and comparative study that tracks the state of data for the public good across 43 countries, with a focused lens on Latin America and the Caribbean (LAC) and Africa…
The Barometer tracks countries across the dimensions of governance, capabilities, and availability, while also exploring key cross-cutting areas like AI readiness, inclusion, and data use (a toy sketch of how such dimension scores can roll up into a country score appears after the takeaways below). Here are some of the key takeaways:
- The Implementation Gap
Many countries have adopted laws and frameworks for data governance, but there is a stark gap between policy and practice. Without strong institutions and dedicated capacity, even well-designed frameworks fall short.
- The Role of Skills and Infrastructure
Data does not flow or translate into value without people and systems in place. Across both LAC and Africa, we see underinvestment in public sector skills, training, and the infrastructure needed to manage and reuse data effectively.
- AI Is Moving Faster Than Governance
AI is increasingly present in national strategies, but very few countries have clear policies to guide its ethical use. Governance frameworks rarely address issues like algorithmic bias, data quality, or the accountability of AI-driven decision-making.
- Open Data Needs Reinvestment
Many countries once seen as open data champions are struggling to sustain their efforts. Legal mandates are not always matched by technical implementation or resources. As a result, open data initiatives risk losing momentum.
- Transparency Tools Are Missing
Key datasets that support transparency and anti-corruption, such as lobbying registers, beneficial ownership data, and political finance records, are often missing or fragmented. This makes it hard to follow the money or hold institutions to account.
- Inclusion Is Still Largely Symbolic
Despite commitments to equity, inclusive data governance remains the exception. Data is rarely published in Indigenous or widely spoken non-official languages. Accessibility for persons with disabilities is often treated as a recommendation rather than a requirement.
- Interoperability Remains a Barrier
Efforts to connect datasets across government, such as on procurement, company data, or political integrity, are rare. Without common standards or identifiers, it is difficult to track influence or evaluate policy impact holistically…(More)”.
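As referenced above, here is a toy illustration of how normalized indicator scores might roll up into dimension scores and an overall country score. This is not the GDB's published methodology; dimension names follow the excerpt, but the weights and figures are invented:

```python
# Toy illustration (not the GDB's actual methodology): averaging normalized
# indicator scores (0-100) into dimension scores and an overall score.
country = {
    "governance":   [70, 55, 60],       # hypothetical indicator scores
    "capabilities": [40, 35],
    "availability": [50, 30, 45, 25],
}

dimension_scores = {d: sum(v) / len(v) for d, v in country.items()}
overall = sum(dimension_scores.values()) / len(dimension_scores)

for d, s in dimension_scores.items():
    print(f"{d:>12}: {s:5.1f} / 100")
print(f"{'overall':>12}: {overall:5.1f} / 100")
```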
Two Paths for A.I.
Essay by Joshua Rothman: “Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He’d become convinced that the company wasn’t prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in “alignment,” he told me—the suite of techniques used to insure that A.I. acts in accordance with human commands and values—were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn’t control.
Kokotajlo, who had transitioned from a graduate program in philosophy to a career in A.I., explained how he’d educated himself so that he could understand the field. While at OpenAI, part of his job had been to track progress in A.I. so that he could construct timelines predicting when various thresholds of intelligence might be crossed. At one point, after the technology advanced unexpectedly, he’d had to shift his timelines up by decades. In 2021, he’d written a scenario about A.I. titled “What 2026 Looks Like.” Much of what he’d predicted had come to pass before the titular year. He’d concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared.
Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” In it, Kapoor and Narayanan, who study technology’s integration with society, advanced views that were diametrically opposed to Kokotajlo’s. They argued that many timelines of A.I.’s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world’s inherent complexity, even powerful A.I. would change it only slowly. They cited many cases in which A.I. systems had been called upon to deliver important judgments—about medical diagnoses, or hiring—and had made rookie mistakes that indicated a fundamental disconnect from reality. The newest systems, they maintained, suffered from the same flaw.

Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published “AI 2027,” a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which “superintelligent” A.I. systems either dominate or exterminate the human race by 2030. It’s meant to be taken seriously, as a warning about what might really happen. Meanwhile, Kapoor and Narayanan, in a new paper titled “AI as Normal Technology,” insist that practical obstacles of all kinds—from regulations and professional standards to the simple difficulty of doing physical things in the real world—will slow A.I.’s deployment and limit its transformational potential. While conceding that A.I. may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain “normal”—that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision—for the foreseeable future. “AI is often analogized to nuclear weapons,” they argue. But “the right analogy is nuclear power,” which has remained mostly manageable and, if anything, may be underutilized for safety reasons…(More)”.
The Agentic State: How Agentic AI Will Revamp 10 Functional Layers of Public Administration
Whitepaper by the Global Government Technology Centre Berlin: “…explores how agentic AI will transform ten functional layers of government and public administration. The Agentic State signifies a fundamental shift in governance, where AI systems can perceive, reason, and act with minimal human intervention to deliver public value. Its impact on key functional layers of government will be as follows…(More)”.
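The whitepaper's ten layers are not detailed in this excerpt, but the perceive-reason-act pattern it invokes can be sketched generically. A minimal, hypothetical illustration with a human-escalation guard; none of these names or thresholds come from the whitepaper:

```python
# Minimal sketch of a perceive-reason-act loop with a human-escalation
# guard, as one way to read "minimal human intervention". All names are
# hypothetical; this is not the whitepaper's architecture.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str
    payload: dict
    confidence: float

def perceive(queue):
    """Pull the next citizen request (e.g., a permit application)."""
    return queue.pop(0) if queue else None

def reason(request) -> Action:
    """Decide what to do; a real system would call a policy model here."""
    complete = all(request.get(k) for k in ("name", "address", "document"))
    return Action("approve" if complete else "request_info",
                  {"request": request}, confidence=0.9 if complete else 0.6)

def act(action: Action, human_threshold: float = 0.8):
    """Execute autonomously only above a confidence threshold."""
    if action.confidence < human_threshold:
        print("escalate to human caseworker:", action.kind)
    else:
        print("auto-execute:", action.kind)

queue = [{"name": "A. Citizen", "address": "1 Main St", "document": "id.pdf"},
         {"name": "B. Citizen", "address": "", "document": None}]
while (req := perceive(queue)) is not None:
    act(reason(req))
```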

Unlock Your City’s Hidden Solutions
Article by Andreas Pawelke, Basma Albanna and Damiano Cerrone: “Cities around the world face urgent challenges — from climate change impacts to rapid urbanization and infrastructure strain. Municipal leaders struggle with limited budgets, competing priorities, and pressure to show quick results, making traditional approaches to urban transformation increasingly difficult to implement.
Every city, however, has hidden success stories — neighborhoods, initiatives, or communities that are achieving remarkable results despite facing similar challenges as their peers.
These “positive deviants” often remain unrecognized and underutilized, yet they contain the seeds of solutions that are already adapted to local contexts and constraints.
Data-Powered Positive Deviance (DPPD) combines urban data, advanced analytics, and community engagement to systematically uncover these bright spots and amplify their impact. This new approach offers a pathway to urban transformation that is not only evidence-based but also cost-effective and deeply rooted in local realities.
DPPD is particularly valuable in resource-constrained environments, where expensive external solutions often fail to take hold. By starting with what’s already working, cities can make strategic investments that build on existing strengths rather than starting from scratch. By leveraging AI tools that improve community engagement, the approach becomes even more powerful — enabling cities to envision potential futures and engage citizens in meaningful co-creation…(More)”
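The analytic core of DPPD is finding units that outperform what their context predicts. A simplified sketch of that step, with invented data and a plain regression standing in for the method's richer analytics, fieldwork, and community validation:

```python
# Simplified sketch of DPPD's core analytic step: flag neighborhoods whose
# outcome beats what their context predicts. Data here are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
# Context features: e.g., median income, population density (standardized).
X = rng.normal(size=(n, 2))
# Outcome: e.g., green space per capita, mostly explained by context...
y = 1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)
# ...except a few "bright spots" doing much better than expected.
y[[5, 42, 137]] += 4.0

# Positive deviants = units whose residual (actual minus predicted) is
# unusually large; candidates are then verified through fieldwork.
residuals = y - LinearRegression().fit(X, y).predict(X)
cutoff = residuals.mean() + 2 * residuals.std()
print("candidate positive deviants:", np.flatnonzero(residuals > cutoff))
```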
Data as Policy
Paper by Janet Freilich and W. Nicholson Price II: “A large literature on regulation highlights the many different methods of policy-making: command-and-control rulemaking, informational disclosures, tort liability, taxes, and more. But the literature overlooks a powerful method to achieve policy objectives: data. The state can provide (or suppress) data as a regulatory tool to solve policy problems. For administrations with expansive views of government’s purpose, government-provided data can serve as infrastructure for innovation and push innovation in socially desirable directions; for administrations with deregulatory ambitions, suppressing or choosing not to collect data can reduce regulatory power or serve as a back-door mechanism to subvert statutory or common law rules. Government-provided data is particularly powerful for data-driven technologies such as AI, where it is sometimes more effective than traditional methods of regulation. But government-provided data is a policy tool beyond AI and can influence policy in any field. We illustrate why government-provided data is a compelling tool for both positive regulation and deregulation in contexts ranging from addressing healthcare discrimination to automating legal practice to enabling smart power generation. We then consider objections and limitations to the role of government-provided data as a policy instrument, with substantial focus on privacy concerns and the possibility for autocratic abuse.
We build on the broad literature on regulation by introducing data as a regulatory tool. We also join—and diverge from—the growing literature on data by showing that while data can be privately produced purely for private gain, they do not need to be. Rather, government can be deeply involved in the generation and sharing of data, taking a much more publicly oriented view. Ultimately, while government-provided data are not a panacea for either regulatory or data problems, governments should view data provision as an understudied but useful tool in the innovation and governance toolbox…(More)”
The Teacher in the Machine: A Human History of Education Technology
Book by Anne Trumbore: “From AI tutors who ensure individualized instruction but cannot do math to free online courses from elite universities that were supposed to democratize higher education, claims that technological innovations will transform education often fall short. Yet, as Anne Trumbore shows in The Teacher in the Machine, the promises of today’s cutting-edge technologies aren’t new. Long before the excitement about the disruptive potential of generative AI–powered tutors and massive open online courses, scholars at Stanford, MIT, and the University of Illinois in the 1960s and 1970s were encouraged by the US government to experiment with computers and artificial intelligence in education. Trumbore argues that the contrast between these two eras of educational technology reveals the changing role of higher education in the United States as it shifted from a public good to a private investment.
Writing from a unique insider’s perspective and drawing on interviews with key figures, historical research, and case studies, Trumbore traces today’s disparate discussions about generative AI, student loan debt, and declining social trust in higher education back to their common origins at a handful of elite universities fifty years ago. Arguing that those early educational experiments have resonance today, Trumbore points the way to a more equitable and collaborative pedagogical future. Her account offers a critical lens on the history of technology in education just as universities and students seek a stronger hand in shaping the future of their institutions…(More)”