The Global Data Barometer 2nd edition: A Shared Compass for Navigating the Data Landscape


Report by the Global Data Barometer: “Across the globe, we’re at a turning point. From artificial intelligence and digital governance to public transparency and service delivery, data is now a fundamental force shaping how our societies function and who they serve. It holds tremendous promise to drive inclusive growth, foster accountability, and support urgent action on global challenges. And yet, access to high-quality, usable data is becoming increasingly constrained.

Some, like Verhulst (2024), have begun calling this moment a “data winter,” a period marked by shrinking openness, rising inequality in access, and growing fragmentation in how data is governed and used. This trend poses a risk not just to innovation but to the democratic values that underpin trust, participation, and accountability.

In this complex landscape, evidence matters more than ever. That is why we are proud to launch the Second Edition of the Global Data Barometer (GDB), a collaborative and comparative study that tracks the state of data for the public good across 43 countries, with a focused lens on Latin America and the Caribbean (LAC) and Africa…

The Barometer tracks countries across three dimensions: governance, capabilities, and availability, while also exploring key cross-cutting areas like AI readiness, inclusion, and data use. Here are some of the key takeaways:

  • The Implementation Gap

Many countries have adopted laws and frameworks for data governance, but there is a stark gap between policy and practice. Without strong institutions and dedicated capacity, even well-designed frameworks fall short.

  • The Role of Skills and Infrastructure

Data does not flow or translate into value without people and systems in place. Across both Latin America and the Caribbean and Africa, we see underinvestment in public sector skills, training, and the infrastructure needed to manage and reuse data effectively.

  • AI Is Moving Faster Than Governance

AI is increasingly present in national strategies, but very few countries have clear policies to guide its ethical use. Governance frameworks rarely address issues like algorithmic bias, data quality, or the accountability of AI-driven decision-making.

  • Open Data Needs Reinvestment

Many countries once seen as open data champions are struggling to sustain their efforts. Legal mandates are not always matched by technical implementation or resources. As a result, open data initiatives risk losing momentum.

  • Transparency Tools Are Missing

Key datasets that support transparency and anti-corruption, such as lobbying registers, beneficial ownership data, and political finance records, are often missing or fragmented. This makes it hard to follow the money or hold institutions to account.

  • Inclusion Is Still Largely Symbolic

Despite commitments to equity, inclusive data governance remains the exception. Data is rarely published in Indigenous or widely spoken non-official languages. Accessibility for persons with disabilities is often treated as a recommendation rather than a requirement.

  • Interoperability Remains a Barrier

Efforts to connect datasets across government, such as on procurement, company data, or political integrity, are rare. Without common standards or identifiers, it is difficult to track influence or evaluate policy impact holistically…(More)”.

How Canada Needs to Respond to the US Data Crisis


Article by Danielle Goldfarb: “The United States is cutting and undermining official US data across a wide range of domains, eroding the foundations of evidence-based policy making. This is happening mostly under the radar here in Canada, buried by news about US President Donald Trump’s barrage of tariffs and many other alarming actions. Doing nothing in response means Canada accepts blind spots in critical areas. Instead, this country should respond by investing in essential data and building the next generation of trusted public intelligence.

The United States has cut or altered more than 2,000 official data sets across the science, health, climate and development sectors, according to the National Security Archive. Deep staff cuts across all program areas effectively cancel or deeply erode many other statistical programs….

Even before this data purge, official US data methods were becoming less relevant and reliable. Traditional government surveys lag by weeks or months and face declining participation. This lag proved particularly problematic during the COVID-19 pandemic and also now, when economic data with a one- or two-month lag is largely irrelevant for tracking the real-time impact of constantly shifting Trump tariffs….

With deep ties to the United States, Canada needs to take action to reduce these critical blind spots. This challenge brings a major strength into the picture: Canada’s statistical agencies have strong reputations as trusted, transparent information sources.

First, Canada should strengthen its data infrastructure. Official Canadian data suffers from delays and declining response rates similar to those in the United States. Statistics Canada needs a renewed mandate and stable resources to produce policy-relevant indicators, especially in a timelier way, and in areas where US data has been cut or compromised.

Second, Canada could also act as a trusted place to store vulnerable indicators — inventorying missing data sets, archiving those at risk and coordinating global efforts to reconstruct essential metrics.

Third, Canada has an opportunity to lead in shaping the next generation of trusted and better public-interest intelligence…(More)”.

Two Paths for A.I.


Essay by Joshua Rothman: “Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest. He’d become convinced that the company wasn’t prepared for the future of its own technology, and wanted to sound the alarm. After a mutual friend connected us, we spoke on the phone. I found Kokotajlo affable, informed, and anxious. Advances in “alignment,” he told me—the suite of techniques used to insure that A.I. acts in accordance with human commands and values—were lagging behind gains in intelligence. Researchers, he said, were hurtling toward the creation of powerful systems they couldn’t control.

Kokotajlo, who had transitioned from a graduate program in philosophy to a career in A.I., explained how he’d educated himself so that he could understand the field. While at OpenAI, part of his job had been to track progress in A.I. so that he could construct timelines predicting when various thresholds of intelligence might be crossed. At one point, after the technology advanced unexpectedly, he’d had to shift his timelines up by decades. In 2021, he’d written a scenario about A.I. titled “What 2026 Looks Like.” Much of what he’d predicted had come to pass before the titular year. He’d concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner. He sounded scared.

Around the same time that Kokotajlo left OpenAI, two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” In it, Kapoor and Narayanan, who study technology’s integration with society, advanced views that were diametrically opposed to Kokotajlo’s. They argued that many timelines of A.I.’s future were wildly optimistic; that claims about its usefulness were often exaggerated or outright fraudulent; and that, because of the world’s inherent complexity, even powerful A.I. would change it only slowly. They cited many cases in which A.I. systems had been called upon to deliver important judgments—about medical diagnoses, or hiring—and had made rookie mistakes that indicated a fundamental disconnect from reality. The newest systems, they maintained, suffered from the same flaw.

Recently, all three researchers have sharpened their views, releasing reports that take their analyses further. The nonprofit AI Futures Project, of which Kokotajlo is the executive director, has published “AI 2027,” a heavily footnoted document, written by Kokotajlo and four other researchers, which works out a chilling scenario in which “superintelligent” A.I. systems either dominate or exterminate the human race by 2030. It’s meant to be taken seriously, as a warning about what might really happen. Meanwhile, Kapoor and Narayanan, in a new paper titled “AI as Normal Technology,” insist that practical obstacles of all kinds—from regulations and professional standards to the simple difficulty of doing physical things in the real world—will slow A.I.’s deployment and limit its transformational potential. While conceding that A.I.
may eventually turn out to be a revolutionary technology, on the scale of electricity or the internet, they maintain that it will remain “normal”—that is, controllable through familiar safety measures, such as fail-safes, kill switches, and human supervision—for the foreseeable future. “AI is often analogized to nuclear weapons,” they argue. But “the right analogy is nuclear power,” which has remained mostly manageable and, if anything, may be underutilized for safety reasons.

Making the case for collaborative digital infrastructure to scale regenerative food supply networks


Briefing paper from the Food Data Collaboration: “…a call to action to collaborate and invest in data infrastructure that will enable shorter, relational, regenerative food supply networks to scale.

These food supply networks play a vital role in achieving a truly sustainable and resilient food system. By embracing data technology that fosters commons ownership models, collaboration and interdependence we can build a more inclusive and dynamic food ecosystem in which collaborative efforts, as opposed to competitive businesses operating in silos, can achieve transformative scale.

Since 2022, the Food Data Collaboration has been exploring the potential for open data standards to enable shorter, relational, regenerative food supply networks to scale and pave the way towards a healthier, more equitable, and more resilient food future. This paper explores the high-level rationale for our approach and is essential reading for anyone keen to know more about the project’s aims, achievements and future development…(More)”.

The Agentic State: How Agentic AI Will Revamp 10 Functional Layers of Public Administration


Whitepaper by the Global Government Technology Centre Berlin: “…explores how agentic AI will transform ten functional layers of government and public administration. The Agentic State signifies a fundamental shift in governance, where AI systems can perceive, reason, and act with minimal human intervention to deliver public value. Its impact on key functional layers of government will be as follows…(More)”.

Unlock Your City’s Hidden Solutions


Article by Andreas Pawelke, Basma Albanna and Damiano Cerrone: “Cities around the world face urgent challenges — from climate change impacts to rapid urbanization and infrastructure strain. Municipal leaders struggle with limited budgets, competing priorities, and pressure to show quick results, making traditional approaches to urban transformation increasingly difficult to implement.

Every city, however, has hidden success stories — neighborhoods, initiatives, or communities that are achieving remarkable results despite facing similar challenges as their peers.

These “positive deviants” often remain unrecognized and underutilized, yet they contain the seeds of solutions that are already adapted to local contexts and constraints.

Data-Powered Positive Deviance (DPPD) combines urban data, advanced analytics, and community engagement to systematically uncover these bright spots and amplify their impact. This new approach offers a pathway to urban transformation that is not only evidence-based but also cost-effective and deeply rooted in local realities.

DPPD is particularly valuable in resource-constrained environments, where expensive external solutions often fail to take hold. By starting with what’s already working, cities can make strategic investments that build on existing strengths rather than starting from scratch. By leveraging AI tools that improve community engagement, the approach becomes even more powerful, enabling cities to envision potential futures and engage citizens in meaningful co-creation…(More)”

Data as Policy


Paper by Janet Freilich and W. Nicholson Price II: “A large literature on regulation highlights the many different methods of policy-making: command-and-control rulemaking, informational disclosures, tort liability, taxes, and more. But the literature overlooks a powerful method to achieve policy objectives: data. The state can provide (or suppress) data as a regulatory tool to solve policy problems. For administrations with expansive views of government’s purpose, government-provided data can serve as infrastructure for innovation and push innovation in socially desirable directions; for administrations with deregulatory ambitions, suppressing or choosing not to collect data can reduce regulatory power or serve as a back-door mechanism to subvert statutory or common law rules. Government-provided data is particularly powerful for data-driven technologies such as AI, where it is sometimes more effective than traditional methods of regulation. But government-provided data is a policy tool beyond AI and can influence policy in any field. We illustrate why government-provided data is a compelling tool for both positive regulation and deregulation in contexts ranging from addressing healthcare discrimination to automating legal practice and enabling smart power generation. We then consider objections and limitations to the role of government-provided data as a policy instrument, with substantial focus on privacy concerns and the possibility of autocratic abuse.

We build on the broad literature on regulation by introducing data as a regulatory tool. We also join—and diverge from—the growing literature on data by showing that while data can be privately produced purely for private gain, they do not need to be. Rather, government can be deeply involved in the generation and sharing of data, taking a much more publicly oriented view. Ultimately, while government-provided data are not a panacea for either regulatory or data problems, governments should view data provision as an understudied but useful tool in the innovation and governance toolbox…(More)”

How Being Watched Changes How You Think


Article by Simon Makin: “In 1785 English philosopher Jeremy Bentham designed the perfect prison: Cells circle a tower from which an unseen guard can observe any inmate at will. As far as a prisoner knows, at any given time, the guard may be watching—or may not be. Inmates have to assume they’re constantly observed and behave accordingly. Welcome to the Panopticon.

Many of us will recognize this feeling of relentless surveillance. Information about who we are, what we do and buy and where we go is increasingly available to completely anonymous third parties. We’re expected to present much of our lives to online audiences and, in some social circles, to share our location with friends. Millions of effectively invisible closed-circuit television (CCTV) cameras and smart doorbells watch us in public, and we know facial recognition with artificial intelligence can put names to faces.

So how does being watched affect us? “It’s one of the first topics to have been studied in psychology,” says Clément Belletier, a psychologist at University of Clermont Auvergne in France. In 1898 psychologist Norman Triplett showed that cyclists raced harder in the presence of others. From the 1970s onward, studies showed how we change our overt behavior when we are watched to manage our reputation and social consequences.

But being watched doesn’t just change our behavior; decades of research show it also infiltrates our mind to impact how we think. And now a new study reveals how being watched affects unconscious processing in our brain. In this era of surveillance, researchers say, the findings raise concerns about our collective mental health…(More)”.

Measuring the Shade Coverage of Trees and Buildings in Cambridge, Massachusetts


Paper by Amirhosein Shabrang, Mehdi Pourpeikari Heris, and Travis Flohr: “We investigated the spatial shade patterns of trees and buildings on sidewalks and bike lanes in Cambridge, Massachusetts. We used Lidar data and 3D modeling to analyze the spatial and temporal distribution of shade across the city. Our analysis shows significant variations in shade throughout the city: western areas receive more shade from trees, while eastern areas receive more shade from buildings. The city’s northern areas lack shade, but integrating natural and built shade sources can improve coverage. This study’s findings help identify gaps in shade coverage, which have implications for urban planning and design for more heat-resilient cities…(More)”

AI in Urban Life


Book by Patricia McKenna: “In exploring artificial intelligence (AI) in urban life, this book brings together and extends thinking on how human-AI interactions are continuously evolving. Through such interactions, people are aided on the one hand and, on the other, become more aware of their own capabilities and potential, pertaining, for example, to creativity, human sensing, and collaboration.

It is the particular focus of research questions developed in relation to awareness, smart cities, autonomy, privacy, transparency, theory, methods, practices, and collective intelligence, along with the wide range of perspectives and opportunities offered, that set this work apart from others. Conceptual frameworks are formulated for each of these areas to guide explorations and understandings in this work and going forward. A synthesis is provided in the final chapter for perspectives, challenges and opportunities, and conceptual frameworks for urban life in an era of AI, opening the way for evolving research and practice directions…(More)”.