Report by Hanna Barakat, Chris Cameron, Alix Dunn and Prathm Juneja, and Emma Prest: “This report examines the global expansion of data centers driven by AI and cloud computing, highlighting both their economic promises and the often-overlooked social and environmental costs. Through case studies across five countries, it investigates how governments and tech companies influence development, how communities resist harmful effects, and what support is needed for effective advocacy…(More)”.
Designing Shared Data Futures: Engaging young people on how to re-use data responsibly for health and well-being
Report by Hannah Chafetz, Sampriti Saxena, Tracy Jo Ingram, Andrew J. Zahuranec, Jennifer Requejo and Stefaan Verhulst: “When young people are engaged in data decisions for or about them, they not only become more informed about this data, but can also contribute to new policies and programs that improve their health and well-being. However, young people are often left out of these discussions and unaware of the data that organizations collect.
In October 2023, the Second Lancet Commission on Adolescent Health and Well-being, the United Nations Children’s Fund (UNICEF), and The GovLab at New York University hosted six Youth Solutions Labs (or co-design workshops) with over 120 young people from 36 countries. In addition to co-designing solutions to five key issues impacting their health and well-being, we sought to understand current sentiments around the re-use of data on those issues. The Labs provided several insights about young people’s preferences regarding: 1) the purposes for which data should be re-used to improve health and well-being, 2) the types and sources of data that should and should not be re-used, 3) who should have access to previously collected data, and 4) under what circumstances data re-use should take place. Additionally, participants offered suggestions for what ethical and responsible data re-use looks like to them and how young people can participate in decision-making processes. In this paper, we elaborate on these findings and provide a series of recommendations to accelerate responsible data re-use for the health and well-being of young people…(More)”.
Future design in the public policy process: giving a voice to future generations
Paper by Marij Swinkels, Olivier de Vette & Victor Toom: “Long-term public issues face the intergenerational problem: current policy decisions place a disproportionate burden on future generations while primarily benefitting those in the present. The interests of present generations trump those of future generations, as the latter play no explicit part as stakeholders in policymaking processes. How can the interests of future generations be voiced in the present? In this paper, we explore an innovative method to incorporate the interests of future generations in the process of policymaking: future design. First, we situate future design in the policy process and relate it to other intergenerational policymaking initiatives that aim to remedy the intergenerational problem. Second, we show how we applied future design and provide insights into three pilots that we organized on two long-term public issues in the Netherlands: housing shortages and water management. We conclude that future design can effectively contribute to representing the interests of future generations, but that adoption of future design in different contexts also requires adaptation of the method. The findings increase our understanding of the value of future design as an innovative policymaking practice to strengthen intergenerational policymaking. As such, it provides policymakers with insights into how to use this method…(More)”.
Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It
Article by Tiago C. Peixoto: “A few weeks ago, I reached out to a handful of seasoned digital services practitioners, NGOs, and philanthropies with a simple question: Where are the compelling generative AI (GenAI) use cases in public-sector workflows? I wasn’t looking for better search or smarter chatbots. I wanted examples of automation of real public workflows – something genuinely interesting and working. The responses, though numerous, were underwhelming.
That question has gained importance amid a growing number of reports forecasting AI’s transformative impact on government. The Alan Turing Institute, for instance, published a rigorous study estimating the potential of AI to help automate over 140 million government transactions in the UK. The Tony Blair Institute also weighed in, suggesting that a substantial portion of public-sector work could be automated. While the report helped bring welcome attention to the issue, its use of GPT-4 to assess task automatability has sparked a healthy discussion about how best to evaluate feasibility. Like other studies in this area, both reports highlight potential – but stop short of demonstrating real service automation.
Without testing technologies in real service environments – where workflows, incentives, and institutional constraints shape outcomes – and grounding each pilot in clear efficiency or well-being metrics, estimates risk becoming abstractions that underestimate feasibility.
This pattern aligns with what Arvind Narayanan and Sayash Kapoor argue in “AI as Normal Technology”: the impact of AI is realized only when methods translate into applications and diffuse through real-world systems. My own review, admittedly non-representative, confirms their call for more empirical work on the innovation-diffusion lag.
In the public sector, the gap between capability and impact is not only wide but also structural…(More)”
Urban Development Needs Systems Thinking
Article by Yaera Chung: “More than three decades after the collapse of the Soviet Union, cities in Eastern Europe and Central Asia (EECA) continue to grapple with economic stagnation, aging infrastructure, and environmental degradation while also facing new pressures from climate change and regional conflicts. In this context, traditional city planning, which tackles problems in isolation, is struggling to keep up. Urban strategies often rely on siloed, one-off interventions that fail to reflect the complexity of social challenges or adapt to shifting conditions. As a result, efforts are frequently fragmented, overlook root causes, and miss opportunities for long-term, cross-sector collaboration.
Instead of addressing one issue at a time, cities need to develop a set of coordinated, interlinked solutions that tackle multiple urban challenges simultaneously and align efforts across sectors. As part of a broader strategy to address environmental, economic, and social goals at once, for example, cities might advance a range of initiatives, such as transforming biowaste into resources, redesigning streets to reduce air pollution, and creating local green jobs. These kinds of “portfolio” approaches are leading to lasting and systems-level change.
Since 2021, the United Nations Development Programme (UNDP) has been collaborating with 15 cities across EECA to solve problems in ways that embrace complexity and interconnectedness. Selected through open calls under two UNDP initiatives, Mayors for Economic Growth and the City Experiment Fund, these cities demonstrated a strong interest in tackling systemic issues. Their proposals highlighted the problems they face, their capacity for innovation, and local initiatives and partnerships.
Their ongoing journeys have surfaced four lessons that can help other cities move beyond conventional planning pitfalls, and adopt a more responsive, inclusive, and sustainable approach to urban development…(More)”.
Can We Trust Social Science Yet?
Essay by Ryan Briggs: “Everyone likes the idea of evidence-based policy, but it’s hard to realize it when our most reputable social science journals are still publishing poor quality research.
Ideally, policy and program design is a straightforward process: a decision-maker faces a problem, turns to peer-reviewed literature, and selects interventions shown to work. In reality, that’s rarely how things unfold. The popularity of “evidence-based medicine” and other “evidence-based” topics highlights our desire for empirical approaches — but would the world actually improve if those in power consistently took social science evidence seriously? It brings me no joy to tell you that, at present, I think the answer is usually “no.”
Given the current state of evidence production in the social sciences, I believe that many — perhaps most — attempts to use social scientific evidence to inform policy will not lead to better outcomes. This is not because of politics or the challenges of scaling small programs. The problem is more immediate. Much social science research is of poor quality, and sorting trustworthy work from bad work is difficult, costly, and time-consuming.
But it is necessary. If you were to randomly select an empirical paper published in the past decade — including any studies from the top journals in political science or economics — there is a high chance that its findings are inaccurate. And not just off by a little: the estimates may be twice as large as the true effect, or even incorrectly signed. As an academic, I find this troubling. It should bother you, too. So let me explain why this happens…(More)”.
Simulating Human Behavior with AI Agents
Brief by The Stanford Institute for Human-Centered AI (HAI): “…we introduce an AI agent architecture that simulates more than 1,000 real people. The agent architecture—built by combining the transcripts of two-hour, qualitative interviews with a large language model (LLM) and scored against social science benchmarks—successfully replicated real individuals’ responses to survey questions 85% as accurately as participants replicate their own answers across surveys staggered two weeks apart. The generative agents performed comparably in predicting people’s personality traits and experiment outcomes and were less biased than previously used simulation tools.
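The “85% as accurately” figure is a normalized metric: the agents’ agreement with a participant’s survey answers is divided by that participant’s own test–retest consistency across the two survey waves. A minimal sketch of that normalization (function and variable names are illustrative, not the study’s code):

```python
def replication_rate(answers_a, answers_b):
    """Fraction of survey items answered identically across two passes."""
    matches = sum(a == b for a, b in zip(answers_a, answers_b))
    return matches / len(answers_a)

def normalized_accuracy(agent_answers, wave1_answers, wave2_answers):
    """Score the agent against the participant's own two-week
    test-retest consistency, as in the brief's 85% figure."""
    agent_rate = replication_rate(agent_answers, wave1_answers)
    self_rate = replication_rate(wave1_answers, wave2_answers)
    return agent_rate / self_rate

# Toy example: the agent matches 6 of 8 wave-1 answers, while the
# participant repeats 7 of 8 of their own answers two weeks later.
wave1 = [1, 2, 3, 1, 2, 3, 1, 2]
wave2 = [1, 2, 3, 1, 2, 3, 1, 3]   # self-consistency: 7/8
agent = [1, 2, 3, 1, 2, 2, 1, 3]   # agreement with wave 1: 6/8
print(round(normalized_accuracy(agent, wave1, wave2), 3))  # → 0.857
```

Normalizing this way credits the agent relative to a human ceiling: since people only partially replicate their own answers, raw agreement alone would understate how well the simulation performs.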
This architecture underscores the benefits of using generative agents as a research tool to glean new insights into real-world individual behavior. However, researchers and policymakers must also mitigate the risks of using generative agents in such contexts, including harms related to over-reliance on agents, privacy, and reputation…(More)”.
We still don’t know how much energy AI consumes
Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.
By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. The models are then assigned scores ranging from one to five stars based on their relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.
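Because the spread between models is so large (62,000-fold), a relative rating like this is most naturally binned on a log scale. The sketch below illustrates the general idea under that assumption; the model names, energy figures, and binning rule are invented for illustration and are not the AI Energy Score project’s actual data or methodology:

```python
import math

# Hypothetical energy use (watt-hours) per 1,000 queries on one task;
# the real project benchmarks open models across 10 tasks.
energy_wh = {
    "model-a": 0.4,
    "model-b": 3.0,
    "model-c": 25.0,
    "model-d": 310.0,
    "model-e": 24800.0,   # a 62,000x spread, as reported in the article
}

def star_ratings(measurements, stars=5):
    """Assign 1-5 stars by binning log10(energy) between the best and
    worst models; the most efficient model earns 5 stars."""
    lo = math.log10(min(measurements.values()))
    hi = math.log10(max(measurements.values()))
    ratings = {}
    for name, wh in measurements.items():
        frac = (math.log10(wh) - lo) / (hi - lo)   # 0 = best, 1 = worst
        ratings[name] = stars - min(int(frac * stars), stars - 1)
    return ratings

print(star_ratings(energy_wh))
```

Binning on a log scale keeps the rating informative across orders of magnitude: on a linear scale, nearly every model would land in the same bin as the most efficient one.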
Since the project launched in February, a new tool has been added that compares the energy use of chatbot queries with everyday activities like phone charging or driving, helping users understand the environmental impact of the tech they use daily.
The tech sector is aware that AI emissions put its climate commitments in danger. Both Microsoft and Google appear to no longer be on track to meet their net-zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.
It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.
Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector contributed to 92 per cent of new clean energy purchases in the US.
But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.
As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.
Gen Z’s new side hustle: selling data
Article by Erica Pandey: “Many young people are more willing than their parents to share personal data, giving companies deeper insight into their lives.
Why it matters: Selling data is becoming the new selling plasma.
Case in point: Generation Lab, a youth polling company, is launching a new product, Verb.AI, today — betting that buying this data is the future of polling.
- “We think corporations have extracted user data without fairly compensating people for their own data,” says Cyrus Beschloss, CEO of Generation Lab. “We think users should know exactly what data they’re giving us and should feel good about what they’re receiving in return.”
How it works: Generation Lab offers people cash — $50 or more per month, depending on use and other factors — to download a tracker onto their phones.
- The product takes about 90 seconds to download, and once it’s on your phone, it tracks things like what you browse, what you buy, which streaming apps you use — all anonymously. There are also things it doesn’t track, like activity on your bank account.
- Verb then uses that data to create a digital twin of you that lives in a central database and knows your preferences…(More)”.
Public AI White Paper – A Public Alternative to Private AI Dominance
White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.
The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.