We still don’t know how much energy AI consumes


Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.

By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. Each model is assigned a score of between one and five stars based on its relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.
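
To make the scoring idea concrete, here is a minimal sketch of how measured energy figures might be binned into one-to-five-star ratings. The function, the sample numbers and the log-scale binning are illustrative assumptions, not the AI Energy Score project's actual open-source code.

```python
import math

# Minimal sketch, assuming log-scale binning between the best and worst
# measured models; the real AI Energy Score methodology lives in the
# project's open repository and may differ.

def star_rating(energy_wh: float, best_wh: float, worst_wh: float) -> int:
    """Map a model's measured energy per task onto a 1-5 star scale."""
    if energy_wh <= best_wh:
        return 5
    if energy_wh >= worst_wh:
        return 1
    # Where this model sits between best and worst, on a log scale:
    # appropriate when efficiencies span orders of magnitude (the article
    # reports a 62,000-fold spread).
    span = math.log(worst_wh) - math.log(best_wh)
    pos = (math.log(energy_wh) - math.log(best_wh)) / span
    return 5 - math.ceil(pos * 4)  # pos in (0, 1) maps to 4..1 stars

# Hypothetical measurements in watt-hours per task, for illustration only.
measurements_wh = {"model-a": 0.05, "model-b": 3.2, "model-c": 3100.0}
best, worst = min(measurements_wh.values()), max(measurements_wh.values())
for model, wh in sorted(measurements_wh.items()):
    print(f"{model}: {star_rating(wh, best, worst)} stars")
```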

Since the project launched in February, a new tool has been added that compares the energy use of chatbot queries with everyday activities, such as charging a phone or driving, to help users understand the environmental impact of the tech they use daily.
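
As a rough illustration of the kind of comparison such a tool makes, the sketch below converts an assumed per-query energy figure into phone-charge and driving equivalents. All three constants are ballpark assumptions rather than numbers from the tool itself.

```python
# Hedged illustration: every constant here is a rough assumption, not a
# measurement from the AI Energy Score comparison tool.
QUERY_WH = 3.0          # assumed energy per chatbot query, in watt-hours
PHONE_CHARGE_WH = 15.0  # rough energy to fully charge a typical smartphone
EV_WH_PER_KM = 180.0    # rough consumption of an electric car per kilometre

queries_per_day = 20
daily_wh = QUERY_WH * queries_per_day

print(f"{daily_wh:.0f} Wh/day is roughly "
      f"{daily_wh / PHONE_CHARGE_WH:.1f} phone charges, or "
      f"{daily_wh / EV_WH_PER_KM * 1000:.0f} metres of EV driving")
```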

The tech sector is aware that AI emissions put its climate commitments in danger. Neither Microsoft nor Google currently appears to be on track to meet its net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.

It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.

Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant, and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector accounted for 92 per cent of new clean energy purchases in the US.

But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.

As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.

Public AI White Paper – A Public Alternative to Private AI Dominance


White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.

The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.

Reimagining Data Governance for AI: Operationalizing Social Licensing for Data Reuse


Report by Stefaan Verhulst, Adam Zable, Andrew J. Zahuranec, and Peter Addo: “…introduces a practical, community-centered framework for governing data reuse in the development and deployment of artificial intelligence systems in low- and middle-income countries (LMICs). As AI increasingly relies on data from LMICs, affected communities are often excluded from decision-making and see little benefit from how their data is used. This report,…reframes data governance through social licensing—a participatory model that empowers communities to collectively define, document, and enforce conditions for how their data is reused. It offers a step-by-step methodology and actionable tools, including a Social Licensing Questionnaire and adaptable contract clauses, alongside real-world scenarios and recommendations for enforcement, policy integration, and future research. This report recasts data governance as a collective, continuous process – shifting the focus from individual consent to community decision-making…(More)”.

Leading, not lagging: Africa’s gen AI opportunity


Article by Mayowa Kuyoro, Umar Bagus: “The rapid rise of gen AI has captured the world’s imagination and accelerated the integration of AI into the global economy and the lives of people across the world. Gen AI heralds a step change in productivity. As institutions apply AI in novel ways, beyond the advanced analytics and machine learning (ML) applications of the past ten years, global economic output could increase significantly, improving the lives and livelihoods of millions.1

Nowhere is this truer than in Africa, a continent that has already demonstrated its ability to use technology to leapfrog traditional development pathways; for example, mobile technology overcoming the fixed-line internet gap, mobile payments in Kenya, and numerous African institutions making the leap to cloud faster than their peers in developed markets.2 Africa has been quick on the uptake with gen AI, too, with many unique and ingenious applications and deployments well underway…

Across McKinsey’s client service work in Africa, many institutions have tested and deployed AI solutions. Our research has found that more than 40 percent of institutions have either started to experiment with gen AI or have already implemented significant solutions (see sidebar “About the research inputs”). However, the continent has so far only scratched the surface of what is possible, with both AI and gen AI. If institutions can address barriers and focus on building for scale, our analysis suggests African economies could unlock up to $100 billion in annual economic value across multiple sectors from gen AI alone. That is in addition to the still-untapped potential from traditional AI and ML in many sectors today—the combined traditional AI and gen AI total is more than double what gen AI can unlock on its own, with traditional AI making up at least 60 percent of the value…(More)”.

The Right to AI


Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.

Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt


Paper by Joel Z. Leibo et al: “Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all “alignment”. We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent…(More)”.

The world at our fingertips, just out of reach: the algorithmic age of AI


Article by Soumi Banerjee: “Artificial intelligence (AI) has made global movements, testimonies, and critiques seem just a swipe away. The digital realm, powered by machine learning and algorithmic recommendation systems, offers an abundance of visual, textual, and auditory information. With a few swipes or keystrokes, the unbounded world lies open before us. Yet this ‘openness’ conceals a fundamental paradox: the distinction between availability and accessibility.

What is technically available is not always epistemically accessible. What appears global is often algorithmically curated. And what is served to users under the guise of choice frequently reflects the imperatives of engagement, profit, and emotional resonance over critical understanding or cognitive expansion.

The transformative potential of AI in democratising access to information comes with risks. Algorithmic enclosure and content curation can deepen epistemic inequality, particularly for young people, whose digital fluency often masks a lack of epistemic literacy. What we need is algorithmic transparency, civic education in media literacy, and inclusive knowledge formats…(More)”.

Building Community-Centered AI Collaborations


Article by Michelle Flores Vryn and Meena Das: “AI can only boost the under-resourced nonprofit world if we design it to serve the communities we care about. But as nonprofits consider how to incorporate AI into their work, many look to expertise from the tech sector, expecting tools and implementation advice as well as ethical guidance. Yet when mission-driven entities—with a strong focus on people, communities, and equity—partner solely with tech companies, they may encounter a variety of obstacles, such as:

  1. Limited understanding of community needs: Sector-specific knowledge is essential for aligning AI with nonprofit missions, something many tech companies lack.
  2. Bias in AI models: Without diverse input, AI models may exacerbate biases or misrepresent the communities that nonprofits serve.
  3. Resource constraints: Tech solutions often presume budgets or capacity beyond what nonprofits can bring to bear, creating a reliance on tools that don’t fit the nonprofit context.

We need creative, diverse collaborations across various fields to ensure that technology is deployed in ways that align with nonprofit values, build trust, and serve the greater good. Seeking partners outside of the tech world helps nonprofits develop AI solutions that are context-aware, equitable, and resource-sensitive. Most importantly, nonprofit practitioners must deeply consider our ideal future state: What does an AI-empowered nonprofit sector look like when it truly centers human well-being, community agency, and ethical technology?

Imagining this future means not just reacting to emerging technology but proactively shaping its trajectory. Instead of simply adapting to AI’s capabilities, nonprofits should ask:

  • What problems do we truly need AI to solve?
  • Whose voices must be centered in AI decision-making?
  • How do we ensure AI remains a tool for empowerment rather than control?…(More)”.

Policy Implications of DeepSeek AI’s Talent Base


Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so that did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.

Governing in the Age of AI: Reimagining Local Government


Report by the Tony Blair Institute for Global Change: “…The limits of the existing operating model have been reached. Starved of resources by cuts inflicted by previous governments over the past 15 years, many councils are on the verge of bankruptcy even though local taxes are at their highest level. Residents wait too long for care, too long for planning applications and too long for benefits; many people never receive what they are entitled to. Public satisfaction with local services is sliding.

Today, however, there are new tools – enabled by artificial intelligence – that would allow councils to tackle these challenges. The day-to-day tasks of local government, whether related to the delivery of public services or planning for the local area, can all be performed faster, better and cheaper with the use of AI – a true transformation not unlike the one seen a century ago.

These tools would allow councils to overturn an operating model that is bureaucratic, labour-intensive and unresponsive to need. AI could release staff from repetitive tasks and relieve an overburdened and demotivated workforce. It could help citizens navigate the labyrinth of institutions, webpages and forms with greater ease and convenience. It could support councils to make better long-term decisions to drive economic growth, without which the resource pressure will only continue to build…(More)”.