WorkflowHub: a registry for computational workflows
Paper by Ove Johan Ragnar Gustafsson et al: “The rising popularity of computational workflows is driven by the need for repetitive and scalable data processing, sharing of processing know-how, and transparent methods. As both combined records of analysis and descriptions of processing steps, workflows should be reproducible, reusable, adaptable, and available. Workflow sharing presents opportunities to reduce unnecessary reinvention, promote reuse, increase access to best practice analyses for non-experts, and increase productivity. In reality, workflows are scattered and difficult to find, in part due to the diversity of available workflow engines and ecosystems, and because workflow sharing is not yet part of research practice. WorkflowHub provides a unified registry for all computational workflows that links to community repositories, and supports both the workflow lifecycle and making workflows findable, accessible, interoperable, and reusable (FAIR). By interoperating with diverse platforms, services, and external registries, WorkflowHub adds value by supporting workflow sharing, explicitly assigning credit, enhancing FAIRness, and promoting workflows as scholarly artefacts. The registry has a global reach, with hundreds of research organisations involved, and more than 800 workflows registered…(More)”
Where Cloud Meets Cement
Report by Hanna Barakat, Chris Cameron, Alix Dunn, Prathm Juneja, and Emma Prest: “This report examines the global expansion of data centers driven by AI and cloud computing, highlighting both their economic promises and the often-overlooked social and environmental costs. Through case studies across five countries, it investigates how governments and tech companies influence development, how communities resist harmful effects, and what support is needed for effective advocacy…(More)”.
Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It
Article by Tiago C. Peixoto: “A few weeks ago, I reached out to a handful of seasoned digital services practitioners, NGOs, and philanthropies with a simple question: Where are the compelling generative AI (GenAI) use cases in public-sector workflows? I wasn’t looking for better search or smarter chatbots. I wanted examples of automation of real public workflows – something genuinely interesting and working. The responses, though numerous, were underwhelming.
That question has gained importance amid a growing number of reports forecasting AI’s transformative impact on government. The Alan Turing Institute, for instance, published a rigorous study estimating the potential of AI to help automate over 140 million government transactions in the UK. The Tony Blair Institute also weighed in, suggesting that a substantive portion of public-sector work could be automated. While the report helped bring welcome attention to the issue, its use of GPT-4 to assess task automatability has sparked a healthy discussion about how best to evaluate feasibility. Like other studies in this area, both reports highlight potential – but stop short of demonstrating real service automation.
Without testing technologies in real service environments – where workflows, incentives, and institutional constraints shape outcomes – and grounding each pilot in clear efficiency or well-being metrics, estimates risk becoming abstractions that underestimate feasibility.
This pattern aligns with what Arvind Narayanan and Sayash Kapoor argue in “AI as Normal Technology”: the impact of AI is realized only when methods translate into applications and diffuse through real-world systems. My own review, admittedly non-representative, confirms their call for more empirical work on the innovation-diffusion lag.
In the public sector, the gap between capability and impact is not only wide but also structural…(More)”
We still don’t know how much energy AI consumes
Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.
By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. Models are assigned scores ranging from one to five stars based on their relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.
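The star-rating idea described above can be sketched in a few lines. This is an illustrative sketch only, not the AI Energy Score project's actual code: the model names and energy figures are hypothetical, and the quintile-based binning is an assumption for illustration (the real project defines its own per-task thresholds).

```python
# Illustrative sketch (not the official AI Energy Score methodology):
# bin models into 1-5 star ratings by relative energy efficiency.
def assign_stars(energy_wh):
    """Map each model's energy use (Wh per 1,000 queries, lower is better)
    to a 1-5 star efficiency rating by rank quintile."""
    ranked = sorted(energy_wh, key=energy_wh.get)  # most efficient first
    n = len(ranked)
    stars = {}
    for i, model in enumerate(ranked):
        # top of the ranking -> 5 stars, bottom -> 1 star
        stars[model] = 5 - (i * 5) // n
    return stars

measurements = {  # hypothetical models and Wh figures
    "model-a": 0.3,
    "model-b": 4.1,
    "model-c": 120.0,
    "model-d": 2600.0,
    "model-e": 18600.0,  # ~62,000x model-a, echoing the spread in the article
}
print(assign_stars(measurements))
# → {'model-a': 5, 'model-b': 4, 'model-c': 3, 'model-d': 2, 'model-e': 1}
```

Ranking by measured energy rather than by raw model size keeps the score comparable across tasks, which is the point of a standardised benchmark.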
Since the project launched in February, a new tool has compared the energy use of chatbot queries with everyday activities like phone charging or driving, as a way to help users understand the environmental impacts of the tech they use daily.
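A calculator in the spirit of that comparison tool is simple to sketch. The reference figures below (energy per phone charge, per kilometre of electric driving, and per query) are rough assumed values for illustration, not the tool's actual constants.

```python
# Hypothetical everyday-equivalents calculator; all constants are
# assumed round numbers, not measured values from the project.
PHONE_CHARGE_WH = 15.0      # assumed: one full smartphone charge
DRIVING_WH_PER_KM = 150.0   # assumed: an average electric car

def everyday_equivalents(total_query_wh):
    """Express the energy of a batch of chatbot queries (in Wh)
    as equivalent everyday activities."""
    return {
        "phone_charges": total_query_wh / PHONE_CHARGE_WH,
        "km_driven_ev": total_query_wh / DRIVING_WH_PER_KM,
    }

# e.g. 1,000 queries at an assumed 3 Wh each
print(everyday_equivalents(1000 * 3.0))
# → {'phone_charges': 200.0, 'km_driven_ev': 20.0}
```

Framing watt-hours as phone charges or kilometres driven is what makes per-query figures legible to non-specialist users.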
The tech sector is aware that AI emissions put its climate commitments in danger. Neither Microsoft nor Google appears to be on track to meet its net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.
It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.
Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector accounted for 92 per cent of new clean energy purchases in the US.
But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.
As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.
Public AI White Paper – A Public Alternative to Private AI Dominance
White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.
The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.
The EU’s AI Power Play: Between Deregulation and Innovation
Article by Raluca Csernatoni: “From the outset, the European Union (EU) has positioned itself as a trailblazer in AI governance with the world’s first comprehensive legal framework for AI systems, the AI Act. The EU’s approach to governing artificial intelligence (AI) has been characterized by a strong precautionary and ethics-driven philosophy. This ambitious regulation reflects the EU’s long-standing approach of prioritizing high ethical standards and fundamental rights in tech and digital policies—a strategy of fostering both excellence and trust in human-centric AI models. Yet, framed as essential to keep pace with U.S. and Chinese AI giants, the EU has recently taken a deregulatory turn that risks trading away democratic safeguards, without addressing systemic challenges to AI innovation.
The EU now stands at a crossroads: it can forge ahead with bold, home-grown AI innovation underpinned by robust regulation, or it can loosen its ethical guardrails, only to find itself stripped of both technological autonomy and regulatory sway. While Brussels’s recent deregulatory turn is framed as a much-needed competitiveness boost, the real obstacles to Europe’s digital renaissance lie elsewhere: persistent underfunding, siloed markets, and reliance on non-EU infrastructures…(More)”
From Software to Society — Openness in a changing world
Report by Henriette Litta and Peter Bihr: “…takes stock and looks to the future: What does openness mean in the digital age? Is the concept still up to date? The study traces the development of openness and analyses current challenges. It is based on interviews with experts and extensive literature research. The key insights at a glance are:
Give Openness a purpose. Especially in times of increasing injustice, surveillance and power monopolies, a clear framework for meaningful openness is needed, as this is often lacking. Companies market ‘open’ products without enabling co-creation. Political actors invoke openness without strengthening democratic control. This is particularly evident when dealing with AI. AI systems are complex and are often dominated by a few tech companies – which makes opening them up a fundamental challenge. Some tech companies also massively exploit their dominance, which can lead to the censorship of dissenting opinions.
Protect Openness by adding guard rails. Those who demand openness must also be prepared to get involved in political disputes – against a market monopoly, for example. According to Litta and Bihr, this requires new licence models that include obligations to return and share, as well as stricter enforcement of antitrust law and data protection. Openness therefore needs rules…(More)”.
Reimagining Data Governance for AI: Operationalizing Social Licensing for Data Reuse
Report by Stefaan Verhulst, Adam Zable, Andrew J. Zahuranec, and Peter Addo: “…introduces a practical, community-centered framework for governing data reuse in the development and deployment of artificial intelligence systems in low- and middle-income countries (LMICs). As AI increasingly relies on data from LMICs, affected communities are often excluded from decision-making and see little benefit from how their data is used. This report…reframes data governance through social licensing—a participatory model that empowers communities to collectively define, document, and enforce conditions for how their data is reused. It offers a step-by-step methodology and actionable tools, including a Social Licensing Questionnaire and adaptable contract clauses, alongside real-world scenarios and recommendations for enforcement, policy integration, and future research. This report recasts data governance as a collective, continuous process – shifting the focus from individual consent to community decision-making…(More)”.
Leading, not lagging: Africa’s gen AI opportunity
Article by Mayowa Kuyoro, Umar Bagus: “The rapid rise of gen AI has captured the world’s imagination and accelerated the integration of AI into the global economy and the lives of people across the world. Gen AI heralds a step change in productivity. As institutions apply AI in novel ways, beyond the advanced analytics and machine learning (ML) applications of the past ten years, the global economy could increase significantly, improving the lives and livelihoods of millions.
Nowhere is this truer than in Africa, a continent that has already demonstrated its ability to use technology to leapfrog traditional development pathways; for example, mobile technology overcoming the fixed-line internet gap, mobile payments in Kenya, and numerous African institutions making the leap to cloud faster than their peers in developed markets. Africa has been quick on the uptake with gen AI, too, with many unique and ingenious applications and deployments well underway…(More)”.
Across McKinsey’s client service work in Africa, many institutions have tested and deployed AI solutions. Our research has found that more than 40 percent of institutions have either started to experiment with gen AI or have already implemented significant solutions (see sidebar “About the research inputs”). However, the continent has so far only scratched the surface of what is possible, with both AI and gen AI. If institutions can address barriers and focus on building for scale, our analysis suggests African economies could unlock up to $100 billion in annual economic value across multiple sectors from gen AI alone. That is in addition to the still-untapped potential from traditional AI and ML in many sectors today—the combined traditional AI and gen AI total is more than double what gen AI can unlock on its own, with traditional AI making up at least 60 percent of the value…(More)”
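The relationship between the figures quoted above can be made explicit with a quick arithmetic check, using the article's own round numbers (all values in billions of US dollars per year; the exact combined total is not stated, so "more than double" is taken at its lower bound).

```python
# Consistency check of the quoted figures (assumed lower bounds, $B/year).
gen_ai_value = 100                    # up to $100B from gen AI alone
combined_value = 2 * gen_ai_value     # "more than double" -> at least ~200
traditional_share = 0.60              # traditional AI: at least 60% of combined
traditional_value = traditional_share * combined_value
print(traditional_value)  # → 120.0, i.e. at least ~$120B from traditional AI/ML
```

On these lower bounds, the untapped traditional AI and ML opportunity alone would exceed the gen AI figure, which is the article's point about not neglecting established techniques.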
The Right to AI
Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.