Smart Cities to Smart Societies: Moving Beyond Technology


Book edited by Esmat Zaidan, Imad Antoine Ibrahim, and Elie Azar: “…explores the governance of smart cities from a holistic approach, arguing that the creation of smart cities must consider the specific circumstances of each country to improve the preservation, revitalisation, liveability, and sustainability of urban areas. The recent push for smart cities is part of an effort to reshape urban development through megaprojects, centralised master planning, and approaches that convey modernism and global affluence. However, moving towards a citywide smart transition is a major undertaking, and complexities are expected to grow exponentially. This book argues that a comprehensive approach is necessary to consider all relevant aspects. The chapters seek to identify the potential and pitfalls of the smart transformation of urban communities and its role in sustainability goals; share state-of-the-art practices concerning technology, policy, and social science dimensions in smart cities and communities; and develop opportunities for cooperation and partnership in wider and larger research and development programmes. Divided into three parts, the first part of the book highlights the significance of various societal elements and factors in facilitating a successful smart transition, with a particular emphasis on the role of human capital. The second part delves into the challenges associated with technology and its integration into smart city initiatives. The final part of the book examines the current state of regulations and policies governing smart cities. The book will be an important asset for students and researchers studying law, engineering, political science, international relations, geopolitics, and economics…(More)”.

How the UK could monetise ‘citizen data’ and turn it into a national asset


Article by Ashley Braganza and S. Asieh H. Tabaghdehi: “Across all sectors, UK citizens produce vast amounts of data. This data is increasingly needed to train AI systems. But it is also of enormous value to private companies, which use it to target adverts to consumers based on their behaviour or to personalise content to keep people on their site.

Yet the economic and social value of this citizen-generated data is rarely returned to the public, highlighting the need for more equitable and transparent models of data stewardship.

AI companies have demonstrated that datasets hold immense economic, social and strategic value. And the UK’s AI Opportunities Action Plan notes that access to new and high-quality datasets can confer a competitive edge in developing AI models. This in turn unlocks the potential for innovative products and services.

However, there’s a catch. Most citizens have signed over their data to companies by accepting standard terms and conditions. Once citizen data is “owned” by companies, this leaves others unable to access it or forced to pay to do so.

Commercial approaches to data tend to prioritise short-term profit, often at the expense of the public interest. The debate over the use of artistic and creative materials to train AI models without recompense to the creator exemplifies the broader trade-off between commercial use of data and the public interest.

Countries around the world are recognising the strategic value of public data. The UK government could take the lead in turning public data into a strategic asset. In practice, this might mean the government owning citizen data and monetising it through sale or licensing agreements with commercial companies.

In our evidence, we proposed a UK sovereign data fund to manage the monetisation of public datasets curated within the National Data Library (NDL). This fund could invest directly in UK companies, fund scale-ups and create joint ventures with local and international partners.

The fund would have powers to license anonymised, ethically governed data to companies for commercial use. It would also be in a position to fast-track projects that benefit the UK or have been deemed to be national priorities. (These priorities are drones and other autonomous technologies as well as engineering biology, space and AI in healthcare.)…(More)”.

WorkflowHub: a registry for computational workflows


Paper by Ove Johan Ragnar Gustafsson et al.: “The rising popularity of computational workflows is driven by the need for repetitive and scalable data processing, sharing of processing know-how, and transparent methods. As both combined records of analysis and descriptions of processing steps, workflows should be reproducible, reusable, adaptable, and available. Workflow sharing presents opportunities to reduce unnecessary reinvention, promote reuse, increase access to best practice analyses for non-experts, and increase productivity. In reality, workflows are scattered and difficult to find, in part due to the diversity of available workflow engines and ecosystems, and because workflow sharing is not yet part of research practice. WorkflowHub provides a unified registry for all computational workflows that links to community repositories, and supports both the workflow lifecycle and making workflows findable, accessible, interoperable, and reusable (FAIR). By interoperating with diverse platforms, services, and external registries, WorkflowHub adds value by supporting workflow sharing, explicitly assigning credit, enhancing FAIRness, and promoting workflows as scholarly artefacts. The registry has a global reach, with hundreds of research organisations involved, and more than 800 workflows registered…(More)”

Where Cloud Meets Cement


Report by Hanna Barakat, Chris Cameron, Alix Dunn, Prathm Juneja, and Emma Prest: “This report examines the global expansion of data centers driven by AI and cloud computing, highlighting both their economic promises and the often-overlooked social and environmental costs. Through case studies across five countries, it investigates how governments and tech companies influence development, how communities resist harmful effects, and what support is needed for effective advocacy…(More)”.

Designing Shared Data Futures: Engaging young people on how to re-use data responsibly for health and well-being


Report by Hannah Chafetz, Sampriti Saxena, Tracy Jo Ingram, Andrew J. Zahuranec, Jennifer Requejo and Stefaan Verhulst: “When young people are engaged in data decisions for or about them, they not only become more informed about this data, but can also contribute to new policies and programs that improve their health and well-being. However, young people are often left out of these discussions and are unaware of the data that organizations collect about them.

In October 2023, The Second Lancet Commission on Adolescent Health and well-being, the United Nations Children’s Fund (UNICEF), and The GovLab at New York University hosted six Youth Solutions Labs (or co-design workshops) with over 120 young people from 36 countries around the world. In addition to co-designing solutions to five key issues impacting their health and well-being, we sought to understand current sentiments around the re-use of data on those issues. The Labs provided several insights about young people’s preferences regarding: 1) the purposes for which data should be re-used to improve health and well-being, 2) the types and sources of data that should and should not be re-used, 3) who should have access to previously collected data, and 4) under what circumstances data re-use should take place. Additionally, participants provided suggestions of what ethical and responsible data re-use looks like to them and how young people can participate in decision making processes. In this paper, we elaborate on these findings and provide a series of recommendations to accelerate responsible data re-use for the health and well-being of young people…(More)”.

Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It


Article by Tiago C. Peixoto: “A few weeks ago, I reached out to a handful of seasoned digital services practitioners, NGOs, and philanthropies with a simple question: Where are the compelling generative AI (GenAI) use cases in public-sector workflows? I wasn’t looking for better search or smarter chatbots. I wanted examples of automation of real public workflows – something genuinely interesting and working. The responses, though numerous, were underwhelming.

That question has gained importance amid a growing number of reports forecasting AI’s transformative impact on government. The Alan Turing Institute, for instance, published a rigorous study estimating the potential of AI to help automate over 140 million government transactions in the UK. The Tony Blair Institute also weighed in, suggesting that a substantial portion of public-sector work could be automated. While the report helped bring welcome attention to the issue, its use of GPT-4 to assess task automatability has sparked a healthy discussion about how best to evaluate feasibility. Like other studies in this area, both reports highlight potential – but stop short of demonstrating real service automation.

Without testing technologies in real service environments – where workflows, incentives, and institutional constraints shape outcomes – and grounding each pilot in clear efficiency or well-being metrics, estimates risk becoming abstractions that underestimate feasibility.

This pattern aligns with what Arvind Narayanan and Sayash Kapoor argue in “AI as Normal Technology”: the impact of AI is realized only when methods translate into applications and diffuse through real-world systems. My own review, admittedly non-representative, confirms their call for more empirical work on the innovation-diffusion lag.

In the public sector, the gap between capability and impact is not only wide but also structural…(More)”

When data disappear: public health pays as US policy strays


Paper by Thomas McAndrew, Andrew A Lover, Garrik Hoyt, and Maimuna S Majumder: “Presidential actions by President Donald Trump on Jan 20, 2025, including executive orders, have delayed access to, or led to the removal of, crucial public health data sources in the USA. The continuous collection and maintenance of health data support public health, safety, and security associated with diseases such as seasonal influenza. To show how public health data surveillance enhances public health practice, we analysed data from seven US Government-maintained sources associated with seasonal influenza. We fit two models that forecast the number of national incident influenza hospitalisations in the USA: (1) a data-rich model incorporating data from all seven Government data sources; and (2) a data-poor model built using a single Government hospitalisation data source, representing the minimal required information to produce a forecast of influenza hospitalisations. The data-rich model generated reliable forecasts useful for public health decision making, whereas the predictions using the data-poor model were highly uncertain, rendering them impractical. Thus, health data can serve as a transparent and standardised foundation to improve domestic and global health. Therefore, a plan should be developed to safeguard public health data as a public good…(More)”.
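The data-rich versus data-poor contrast can be illustrated with a toy sketch (this is not the paper’s actual model; the signals, coefficients, and ordinary-least-squares setup here are hypothetical): a forecast regressed on several surveillance signals leaves far less residual uncertainty than one regressed on a single source.

```python
# Illustrative sketch only, on synthetic data: compare the residual
# uncertainty of a "data-rich" hospitalisation forecast (several
# surveillance signals) with a "data-poor" one (a single signal).
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical surveillance signals (lab tests, clinic visits, etc.).
signals = rng.normal(size=(n, 6))
true_coef = np.array([3.0, 2.0, 1.5, 1.0, 0.5, 0.5])
hosp = signals @ true_coef + rng.normal(scale=1.0, size=n)

def residual_sd(X, y):
    """Fit ordinary least squares and return the residual standard
    deviation, a rough proxy for forecast uncertainty."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.std(y - X1 @ beta))

rich = residual_sd(signals, hosp)         # all signals available
poor = residual_sd(signals[:, :1], hosp)  # single data source only
print(f"data-rich residual sd: {rich:.2f}, data-poor: {poor:.2f}")
```

With the extra signals withheld, their contribution to hospitalisations shows up as irreducible noise, so the data-poor model’s residual spread is several times larger — the same qualitative gap the paper reports between its two forecasts.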

We still don’t know how much energy AI consumes


Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.

By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. The models are assigned scores of one to five stars based on their relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.
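A relative-efficiency star rating of this kind can be sketched in a few lines (a hypothetical illustration, not the AI Energy Score project’s actual methodology or numbers): rank models by measured energy per task and assign stars by quintile, so the most efficient fifth earns five stars and the least efficient fifth earns one.

```python
# Hypothetical sketch of quintile-based star ratings: bucket models
# into five bands by measured energy per task batch; lowest energy
# earns five stars. Not the official AI Energy Score code.
def star_ratings(energy_wh: dict[str, float]) -> dict[str, int]:
    """Map each model's energy use (Wh per task batch) to 1-5 stars."""
    ordered = sorted(energy_wh, key=energy_wh.get)  # most efficient first
    n = len(ordered)
    ratings = {}
    for rank, model in enumerate(ordered):
        # Quintile of the ranking: top fifth -> 5 stars, bottom fifth -> 1.
        ratings[model] = 5 - (rank * 5) // n
    return ratings

measurements = {           # illustrative numbers only
    "model-a": 0.4,
    "model-b": 2.5,
    "model-c": 31.0,
    "model-d": 260.0,
    "model-e": 25_000.0,   # ~62,000x model-a, echoing the spread above
}
print(star_ratings(measurements))
```

Quantile-based banding keeps the scale relative, which matters when the spread between the best and worst models covers several orders of magnitude.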

Since the project launched in February, a new tool has been added that compares the energy use of chatbot queries with everyday activities such as phone charging or driving, to help users understand the environmental impact of the tech they use daily.

The tech sector is aware that AI emissions put its climate commitments in danger. Neither Microsoft nor Google appears to be on track to meet its net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.

It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.

Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector accounted for 92 per cent of new clean energy purchases in the US.

But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.

As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.

Gen Z’s new side hustle: selling data


Article by Erica Pandey: “Many young people are more willing than their parents to share personal data, giving companies deeper insight into their lives.

Why it matters: Selling data is becoming the new selling plasma.

Case in point: Generation Lab, a youth polling company, is launching a new product, Verb.AI, today — betting that buying this data is the future of polling.

  • “We think corporations have extracted user data without fairly compensating people for their own data,” says Cyrus Beschloss, CEO of Generation Lab. “We think users should know exactly what data they’re giving us and should feel good about what they’re receiving in return.”

How it works: Generation Lab offers people cash — $50 or more per month, depending on use and other factors — to download a tracker onto their phones.

  • The product takes about 90 seconds to download, and once it’s on your phone, it tracks things like what you browse, what you buy, which streaming apps you use — all anonymously. There are also things it doesn’t track, like activity on your bank account.
  • Verb then uses that data to create a digital twin of you that lives in a central database and knows your preferences…(More)”.

Public AI White Paper – A Public Alternative to Private AI Dominance


White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.

The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.