Paper by Thomas McAndrew, Andrew A Lover, Garrik Hoyt, and Maimuna S Majumder: “Presidential actions on Jan 20, 2025, by President Donald Trump, including executive orders, have delayed access to or led to the removal of crucial public health data sources in the USA. The continuous collection and maintenance of health data support public health, safety, and security associated with diseases such as seasonal influenza. To show how public health data surveillance enhances public health practice, we analysed data from seven US Government-maintained sources associated with seasonal influenza. We fit two models that forecast the number of national incident influenza hospitalisations in the USA: (1) a data-rich model incorporating data from all seven Government data sources; and (2) a data-poor model built using a single Government hospitalisation data source, representing the minimal required information to produce a forecast of influenza hospitalisations. The data-rich model generated reliable forecasts useful for public health decision making, whereas the predictions using the data-poor model were highly uncertain, rendering them impractical. Thus, health data can serve as a transparent and standardised foundation to improve domestic and global health. Therefore, a plan should be developed to safeguard public health data as a public good…(More)”.
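The contrast between the data-rich and data-poor forecasts can be illustrated with a minimal sketch (not the paper's actual models): forecast weekly hospitalisation counts with and without auxiliary surveillance covariates, then compare the widths of the resulting prediction intervals. The series, covariates, and model order below are synthetic and purely illustrative.

```python
# Illustrative sketch only (not the paper's models): compare forecast uncertainty
# when hospitalisations are modelled alone ("data-poor") versus with extra
# surveillance covariates ("data-rich"). All data here are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
weeks = 104
# Synthetic stand-ins: outpatient ILI visits and lab positivity lead hospitalisations.
ili = 5 + 3 * np.sin(np.linspace(0, 4 * np.pi, weeks)) + rng.normal(0, 0.5, weeks)
lab_pos = 0.5 * ili + rng.normal(0, 0.3, weeks)
hosp = 100 + 40 * np.roll(ili, 2) + rng.normal(0, 10, weeks)

y = pd.Series(hosp)
X = pd.DataFrame({"ili": ili, "lab_pos": lab_pos})

horizon = 4
y_train, X_train = y.iloc[:-horizon], X.iloc[:-horizon]
X_future = X.iloc[-horizon:]

# "Data-poor": the hospitalisation series on its own.
poor = SARIMAX(y_train, order=(2, 0, 0), trend="c").fit(disp=False)
poor_ci = poor.get_forecast(horizon).conf_int(alpha=0.05)

# "Data-rich": hospitalisations plus auxiliary surveillance signals.
rich = SARIMAX(y_train, exog=X_train, order=(2, 0, 0), trend="c").fit(disp=False)
rich_ci = rich.get_forecast(horizon, exog=X_future).conf_int(alpha=0.05)

print("mean 95% interval width, data-poor:", float((poor_ci.iloc[:, 1] - poor_ci.iloc[:, 0]).mean()))
print("mean 95% interval width, data-rich:", float((rich_ci.iloc[:, 1] - rich_ci.iloc[:, 0]).mean()))
```

In this synthetic setup the extra covariates typically narrow the 95% interval, mirroring the paper's point that richer surveillance data yields forecasts precise enough to act on.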
We still don’t know how much energy AI consumes
Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.
By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. The models are assigned scores ranging from one to five stars based on their relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.
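As an illustration of the kind of per-task measurement involved (a hedged sketch, not the AI Energy Score project's actual benchmarking harness), the open-source codecarbon library can be wrapped around a single model call to estimate its energy use; the model, prompt, and project name below are placeholders.

```python
# Illustrative sketch only: estimate the energy used by one text-generation task
# with codecarbon (https://github.com/mlco2/codecarbon). This is not the
# AI Energy Score benchmarking code; model and prompt are placeholders.
from codecarbon import EmissionsTracker
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open model as a stand-in

tracker = EmissionsTracker(project_name="energy-score-sketch", log_level="error")
tracker.start()
try:
    generator("Explain what an energy score for AI models is.", max_new_tokens=64)
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for the run

# codecarbon's EmissionsData record also carries the underlying energy estimate (kWh).
energy_kwh = tracker.final_emissions_data.energy_consumed
print(f"~{energy_kwh:.6f} kWh, ~{emissions_kg:.6f} kg CO2e for one generation")
```

Repeating such a measurement across tasks and models is, in spirit, what allows relative efficiency ratings like the one-to-five-star scores described above.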
Since the project was launched in February, a new tool has been added that compares the energy use of chatbot queries with everyday activities such as phone charging or driving, helping users understand the environmental impact of the tech they use daily.
The tech sector is aware that AI emissions put its climate commitments in danger. Neither Microsoft nor Google currently appears to be on track to meet its net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.
It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.
Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant, and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector accounted for 92 per cent of new clean energy purchases in the US.
But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.
As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.
Gen Z’s new side hustle: selling data
Article by Erica Pandey: “Many young people are more willing than their parents to share personal data, giving companies deeper insight into their lives.
Why it matters: Selling data is becoming the new selling plasma.
Case in point: Generation Lab, a youth polling company, is launching a new product, Verb.AI, today — betting that buying this data is the future of polling.
- “We think corporations have extracted user data without fairly compensating people for their own data,” says Cyrus Beschloss, CEO of Generation Lab. “We think users should know exactly what data they’re giving us and should feel good about what they’re receiving in return.”
How it works: Generation Lab offers people cash — $50 or more per month, depending on use and other factors — to download a tracker onto their phones.
- The product takes about 90 seconds to download, and once it’s on your phone, it tracks things like what you browse, what you buy, which streaming apps you use — all anonymously. There are also things it doesn’t track, like activity on your bank account.
- Verb then uses that data to create a digital twin of you that lives in a central database and knows your preferences…(More)”.
Public AI White Paper – A Public Alternative to Private AI Dominance
White paper by the Bertelsmann Stiftung and Open Future: “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges.
The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it…(More)”.
The EU’s AI Power Play: Between Deregulation and Innovation
Article by Raluca Csernatoni: “From the outset, the European Union (EU) has positioned itself as a trailblazer in AI governance with the world’s first comprehensive legal framework for AI systems in use, the AI Act. The EU’s approach to governing artificial intelligence (AI) has been characterized by a strong precautionary and ethics-driven philosophy. This ambitious regulation reflects the EU’s long-standing approach of prioritizing high ethical standards and fundamental rights in tech and digital policies—a strategy of fostering both excellence and trust in human-centric AI models. Yet, framed as essential to keep pace with U.S. and Chinese AI giants, the EU has recently taken a deregulatory turn that risks trading away democratic safeguards, without addressing systemic challenges to AI innovation.
The EU now stands at a crossroads: it can forge ahead with bold, home-grown AI innovation underpinned by robust regulation, or it can loosen its ethical guardrails, only to find itself stripped of both technological autonomy and regulatory sway. While Brussels's recent deregulatory turn is framed as a much-needed competitiveness boost, the real obstacles to Europe's digital renaissance lie elsewhere: persistent underfunding, siloed markets, and reliance on non-EU infrastructures…(More)”
From Software to Society — Openness in a changing world
Report by Henriette Litta and Peter Bihr: “…takes stock and looks to the future: What does openness mean in the digital age? Is the concept still up to date? The study traces the development of openness and analyses current challenges. It is based on interviews with experts and extensive literature research. The key insights at a glance are:
Give Openness a purpose. Especially in times of increasing injustice, surveillance and power monopolies, a clear framework for meaningful openness is needed, as this is often lacking. Companies market ‘open’ products without enabling co-creation. Political actors invoke openness without strengthening democratic control. This is particularly evident when dealing with AI. AI systems are complex and are often dominated by a few tech companies – which makes opening them up a fundamental challenge. Some of these companies also exploit their dominance in ways that can suppress dissenting opinions.
Protect Openness by adding guard rails. Those who demand openness must also be prepared to get involved in political disputes – against a market monopoly, for example. According to Litta and Bihr, this requires new licence models that include obligations to return and share, as well as stricter enforcement of antitrust law and data protection. Openness therefore needs rules…(More)”.
Federated learning for children’s data
Article by Roy Saurabh: “Across the world, governments are prioritizing the protection of citizens’ data – especially that of children. New laws, dedicated data protection authorities, and digital infrastructure initiatives reflect a growing recognition that data is not just an asset, but a foundation for public trust.
Yet a major challenge remains: how can governments use sensitive data to improve outcomes – such as in education – without undermining the very privacy protections they are committed to upholding?
One promising answer lies in federated, governance-aware approaches to data use. But realizing this potential requires more than new technology; it demands robust data governance frameworks designed from the outset.
Data governance: The missing link
In many countries, ministries of education, health, and social protection each hold pieces of the puzzle that together could provide a more complete picture of children’s learning and well-being. For example, a child’s school attendance, nutritional status, and family circumstances all shape their ability to thrive, yet these records are kept in separate systems.
Efforts to combine such data often run into legal and technical barriers. Centralized data lakes raise concerns about consent, security, and compliance with privacy laws. In fact, many international standards stress the principle of data minimization – the idea that personal information should not be gathered or combined unnecessarily.
This is where the right data governance frameworks become essential. Effective governance defines clear rules about how data can be accessed, shared, and used – specifying who has the authority, what purposes are permitted, and how rights are protected. These frameworks make it possible to collaborate with data responsibly, especially when it comes to children…(More)”
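To make the federated approach described above concrete, here is a minimal federated-averaging sketch in which each "ministry" trains on its own records and shares only model parameters, never child-level data; the silo names, features, and synthetic data are hypothetical and do not reflect any specific deployment.

```python
# Illustrative federated-averaging sketch: each "ministry" keeps its records
# locally and shares only model weights with the coordinator.
# Silo names, features, and data are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def make_local_data(n):
    # e.g. features: attendance rate, nutrition index; target: learning outcome
    X = rng.normal(size=(n, 2))
    y = X @ np.array([0.8, -0.5]) + 0.1 * rng.normal(size=n)
    return X, y

silos = {"education": make_local_data(200),
         "health": make_local_data(150),
         "social_protection": make_local_data(120)}

def local_update(w, X, y, lr=0.05, epochs=20):
    # Plain gradient descent on a local least-squares objective.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for _round in range(10):
    local_weights, sizes = [], []
    for name, (X, y) in silos.items():
        local_weights.append(local_update(w_global, X, y))  # runs inside the silo
        sizes.append(len(y))
    # Federated averaging: weight each silo's update by its sample size.
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("global model weights after federated averaging:", w_global)
```

Governance frameworks of the kind described above would sit around this loop, specifying who may trigger training rounds, which features are permitted, and how the resulting model may be used.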
Reimagining Data Governance for AI: Operationalizing Social Licensing for Data Reuse
Report by Stefaan Verhulst, Adam Zable, Andrew J. Zahuranec, and Peter Addo: “…introduces a practical, community-centered framework for governing data reuse in the development and deployment of artificial intelligence systems in low- and middle-income countries (LMICs). As AI increasingly relies on data from LMICs, affected communities are often excluded from decision-making and see little benefit from how their data is used. This report…reframes data governance through social licensing—a participatory model that empowers communities to collectively define, document, and enforce conditions for how their data is reused. It offers a step-by-step methodology and actionable tools, including a Social Licensing Questionnaire and adaptable contract clauses, alongside real-world scenarios and recommendations for enforcement, policy integration, and future research. This report recasts data governance as a collective, continuous process – shifting the focus from individual consent to community decision-making…(More)”.

Humanitarian aid depends on good data: what’s wrong with the way it’s collected
Article by Vicki Squire: “The defunding of the US Agency for International Development (USAID), along with reductions in aid from the UK and elsewhere, raises questions about the continued collection of data that helps inform humanitarian efforts.
Humanitarian response plans rely on accurate, accessible and up-to-date data. Aid organisations use this to review needs, monitor health and famine risks, and ensure security and access for humanitarian operations.
The reliance on data – and in particular large-scale digitalised data – has intensified in the humanitarian sector over the past few decades. Major donors all proclaim a commitment to evidence-based decision making. The International Organization for Migration’s Displacement Tracking Matrix and the REACH impact initiative are two examples designed to improve operational and strategic awareness of key needs and risks.
Humanitarian data streams have already been affected by USAID cuts. For example, the Famine Early Warning Systems Network was abruptly closed, while the Demographic and Health Surveys programme was “paused”. The latter informed global health policies in areas ranging from maternal health and domestic violence to anaemia and HIV prevalence.
The loss of reliable, accessible and up-to-date data threatens monitoring capacity and early warning systems, while reducing humanitarian access and rendering security failures more likely…(More)”.
How we think about protecting data
Article by Peter Dizikes: “How should personal data be protected? What are the best uses of it? In our networked world, questions about data privacy are ubiquitous and matter for companies, policymakers, and the public.
A new study by MIT researchers adds depth to the subject by suggesting that people’s views about privacy are not firmly fixed and can shift significantly, based on different circumstances and different uses of data.
“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist in MIT’s Senseable City Lab and co-author of a new paper outlining the results. “Depending on the application, people might feel use of their data is more or less invasive.”
The study is based on an experiment the researchers conducted in multiple countries using a newly developed game that elicits public valuations of data privacy relating to different topics and domains of life.
“We show that values attributed to data are combinatorial, situational, transactional, and contextual,” the researchers write.
The open-access paper, “Data Slots: tradeoffs between privacy concerns and benefits of data-driven solutions,” is published today in Humanities and Social Sciences Communications, a Nature Portfolio journal. The authors are Martina Mazzarello, a postdoc in the Senseable City Lab; Duarte; Simone Mora, a research scientist at Senseable City Lab; Cate Heine PhD ’24 of University College London; and Carlo Ratti, director of the Senseable City Lab.
The study is built around Data Slots, a card game with poker-style chips that the researchers created to study the issue. In it, players hold hands of cards with 12 types of data — such as a personal profile, health data, vehicle location information, and more — that relate to three domains where data are collected: home life, work, and public spaces. After exchanging cards, the players generate ideas for data uses, then assess and invest in some of those concepts. The game has been played in person in 18 different countries, with people from another 74 countries playing it online; over 2,000 individual player-rounds were included in the study…(More)”.