Energy and AI


Report by the International Energy Agency (IEA): “The development and uptake of artificial intelligence (AI) have accelerated in recent years – elevating the question of what widespread deployment of the technology will mean for the energy sector. There is no AI without energy – specifically electricity for data centres. At the same time, AI could transform how the energy industry operates if it is adopted at scale. However, until now, policy makers and other stakeholders have often lacked the tools to analyse both sides of this issue due to a lack of comprehensive data. 

This report from the International Energy Agency (IEA) aims to fill this gap based on new global and regional modelling and datasets, as well as extensive consultation with governments and regulators, the tech sector, the energy industry and international experts. It includes projections for how much electricity AI could consume over the next decade, as well as which energy sources are set to help meet that demand. It also analyses what the uptake of AI could mean for energy security, emissions, innovation and affordability…(More)”.

Data Sharing: A Case-Study of Luxury Surveillance by Tesla


Paper by Marc Schuilenburg and Yarin Eski: “Why do people voluntarily give away their personal data to private companies? In this paper, we show how data sharing is experienced at the level of Tesla car owners. We regard Tesla cars as luxury surveillance goods for which the drivers voluntarily choose to share their personal data with the US company. Based on an analysis of semi-structured interviews and observations of Tesla owners’ posts on Facebook groups, we discern three elements of luxury surveillance: socializing, enjoying and enduring. We conclude that luxury surveillance can be traced back to the social bonds created by a gift economy…(More)”.

The Future of Health Is Preventive — If We Get Data Governance Right


Article by Stefaan Verhulst: “After a long gestation period of three years, the European Health Data Space (EHDS) is now coming into effect across the European Union, potentially ushering in a new era of health data access, interoperability, and innovation. As this ambitious initiative enters the implementation phase, it brings with it the opportunity to fundamentally reshape how health systems across Europe operate. More generally, the EHDS contains important lessons (and some cautions) for the rest of the world, suggesting how a fragmented, reactive model of healthcare may transition to one that is more integrated, proactive, and prevention-oriented.

For too long, health systems–in the EU and around the world–have been built around treating diseases rather than preventing them. Now, we have an opportunity to change that paradigm. Data, and especially the advent of AI, give us the tools to predict and intervene before illness takes hold. Data offers the potential for a system that prioritizes prevention–one where individuals receive personalized guidance to stay healthy, policymakers access real-time evidence to address risks before they escalate, and epidemics are predicted weeks in advance, enabling proactive, rapid, and highly effective responses.

But to make AI-powered preventive health care a reality, and to make the EHDS a success, we need a new data governance approach, one that would include two key components:

  • The ability to reuse data collected for other purposes (e.g., mobility, retail sales, workplace trends) to improve health outcomes.
  • The ability to integrate different data sources–clinical records and electronic health records (EHRs), but also environmental, social, and economic data — to build a complete picture of health risks.

In what follows, we outline some critical aspects of this new governance framework, including responsible data access and reuse (so-called secondary use), moving beyond traditional consent models to a social license for reuse, data stewardship, and the need to prioritize high-impact applications. We conclude with some specific recommendations for the EHDS, built from the preceding general discussion about the role of AI and data in preventive health…(More)”.

Unlocking Public Value with Non-Traditional Data: Recent Use Cases and Emerging Trends


Article by Adam Zable and Stefaan Verhulst: “Non-Traditional Data (NTD)—digitally captured, mediated, or observed data such as mobile phone records, online transactions, or satellite imagery—is reshaping how we identify, understand, and respond to public interest challenges. As part of the Third Wave of Open Data, these often privately held datasets are being responsibly re-used through new governance models and cross-sector collaboration to generate public value at scale.

In our previous post, we shared emerging case studies across health, urban planning, the environment, and more. Several months later, the momentum has not only continued but diversified. New projects reaffirm NTD’s potential—especially when linked with traditional data, embedded in interdisciplinary research, and deployed in ways that are privacy-aware and impact-focused.

This update profiles recent initiatives that push the boundaries of what NTD can do. Together, they highlight the evolving domains where this type of data is helping to surface hidden inequities, improve decision-making, and build more responsive systems:

  • Financial Inclusion
  • Public Health and Well-Being
  • Socioeconomic Analysis
  • Transportation and Urban Mobility
  • Data Systems and Governance
  • Economic and Labor Dynamics
  • Digital Behavior and Communication…(More)”.

Fostering Open Data


Paper by Uri Y. Hacohen: “Data is often heralded as “the world’s most valuable resource,” yet its potential to benefit society remains unrealized due to systemic barriers in both public and private sectors. While open data – defined as data that is available, accessible, and usable – holds immense promise to advance open science, innovation, economic growth, and democratic values, its utilization is hindered by legal, technical, and organizational challenges. Public sector initiatives, such as U.S. and European Union open data regulations, face uneven enforcement and regulatory complexity, disproportionately affecting under-resourced stakeholders such as researchers. In the private sector, companies prioritize commercial interests and user privacy, often obstructing data openness through restrictive policies and technological barriers. This article proposes an innovative, four-layered policy framework to overcome these obstacles and foster data openness. The framework includes (1) improving open data infrastructures, (2) ensuring legal frameworks for open data, (3) incentivizing voluntary data sharing, and (4) imposing mandatory data sharing obligations. Each policy cluster is tailored to address sector-specific challenges and balance competing values such as privacy, property, and national security. Drawing from academic research and international case studies, the framework provides actionable solutions to transition from a siloed, proprietary data ecosystem to one that maximizes societal value. This comprehensive approach aims to reimagine data governance and unlock the transformative potential of open data…(More)”.

Trump Wants to Merge Government Data. Here Are 314 Things It Might Know About You.


Article by Emily Badger and Sheera Frenkel: “The federal government knows your mother’s maiden name and your bank account number. The student debt you hold. Your disability status. The company that employs you and the wages you earn there. And that’s just a start. It may also know your …and at least 263 more categories of data. These intimate details about the personal lives of people who live in the United States are held in disconnected data systems across the federal government — some at the Treasury, some at the Social Security Administration and some at the Department of Education, among other agencies.

The Trump administration is now trying to connect the dots of that disparate information. Last month, President Trump signed an executive order calling for the “consolidation” of these segregated records, raising the prospect of creating a kind of data trove about Americans that the government has never had before, and that members of the president’s own party have historically opposed.

The effort is being driven by Elon Musk, the world’s richest man, and his lieutenants with the Department of Government Efficiency, who have sought access to dozens of databases as they have swept through agencies across the federal government. Along the way, they have elbowed past the objections of career staff, data security protocols, national security experts and legal privacy protections…(More)”.

We Must Steward, Not Subjugate Nor Worship AI


Essay by Brian J. A. Boyd: “…How could stewardship of artificially living AI be pursued on a broader, even global, level? Here, the concept of “integral ecology” is helpful. Pope Francis uses the phrase to highlight the ways in which everything is connected, both through the web of life and in that social, political, and environmental challenges cannot be solved in isolation. The immediate need for stewardship over AI is to ensure that its demands for power and industrial production are addressed in a way that benefits those most in need, rather than de-prioritizing them further. For example, the energy requirements to develop tomorrow’s AI should spur research into small modular nuclear reactors and updated distribution systems, making energy abundant rather than causing regressive harms by driving up prices on an already overtaxed grid. More broadly, we will need to find the right institutional arrangements and incentive structures to make AI Amistics possible.

We are having a painfully overdue conversation about the nature and purpose of social media, and tech whistleblowers like Tristan Harris have offered grave warnings about how the “race to the bottom of the brain stem” is underway in AI as well. The AI equivalent of the addictive “infinite scroll” design feature of social media will likely be engagement with simulated friends — but we need not resign ourselves to it becoming part of our lives as did social media. And as there are proposals to switch from privately held Big Data to a public Data Commons, so perhaps could there be space for AI that is governed not for maximizing profit but for being sustainable as a common-pool resource, with applications and protocols ordered toward long-run benefit as defined by local communities…(More)”.

Data Localization: A Global Threat to Human Rights Online


Article by Freedom House: “From Pakistan to Zambia, governments around the world are increasingly proposing and passing data localization legislation. These laws, which set rules governing the storage and transfer of electronic data across jurisdictions, are often justified as addressing concerns such as user privacy, cybersecurity, national security, and monopolistic market practices. Notwithstanding these laudable goals, data localization initiatives cause more harm than good, especially in legal environments with poor rule of law.

Data localization requirements can take many different forms. A government may require all companies collecting and processing certain types of data about local users to store the data on servers located in the country. Authorities may also restrict the foreign transfer of certain types of data or allow it only under narrow circumstances, such as after obtaining the explicit consent of users, receiving a license or permit from a public authority, or conducting a privacy assessment of the country to which the data will be transferred.

While data localization can have significant economic and security implications, the focus of this piece—in line with that of the Global Network Initiative and Freedom House—is on its potential human rights impacts, which are varied. Freedom House’s research shows that the rise in data localization policies worldwide is contributing to the global decline of internet freedom. Without robust transparency and accountability frameworks embedded into these provisions, digital rights are often put on the line. As these types of legislation continue to pop up globally, the need for rights-respecting solutions and norms for cross-border data flows is greater than ever…(More)”.

Global data-driven prediction of fire activity


Paper by Francesca Di Giuseppe, Joe McNorton, Anna Lombardi & Fredrik Wetterhall: “Recent advancements in machine learning (ML) have expanded its potential use across scientific applications, including weather and hazard forecasting. The ability of these methods to extract information from diverse and novel data types enables the transition from forecasting fire weather to predicting actual fire activity. In this study we demonstrate that this shift is also feasible within an operational context. Traditional fire forecasting methods tend to overpredict high fire danger, particularly in fuel-limited biomes, often resulting in false alarms. By using data on fuel characteristics, ignitions and observed fire activity, data-driven predictions reduce the false-alarm rate of high-danger forecasts, enhancing their accuracy. This is made possible by high-quality global datasets of fuel evolution and fire detection. We find that the quality of input data is more important for improving forecasts than the complexity of the ML architecture. While the focus on ML advancements is often justified, our findings highlight the importance of investing in high-quality data and, where necessary, creating it through physical models. Neglecting this aspect would undermine the potential gains from ML-based approaches, emphasizing that data quality is essential to achieving meaningful progress in fire activity forecasting…(More)”.

Privacy-Enhancing and Privacy-Preserving Technologies in AI: Enabling Data Use and Operationalizing Privacy by Design and Default


Paper by the Centre for Information Policy Leadership at Hunton (“CIPL”): “provides an in-depth exploration of how privacy-enhancing technologies (“PETs”) are being deployed to address privacy within artificial intelligence (“AI”) systems. It aims to describe how these technologies can help operationalize privacy by design and default and serve as key business enablers, allowing companies and public sector organizations to access, share and use data that would otherwise be unavailable. It also seeks to demonstrate how PETs can address challenges and provide new opportunities across the AI life cycle, from data sourcing to model deployment, and includes real-world case studies…

As further detailed in the Paper, CIPL’s recommendations for boosting the adoption of PETs for AI are as follows:

Stakeholders should adopt a holistic view of the benefits of PETs in AI. PETs deliver value beyond addressing privacy and security concerns, such as fostering trust and enabling data sharing. It is crucial that stakeholders consider all these advantages when making decisions about their use.

Regulators should issue clearer and more practical guidance to reduce regulatory uncertainty in the use of PETs in AI. While regulators increasingly recognize the value of PETs, clearer and more practical guidance is needed to help organizations implement these technologies effectively.

Regulators should adopt a risk-based approach to assess how PETs can meet standards for data anonymization, providing clear guidance to eliminate uncertainty. It is currently unclear whether various PETs satisfy legal standards for data anonymization; a risk-based approach to defining those standards could encourage wider adoption of PETs.

Deployers should take steps to provide contextually appropriate transparency to customers and data subjects. Given the complexity of PETs, deployers should ensure customers and data subjects understand how PETs function within AI models…(More)”.