Global data-driven prediction of fire activity


Paper by Francesca Di Giuseppe, Joe McNorton, Anna Lombardi & Fredrik Wetterhall: “Recent advancements in machine learning (ML) have expanded its potential use across scientific applications, including weather and hazard forecasting. The ability of these methods to extract information from diverse and novel data types enables the transition from forecasting fire weather to predicting actual fire activity. In this study we demonstrate that this shift is also feasible within an operational context. Traditional fire forecasting methods tend to over-predict high fire danger, particularly in fuel-limited biomes, often resulting in false alarms. By using data on fuel characteristics, ignitions and observed fire activity, data-driven predictions reduce the false-alarm rate of high-danger forecasts, enhancing their accuracy. This is made possible by high-quality global datasets of fuel evolution and fire detection. We find that the quality of input data matters more for improving forecasts than the complexity of the ML architecture. While the focus on ML advancements is often justified, our findings highlight the importance of investing in high-quality data and, where necessary, creating it through physical models. Neglecting this aspect would undermine the potential gains from ML-based approaches, emphasizing that data quality is essential to achieving meaningful progress in fire activity forecasting…(More)”.
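The false-alarm reduction the authors report can be understood through the standard false alarm ratio used in forecast verification. The sketch below is purely illustrative (not code or data from the paper): it compares high-danger forecasts against observed fire activity.

```python
def false_alarm_ratio(forecast_high, observed_fire):
    """False alarm ratio: fraction of high-danger forecasts
    that were not verified by observed fire activity."""
    false_alarms = sum(1 for f, o in zip(forecast_high, observed_fire) if f and not o)
    hits = sum(1 for f, o in zip(forecast_high, observed_fire) if f and o)
    total = false_alarms + hits
    return false_alarms / total if total else 0.0

# Toy example: four high-danger forecasts, only one verified by fire
forecasts = [True, True, True, True, False]
observed = [True, False, False, False, False]
print(false_alarm_ratio(forecasts, observed))  # → 0.75
```

A data-driven system that suppresses high-danger forecasts in fuel-limited conditions lowers the false-alarm count while retaining hits, which is exactly the improvement the abstract describes.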

Developing countries are struggling to achieve their technology aims. Shared digital infrastructure is the answer


Article by Nii Simmonds: “The digital era offers remarkable prospects for both economic advancement and social development. Yet for emerging economies facing energy constraints, this potential often seems out of reach. The harsh truths of inconsistent electricity supply and scarce resources loom large over their digital ambitions. Nevertheless, a ray of hope shines through a strategy I call shared digital infrastructure (SDI). This cooperative model can turn these obstacles into opportunities for growth. By collaborating through regional country partnerships and bodies such as the Association of Southeast Asian Nations (ASEAN), the African Union (AU) and the Caribbean Community (CARICOM), these countries can harness the revolutionary power of digital technology, despite the challenges.

The digital economy is a critical driver of global GDP, with innovations in artificial intelligence, e-commerce and financial technology transforming industries at an unprecedented pace. At the heart of this transformation are data centres, which serve as the backbone of digital services, cloud computing and AI-driven applications. Yet many developing nations struggle to establish and maintain such facilities due to high energy costs, inadequate grid reliability and limited investment capital…(More)”.

Privacy-Enhancing and Privacy-Preserving Technologies in AI: Enabling Data Use and Operationalizing Privacy by Design and Default


Paper by the Centre for Information Policy Leadership at Hunton (“CIPL”): “provides an in-depth exploration of how privacy-enhancing technologies (“PETs”) are being deployed to address privacy within artificial intelligence (“AI”) systems. It aims to describe how these technologies can help operationalize privacy by design and default and serve as key business enablers, allowing companies and public sector organizations to access, share and use data that would otherwise be unavailable. It also seeks to demonstrate how PETs can address challenges and provide new opportunities across the AI life cycle, from data sourcing to model deployment, and includes real-world case studies…

As further detailed in the Paper, CIPL’s recommendations for boosting the adoption of PETs for AI are as follows:

Stakeholders should adopt a holistic view of the benefits of PETs in AI. PETs deliver value beyond addressing privacy and security concerns, such as fostering trust and enabling data sharing. It is crucial that stakeholders consider all these advantages when making decisions about their use.

Regulators should issue clearer and more practical guidance to reduce regulatory uncertainty in the use of PETs in AI. While regulators increasingly recognize the value of PETs, clearer and more practical guidance is needed to help organizations implement these technologies effectively.

Regulators should adopt a risk-based approach to assess how PETs can meet standards for data anonymization, providing clear guidance to eliminate uncertainty. There is uncertainty around whether various PETs meet legal standards for data anonymization. A risk-based approach to defining anonymization standards could encourage wider adoption of PETs.

Deployers should take steps to provide contextually appropriate transparency to customers and data subjects. Given the complexity of PETs, deployers should ensure customers and data subjects understand how PETs function within AI models…(More)”.
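CIPL's paper surveys PETs broadly; as one concrete illustration (not drawn from the paper itself), differential privacy is a widely deployed PET that releases aggregate statistics with calibrated noise so no individual record can be singled out. The sketch below implements the Laplace mechanism for a simple count query:

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1, so Laplace noise with
    scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) via the inverse CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Example: privately count users under 30 in a toy dataset.
ages = [17, 24, 29, 35, 41, 52, 63]
print(dp_count(ages, lambda a: a < 30, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; repeated queries consume a privacy budget that production deployments track explicitly. This is the kind of trade-off the paper's risk-based anonymization recommendation asks regulators to clarify.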

LLM Social Simulations Are a Promising Research Method


Paper by Jacy Reese Anthis et al: “Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted these methods. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a literature survey of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions with prompting, fine-tuning, and complementary methods. We believe that LLM social simulations can already be used for exploratory research, such as pilot experiments for psychology, economics, sociology, and marketing. More widespread use may soon be possible with rapidly advancing LLM capabilities, and researchers should prioritize developing conceptual models and evaluations that can be iteratively deployed and refined at pace with ongoing AI advances…(More)”.
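Such simulations typically begin by conditioning the model on a persona before posing the study question. The sketch below is a hypothetical construction (the field names and wording are illustrative, not taken from the paper) showing how a simulated respondent prompt might be composed before being sent to an LLM API:

```python
def build_subject_prompt(persona: dict, question: str) -> str:
    """Compose a prompt casting an LLM as a simulated survey respondent.

    Persona fields here are hypothetical placeholders for whatever
    demographic or attitudinal variables a study conditions on.
    """
    profile = "; ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        "You are participating in a social-science survey. "
        f"Answer in character as a person with this profile: {profile}.\n"
        f"Question: {question}\n"
        "Answer briefly and plausibly."
    )

prompt = build_subject_prompt(
    {"age": 34, "occupation": "teacher", "country": "Brazil"},
    "How often do you shop online?",
)
print(prompt)
```

The paper's open challenges (e.g., validating responses against human data and refining such conditioning via fine-tuning) all operate downstream of a template like this.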

Situating Digital Self-Determination (DSD): A Comparison with Existing and Emerging Digital and Data Governance Approaches


Paper by Sara Marcucci and Stefaan Verhulst: “In today’s increasingly complex digital landscape, traditional data governance models (such as consent-based, ownership-based, and sovereignty-based approaches) are proving insufficient to address the evolving ethical, social, and political dimensions of data use. These frameworks, often grounded in static and individualistic notions of control, struggle to keep pace with the fluidity and relational nature of contemporary data ecosystems. This paper proposes Digital Self-Determination (DSD) as a complementary and necessary evolution of existing models, offering a more participatory, adaptive, and ethically grounded approach to data governance. Centering ongoing agency, collective participation, and contextual responsiveness, DSD builds on foundational principles of consent and control while addressing their limitations. Drawing on comparisons with a range of governance models, including risk-based, compliance-oriented, principles-driven, and justice-centered frameworks, this paper highlights DSD’s unique contribution: its capacity to enable individuals and communities to actively shape how data about them is used, shared, and governed over time. In doing so, it reimagines data governance as a living, co-constructed practice grounded in trust, accountability, and care. Through this lens, the paper offers a framework for comparing different governance approaches and embedding DSD into existing paradigms, inviting policymakers and practitioners to consider how more inclusive and responsive forms of digital governance might be realized…(More)”.

Engaging Youth on Responsible Data Reuse: 5 Lessons Learnt from a Multi-Country Experiment


Article by Elena Murray, Moiz Shaikh and Stefaan G. Verhulst: “Young people seeking essential services — like mental health care, education, or public benefits — are often asked to share personal data in order to access the service, without having any say in how it is being collected, shared or used, or why. If young people distrust how their data is being used, they may avoid services or withhold important information, fearing misuse. This can unintentionally widen the very gaps these services aim to close.

To build trust, service providers and policymakers must involve young people in co-designing how their data is collected and used. Understanding their concerns, values, and expectations is key to developing data practices that reflect their needs. Empowering young people to set the conditions for data reuse and to design solutions to their concerns enables digital self-determination.

The question is then: what does meaningful engagement actually look like — and how can we get it right?

To answer that question, we engaged four partners in four different countries and conducted:

  • 1000 hours of youth participation, involving more than 70 young people.
  • 12 youth engagement events.
  • Six expert talks and mentorship sessions.

These activities were undertaken as part of the NextGenData project, a year-long global collaboration supported by the Botnar Foundation that piloted a methodology for youth engagement on responsible data reuse in Moldova, Tanzania, India and Kyrgyzstan.

A key outcome of our work was a youth engagement methodology, which we recently launched. Below, we reflect on what we learnt — and how we can apply these learnings to ensure that the future of data-driven services both serves the needs of, and is guided by, young people.

Lessons Learnt:…(More)”

A graph illustrating the engagement cycle on data literacy: Foster Data Literacy, Develop Real-World Use Cases, Align with Local Realities, Optimise Participation, Implement Scalable Methodologies
A Cycle for Youth Engagement on Data — NextGenData Project

AI Liability Along the Value Chain


Report by Beatriz Botero Arcila: “…explores how liability law can help solve the “problem of many hands” in AI: that is, determining who is responsible for harm arising in a value chain in which many different companies and actors may contribute to the development of any given AI system. This is aggravated by the fact that AI systems are both opaque and technically complex, making their behavior hard to predict.

Why AI Liability Matters

To find meaningful solutions to this problem, different kinds of experts have to come together. This resource is designed for a wide audience, but we indicate how specific audiences can best make use of different sections, overviews, and case studies.

Specifically, the report:

  • Proposes a three-step analysis for allocating liability along the value chain: 1) the choice of liability regime, 2) how liability should be shared among actors along the value chain, and 3) whether and how information asymmetries will be addressed.
  • Argues that where ex-ante AI regulation is already in place, policymakers should consider how liability rules will interact with these rules.
  • Proposes a baseline liability regime where actors along the AI value chain share responsibility if fault can be demonstrated, paired with measures to alleviate or shift the burden of proof and to enable better access to evidence — which would incentivize companies to act with sufficient care and address information asymmetries between claimants and companies.
  • Argues that in some cases, courts and regulators should extend a stricter regime, such as product liability or strict liability.
  • Analyzes liability rules in the EU based on this framework…(More)”.

Europe’s GDPR privacy law is headed for red tape bonfire within ‘weeks’


Article by Ellen O’Regan: “Europe’s most famous technology law, the GDPR, is next on the hit list as the European Union pushes ahead with its regulatory killing spree to slash laws it reckons are weighing down its businesses.

The European Commission plans to present a proposal to cut back the General Data Protection Regulation, or GDPR for short, in the next couple of weeks. Slashing regulation is a key focus for Commission President Ursula von der Leyen, as part of an attempt to make businesses in Europe more competitive with rivals in the United States, China and elsewhere. 

The EU’s executive arm has already unveiled packages to simplify rules around sustainability reporting and accessing EU investment. The aim is for companies to waste less time and money on complying with complex legal and regulatory requirements imposed by EU laws…Seven years later, Brussels is taking out the scissors to give its (in)famous privacy law a trim.

There are “a lot of good things about GDPR, [and] privacy is completely necessary. But we don’t need to regulate in a stupid way. We need to make it easy for businesses and for companies to comply,” Danish Digital Minister Caroline Stage Olsen told reporters last week. Denmark will chair the work in the EU Council in the second half of 2025 as part of its rotating presidency.

The criticism of the GDPR echoes the views of former Italian Prime Minister Mario Draghi, who released a landmark economic report last September warning that Europe’s complex laws were preventing its economy from catching up with the United States and China. “The EU’s regulatory stance towards tech companies hampers innovation,” Draghi wrote, singling out the Artificial Intelligence Act and the GDPR…(More)”.

Digital Technologies and Participatory Governance in Local Settings: Comparing Digital Civic Engagement Initiatives During the COVID-19 Outbreak


Chapter by Nathalie Colasanti, Chiara Fantauzzi, Rocco Frondizi & Noemi Rossi: “Governance paradigms have undergone a deep transformation during the COVID-19 pandemic, necessitating agile, inclusive, and responsive mechanisms to address evolving challenges. Participatory governance has emerged as a guiding principle, emphasizing inclusive decision-making processes and collaboration among diverse stakeholders. In the outbreak context, digital technologies have played a crucial role in enabling participatory governance to flourish, democratizing participation, and facilitating the rapid dissemination of accurate information. These technologies have also empowered grassroots initiatives, such as civic hacking, to address societal challenges and mobilize communities for collective action. This study delves into the realm of bottom-up participatory initiatives at the local level, focusing on two emblematic cases of civic hacking experiences launched during the pandemic, the first in Wuhan, China, and the second in Italy. Through a comparative lens, drawing upon secondary sources, the aim is to analyze the dynamics, efficacy, and implications of these initiatives, shedding light on the evolving landscape of participatory governance in times of crisis. Findings underline the transformative potential of civic hacking and participatory governance in crisis response, highlighting the importance of collaboration, transparency, and inclusivity…(More)”.

The Measure of Progress: Counting What Really Matters


Book by Diane Coyle: “The ways that statisticians and governments measure the economy were developed in the 1940s, when the urgent economic problems were entirely different from those of today. In The Measure of Progress, Diane Coyle argues that the framework underpinning today’s economic statistics is so outdated that it functions as a distorting lens, or even a set of blinkers. When policymakers rely on such an antiquated conceptual tool, how can they measure, understand, and respond with any precision to what is happening in today’s digital economy? Coyle makes the case for a new framework, one that takes into consideration current economic realities.

Coyle explains why economic statistics matter. They are essential for guiding better economic policies; they involve questions of freedom, justice, life, and death. Governments use statistics that affect people’s lives in ways large and small. The metrics for economic growth were developed when a lack of physical rather than natural capital was the binding constraint on growth, intangible value was less important, and the pressing economic policy challenge was managing demand rather than supply. Today’s challenges are different. Growth in living standards in rich economies has slowed, despite remarkable innovation, particularly in digital technologies. As a result, politics is contentious and democracy strained.

Coyle argues that to understand the current economy, we need different data collected in a different framework of categories and definitions, and she offers some suggestions about what this would entail. Only with a new approach to measurement will we be able to achieve the right kind of growth for the benefit of all…(More)”.