The EU’s AI Power Play: Between Deregulation and Innovation


Article by Raluca Csernatoni: “From the outset, the European Union (EU) has positioned itself as a trailblazer in AI governance with the world’s first comprehensive legal framework for AI systems in use, the AI Act. The EU’s approach to governing artificial intelligence (AI) has been characterized by a strong precautionary and ethics-driven philosophy. This ambitious regulation reflects the EU’s long-standing approach of prioritizing high ethical standards and fundamental rights in tech and digital policies—a strategy of fostering both excellence and trust in human-centric AI models. Yet the EU has recently taken a deregulatory turn, framed as essential to keep pace with U.S. and Chinese AI giants, that risks trading away democratic safeguards without addressing the systemic challenges to AI innovation.

The EU now stands at a crossroads: it can forge ahead with bold, home-grown AI innovation underpinned by robust regulation, or it can loosen its ethical guardrails, only to find itself stripped of both technological autonomy and regulatory sway. While Brussels’s recent deregulatory turn is framed as a much-needed competitiveness boost, the real obstacles to Europe’s digital renaissance lie elsewhere: persistent underfunding, siloed markets, and reliance on non-EU infrastructures…(More)”

From Software to Society — Openness in a changing world


Report by Henriette Litta and Peter Bihr: “…takes stock and looks to the future: What does openness mean in the digital age? Is the concept still up to date? The study traces the development of openness and analyses current challenges. It is based on interviews with experts and extensive literature research. The key insights at a glance are:

Give Openness a purpose. Especially in times of increasing injustice, surveillance and power monopolies, a clear framework for meaningful openness is needed, as this is often lacking. Companies market ‘open’ products without enabling co-creation. Political actors invoke openness without strengthening democratic control. This is particularly evident when dealing with AI. AI systems are complex and often dominated by a few tech companies – which makes opening them up a fundamental challenge. Some of these companies also exploit their dominance, which can lead to the censorship of dissenting opinions.

Protect Openness by adding guard rails. Those who demand openness must also be prepared to get involved in political disputes – against a market monopoly, for example. According to Litta and Bihr, this requires new licence models that include obligations to return and share, as well as stricter enforcement of antitrust law and data protection. Openness therefore needs rules…(More)”.

Federated learning for children’s data


Article by Roy Saurabh: “Across the world, governments are prioritizing the protection of citizens’ data – especially that of children. New laws, dedicated data protection authorities, and digital infrastructure initiatives reflect a growing recognition that data is not just an asset, but a foundation for public trust. 

Yet a major challenge remains: how can governments use sensitive data to improve outcomes – such as in education – without undermining the very privacy protections they are committed to upholding?

One promising answer lies in federated, governance-aware approaches to data use. But realizing this potential requires more than new technology; it demands robust data governance frameworks designed from the outset.

Data governance: The missing link

In many countries, ministries of education, health, and social protection each hold pieces of the puzzle that together could provide a more complete picture of children’s learning and well-being. For example, a child’s school attendance, nutritional status, and family circumstances all shape their ability to thrive, yet these records are kept in separate systems.

Efforts to combine such data often run into legal and technical barriers. Centralized data lakes raise concerns about consent, security, and compliance with privacy laws. In fact, many international standards stress the principle of data minimization – the idea that personal information should not be gathered or combined unnecessarily. 
This is where the right data governance frameworks become essential. Effective governance defines clear rules about how data can be accessed, shared, and used – specifying who has the authority, what purposes are permitted, and how rights are protected. These frameworks make it possible to collaborate with data responsibly, especially when it comes to children…(More)”
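The federated pattern Saurabh describes (each silo trains locally and only model parameters, never raw records, leave the institution) can be sketched in a few lines. The following is a minimal illustration of federated averaging, not the architecture from the article; the three "ministries" and their synthetic records are invented for the example.

```python
# Minimal sketch of federated averaging: each data holder trains on its own
# records and shares only model weights with a coordinator, so raw records
# never leave the silo. The ministries and data below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, X, y, lr=0.1, epochs=20):
    # One silo's local training: gradient descent on a linear model.
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Three hypothetical ministries, each holding a private slice of records.
true_w = np.array([1.0, -2.0, 0.5])  # ground truth for the synthetic data
silos = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    silos.append((X, y))

w_global = np.zeros(3)
for _ in range(10):  # federated rounds
    # Each silo trains locally; only the updated weights leave the silo.
    local_ws = [local_update(w_global, X, y) for X, y in silos]
    # The coordinator averages parameters, weighted by local sample counts.
    sizes = [len(y) for _, y in silos]
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(np.round(w_global, 2))
```

In this toy setup the averaged model converges to roughly the same parameters a centralized regression would find, which is the point: analytical value is pooled while the underlying records stay under each ministry's governance rules. Real deployments add secure aggregation, differential privacy, and the access controls the article emphasizes.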

Reimagining Data Governance for AI: Operationalizing Social Licensing for Data Reuse


Report by Stefaan Verhulst, Adam Zable, Andrew J. Zahuranec, and Peter Addo: “…introduces a practical, community-centered framework for governing data reuse in the development and deployment of artificial intelligence systems in low- and middle-income countries (LMICs). As AI increasingly relies on data from LMICs, affected communities are often excluded from decision-making and see little benefit from how their data is used. This report…reframes data governance through social licensing—a participatory model that empowers communities to collectively define, document, and enforce conditions for how their data is reused. It offers a step-by-step methodology and actionable tools, including a Social Licensing Questionnaire and adaptable contract clauses, alongside real-world scenarios and recommendations for enforcement, policy integration, and future research. This report recasts data governance as a collective, continuous process – shifting the focus from individual consent to community decision-making…(More)”.

Humanitarian aid depends on good data: what’s wrong with the way it’s collected


Article by Vicki Squire: “The defunding of the US Agency for International Development (USAID), along with reductions in aid from the UK and elsewhere, raises questions about the continued collection of data that helps inform humanitarian efforts.

Humanitarian response plans rely on accurate, accessible and up-to-date data. Aid organisations use this to review needs, monitor health and famine risks, and ensure security and access for humanitarian operations.

The reliance on data – and in particular large-scale digitalised data – has intensified in the humanitarian sector over the past few decades. Major donors all proclaim a commitment to evidence-based decision making. The International Organization for Migration’s Displacement Tracking Matrix and the REACH impact initiative are two examples designed to improve operational and strategic awareness of key needs and risks.

Humanitarian data streams have already been affected by USAID cuts. For example, the Famine Early Warning Systems Network was abruptly closed, while the Demographic and Health Surveys programme was “paused”. The latter informed global health policies in areas ranging from maternal health and domestic violence to anaemia and HIV prevalence.

The loss of reliable, accessible and up-to-date data threatens monitoring capacity and early warning systems, while reducing humanitarian access and rendering security failures more likely…(More)”.

How we think about protecting data


Article by Peter Dizikes: “How should personal data be protected? What are the best uses of it? In our networked world, questions about data privacy are ubiquitous and matter for companies, policymakers, and the public.

A new study by MIT researchers adds depth to the subject by suggesting that people’s views about privacy are not firmly fixed and can shift significantly, based on different circumstances and different uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist in MIT’s Senseable City Lab and co-author of a new paper outlining the results. “Depending on the application, people might feel use of their data is more or less invasive.”

The study is based on an experiment the researchers conducted in multiple countries using a newly developed game that elicits public valuations of data privacy relating to different topics and domains of life.

“We show that values attributed to data are combinatorial, situational, transactional, and contextual,” the researchers write.

The open-access paper, “Data Slots: tradeoffs between privacy concerns and benefits of data-driven solutions,” is published today in Nature: Humanities and Social Sciences Communications. The authors are Martina Mazzarello, a postdoc in the Senseable City Lab; Duarte; Simone Mora, a research scientist at Senseable City Lab; Cate Heine PhD ’24 of University College London; and Carlo Ratti, director of the Senseable City Lab.

The study is based around a card game with poker-type chips the researchers created to study the issue, called Data Slots. In it, players hold hands of cards with 12 types of data — such as a personal profile, health data, vehicle location information, and more — that relate to three types of domains where data are collected: home life, work, and public spaces. After exchanging cards, the players generate ideas for data uses, then assess and invest in some of those concepts. The game has been played in person in 18 countries, with people from another 74 countries playing it online; over 2,000 individual player-rounds were included in the study…(More)”.

Farmers win legal fight to bring climate resources back to federal websites


Article by Justine Calma: “After farmers filed suit, the US Department of Agriculture (USDA) has agreed to restore climate information to webpages it took down soon after President Donald Trump took office this year.

The US Department of Justice filed a letter late last night on behalf of the USDA that says the agency “will restore the climate-change-related web content that was removed post-inauguration, including all USDA webpages and interactive tools” that were named in the plaintiffs’ complaint. It says the work is already “underway” and should be mostly done in about two weeks.

If the Trump administration fulfills that commitment, it’ll be a significant victory for farmers and other Americans who rely on scientific data that has disappeared from federal websites since January…(More)”.

Leading, not lagging: Africa’s gen AI opportunity


Article by Mayowa Kuyoro and Umar Bagus: “The rapid rise of gen AI has captured the world’s imagination and accelerated the integration of AI into the global economy and the lives of people across the world. Gen AI heralds a step change in productivity. As institutions apply AI in novel ways, beyond the advanced analytics and machine learning (ML) applications of the past ten years, the global economy could increase significantly, improving the lives and livelihoods of millions.

Nowhere is this truer than in Africa, a continent that has already demonstrated its ability to use technology to leapfrog traditional development pathways; for example, mobile technology overcoming the fixed-line internet gap, mobile payments in Kenya, and numerous African institutions making the leap to cloud faster than their peers in developed markets. Africa has been quick on the uptake with gen AI, too, with many unique and ingenious applications and deployments well underway…

Across McKinsey’s client service work in Africa, many institutions have tested and deployed AI solutions. Our research has found that more than 40 percent of institutions have either started to experiment with gen AI or have already implemented significant solutions (see sidebar “About the research inputs”). However, the continent has so far only scratched the surface of what is possible, with both AI and gen AI. If institutions can address barriers and focus on building for scale, our analysis suggests African economies could unlock up to $100 billion in annual economic value across multiple sectors from gen AI alone. That is in addition to the still-untapped potential from traditional AI and ML in many sectors today—the combined traditional AI and gen AI total is more than double what gen AI can unlock on its own, with traditional AI making up at least 60 percent of the value…(More)”

New data tools enhance the development effectiveness of tourism investment


Article by Louise Twining-Ward, Alex Pio and Alba Suris Coll-Vinent: “The tourism sector is a major driver of economic growth and inclusive job creation. Tourism generates a high number of jobs, especially for women (UN Tourism). In 2024, tourism was responsible for one in ten jobs worldwide, delivering 337.7 million total jobs, and accounted for 10.5 percent of global GDP. For many developing countries, it is a primary generator of foreign exchange.

The growth of this vital sector depends heavily on public investment in infrastructure and services. But rapid change, due to uncertain geopolitics, climate shocks, and shifting consumer behavior, can make it hard to know how best to spend scarce resources. Traditional data sources are unable to keep up, leaving policymakers without the timely insights needed to effectively manage mounting complexities. Only a few developing countries collect and maintain tourism satellite accounts (TSAs), which help capture tourism’s contribution to their economies. But even in these countries, tourist arrival and spending data, gathered through immigration records and visitor surveys, are often processed with a lag. There is an urgent need for more accessible, more granular, and more timely data tools.

Emerging Data Tools

For this reason, the World Bank partnered with Visa to access anonymized and aggregated credit card spend data in the Caribbean and attempt to fill data gaps. This and other emerging tools for policymaking—such as satellite and geospatial mapping, analysis of online reviews, artificial intelligence, and advanced analytics—now allow tourism destinations to take a closer look at local demand patterns, gauge visitor satisfaction in near-real time, and measure progress on everything from carbon footprints to women’s employment in tourism…(More)”.

The Right to AI


Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.