Humanitarian aid depends on good data: what’s wrong with the way it’s collected


Article by Vicki Squire: “The defunding of the US Agency for International Development (USAID), along with reductions in aid from the UK and elsewhere, raises questions about the continued collection of data that helps inform humanitarian efforts.

Humanitarian response plans rely on accurate, accessible and up-to-date data. Aid organisations use this to review needs, monitor health and famine risks, and ensure security and access for humanitarian operations.

The reliance on data – and in particular large-scale digitalised data – has intensified in the humanitarian sector over the past few decades. Major donors all proclaim a commitment to evidence-based decision making. The International Organization for Migration’s Displacement Tracking Matrix and the REACH initiative led by IMPACT Initiatives are two examples designed to improve operational and strategic awareness of key needs and risks.

Humanitarian data streams have already been affected by USAID cuts. For example, the Famine Early Warning Systems Network was abruptly closed, while the Demographic and Health Surveys programme was “paused”. The latter informed global health policies in areas ranging from maternal health and domestic violence to anaemia and HIV prevalence.

The loss of reliable, accessible and up-to-date data threatens monitoring capacity and early warning systems, while reducing humanitarian access and rendering security failures more likely…(More)”.

How we think about protecting data


Article by Peter Dizikes: “How should personal data be protected? What are the best uses of it? In our networked world, questions about data privacy are ubiquitous and matter for companies, policymakers, and the public.

A new study by MIT researchers adds depth to the subject by suggesting that people’s views about privacy are not firmly fixed and can shift significantly, based on different circumstances and different uses of data.

“There is no absolute value in privacy,” says Fabio Duarte, principal research scientist in MIT’s Senseable City Lab and co-author of a new paper outlining the results. “Depending on the application, people might feel use of their data is more or less invasive.”

The study is based on an experiment the researchers conducted in multiple countries using a newly developed game that elicits public valuations of data privacy relating to different topics and domains of life.

“We show that values attributed to data are combinatorial, situational, transactional, and contextual,” the researchers write.

The open-access paper, “Data Slots: tradeoffs between privacy concerns and benefits of data-driven solutions,” is published today in Humanities and Social Sciences Communications, a Nature Portfolio journal. The authors are Martina Mazzarello, a postdoc in the Senseable City Lab; Duarte; Simone Mora, a research scientist at Senseable City Lab; Cate Heine PhD ’24 of University College London; and Carlo Ratti, director of the Senseable City Lab.

The study is built around Data Slots, a card game with poker-style chips that the researchers created to study the issue. In it, players hold hands of cards with 12 types of data — such as a personal profile, health data, and vehicle location information — that relate to three domains where data are collected: home life, work, and public spaces. After exchanging cards, the players generate ideas for data uses, then assess and invest in some of those concepts. The game has been played in person in 18 different countries, with people from another 74 countries playing it online; over 2,000 individual player-rounds were included in the study…(More)”.
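For readers who want the game mechanics concrete, here is a minimal Python sketch of the deck and the exchange phase as the excerpt describes them; the three named data types come from the excerpt, while the remaining type names, the hand size, and the swap rule are hypothetical stand-ins rather than details from the paper.

```python
import random

# The three domains where data are collected, per the study.
DOMAINS = ["home life", "work", "public spaces"]

# The game uses 12 data types; the excerpt names three of them,
# and the rest here are hypothetical placeholders.
DATA_TYPES = [
    "personal profile", "health data", "vehicle location",
] + [f"data type {i}" for i in range(4, 13)]

def deal_hands(num_players, hand_size=3):
    """Deal each player a hand of (data type, domain) cards."""
    deck = [(dtype, domain) for dtype in DATA_TYPES for domain in DOMAINS]
    random.shuffle(deck)
    return [[deck.pop() for _ in range(hand_size)] for _ in range(num_players)]

def exchange(hands, a, b, card=0):
    """Two players swap one card, as in the game's exchange phase."""
    hands[a][card], hands[b][card] = hands[b][card], hands[a][card]

hands = deal_hands(num_players=4)
exchange(hands, 0, 1)
for i, hand in enumerate(hands):
    print(f"player {i}: {hand}")
```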

Farmers win legal fight to bring climate resources back to federal websites


Article by Justine Calma: “After farmers filed suit, the US Department of Agriculture (USDA) has agreed to restore climate information to webpages it took down soon after President Donald Trump took office this year.

The US Department of Justice filed a letter late last night on behalf of the USDA that says the agency “will restore the climate-change-related web content that was removed post-inauguration, including all USDA webpages and interactive tools” that were named in the plaintiffs’ complaint. It says the work is already “underway” and should be mostly done in about two weeks.

If the Trump administration fulfills that commitment, it’ll be a significant victory for farmers and other Americans who rely on scientific data that has disappeared from federal websites since January…(More)”.

Leading, not lagging: Africa’s gen AI opportunity


Article by Mayowa Kuyoro and Umar Bagus: “The rapid rise of gen AI has captured the world’s imagination and accelerated the integration of AI into the global economy and the lives of people across the world. Gen AI heralds a step change in productivity. As institutions apply AI in novel ways, beyond the advanced analytics and machine learning (ML) applications of the past ten years, the global economy could grow significantly, improving the lives and livelihoods of millions.

Nowhere is this truer than in Africa, a continent that has already demonstrated its ability to use technology to leapfrog traditional development pathways: for example, mobile technology overcoming the fixed-line internet gap, mobile payments in Kenya, and numerous African institutions making the leap to cloud faster than their peers in developed markets. Africa has been quick on the uptake with gen AI, too, with many unique and ingenious applications and deployments well underway.

Across McKinsey’s client service work in Africa, many institutions have tested and deployed AI solutions. Our research has found that more than 40 percent of institutions have either started to experiment with gen AI or have already implemented significant solutions (see sidebar “About the research inputs”). However, the continent has so far only scratched the surface of what is possible, with both AI and gen AI. If institutions can address barriers and focus on building for scale, our analysis suggests African economies could unlock up to $100 billion in annual economic value across multiple sectors from gen AI alone. That is in addition to the still-untapped potential from traditional AI and ML in many sectors today—the combined traditional AI and gen AI total is more than double what gen AI can unlock on its own, with traditional AI making up at least 60 percent of the value…(More)”.
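As a rough consistency check on those figures, here is a back-of-the-envelope sketch of the stated percentages (illustrative arithmetic only, not McKinsey’s model):

```python
# Quoted figures: up to $100B/year from gen AI alone, and traditional
# AI/ML making up at least 60 percent of the combined total.
gen_ai_value = 100e9
traditional_share_min = 0.60

# If gen AI is at most 40% of the combined value, the combined total
# implied by the quoted shares is at least:
combined_min = gen_ai_value / (1 - traditional_share_min)   # $250B
traditional_min = combined_min * traditional_share_min      # $150B

# Consistent with the weaker claim that the combined total is
# "more than double" what gen AI can unlock on its own.
assert combined_min > 2 * gen_ai_value

print(f"combined >= ${combined_min / 1e9:.0f}B per year; "
      f"traditional AI/ML >= ${traditional_min / 1e9:.0f}B of that")
```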

New data tools enhance the development effectiveness of tourism investment


Article by Louise Twining-Ward, Alex Pio and Alba Suris Coll-Vinent: “The tourism sector is a major driver of economic growth and inclusive job creation. Tourism generates a high number of jobs, especially for women (UN Tourism). In 2024, tourism was responsible for one in ten jobs worldwide, delivering 337.7 million total jobs, and accounted for 10.5 percent of global GDP. For many developing countries, it is a primary generator of foreign exchange.

The growth of this vital sector depends heavily on public investment in infrastructure and services. But rapid change, due to uncertain geopolitics, climate shocks, and shifting consumer behavior, can make it hard to know how best to spend scarce resources. Traditional data sources are unable to keep up, leaving policymakers without the timely insights needed to effectively manage mounting complexities. Only a few developing countries collect and maintain tourism satellite accounts (TSAs), which help capture tourism’s contribution to their economies. However, even in these countries, tourist arrival and spending data, gathered through immigration records and visitor surveys, are often processed with a lag. There is an urgent need for more accessible, more granular, and more timely data tools.

Emerging Data Tools

For this reason, the World Bank partnered with Visa to access anonymized and aggregated credit card spend data in the Caribbean and attempt to fill data gaps. This and other emerging tools for policymaking—such as satellite and geospatial mapping, analysis of online reviews, artificial intelligence, and advanced analytics—now allow tourism destinations to take a closer look at local demand patterns, gauge visitor satisfaction in near-real time, and measure progress on everything from carbon footprints to women’s employment in tourism…(More)”.

The Right to AI


Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.

Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt


Paper by Joel Z. Leibo et al.: “Artificial Intelligence (AI) systems are increasingly placed in positions where their decisions have real consequences, e.g., moderating online spaces, conducting research, and advising on policy. Ensuring they operate in a safe and ethically acceptable fashion is thus critical. However, most solutions have been a form of one-size-fits-all “alignment”. We are worried that such systems, which overlook enduring moral diversity, will spark resistance, erode trust, and destabilize our institutions. This paper traces the underlying problem to an often-unstated Axiom of Rational Convergence: the idea that under ideal conditions, rational agents will converge in the limit of conversation on a single ethics. Treating that premise as both optional and doubtful, we propose what we call the appropriateness framework: an alternative approach grounded in conflict theory, cultural evolution, multi-agent systems, and institutional economics. The appropriateness framework treats persistent disagreement as the normal case and designs for it by applying four principles: (1) contextual grounding, (2) community customization, (3) continual adaptation, and (4) polycentric governance. We argue here that adopting these design principles is a good way to shift the main alignment metaphor from moral unification to a more productive metaphor of conflict management, and that taking this step is both desirable and urgent…(More)”.

The European Data Cooperative (EDC) 


Invest Europe: “The European Data Cooperative (EDC) is a joint initiative developed by Invest Europe and its national association partners to collect Europe-wide industry data on activity (fundraising, investments, and divestments), economic impact (employment, turnover, EBITDA, and CAPEX), and ESG.

The EDC platform is jointly owned and operated by the private equity and venture capital associations of Europe. It serves as a single data entry point for their members and other contributors across the continent. The EDC brings together:

  • 4,000 firms
  • 10,900 funds
  • 86,700 portfolio companies
  • 330,900 transactions

Using one platform with a standardised methodology allows us to have consistent, robust pan-European statistics that are comparable across the region…(More)”.

Indiana Faces a Data Center Backlash


Article by Matthew Zeitlin: “Indiana has power. Indiana has transmission. Indiana has a business-friendly Republican government. Indiana is close to Chicago but — crucially — not in Illinois. All of this has led to a huge surge of data center development in the “Crossroads of America.” It has also led to an upswell of local opposition.

There are almost 30 active data center proposals in Indiana, plus five that have already been rejected in the past year, according to data collected by the environmentalist group Citizens Action Coalition (CAC). Google, Amazon, and Meta have all announced projects in the state since the beginning of 2024.

Nipsco, one of the state’s utilities, has projected 2,600 megawatts’ worth of new load by the middle of the next decade as its base scenario, mostly attributable to “large economic development projects.” In a more aggressive scenario, it sees 3,200 megawatts of new load — that’s three large nuclear reactors’ worth — by 2028 and 8,600 megawatts by 2035. While short of, say, the almost 36,500 megawatts’ worth of load growth planned in Georgia for the next decade, it’s still a vast range of outcomes that requires some kind of advance planning.
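The reactor comparison checks out as rough arithmetic; here is a quick unit conversion (the per-reactor capacity is an assumed typical value, not a figure from the article):

```python
# Assumed nameplate capacity of a typical large nuclear reactor, in MW.
LARGE_REACTOR_MW = 1_100

# Nipsco's projected new load under the scenarios quoted above.
scenarios = [
    ("base case, mid-2030s", 2_600),
    ("aggressive, by 2028", 3_200),
    ("aggressive, by 2035", 8_600),
]

for label, mw in scenarios:
    print(f"{label}: {mw:,} MW ≈ {mw / LARGE_REACTOR_MW:.1f} large reactors")
```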

That new electricity consumption will likely be powered by fossil fuels. Projected load growth in the state has extended a lifeline to Indiana’s coal-fired power plants, with retirement dates for some of the fleet being pushed out to late in the 2030s. It’s also created a market for new natural gas-fired plants that utilities say are necessary to power the expected new load.

State and local political leaders have greeted these new data center projects with enthusiasm, Ben Inskeep, the program director at CAC, told me. “Economic development is king here,” he said. “That is what all the politicians and regulators say their number one concern is: attracting economic development.”…(More)”.

The world at our fingertips, just out of reach: the algorithmic age of AI


Article by Soumi Banerjee: “Artificial intelligence (AI) has made global movements, testimonies, and critiques seem just a swipe away. The digital realm, powered by machine learning and algorithmic recommendation systems, offers an abundance of visual, textual, and auditory information. With a few swipes or keystrokes, the unbounded world lies open before us. Yet this ‘openness’ conceals a fundamental paradox: the distinction between availability and accessibility.

What is technically available is not always epistemically accessible. What appears global is often algorithmically curated. And what is served to users under the guise of choice frequently reflects the imperatives of engagement, profit, and emotional resonance over critical understanding or cognitive expansion.

The transformative potential of AI in democratising access to information comes with risks. Algorithmic enclosure and content curation can deepen epistemic inequality, particularly for the youth, whose digital fluency often masks a lack of epistemic literacy. What we need is algorithmic transparency, civic education in media literacy, and inclusive knowledge formats…(More)”.