Mapping the Unmapped


Article by Maddy Crowell: “…Most of St. Lucia, which sits at the southern end of an archipelago stretching from Trinidad and Tobago to the Bahamas, is poorly mapped. Aside from strips of sandy white beaches that hug the coastline, the island is draped with dense rainforest. A few green signs hang limp and faded from utility poles like an afterthought, identifying streets named during more than a century of dueling British and French colonial rule. One major road, Micoud Highway, runs like a vein from north to south, carting tourists from the airport to beachfront resorts. Little of this is accurately represented on Google Maps. Almost nobody uses, or has, a conventional address. Locals orient one another with landmarks: the red house on the hill, the cottage next to the church, the park across from Care Growell School.

Our van wound off Micoud Highway into an empty lot beneath the shade of a banana tree. A dog panted, belly up, under the hot November sun. The group had been recruited by the Humanitarian OpenStreetMap Team, or HOT, a nonprofit that uses an open-source data platform called OpenStreetMap to create a map of the world that resembles Google’s with one key exception: Anyone can edit it, making it a sort of Wikipedia for cartographers.

The organization has an ambitious goal: Map the world’s unmapped places to help relief workers reach people when the next hurricane, fire, or other crisis strikes. Since its founding in 2010, some 340,000 volunteers around the world have been remotely editing OpenStreetMap to better represent the Caribbean, Southeast Asia, parts of Africa and other regions prone to natural disasters or humanitarian emergencies. In that time, they have mapped more than 2.1 million miles of roads and 156 million buildings. They use aerial imagery captured by drones, aircraft, or satellites to help trace unmarked roads, waterways, buildings, and critical infrastructure. Once this digital chart is more clearly defined, field-mapping expeditions like the one we were taking add the names of every road, house, church, or business represented by gray silhouettes on their paper maps. The effort fine-tunes the places that bigger players like Google Maps get wrong — or don’t get at all…(More)”
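
The “gray silhouettes” stage is visible in the data itself. The sketch below is a minimal, illustrative query against the public Overpass API for OpenStreetMap, counting how many roads and buildings around Micoud are mapped but still carry no name tag; the bounding box is approximate and the endpoint shown is just one of several public mirrors.

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # one of several public Overpass mirrors

# Approximate bounding box around Micoud, St. Lucia (south, west, north, east).
BBOX = "13.78,-60.95,13.86,-60.88"

# Fetch every mapped road and building in the box, with tags and a centre point.
query = f"""
[out:json][timeout:60];
(
  way["highway"]({BBOX});
  way["building"]({BBOX});
);
out center;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()
elements = response.json()["elements"]

unnamed = [e for e in elements if "name" not in e.get("tags", {})]
print(f"{len(elements)} mapped ways, {len(unnamed)} still without a name tag")
```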

Why are “missions” proving so difficult?


Article by James Plunkett: “…Unlike many political soundbites, however, missions have a strong academic heritage, drawing on years of work from Mariana Mazzucato and others. They gained support as a way for governments to be less agnostic about the direction of economic growth and its social implications, most obviously on issues like climate change, while still avoiding old-school statism. The idea is to pursue big goals not with top-down planning but with what Mazzucato calls ‘orchestration’, using the power of the state to drive innovation and shape markets to an outcome.

For these reasons, missions have proven increasingly popular with governments. They have been used by administrations from the EU to South Korea and Finland, and even in Britain under Theresa May, although she didn’t have time to make them stick.

Despite these good intentions and heritage, however, missions are proving difficult. Some say the UK government is “mission-washing” – using the word, but not really adopting the ways of working. And although missions were mentioned in the spending review, their role was notably muted when compared with the central position they had in Labour’s manifesto.

Still, it would seem a shame to let missions falter without interrogating the reasons. So why are missions so difficult? And what, if anything, could be done to strengthen them as Labour moves into year two? I’ll touch on four characteristics of missions that jar with Whitehall’s natural instincts, and in each case I’ll ask how it’s going, and how Labour could be bolder…(More)”.

Bad data leads to bad policy


Article by Georges-Simon Ulrich: “When the UN created a Statistical Commission in 1946, the world was still recovering from the devastation of the second world war. Then, there was broad consensus that only reliable, internationally comparable data could prevent conflict, combat poverty and anchor global co-operation. Nearly 80 years later, this insight remains just as relevant, but the context has changed dramatically…

This erosion of institutional capacity could not come at a more critical moment. The UN is unable to respond adequately as it is facing a staffing shortfall itself. Due to ongoing austerity measures at the UN, many senior positions remain vacant, and the director of the UN Statistics Division has retired, with no successor appointed. This comes at a time when bold and innovative initiatives — such as a newly envisioned Trusted Data Observatory — are urgently needed to make official statistics more accessible and machine-readable.

Meanwhile, the threat of targeted disinformation is growing. On social media, distorted or manipulated content spreads at unprecedented speed. Emerging tools like AI chatbots exacerbate the problem. These systems rely on web content, not verified data, and are not built to separate truth from falsehood. Making matters worse, many governments cannot currently make their data usable for AI because it is not standardised, not machine-readable, or not openly accessible. The space for sober, evidence-based discourse is shrinking.

This trend undermines public trust in institutions, strips policymaking of its legitimacy, and jeopardises the UN Sustainable Development Goals (SDGs). Without reliable data, governments will be flying blind — or worse: they will be deliberately misled.

When countries lose control of their own data, or cannot integrate it into global decision-making processes, they become bystanders to their own development. Decisions about their economies, societies and environments are then outsourced to AI systems trained on skewed, unrepresentative data. The global south is particularly at risk, with many countries lacking access to quality data infrastructures. In countries such as Ethiopia, unverified information spreading rapidly on social media has fuelled misinformation-driven violence.

The Covid-19 pandemic demonstrated that strong data systems enable better crisis response. To counter these risks, the creation of a global Trusted Data Observatory (TDO) is essential. This UN co-ordinated, democratically governed platform would help catalogue and make accessible trusted data around the world — while fully respecting national sovereignty…(More)”

AI and Assembly: Coming Together and Apart in a Datafied World


Book edited by Toussaint Nothias and Lucy Bernholz: “Artificial intelligence has moved from the lab into everyday life and is now seemingly everywhere. As AI creeps into every aspect of our lives, the data grab required to power AI also expands. People worldwide are tracked, analyzed, and influenced, whether on or off their screens, inside their homes or outside in public, still or in transit, alone or together. What does this mean for our ability to assemble with others for collective action, including protesting, holding community meetings and organizing rallies? In this context, where and how does assembly take place, and who participates by choice and who by coercion? AI and Assembly explores these questions and offers global perspectives on the present and future of assembly in a world taken over by AI.

The contributors analyze how AI threatens free assembly by clustering people without consent, amplifying social biases, and empowering authoritarian surveillance. But they also explore new forms of associational life that emerge in response to these harms, from communities in the US conducting algorithmic audits to human rights activists in East Africa calling for biometric data protection and rideshare drivers in London advocating for fair pay. Ultimately, AI and Assembly is a rallying cry for those committed to a digital future beyond the narrow horizon of corporate extraction and state surveillance…(More)”.

AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future


Article by Akash Kapur: “Step back from the day-to-day flurry surrounding AI, and a global divergence in narratives is becoming increasingly clear. In Silicon Valley, New York, and London, the conversation centers on the long-range pursuit of artificial general intelligence (AGI)—systems that might one day equal or surpass humans at almost everything. This is the moon-shot paradigm, fueled by multi-billion-dollar capital expenditure and almost metaphysical ambition.

In contrast, much of the Global South is converging on something more grounded: the search for near-term, proven use cases that can be deployed with today’s hardware, and limited budgets and bandwidth. Call it Applied AI, or AAI. This quest for applicability—and relevance—is more humble than AGI. Its yardstick for success is more measured, and certainly less existential. Rather than pose profound questions about the nature of consciousness and humanity, Applied AI asks questions like: Does the model fix a real-world problem? Can it run on patchy 4G, a mid-range GPU, or a refurbished phone? What new yield can it bring to farmers or fishermen, or which bureaucratic bottleneck can it cut?

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of similar applications and products, tailored and designed for specific sectors, are growing fast. In Africa, PlantVillage Nuru helps Kenyan farmers diagnose crop diseases entirely offline; South-Africa-based Lelapa AI is training “small language models” for at least 13 African languages; and Nigeria’s EqualyzAI runs chatbots that are trained to provide Hausa and Yoruba translations for customers…(More)”.
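
The question “can it run on a refurbished phone?” is, concretely, a question about packaging a small model for offline, on-device inference. The sketch below is a generic illustration of that step using PyTorch and TorchScript; it is not the actual pipeline behind PlantVillage Nuru or any other product named above, and the five disease classes and untrained weights are placeholders.

```python
import torch
import torchvision

# Small, mobile-friendly backbone; in practice it would be fine-tuned on labelled
# images of diseased crops (the five classes here are placeholders).
model = torchvision.models.mobilenet_v3_small(weights=None, num_classes=5)
model.eval()

# TorchScript export: a self-contained artifact that can run offline, without
# Python, on a phone or a low-end edge device.
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)
scripted.save("crop_disease_detector.pt")

# On the device, the saved module is loaded and run on a single photo.
loaded = torch.jit.load("crop_disease_detector.pt")
probs = loaded(example).softmax(dim=1)
print(probs.shape)  # (1, 5): one score per placeholder disease class
```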

What Counts as Discovery?


Essay by Nisheeth Vishnoi: “Long before there were “scientists,” there was science. Across every continent, humans developed knowledge systems grounded in experience, abstraction, and prediction—driven not merely by curiosity, but by a desire to transform patterns into principles, and observation into discovery. Farmers tracked solstices, sailors read stars, artisans perfected metallurgy, and physicians documented plant remedies. They built calendars, mapped cycles, and tested interventions—turning empirical insight into reliable knowledge.

From the oral sciences of Africa, which encoded botanical, medical, and ecological knowledge across generations, to the astronomical observatories of Mesoamerica, where priests tracked solstices, eclipses, and planetary motion with remarkable accuracy, early human civilizations sought more than survival. In Babylon, scribes logged celestial movements and built predictive models; in India, the architects of Vedic altars designed ritual structures whose proportions mirrored cosmic rhythms, embedding arithmetic and geometry into sacred form. Across these diverse cultures, discovery was not a separate enterprise—it was entwined with ritual, survival, and meaning. Yet the tools were recognizably scientific: systematic observation, abstraction, and the search for hidden order.

This was science before the name. And it reminds us that discovery has never belonged to any one civilization or era. Discovery is not intelligence itself, but one of its sharpest expressions—an act that turns perception into principle through a conceptual leap. While intelligence is broader and encompasses adaptation, inference, and learning in various forms (biological, cultural, and even mechanical), discovery marks those moments when something new is framed, not just found. 

Life forms learn, adapt, and even innovate. But it is humans who turned observation into explanation, explanation into abstraction, and abstraction into method. The rise of formal science brought mathematical structure and experiment, but it did not invent the impulse to understand—it gave it form, language, and reach.

And today, we stand at the edge of something unfamiliar: the possibility of lifeless discoveries. Artificial Intelligence machines, built without awareness or curiosity, are beginning to surface patterns and propose explanations, sometimes without our full understanding. If science has long been a dialogue between the world and living minds, we are now entering a strange new phase: abstraction without awareness, discovery without a discoverer.

AI systems now assist in everything from understanding black holes to predicting protein folds and even symbolic equation discovery. They parse vast datasets, detect regularities, and generate increasingly sophisticated outputs. Some claim they’re not just accelerating research, but beginning to reshape science itself—perhaps even to discover.

But what truly counts as a scientific discovery? This essay examines that question…(More)”

National engagement on public trust in data use for single patient record and GP health record published


HTN Article: “A large-scale public engagement report commissioned by NHSE on building and maintaining public trust in data use across health and care has been published, focusing on the approach to creating a single patient record and the secondary use of GP data.

It noted “relief” and “enthusiasm” from participants around not having to repeat their health history when interacting with different parts of the health and care system, and highlighted concerns about data accuracy, privacy, and security.

120 participants were recruited for tier one, with 98 remaining by the end, for 15 hours of deliberation over three days in locations including Liverpool, Leicester, Portsmouth, and South London. Inclusive engagement for tier two recruited 76 people from “seldom heard groups” such as those with health needs or socially marginalised groups for interviews and small group sessions. A nationally representative ten-minute online survey with 2,000 people was also carried out in tier three.

“To start with, the concept of a single patient record was met with relief and enthusiasm across Tier 1 and Tier 2 participants,” according to the report….

When it comes to GP data, participants were “largely unaware” of secondary uses, but initially expressed comfort in the idea of it being used for saving lives, improving care, prevention, and efficiency in delivery of services. Concerns were broadly similar to those about the single patient record: concerns about data breaches, incorrect data, misuse, sensitivity of data being shared, bias against individuals, and the potential for re-identification. Some participants felt GP data should be treated differently because “it is likely to contain more intimate information”, offering greater risk to the individual patient if data were to be misused. Others felt it should be included alongside secondary care data to ensure a “comprehensive dataset”.

Participants were “reassured” overall by safeguards in place such as de-identification, staff training in data handling and security, and data regulation such as GDPR and the Data Protection Act. “There was a widespread feeling among Tier 1 and Tier 2 participants that the current model of the GP being the data controller for both direct care and secondary uses placed too much of a burden on GPs when it came to how data is used for secondary purposes,” findings show. “They wanted to see a new model which would allow for greater consistency of approach, transparency, and accountability.” Tier one participants suggested this could be a move to national or regional decision-making on secondary use. Tier three participants who only engaged with the topic online were “more resistant” to moving away from GPs as sole data controllers, with the report stating: “This greater reluctance to change demonstrates the need for careful communication with the public about this topic as changes are made, and continued involvement of the public.”..(More)”.

AI is supercharging war. Could it also help broker peace?


Article by Tina Amirtha: “Can we measure what is in our hearts and minds, and could it help us end wars any sooner? These are the questions that consume entrepreneur Shawn Guttman, a Canadian émigré who recently gave up his yearslong teaching position in Israel to accelerate a path to peace—using an algorithm.

Living some 75 miles north of Tel Aviv, Guttman is no stranger to the uncertainties of conflict. Over the past few months, miscalculated drone strikes and imprecise missile targets—some intended for larger cities—have occasionally landed dangerously close to his town, sending him to bomb shelters more than once.

“When something big happens, we can point to it and say, ‘Right, that happened because five years ago we did A, B, and C, and look at its effect,’” he says over Google Meet from his office, following a recent trip to the shelter. Behind him, souvenirs from the 1979 Egypt-Israel and 1994 Israel-Jordan peace treaties are visible. “I’m tired of that perspective.”

The startup he cofounded, Didi, is taking a different approach. Its aim is to analyze data across news outlets, political discourse, and social media to identify opportune moments to broker peace. Inspired by political scientist I. William Zartman’s “ripeness” theory, the algorithm—called the Ripeness Index—is designed to tell negotiators, organizers, diplomats, and nongovernmental organizations (NGOs) exactly when conditions are “ripe” to initiate peace negotiations, build coalitions, or launch grassroots campaigns.
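
Didi has not published the Ripeness Index, so the sketch below is only a generic illustration of how a composite “ripeness” score could be assembled: a handful of indicators, each normalised to [0, 1] and derived from media or polling signals, combined with weights. All indicator names, weights, and values here are invented.

```python
# Invented indicators and weights: Didi's actual Ripeness Index is not public,
# so this only shows the shape of a composite index.
INDICATORS = {
    "mutually_hurting_stalemate": 0.4,  # stalemate signals in news coverage
    "conciliatory_rhetoric": 0.3,       # leadership statements coded as conciliatory
    "public_support_for_talks": 0.2,    # polling or social-media sentiment toward negotiation
    "third_party_pressure": 0.1,        # diplomatic activity by allies and mediators
}

def ripeness_score(signals: dict) -> float:
    """Weighted average of indicators, each clipped to the [0, 1] range."""
    return sum(weight * max(0.0, min(1.0, signals.get(name, 0.0)))
               for name, weight in INDICATORS.items())

this_week = {
    "mutually_hurting_stalemate": 0.8,
    "conciliatory_rhetoric": 0.35,
    "public_support_for_talks": 0.6,
    "third_party_pressure": 0.5,
}
print(f"ripeness index: {ripeness_score(this_week):.2f}")  # roughly 0.6 on a 0-1 scale
```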

During ongoing U.S.-led negotiations over the war in Gaza, both Israel and Hamas have entrenched themselves in opposing bargaining positions. Meanwhile, Israel’s traditional allies, including the U.S., have expressed growing frustration over the war and the dire humanitarian conditions in the enclave, where the threat of famine looms.

In Israel, Didi’s data is already informing grassroots organizations as they strategize which media outlets to target and how to time public actions, such as protests, in coordination with coalition partners. Guttman and his collaborators hope that eventually negotiators will use the model’s insights to help broker lasting peace.

Guttman’s project is part of a rising wave of so-called PeaceTech—a movement using technology to make negotiations more inclusive and data-driven. This includes AI from Hala Systems, which uses satellite imagery and data fusion to monitor ceasefires in Yemen and Ukraine. Another AI startup, Remesh, has been active across the Middle East, helping organizations of all sizes canvas key stakeholders. Its algorithm clusters similar opinions, giving policymakers and mediators a clearer view of public sentiment and division.
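
Opinion clustering of the kind Remesh describes can be illustrated with off-the-shelf tools: embed each free-text response, then group similar ones. The sketch below uses TF-IDF vectors and k-means purely as an illustration; production systems would likely rely on richer neural embeddings, and the sample responses are invented.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# A handful of invented free-text responses to a consultation question.
responses = [
    "Aid convoys must be allowed in before any talks",
    "Humanitarian access should come first, then negotiations",
    "Security guarantees for both sides are the priority",
    "No deal without credible security guarantees",
    "International monitors should oversee any ceasefire",
    "A ceasefire needs independent monitoring to hold",
]

# Bag-of-words vectors plus k-means; the clustering idea is the same whatever
# embedding is used.
vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"cluster {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print("  -", text)
```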

A range of NGOs and academic researchers have also developed digital tools for peacebuilding. The nonprofit Computational Democracy Project created Pol.is, an open-source platform that enables citizens to crowdsource outcomes to public debates. Meanwhile, the Futures Lab at the Center for Strategic and International Studies built a peace agreement simulator, complete with a chart to track how well each stakeholder’s needs are met.

Guttman knows it’s an uphill battle. In addition to the ethical and privacy concerns of using AI to interpret public sentiment, PeaceTech also faces financial hurdles. These companies must find ways to sustain themselves amid shrinking public funding and a transatlantic surge in defense spending, which has pulled resources away from peacebuilding initiatives.

Still, Guttman and his investors remain undeterred. One way to view the opportunity for PeaceTech is by looking at the economic toll of war. In its Global Peace Index 2024, the Institute for Economics and Peace’s Vision of Humanity platform estimated that economic disruption due to violence and the fear of violence cost the world $19.1 trillion in 2023, or about 13 percent of global GDP. Guttman sees plenty of commercial potential in times of peace as well.

“Can we make billions of dollars,” Guttman asks, “and save the world—and create peace?”…(More)”. See also Kluz Prize for PeaceTech (Applications Open).

5 Ways AI is Boosting Citizen Engagement in Africa’s Democracies


Article by Peter Agbesi Adivor: “Artificial Intelligence (AI) is increasingly influencing democratic participation across Africa. From campaigning to voter education, AI is transforming electoral processes across the continent. While concerns about misinformation and government overreach persist, AI also offers promising avenues to enhance citizen engagement. This article explores five key ways AI is fostering more inclusive and participatory democracies in Africa.

1. AI-Powered Voter Education and Campaigning

AI-driven platforms are revolutionizing voter education by providing accessible, real-time information. These platforms ensure citizens receive standardized electoral information delivered to them on their digital devices regardless of their geographical location, significantly reducing the cost for political actors as well as state and non-state actors who focus on voter education. They also ensure that those who can navigate these tools easily access the needed information, allowing authorities to focus limited resources on citizens on the other side of the digital divide.

In Nigeria, ChatVE developed CitiBot, an AI-powered chatbot deployed during the 2024 Edo State elections to educate citizens on their civic rights and responsibilities via WhatsApp and Telegram. The bot offered information on voting procedures, eligibility, and the importance of participation.

Similarly, in South Africa, the Rivonia Circle introduced Thoko the Bot, an AI chatbot designed to answer voters’ questions about the electoral process, including where and how to vote, and the significance of participating in elections.

These AI tools enhance voter understanding and engagement by providing personalized, easily accessible information, thereby encouraging greater participation in democratic processes…(More)”.
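
At their simplest, voter-education bots pair a messaging channel with a question-to-answer lookup. The sketch below is a deliberately tiny, channel-agnostic illustration using fuzzy string matching; the questions and answers are generic placeholders rather than CitiBot’s or Thoko the Bot’s actual content, and real deployments sit behind WhatsApp or Telegram messaging APIs.

```python
import difflib

# A tiny invented FAQ; real deployments cover far more ground and are kept
# current by the relevant electoral authorities.
FAQ = {
    "where do i vote": "You vote at the polling station linked to your registration address.",
    "what id do i need to vote": "Bring the voter card or national ID used when you registered.",
    "when do polls open and close": "Opening hours are announced by the electoral commission for each election day.",
    "am i eligible to vote": "You must be a registered citizen of voting age in your constituency.",
}

def answer(question: str) -> str:
    """Return the closest FAQ answer, or a fallback if nothing matches well."""
    key = question.lower().strip("?! ")
    match = difflib.get_close_matches(key, list(FAQ), n=1, cutoff=0.4)
    return FAQ[match[0]] if match else "Sorry, I don't know that yet. Try rephrasing your question."

print(answer("Where do I vote?"))
```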

Unequal Journeys to Food Markets: Continental-Scale Evidence from Open Data in Africa


Paper by Robert Benassai-Dalmau, et al: “Food market accessibility is a critical yet underexplored dimension of food systems, particularly in low- and middle-income countries. Here, we present a continent-wide assessment of spatial food market accessibility in Africa, integrating open geospatial data from OpenStreetMap and the World Food Programme. To quantify accessibility across diverse settings, we compare three complementary metrics: travel time to the nearest market, market availability within a 30-minute threshold, and an entropy-based measure of spatial distribution. Our analysis reveals pronounced disparities: rural and economically disadvantaged populations face substantially higher travel times, limited market reach, and less spatial redundancy. These accessibility patterns align with socioeconomic stratification, as measured by the Relative Wealth Index, and moderately correlate with food insecurity levels, assessed using the Integrated Food Security Phase Classification. Overall, results suggest that access to food markets plays a relevant role in shaping food security outcomes and reflects broader geographic and economic inequalities. This framework provides a scalable, data-driven approach for identifying underserved regions and supporting equitable infrastructure planning and policy design across diverse African contexts…(More)”.
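
The three metrics are straightforward to compute once a travel-time matrix between population cells and markets is available. The sketch below works through a toy example with invented numbers; the paper builds its travel times from OpenStreetMap road networks and World Food Programme market locations, and its exact entropy formulation may differ from the inverse-travel-time version shown here.

```python
import numpy as np

# Illustrative travel-time matrix (minutes) from four population cells to three
# markets; real values come from routing over OSM road networks to WFP markets.
travel_min = np.array([
    [12.0, 45.0, 90.0],
    [35.0, 28.0, 120.0],
    [75.0, 80.0, 95.0],
    [15.0, 22.0, 25.0],
])

# Metric 1: travel time to the nearest market.
nearest = travel_min.min(axis=1)

# Metric 2: market availability within a 30-minute threshold.
within_30 = (travel_min <= 30).sum(axis=1)

# Metric 3: an entropy-style measure of how evenly access is spread across
# markets (computed here over inverse travel times); higher means more redundancy.
weights = 1.0 / travel_min
shares = weights / weights.sum(axis=1, keepdims=True)
entropy = -(shares * np.log(shares)).sum(axis=1)

for i, (t, n, h) in enumerate(zip(nearest, within_30, entropy)):
    print(f"cell {i}: nearest market {t:.0f} min, {n} markets within 30 min, entropy {h:.2f}")
```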