AI in Global Development Playbook


USAID Playbook: “…When used effectively and responsibly, AI holds the potential to accelerate progress on sustainable development and close digital divides, but it also poses risks that could further impede progress toward these goals. With the right enabling environment and ecosystem of actors, AI can enhance efficiency and accelerate development outcomes in sectors such as health, education, agriculture, energy, and manufacturing, and in delivering public services. The United States aims to ensure that the benefits of AI are shared equitably across the globe.

Distilled from consultations with hundreds of government officials, non-governmental organizations, technology firms and startups, and individuals from around the world, the AI in Global Development Playbook is a roadmap to develop the capacity, ecosystems, frameworks, partnerships, applications, and institutions to leverage safe, secure, and trustworthy AI for sustainable development.

The United States’ current efforts are grounded in the belief that AI, when developed and deployed responsibly, can be a powerful force for achieving the Sustainable Development Goals and addressing some of the world’s most urgent challenges. Looking ahead, the United States will continue to support low- and middle-income countries through funding, advocacy, and convening efforts–collectively navigating the complexities of the digital age and working toward a future in which the benefits of technological development are widely shared.

This Playbook seeks to underscore AI as a uniquely global opportunity with far-reaching impacts and potential risks. It highlights that safe, secure, and trustworthy design, deployment, and use of AI is not only possible but essential. Recognizing that international cooperation and multi-stakeholder partnerships are key in achieving progress, we invite others to contribute their expertise, resources, and perspectives to enrich and expand this framework.

The true measure of progress in responsible AI is not in the sophistication of our machines but in the quality of life the technology enhances. Together we can work toward ensuring the promise of AI is realized in service of this goal…(More)”

Artificial intelligence (AI) in action: A preliminary review of AI use for democracy support


Policy paper by Grahm Tuohy-Gaydos: “…provides a working definition of AI for Westminster Foundation for Democracy (WFD) and the broader democracy support sector. It then provides a preliminary review of how AI is being used to enhance democratic practices worldwide, focusing on several themes, including accountability and transparency, elections, environmental democracy, inclusion, openness and participation, and women’s political leadership. The paper also highlights potential risks and areas of development in the future. Finally, the paper shares five recommendations for WFD and democracy support organisations to consider in advancing their ‘digital democracy’ agenda. This policy paper also offers additional information regarding AI classification and other resources for identifying good practice and innovative solutions. Its findings may be relevant to WFD staff members, international development practitioners, civil society organisations, and persons interested in using emerging technologies within governmental settings…(More)”.

China’s biggest AI model is challenging American dominance


Article by Sam Eifling: “So far, the AI boom has been dominated by U.S. companies like OpenAI, Google, and Meta. In recent months, though, a new name has been popping up on benchmarking lists: Alibaba’s Qwen. Over the past few months, variants of Qwen have been topping the leaderboards of sites that measure an AI model’s performance.

“Qwen 72B is the king, and Chinese models are dominating,” Hugging Face CEO Clem Delangue wrote in June, after a Qwen-based model first rose to the top of his company’s Open LLM leaderboard.

It’s a surprising turnaround for the Chinese AI industry, which many thought was doomed by semiconductor restrictions and limitations on computing power. Qwen’s success is showing that China can compete with the world’s best AI models — raising serious questions about how long U.S. companies will continue to dominate the field. And by focusing on capabilities like language support, Qwen is breaking new ground on what an AI model can do — and who it can be built for.

Those capabilities have come as a surprise to many developers, even those working on Qwen itself. AI developer David Ng used Qwen to build the model that topped the Open LLM leaderboard. He has also built models using Meta’s and Google’s technology, but says Alibaba’s gave him the best results. “For some reason, it works best on the Chinese models,” he told Rest of World. “I don’t know why.”..(More)”

Why is it so hard to establish the death toll?


Article by Smriti Mallapaty: “Given the uncertainty of counting fatalities during conflict, researchers use other ways to estimate mortality.

One common method uses household surveys, says Debarati Guha-Sapir, an epidemiologist who specializes in civil conflicts at the University of Louvain in Louvain-la-Neuve, Belgium, and is based in Brussels. A sample of the population is asked how many people in their family have died over a specific period of time. This approach has been used to count deaths in conflicts elsewhere, including in Iraq [3] and the Central African Republic [4].
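
To make the survey-based approach concrete, here is a minimal worked sketch in Python; every number in it (sample size, reported deaths, recall period, population) is hypothetical, and real surveys would add sampling weights and uncertainty intervals.

```python
# Illustrative sketch (hypothetical numbers): estimating a crude death rate
# from a household mortality survey of the kind described above.

surveyed_people = 5_000     # people covered by the sampled households (hypothetical)
reported_deaths = 60        # deaths those households report for the recall period (hypothetical)
recall_period_days = 180    # length of the recall period (hypothetical)
population = 2_000_000      # size of the population being studied (hypothetical)

# Crude death rate in the sample, expressed per 10,000 people per day,
# a convention often used in emergency mortality surveys.
rate_per_10k_per_day = reported_deaths / surveyed_people / recall_period_days * 10_000

# Scaling the sample rate to the whole population gives a rough death estimate;
# real surveys would also account for sampling design and report confidence intervals.
estimated_deaths = rate_per_10k_per_day / 10_000 * population * recall_period_days

print(f"{rate_per_10k_per_day:.2f} deaths per 10,000 people per day")
print(f"~{estimated_deaths:,.0f} deaths over the recall period")
```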

The situation in Gaza right now is not conducive to a survey, given the level of movement and displacement, say researchers. And it would be irresponsible to send data collectors into an active conflict and put their lives at risk, says Ball.

There are also ethical concerns around intruding on people who lack basic access to food and medication to ask about deaths in their families, says Jamaluddine. Surveys will have to wait for the conflict to end and movement to ease, say researchers.

Another approach is to compare multiple independent lists of fatalities and calculate mortality from the overlap between them. The Human Rights Data Analysis Group used this approach to estimate the number of people killed in Syria between 2011 and 2014. Jamaluddine hopes to use the ministry fatality data in conjunction with those posted on social media by several informal groups to estimate mortality in this way. But Guha-Sapir says this method relies on the population being stable and not moving around, which is often not the case in conflict-affected communities.
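
The overlap logic can be illustrated with the classic two-list (Lincoln-Petersen) estimator. The sketch below uses hypothetical identifiers and is a deliberate simplification of the multiple-systems estimation that groups such as the Human Rights Data Analysis Group apply in practice, which handles more than two lists and dependence between them.

```python
# Minimal sketch of the two-list ("capture-recapture") idea behind this approach.
# Identifiers and counts are hypothetical; real analyses must first resolve which
# records on different lists refer to the same person.

list_a = {"victim_001", "victim_002", "victim_003", "victim_004", "victim_005"}
list_b = {"victim_003", "victim_004", "victim_005", "victim_006"}

n_a = len(list_a)                 # deaths documented by source A
n_b = len(list_b)                 # deaths documented by source B
overlap = len(list_a & list_b)    # deaths appearing on both lists

# Lincoln-Petersen estimator: if the two lists are independent samples of the
# same population of deaths, the total is roughly n_a * n_b / overlap.
if overlap:
    estimated_total = n_a * n_b / overlap
    print(f"Documented on at least one list: {len(list_a | list_b)}")
    print(f"Estimated total deaths: {estimated_total:.0f}")
```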

In addition to deaths immediately caused by the violence, some civilians die of the spread of infectious diseases, starvation or lack of access to health care. In February, Jamaluddine and her colleagues used modelling to make projections of excess deaths due to the war and found that, in a scenario of continued escalated conflict over six months, 68,650 people could die from traumatic injuries, 2,680 from non-communicable diseases such as cancer and 2,720 from infectious diseases — along with thousands more if an epidemic were to break out. On 30 July, the ministry declared a polio epidemic in Gaza after detecting the virus in sewage samples, and in mid-August it confirmed the first case of polio in 25 years, in a 10-month-old baby…

The longer the conflict continues, the harder it will be to get reliable estimates, because “reports by survivors get worse as time goes by”, says Jon Pedersen, a demographer at !Mikro in Oslo, who advises international agencies on mortality estimates…(More)”.

Germany’s botched data revamp leaves economists ‘flying blind’


Article by Olaf Storbeck: “Germany’s statistical office has suspended some of its most important indicators after botching a data update, leaving citizens and economists in the dark at a time when the country is trying to boost flagging growth.

In a nation once famed for its punctuality and reliability, even its notoriously diligent beancounters have become part of a growing perception that “nothing works any more” as Germans moan about delayed trains, derelict roads and bridges, and widespread staff shortages.

“There used to be certain aspects in life that you could just rely on, and the fact that official statistics are published on time was one of them — not any more,” said Jörg Krämer, chief economist of Commerzbank, adding that the suspended data was also closely watched by monetary policymakers and investors.

Since May, the Federal Statistical Office (Destatis) has not updated time-series data for retail and wholesale sales, or for revenue from the services sector, hospitality, car dealers and garages.

These indicators, which are published monthly and adjusted for seasonal changes, are a key component of GDP and crucial for assessing consumer demand in the EU’s largest economy.

Private consumption accounted for 52.7 per cent of German output in 2023. Retail sales made up 28 per cent of private consumption but shrank 3.4 per cent from a year earlier. Overall GDP declined 0.3 per cent last year, Destatis said.
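
A back-of-the-envelope reading of those figures shows why the suspended series matter for tracking GDP: retail sales alone amount to roughly a seventh of German output. This is simple arithmetic on the published shares, not a separate Destatis figure.

```python
# Quick arithmetic implied by the figures above: retail sales' weight in GDP.
private_consumption_share_of_gdp = 0.527   # 52.7% of German output in 2023
retail_share_of_consumption = 0.28         # retail sales: 28% of private consumption

retail_share_of_gdp = private_consumption_share_of_gdp * retail_share_of_consumption
print(f"Retail sales are roughly {retail_share_of_gdp:.1%} of GDP")   # about 14.8%
```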

The Wiesbaden-based authority, which was established in 1948, said the outages had been caused by IT issues and by a complex methodological change in EU business statistics intended to boost accuracy.

Destatis has been working on the project since the EU directive was adopted in 2019, and the deadline for implementing the changes is December.

But a series of glitches, data issues and IT delays meant Destatis has been unable to publish retail sales and other services data for four months.

A key complication is that the revenues of companies that operate in both services and manufacturing will now be reported differently for each sector. In the past, all revenue was treated as either services or manufacturing, depending on which unit was bigger…(More)”

Synthetic Data and Social Science Research


Paper by Jordan C. Stanley & Evan S. Totty: “Synthetic microdata – data retaining the structure of original microdata while replacing original values with modeled values for the sake of privacy – presents an opportunity to increase access to useful microdata for data users while meeting the privacy and confidentiality requirements for data providers. Synthetic data could be sufficient for many purposes, but lingering accuracy concerns could be addressed with a validation system through which the data providers run the external researcher’s code on the internal data and share cleared output with the researcher. The U.S. Census Bureau has experience running such systems. In this chapter, we first describe the role of synthetic data within a tiered data access system and the importance of synthetic data accuracy in achieving a viable synthetic data product. Next, we review results from a recent set of empirical analyses we conducted to assess accuracy in the Survey of Income & Program Participation (SIPP) Synthetic Beta (SSB), a Census Bureau product that made linked survey-administrative data publicly available. Given this analysis and our experience working on the SSB project, we conclude with thoughts and questions regarding future implementations of synthetic data with validation…(More)”
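
As a rough illustration of the synthesize-then-validate idea, the sketch below fits naive per-column models to a toy "internal" dataset, draws synthetic values from them, and runs the same analysis code on both versions; the columns and models are invented for illustration and are not the Census Bureau's SSB methodology.

```python
# A deliberately naive sketch of the synthesize-then-validate idea described above.
# It is NOT the Census Bureau's SSB methodology; it only shows how modeled values
# can replace originals while an analyst's code runs identically on both versions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Pretend "internal" microdata held by the data provider (hypothetical columns).
internal = pd.DataFrame({
    "age": rng.integers(18, 80, size=1_000),
    "income": rng.lognormal(mean=10.5, sigma=0.6, size=1_000),
})

# Naive synthesizer: draw new values from simple per-column models fit to the
# internal data, ignoring relationships between columns.
synthetic = pd.DataFrame({
    "age": rng.integers(internal["age"].min(), internal["age"].max() + 1, size=1_000),
    "income": rng.lognormal(np.log(internal["income"]).mean(),
                            np.log(internal["income"]).std(), size=1_000),
})

def analysis(df: pd.DataFrame) -> float:
    """The external researcher's code: e.g., mean income of working-age adults."""
    return df.loc[df["age"] < 65, "income"].mean()

# Validation step: the provider runs the same code on the internal data and
# releases the cleared result so the researcher can check the synthetic estimate.
print("synthetic estimate:", round(analysis(synthetic)))
print("internal estimate: ", round(analysis(internal)))
```

Because the naive synthesizer ignores the relationship between age and income, the two estimates can diverge, which is exactly the kind of accuracy gap a validation system is designed to surface.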

Artificial Intelligence as a Catalyzer for Open Government Data Ecosystems: A Typological Theory Approach


Paper by Anthony Simonofski et al: “Artificial Intelligence (AI) within digital government has witnessed growing interest as it can improve governance processes and stimulate citizen engagement. Despite the rise of Generative AI, discussions on the fusion of AI with Open Government Data (OGD) remain limited to specific implementations and scattered across disciplines. Drawing on a synthesis of the literature through a systematic review, this study examines and structures how AI can enrich OGD initiatives. Employing a typological approach, ideal profiles of AI application within the OGD lifecycle are formalized, capturing varied roles across the portal and ecosystem perspectives. The resulting conceptual framework identifies eight ideal types of AI applications for OGD: AI as Portal Curator, Explorer, Linker, and Monitor, and AI as Ecosystem Data Retriever, Connecter, Value Developer and Engager. This theoretical foundation shows that some types remain under-investigated and will inform policymakers, practitioners, and researchers in leveraging AI to cultivate OGD ecosystems…(More)”.

Visualizing Ship Movements with AIS Data


Article by Jon Keegan: “As we run, drive, bike, and fly, humans leave behind telltale tracks of movement on Earth—if you know where to look. Physical tracks, thermal signatures, and chemical traces can reveal where we’ve been. But another type of breadcrumb trail comes from the radio signals emitted by the cars, planes, trains, and boats we use.

Just as ADS-B transmitters on airplanes provide real-time location, identification, speed, and orientation data, the AIS (Automatic Identification System) performs the same function for ships at sea.

Operating at 161.975 and 162.025 MHz, AIS transmitters broadcast a ship’s identification number, name, call sign, length, beam, type, and antenna location every six minutes. Ship location, position timestamp, and direction are transmitted more frequently. The primary purpose of AIS is maritime safety—it helps prevent collisions, assists in rescues, and provides insight into the impact of ship traffic on marine life.
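
The paragraph above effectively describes two kinds of AIS content: slowly changing static and voyage data, and more frequent position reports. The sketch below groups the fields that way; the names are simplified for illustration and do not follow the exact AIS message specification.

```python
# Illustrative grouping of the AIS content described above. Field names are
# simplified placeholders, not the formal AIS message definitions.
from dataclasses import dataclass

@dataclass
class StaticReport:            # broadcast roughly every six minutes
    mmsi: int                  # ship identification number
    name: str
    call_sign: str
    ship_type: str
    length_m: float
    beam_m: float
    antenna_position: str      # where the antenna sits on the hull

@dataclass
class PositionReport:          # broadcast more frequently
    mmsi: int
    timestamp: str             # position timestamp
    lat: float
    lon: float
    speed_knots: float
    course_deg: float
```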

Unlike ADS-B in a plane, AIS can only be turned off in rare circumstances. The result of this is a treasure trove of fascinating ship movement data. You can even watch live ship data on sites like Vessel Finder.

Using NOAA’s “Marine Cadastre” tool, you can download 16 years’ worth of detailed daily ship movements (filtered to the minute), in addition to “transit count” maps generated from a year’s worth of data to show each ship’s accumulated paths…(More)”.
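
As a minimal sketch of what can be done with such an extract, the snippet below loads one day of AIS records and draws each vessel's track. The column names (MMSI, BaseDateTime, LAT, LON) follow the pattern commonly seen in Marine Cadastre CSV files but should be checked against the actual download; the filename is hypothetical.

```python
# A minimal sketch of turning downloaded AIS records into ship tracks.
import pandas as pd
import matplotlib.pyplot as plt

ais = pd.read_csv(
    "ais_daily_extract.csv",                      # hypothetical downloaded file
    usecols=["MMSI", "BaseDateTime", "LAT", "LON"],
    parse_dates=["BaseDateTime"],
)

# Sort each vessel's position reports by time so the points connect into a track.
ais = ais.sort_values(["MMSI", "BaseDateTime"])

fig, ax = plt.subplots(figsize=(8, 8))
for mmsi, track in ais.groupby("MMSI"):
    ax.plot(track["LON"], track["LAT"], linewidth=0.3, alpha=0.5)

ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Ship tracks from one day of AIS position reports")
plt.show()
```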

Data Privacy for Record Linkage and Beyond


Paper by Shurong Lin & Eric Kolaczyk: “In a data-driven world, two prominent research problems are record linkage and data privacy, among others. Record linkage is essential for improving decision-making by integrating information of the same entities from different sources. On the other hand, data privacy research seeks to balance the need to extract accurate insights from data with the imperative to protect the privacy of the entities involved. Inevitably, data privacy issues arise in the context of record linkage. This article identifies two complementary aspects at the intersection of these two fields: (1) how to ensure privacy during record linkage and (2) how to mitigate privacy risks when releasing the analysis results after record linkage. We specifically discuss privacy-preserving record linkage, differentially private regression, and related topics…(More)”.
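
For readers unfamiliar with the differential-privacy side, the sketch below shows the textbook Laplace mechanism applied to a single count, such as the number of records matched during a (hypothetical) linkage step; it is a generic building block, not the authors' specific method.

```python
# Generic illustration of the Laplace mechanism, one building block behind
# differentially private releases. Textbook sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1 (sensitivity = 1),
    so Laplace noise with scale 1/epsilon suffices.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: number of records matched during a (hypothetical) linkage step.
true_matches = 1_280
print(dp_count(true_matches, epsilon=0.5))   # noisier, stronger privacy
print(dp_count(true_matches, epsilon=5.0))   # less noisy, weaker privacy
```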

Mapping AI Narratives at the Local Level


Article for Urban AI: “In May 2024, Nantes Métropole (France) launched a pioneering initiative titled “Nantes Débat de l’IA” (meaning “Nantes is Debating AI”). This year-long project is designed to coordinate a programme of events dedicated to artificial intelligence (AI) across the metropolitan area. The primary aim of this initiative is to foster dialogue among local stakeholders, enabling them to engage in meaningful discussions, exchange ideas, and develop a shared understanding of AI’s impact on the region.

Over the course of one year, the Nantes metropolitan area will host around sixty events focused on AI, bringing together a wide range of participants, including policymakers, businesses, researchers, and civil society. These events provide a platform for these diverse actors to share their perspectives, debate critical issues, and explore the potential opportunities and challenges AI presents. Through this collaborative process, the goal is to cultivate a common culture around AI, ensuring that all relevant voices are heard as the city navigates the integration of this transformative technology…(More)”.