Proactive Mapping to Manage Disaster


Article by Andrew Mambondiyani: “…In March 2019, Cyclone Idai ravaged Zimbabwe, killing hundreds of people and leaving a trail of destruction. The Global INFORM Risk Index data shows that Zimbabwe is highly vulnerable to extreme climate-related events like floods, cyclones, and droughts, which in turn destroy infrastructure, displace people, and result in loss of lives and livelihoods.

Severe weather events like Idai have exposed the shortcomings of Zimbabwe’s traditional disaster-management system, which was devised to respond to environmental disasters by providing relief and rehabilitation of infrastructure and communities. After Idai, a team of climate-change researchers from three Zimbabwean universities and the local NGO DanChurchAid (DCA) concluded that the nation must adopt a more proactive approach by establishing an early-warning system to better prepare for and thereby prevent significant damage and death from such disasters.

In response to these findings, the Open Mapping Hub—Eastern and Southern Africa (ESA Hub)—launched a program in 2022 to develop an anticipatory-response approach in Zimbabwe. The ESA Hub is a regional NGO based in Kenya created by the Humanitarian OpenStreetMap Team (HOT), an international nonprofit that uses open-mapping technology to reduce environmental disaster risk. One of HOT’s four global hubs and its first in Africa, the ESA Hub was created in 2021 to facilitate the aggregation, utilization, and dissemination of high-quality open-mapping data across 23 countries in Eastern and Southern Africa. Open-source expert Monica Nthiga leads the hub’s team of 13 experts in mapping, open data, and digital content. The team collaborates with community-based organizations, humanitarian organizations, governments, and UN agencies to meet their specific mapping needs to best anticipate future climate-related disasters.

“The ESA Hub’s [anticipatory-response] project demonstrates how preemptive mapping can enhance disaster preparedness and resilience planning,” says Wilson Munyaradzi, disaster-services manager at the ESA Hub.

Open-mapping tools and workflows enable the hub to collect geospatial data to be stored, edited, and reviewed for quality assurance prior to being shared with its partners. “Geospatial data has the potential to identify key features of the landscape that can help plan and prepare before disasters occur so that mitigation methods are put in place to protect lives and livelihoods,” Munyaradzi says…(More)”.

The Emerging Age of AI Diplomacy


Article by Sam Winter-Levy: “In a vast conference room, below chandeliers and flashing lights, dozens of dancers waved fluorescent bars in an intricately choreographed routine. Green Matrix code rained down in the background on a screen that displayed skyscrapers soaring from a desert landscape. The world was witnessing the emergence of “a sublime and transcendent entity,” a narrator declared: artificial intelligence. As if to highlight AI’s transformative potential, a digital avatar—Artificial Superintelligence One—approached a young boy and together they began to sing John Lennon’s “Imagine.” The audience applauded enthusiastically. With that, the final day dawned on what one government minister in attendance described as the “world’s largest AI thought leadership event.”

This surreal display took place not in Palo Alto or Menlo Park but in Riyadh, Saudi Arabia, at the third edition of the city’s Global AI Summit, in September of this year. In a cavernous exhibition center next to the Ritz Carlton, where Crown Prince Mohammed bin Salman imprisoned hundreds of wealthy Saudis on charges of corruption in 2017, robots poured tea and mixed drinks. Officials in ankle-length white robes hailed Saudi Arabia’s progress on AI. American and Chinese technology companies pitched their products and announced memorandums of understanding with the government. Attendants distributed stickers that declared, “Data is the new oil.”

For Saudi Arabia and its neighbor, the United Arab Emirates (UAE), AI plays an increasingly central role in their attempts to transform their oil wealth into new economic models before the world transitions away from fossil fuels. For American AI companies, hungry for capital and energy, the two Gulf states and their sovereign wealth funds are tantalizing partners. And some policymakers in Washington see a once-in-a-generation opportunity to promise access to American computing power in a bid to lure the Gulf states away from China and deepen an anti-Iranian coalition in the Middle East….The two Gulf states’ interest in AI is not new, but it has intensified in recent months. Saudi Arabia plans to create a $40 billion fund to invest in AI and has set up Silicon Valley–inspired startup accelerators to entice coders to Riyadh. In 2019, the UAE launched the world’s first university dedicated to AI, and since 2021, the number of AI workers in the country has quadrupled, according to government figures. The UAE has also released a series of open-source large language models that it claims rival those of Google and Meta, and earlier this year it launched an investment firm focused on AI and semiconductors that could surpass $100 billion in assets under management…(More)”.

Nature-rich nations push for biodata payout


Article by Lee Harris: “Before the current generation of weight-loss drugs, there was hoodia, a cactus that grows in southern Africa’s Kalahari Desert, and which members of the region’s San tribe have long used to stave off hunger. UK-based Phytopharm licensed the active ingredient in the cactus in 1996, and made numerous attempts to commercialise weight-loss products derived from it.

The company won licensing deals with Pfizer and Unilever, but drew outrage from campaigners who argued that the company was ripping off indigenous groups that had made the discovery. Indignation grew after the chief executive said it could not compensate local tribes because “the people who discovered the plant have disappeared”. (They had not.)

This is just one example of companies using biological resources discovered in other countries for financial gain. The UN has attempted to set fairer terms with treaties such as the 1992 Convention on Biological Diversity, which deals with the sharing of genetic resources. But this approach has been seen by many developing countries as unsatisfactory. And earlier tools governing trade in plants and microbes may become less useful as biological data is now frequently transmitted in the form of so-called digital sequence information — the genetic code derived from those physical resources.

Now, the UN is working on a fund to pay stewards of biodiversity — notably communities in lower-income countries — for discoveries made with genetic data from their ecosystems. The mechanism was established in 2022 as part of the Conference of Parties to the UN Convention on Biological Diversity, a sister process to the climate “COP” initiative. But the question of how it will be governed and funded will be on the table at the October COP16 summit in Cali, Colombia.

If such a fund comes to fruition — a big “if” — it could raise billions for biodiversity goals. The sectors that depend on this genetic data — notably, pharmaceuticals, biotech and agribusiness — generate revenues exceeding $1tn annually, and African countries plan to push for these sectors to contribute 1 per cent of all global retail sales to the fund, according to Bloomberg.

There’s reason to temper expectations, however. Such a fund would lack the power to compel national governments or industries to pay up. Instead, the strategy is focused on raising ambition — and public pressure — for key industries to make voluntary contributions…(More)”.

The New Artificial Intelligentsia


Essay by Ruha Benjamin: “In the Fall of 2016, I gave a talk at the Institute for Advanced Study in Princeton titled “Are Robots Racist?” Headlines such as “Can Computers Be Racist? The Human-Like Bias of Algorithms,” “Artificial Intelligence’s White Guy Problem,” and “Is an Algorithm Any Less Racist Than a Human?” had captured my attention in the months before. What better venue to discuss the growing concerns about emerging technologies, I thought, than an institution established during the early rise of fascism in Europe, which once housed intellectual giants like J. Robert Oppenheimer and Albert Einstein, and prides itself on “protecting and promoting independent inquiry.”

My initial remarks focused on how emerging technologies reflect and reproduce social inequities, using specific examples of what some termed “algorithmic discrimination” and “machine bias.” A lively discussion ensued. The most memorable exchange was with a mathematician who politely acknowledged the importance of the issues I raised but then assured me that “as AI advances, it will eventually show us how to address these problems.” Struck by his earnest faith in technology as a force for good, I wanted to sputter, “But what about those already being harmed by the deployment of experimental AI in healthcare, education, criminal justice, and more—are they expected to wait for a mythical future where sentient systems act as sage stewards of humanity?”

Fast-forward almost 10 years, and we are living in the imagination of AI evangelists racing to build artificial general intelligence (AGI), even as they warn of its potential to destroy us. This gospel of love and fear insists on “aligning” AI with human values to rein in these digital deities. OpenAI, the company behind ChatGPT, echoed the sentiment of my IAS colleague: “We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems.” They envision a time when, eventually, “our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study, and develop better alignment techniques than we have now. They will work together with humans to ensure that their own successors are more aligned with humans.” For many, this is not reassuring…(More)”.

The Critical Role of Questions in Building Resilient Democracies


Article by Stefaan G. Verhulst, Hannah Chafetz, and Alex Fischer: “Asking questions in new and participatory ways can complement advancements in data science and AI while enabling more inclusive and more adaptive democracies…

Yet a crisis, as the saying goes, always contains kernels of opportunity. Buried within our current dilemma—indeed, within one of the underlying causes of it—is a potential solution. Democracies are resilient and adaptive, not static. And importantly, data and artificial intelligence (AI), if implemented responsibly, can contribute to making them more resilient. Technologies such as AI-supported digital public squares and crowd-sourcing are examples of how generative AI and large language models can improve community connectivity, societal health, and public services. Communities can leverage these tools for democratic participation and democratizing information. Through this period of technological transition, policy makers and communities are imagining how digital technologies can better engage our collective intelligence

Achieving this requires new tools and approaches, specifically the collective process of asking better questions.

Formulated inclusively, questions help establish shared priorities and impart focus, efficiency, and equity to public policy. For instance, school systems can identify indicators and patterns of experiences, such as low attendance rates, that signal a student is at risk of not completing school. However, they rarely ask the positive outlier question of what enables some at-risk students to overcome challenges and finish school. Is it a good teacher relationship, an after-school program, the support of a family member, or a combination of these and other factors? Asking outlier (and orphan, or overlooked and neglected) questions can help refocus programs and guide policies toward areas with the highest potential for impact.

Not asking the right questions can also have adverse effects. For example, many city governments have not asked whether and how people of different genders, in different age groups, or with different physical mobility needs experience local public transportation systems. Creating the necessary infrastructure for people with a variety of needs to travel safely and efficiently increases health and well-being. Questions like whether sidewalks are big enough for strollers and whether there is sufficient public transport near schools can help spotlight areas for improvement, and show where age- or gender-disaggregated data is needed most…(More)”.

How elderly dementia patients are unwittingly fueling political campaigns


Article by Blake Ellis, et al: “The 80-year-old communications engineer from Texas had saved for decades, driving around in an old car and buying clothes from thrift stores so he’d have enough money to enjoy his retirement years.

But as dementia robbed him of his reasoning abilities, he began making online political donations over and over again — eventually telling his son he believed he was part of a network of political operatives communicating with key Republican leaders. In less than two years, the man became one of the country’s largest grassroots supporters of the Republican Party, ultimately giving away nearly half a million dollars to former President Donald Trump and other candidates. Now, the savings account he spent his whole life building is practically empty.

The story of this unlikely political benefactor is one of many playing out across the country.

More than 1,000 reports filed with government agencies and consumer advocacy groups reviewed by CNN, along with an analysis of campaign finance data and interviews with dozens of contributors and their family members, show how deceptive political fundraisers have victimized hundreds of elderly Americans and misled those battling dementia or other cognitive impairments into giving away millions of dollars — far more than they ever intended. Some unintentionally joined the ranks of the top grassroots political donors in the country as they tapped into retirement savings and went into debt, contributing six-figure sums through thousands of transactions…(More)”.

AI in the Public Service: Here for Good


Special Issue of Ethos: “…For the public good, we want AI to help unlock and drive transformative impact, in areas where there is significant potential for breakthroughs, such as cancer research, material sciences or climate change. But we also want to raise the level of generalised adoption. For the user base in the public sector, we want to learn how best to use this new tool in ways that can allow us to not only do things better, but do better things.

This is not to suggest that AI is always the best solution: it is one of many tools in the digital toolkit. Sometimes, simpler computational methods will suffice. That said, AI represents new, untapped potential for the Public Service to enhance our daily work and deliver better outcomes that ultimately benefit Singapore and Singaporeans….

To promote general adoption, we made available AI tools, such as Pair, SmartCompose, and AIBots. They are useful to a wide range of public officers for many general tasks. Other common tools of this nature may include chatbots to support customer-facing and service delivery needs, translation, summarisation, and so on. Much of what public officers do involves words and language, which is an area that LLM-based AI technology can now help with.

Beyond improving the productivity of the Public Service, the real value lies in AI’s broader ability to transform our business and operating models to deliver greater impact. In driving adoption, we want to encourage public officers to experiment with different approaches to figure out where we can create new value by doing things differently, rather than just settle for incremental value from doing things the same old ways using new tools.

For example, we have seen how AI and automation have transformed language translation, software engineering, identity verification and border clearance. This is just the beginning and much more is possible in many other domains…(More)”.

AI helped Uncle Sam catch $1 billion of fraud in one year. And it’s just getting started


Article by Matt Egan: “The federal government’s bet on using artificial intelligence to fight financial crime appears to be paying off.

Machine learning AI helped the US Treasury Department to sift through massive amounts of data and recover $1 billion worth of check fraud in fiscal 2024 alone, according to new estimates shared first with CNN. That’s nearly triple what the Treasury recovered in the prior fiscal year.

“It’s really been transformative,” Renata Miskell, a top Treasury official, told CNN in a phone interview.

“Leveraging data has upped our game in fraud detection and prevention,” Miskell said.

The Treasury Department credited AI with helping officials prevent and recover more than $4 billion worth of fraud overall in fiscal 2024, a six-fold spike from the year before.

US officials quietly started using AI to detect financial crime in late 2022, taking a page out of what many banks and credit card companies already do to stop bad guys.

The goal is to protect taxpayer money against fraud, which spiked during the Covid-19 pandemic as the federal government scrambled to disburse emergency aid to consumers and businesses.

To be sure, Treasury is not using generative AI, the kind that has captivated users of OpenAI’s ChatGPT and Google’s Gemini by generating images, crafting song lyrics and answering complex questions (even though it still sometimes struggles with simple queries)…(More)”.
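The article describes Treasury using machine learning to sift large volumes of payment data for check fraud, without detailing the method. As a minimal illustrative sketch only — not Treasury's actual system — the statistical core of many such screens is flagging transactions that deviate sharply from the norm; the dollar figures and threshold below are hypothetical:

```python
# Illustrative sketch (not Treasury's actual pipeline): flag anomalous
# check amounts with a simple z-score rule, the kind of statistical
# screen that underpins many machine-learning fraud systems.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.5):
    """Return indices of amounts that deviate sharply from the mean."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

# Hypothetical data: payments cluster near $1,200; one $250,000 check.
checks = [1180, 1225, 1199, 1210, 1250, 1190, 250000, 1205, 1230, 1215]
print(flag_anomalies(checks))  # index of the outlying check
```

Real systems layer many more signals (payee history, velocity, image forensics) on top of this kind of outlier logic, but the principle — ranking transactions by how far they sit from expected behavior — is the same.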

The Number


Article by John Lanchester: “…The other pieces published in this series have human protagonists. This one doesn’t: The main character of this piece is not a person but a number. Like all the facts and numbers cited above, it comes from the federal government. It’s a very important number, which has for a century described economic reality, shaped political debate and determined the fate of presidents: the consumer price index.

The CPI is crucial for multiple reasons, one of which lies not in what it is but in what it represents. The gathering of data exemplifies our ambition for a stable, coherent society. The United States is an Enlightenment project based on the supremacy of reason; on the idea that things can be empirically tested; that there are self-evident truths; that liberty, progress and constitutional government walk arm in arm and together form the recipe for the ideal state. Statistics — numbers created by the state to help it understand itself and ultimately to govern itself — are not some side effect of that project but a central part of what government is and does…(More)”.

WikiProject AI Cleanup


Article by Emanuel Maiberg: “A group of Wikipedia editors have formed WikiProject AI Cleanup, “a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia.”

The group’s goal is to protect one of the world’s largest repositories of information from the same kind of misleading AI-generated information that has plagued Google search results, books sold on Amazon, and academic journals.

“A few of us had noticed the prevalence of unnatural writing that showed clear signs of being AI-generated, and we managed to replicate similar ‘styles’ using ChatGPT,” Ilyas Lebleu, a founding member of WikiProject AI Cleanup, told me in an email. “Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we quickly wanted to formalize into an organized project to compile our findings and techniques.”…(More)”.
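The editors describe spotting generated articles by their telltale catchphrases. As a minimal sketch of that kind of heuristic — the phrase list below is a hypothetical example, not WikiProject AI Cleanup's actual criteria — a simple case-insensitive scan is enough to surface candidates for human review:

```python
# Hypothetical phrase list for illustration; real detection relies on
# editor judgment, not any fixed set of strings.
AI_CATCHPHRASES = [
    "as an ai language model",
    "it is important to note that",
    "in the ever-evolving landscape",
]

def catchphrase_hits(text):
    """Return which catchphrases appear in the text (case-insensitive)."""
    lowered = text.lower()
    return [p for p in AI_CATCHPHRASES if p in lowered]

sample = ("In the ever-evolving landscape of online encyclopedias, "
          "it is important to note that sourcing matters.")
print(catchphrase_hits(sample))
```

A match is only a signal to look closer, not proof of machine authorship — which is why the project routes flagged articles to human editors rather than deleting them automatically.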