The Global A.I. Divide


Article by Adam Satariano and Paul Mozur: “Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.

Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when completed as soon as next year.

Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.

“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”

Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.

The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power.”…(More)”.

ChatGPT Has Already Polluted the Internet So Badly That It’s Hobbling Future AI Development


Article by Frank Landymore: “The rapid rise of ChatGPT — and the cavalcade of competitors’ generative models that followed suit — has polluted the internet with so much useless slop that it’s already kneecapping the development of future AI models.

As the AI-generated data clouds the human creations that these models are so heavily dependent on amalgamating, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is itself an ersatz AI creation. 

Repeat this process enough, and AI development begins to resemble a maximalist game of telephone in which not only is the quality of the content being produced diminished, resembling less and less what it’s originally supposed to be replacing, but in which the participants actively become stupider. The industry likes to describe this scenario as AI “model collapse.”
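
The dynamic is easy to see in a toy simulation (a minimal sketch, not from the article, in which a fitted Gaussian stands in for a generative model). Each “generation” is trained only on a finite sample drawn from its predecessor, so estimation error compounds and the spread of what the model can reproduce tends to shrink, losing the tails of the original distribution first:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Human" data: the distribution the first model is trained on.
human_data = rng.normal(loc=0.0, scale=1.0, size=10_000)
mu, sigma = human_data.mean(), human_data.std()

for generation in range(1, 31):
    # Each new "model" sees only a finite sample generated by its predecessor,
    # so estimation error compounds from one generation to the next.
    synthetic = rng.normal(mu, sigma, size=200)
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Across generations the estimated std tends to drift downward and the mean
# wanders: the rare, tail-end material is the first thing to disappear.
```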

As a consequence, the finite amount of data predating ChatGPT’s rise becomes extremely valuable. In a new feature, The Register likens this to the demand for “low-background steel,” or steel that was produced before the detonation of the first nuclear bombs, starting in July 1945 with the US’s Trinity test.

Just as the explosion of AI chatbots has irreversibly polluted the internet, so did the detonation of the atom bomb release radionuclides and other particulates that have seeped into virtually all steel produced thereafter. That makes modern metals unsuitable for use in some highly sensitive scientific and medical equipment. And so, what’s old is new: a major source of low-background steel, even today, is WW1 and WW2 era battleships, including a huge naval fleet that was scuttled by German Admiral Ludwig von Reuter in 1919…(More)”.

National engagement on public trust in data use for single patient record and GP health record published


HTN Article: “A large-scale public engagement report commissioned by NHSE on building and maintaining public trust in data use across health and care has been published, focusing on the approach to creating a single patient record and the secondary use of GP data.

It noted “relief” and “enthusiasm” from participants around not having to repeat their health history when interacting with different parts of the health and care system, and highlighted concerns about data accuracy, privacy, and security.

120 participants were recruited for tier one, with 98 remaining by the end, for 15 hours of deliberation over three days in locations including Liverpool, Leicester, Portsmouth, and South London. Inclusive engagement for tier two recruited 76 people from “seldom heard groups” such as those with health needs or socially marginalised groups for interviews and small group sessions. A nationally representative ten-minute online survey with 2,000 people was also carried out in tier three.

“To start with, the concept of a single patient record was met with relief and enthusiasm across Tier 1 and Tier 2 participants,” according to the report….

When it comes to GP data, participants were “largely unaware” of secondary uses, but initially expressed comfort in the idea of it being used for saving lives, improving care, prevention, and efficiency in delivery of services. Concerns were broadly similar to those about the single patient record: data breaches, incorrect data, misuse, sensitivity of data being shared, bias against individuals, and the potential for re-identification. Some participants felt GP data should be treated differently because “it is likely to contain more intimate information”, posing greater risk to the individual patient if data were to be misused. Others felt it should be included alongside secondary care data to ensure a “comprehensive dataset”.

Participants were “reassured” overall by safeguards in place such as de-identification, staff training in data handling and security, and data regulation such as GDPR and the Data Protection Act. “There was a widespread feeling among Tier 1 and Tier 2 participants that the current model of the GP being the data controller for both direct care and secondary uses placed too much of a burden on GPs when it came to how data is used for secondary purposes,” findings show. “They wanted to see a new model which would allow for greater consistency of approach, transparency, and accountability.” Tier one participants suggested this could be a move to national or regional decision-making on secondary use. Tier three participants who only engaged with the topic online were “more resistant” to moving away from GPs as sole data controllers, with the report stating: “This greater reluctance to change demonstrates the need for careful communication with the public about this topic as changes are made, and continued involvement of the public.”…(More)”.

Disappearing people: A global demographic data crisis threatens public policy


Article by Jessica M. Espey, Andrew J. Tatem, and Dana R. Thomson: “Every day, decisions that affect our lives—such as where to locate hospitals and how to allocate resources for schools—depend on knowing how many people live where and who they are; for example, their ages, occupations, living conditions, and needs. Such core demographic data in most countries come from a census, a count of the population usually conducted every 10 years. But something alarming is happening to many of these critical data sources. As widely discussed at the United Nations (UN) Statistical Commission meeting in New York in March, fewer countries have managed to complete a census in recent years. And even when they are conducted, censuses have been shown to undercount members of certain groups in important ways. Redressing this predicament requires investment and technological solutions alongside extensive political outreach, citizen engagement, and new partnerships…(More)”.

Why PeaceTech must be the next frontier of innovation and investment


Article by Stefaan Verhulst and Artur Kluz: “…amidst this frenzy, a crucial question is being left unasked: Can technology be used not just to win wars, but to prevent them and save people’s lives?

There is an emerging field that dares to pose this question—PeaceTech. It is the use of technology to save human lives, prevent conflict, de-escalate violence, rebuild fractured communities, and secure fragile peace in post-conflict environments.

From early warning systems that predict outbreaks of violence, to platforms ensuring aid transparency, and mobile tools connecting refugees to services: PeaceTech is real, it works—and it is radically underfunded.

Unlike the vast sums pouring into defense startups, peacebuilding efforts, including PeaceTech organizations and ventures, struggle for scraps. In 2020, the United Nations Secretary-General announced an ambitious goal of raising $1.5 billion in peacebuilding support over seven years. In contrast, private investment in defense tech crossed $34 billion in 2023 alone.

Why is PeaceTech so neglected?

One reason PeaceTech is so neglected is cultural: in the tech world, “peace” can seem abstract or idealistic—soft power in a world of hard tech. In reality, peace is not soft; it is among the hardest, most complex challenges of our time. Peace requires systemic thinking, early intervention, global coordination, and a massive infrastructure of care, trust, and monitoring. Maintaining peace in a hyper-polarized, technologically complex world is a feat of engineering, diplomacy, and foresight.

And it’s a business opportunity. According to the Institute for Economics and Peace, violence costs the global economy over $17 trillion per year—about 13% of global GDP. Even modest improvements in peace would unlock billions in economic value.

Consider the peace dividend from predictive analytics that can help governments or international organizations intervene or mediate before conflict breaks out, or AI-powered verification tools to enforce ceasefires and disinformation controls. PeaceTech, if scaled, could become a multibillion-dollar market—and a critical piece of the security architecture of the future…(More)”. See also Kluz Prize for PeaceTech (Applications Open)

DeepSeek Inside: Origins, Technology, and Impact


Article by Michael A. Cusumano: “The release of DeepSeek V3 and R1 in January 2025 caused steep declines in the stock prices of companies that provide generative artificial intelligence (GenAI) infrastructure technology and datacenter services. These two large language models (LLMs) came from a little-known Chinese startup with approximately 200 employees versus at least 3,500 for industry-leader OpenAI. DeepSeek seemed to have developed this powerful technology much more cheaply than previously thought possible. If true, DeepSeek had the potential to disrupt the economics of the entire GenAI ecosystem and the dominance of U.S. companies ranging from OpenAI to Nvidia.

DeepSeek-R1 defines itself as “an artificial intelligence language model developed by OpenAI, specifically based on the generative pre-trained transformer (GPT) architecture.” Here, DeepSeek acknowledges that the transformer researchers (who published their landmark paper while at Google in 2017) and OpenAI developed its basic technology. Nonetheless, V3 and R1 display impressive skills in neural-network system design, engineering, and optimization, and DeepSeek’s publications provide rare insights into how the technology actually works. This column reviews, for the non-expert reader, what we know about DeepSeek’s origins, technology, and impact so far…(More)”.

The war over the peace business


Article by Tekendra Parmar: “At the second annual AI+ Expo in Washington, DC, in early June, war is the word of the day.

As a mix of Beltway bureaucrats, military personnel, and Washington’s consultant class peruse the expansive Walter E. Washington Convention Center, a Palantir booth showcases its latest in data-collection suites for “warfighters.” Lockheed Martin touts the many ways it is implementing AI throughout its weaponry systems. On the soundstage, the defense tech darling Mach Industries is selling its newest uncrewed aerial vehicles. “We’re living in a world with great-power competition,” the presenter says. “We can’t rule out the possibility of war — but the best way to prevent a war is deterrence,” he says, flanked by videos of drones flying through what looked like the rugged mountains and valleys of Kandahar.

Hosted by the Special Competitive Studies Project, a think tank led by former Google CEO Eric Schmidt, the expo says it seeks to bridge the gap between Silicon Valley entrepreneurs and Washington policymakers to “strengthen” America and its allies’ “competitiveness in critical technologies.”

One floor below, a startup called Anadyr Horizon is making a very different sales pitch, for software that seeks to prevent war rather than fight it: “Peace tech,” as the company’s cofounder Arvid Bell calls it. Dressed in white khakis and a black pinstripe suit jacket with a dove and olive branch pinned to his lapel (a gift from his husband), the former Harvard political scientist begins by noting that Russia’s all-out invasion of Ukraine had come as a surprise to many political scientists. But his AI software, he says, could predict it.

Long the domain of fantasy and science fiction, the idea of forecasting conflict has now become a serious pursuit. In Isaac Asimov’s 1950s “Foundation” series, the main character develops an algorithm that allows him to predict the decline of the Galactic Empire, angering its rulers and forcing him into exile. During the coronavirus pandemic, the US State Department experimented with AI fed with Twitter data to predict “COVID cases” and “violent events.” In its AI audit two years ago, the State Department revealed that it started training AI on “open-source political, social, and economic datasets” to predict “mass civilian killings.” The UN is also said to have experimented with AI to model the war in Gaza…(More)”. See also Kluz Prize for PeaceTech (Applications Open)

AI is supercharging war. Could it also help broker peace?


Article by Tina Amirtha: “Can we measure what is in our hearts and minds, and could it help us end wars any sooner? These are the questions that consume entrepreneur Shawn Guttman, a Canadian émigré who recently gave up his yearslong teaching position in Israel to accelerate a path to peace—using an algorithm.

Living some 75 miles north of Tel Aviv, Guttman is no stranger to the uncertainties of conflict. Over the past few months, miscalculated drone strikes and imprecise missile targets—some intended for larger cities—have occasionally landed dangerously close to his town, sending him to bomb shelters more than once.

“When something big happens, we can point to it and say, ‘Right, that happened because five years ago we did A, B, and C, and look at its effect,’” he says over Google Meet from his office, following a recent trip to the shelter. Behind him, souvenirs from the 1979 Egypt-Israel and 1994 Israel-Jordan peace treaties are visible. “I’m tired of that perspective.”

The startup he cofounded, Didi, is taking a different approach. Its aim is to analyze data across news outlets, political discourse, and social media to identify opportune moments to broker peace. Inspired by political scientist I. William Zartman’s “ripeness” theory, the algorithm—called the Ripeness Index—is designed to tell negotiators, organizers, diplomats, and nongovernmental organizations (NGOs) exactly when conditions are “ripe” to initiate peace negotiations, build coalitions, or launch grassroots campaigns.
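
Didi has not published how the Ripeness Index is computed, so the sketch below is purely illustrative: the signal names beyond Zartman’s two core ripeness conditions (a mutually hurting stalemate and a perceived way out), the weights, and the threshold are all assumptions, not the company’s model. It only shows the general shape of a composite score that turns monitored signals into a single “ripeness” reading:

```python
from dataclasses import dataclass

@dataclass
class ConflictSignals:
    """Hypothetical signals, each normalized to [0, 1] for one monitoring window."""
    mutually_hurting_stalemate: float  # Zartman: both sides perceive a costly deadlock
    perceived_way_out: float           # Zartman: a negotiated exit seems plausible to the parties
    public_fatigue: float              # illustrative proxy: war-weariness in social media
    third_party_pressure: float        # illustrative proxy: mediator and ally pressure in news coverage

# Purely illustrative weights; a real system would calibrate against historical negotiations.
WEIGHTS = {
    "mutually_hurting_stalemate": 0.35,
    "perceived_way_out": 0.35,
    "public_fatigue": 0.15,
    "third_party_pressure": 0.15,
}

RIPE_THRESHOLD = 0.65  # assumed alert threshold, not a published figure

def ripeness_score(signals: ConflictSignals) -> float:
    """Weighted combination of the signals; higher values mean conditions look more 'ripe'."""
    return sum(weight * getattr(signals, name) for name, weight in WEIGHTS.items())

if __name__ == "__main__":
    week = ConflictSignals(0.7, 0.5, 0.8, 0.6)
    score = ripeness_score(week)
    print(f"ripeness score: {score:.2f} -> {'ripe' if score >= RIPE_THRESHOLD else 'not ripe'}")
```

In practice, any such weights and thresholds would have to be calibrated against historical cases of successful and failed negotiations, which is where the news, discourse, and social media data come in.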

During ongoing U.S.-led negotiations over the war in Gaza, both Israel and Hamas have entrenched themselves in opposing bargaining positions. Meanwhile, Israel’s traditional allies, including the U.S., have expressed growing frustration over the war and the dire humanitarian conditions in the enclave, where the threat of famine looms.

In Israel, Didi’s data is already informing grassroots organizations as they strategize which media outlets to target and how to time public actions, such as protests, in coordination with coalition partners. Guttman and his collaborators hope that eventually negotiators will use the model’s insights to help broker lasting peace.

Guttman’s project is part of a rising wave of so-called PeaceTech—a movement using technology to make negotiations more inclusive and data-driven. This includes AI from Hala Systems, which uses satellite imagery and data fusion to monitor ceasefires in Yemen and Ukraine. Another AI startup, Remesh, has been active across the Middle East, helping organizations of all sizes canvas key stakeholders. Its algorithm clusters similar opinions, giving policymakers and mediators a clearer view of public sentiment and division.
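
Remesh’s pipeline is proprietary, but clustering similar free-text opinions is a standard technique; the fragment below is a minimal sketch using TF-IDF features and k-means from scikit-learn, with invented example responses and an arbitrary cluster count:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented example responses to an open-ended question posed to participants.
opinions = [
    "A ceasefire must come before any talks.",
    "No talks can begin without a ceasefire in place.",
    "Humanitarian aid corridors should open immediately.",
    "Open the aid corridors now, before anything else.",
    "Prisoner exchanges would build trust between the sides.",
    "Start with prisoner exchanges to build trust.",
]

# Represent each response as a TF-IDF vector, then group similar responses.
vectors = TfidfVectorizer(stop_words="english").fit_transform(opinions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster_id in sorted(set(labels)):
    print(f"cluster {cluster_id}:")
    for text, label in zip(opinions, labels):
        if label == cluster_id:
            print("  -", text)
```

With such a tiny sample the grouping is only illustrative; production systems typically use richer sentence embeddings and choose the number of clusters adaptively, but the basic move is the same: vectorize the responses, group the similar ones, and summarize each group for mediators.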

A range of NGOs and academic researchers have also developed digital tools for peacebuilding. The nonprofit Computational Democracy Project created Pol.is, an open-source platform that enables citizens to crowdsource outcomes to public debates. Meanwhile, the Futures Lab at the Center for Strategic and International Studies built a peace agreement simulator, complete with a chart to track how well each stakeholder’s needs are met.

Guttman knows it’s an uphill battle. In addition to the ethical and privacy concerns of using AI to interpret public sentiment, PeaceTech also faces financial hurdles. These companies must find ways to sustain themselves amid shrinking public funding and a transatlantic surge in defense spending, which has pulled resources away from peacebuilding initiatives.

Still, Guttman and his investors remain undeterred. One way to view the opportunity for PeaceTech is by looking at the economic toll of war. In its Global Peace Index 2024, the Institute for Economics and Peace’s Vision of Humanity platform estimated that economic disruption due to violence and the fear of violence cost the world $19.1 trillion in 2023, or about 13 percent of global GDP. Guttman sees plenty of commercial potential in times of peace as well.

“Can we make billions of dollars,” Guttman asks, “and save the world—and create peace?”…(More)”. See also Kluz Prize for PeaceTech (Applications Open)

The Reenchanted World: On finding mystery in the digital age


Essay by Karl Ove Knausgaard: “…When Karl Marx and Friedrich Engels wrote about alienation in the 1840s—that’s nearly two hundred years ago—they were describing workers’ relationship with their work, but the consequences of alienation spread into their analysis to include our relationship to nature and to existence as such. One term they used was “loss of reality.” Society at that time was incomparably more brutal, the machines incomparably coarser, but problems such as economic inequality and environmental destruction have continued into our own time. If anything, alienation as Marx and Engels defined it has only increased.

Or has it? The statement “people are more alienated now than ever before in history” sounds false, like applying an old concept to a new condition. That is not really what we are, is it? If there is something that characterizes our time, isn’t it the exact opposite, that nothing feels alien?

Alienation involves a distance from the world, a lack of connection between it and us. What technology does is compensate for the loss of reality with a substitute. Technology calibrates all differences, fills in every gap and crack with images and voices, bringing everything close to us in order to restore the connection between ourselves and the world. Even the past, which just a few generations ago was lost forever, can be retrieved and brought back…(More)”.

Fixing the US statistical infrastructure


Article by Nancy Potok and Erica L. Groshen: “Official government statistics are critical infrastructure for the information age. Reliable, relevant statistical information helps businesses to invest and flourish; governments at the local, state, and national levels to make critical decisions on policy and public services; and individuals and families to invest in their futures. Yet surrounded by all manner of digitized data, one can still feel inadequately informed. A major driver of this disconnect in the US context is delayed modernization of the federal statistical system. The disconnect will likely worsen in coming months as the administration shrinks statistical agencies’ staffing, terminates programs (notably for health and education statistics), and eliminates unpaid external advisory groups. Amid this upheaval, might the administration’s appetite for disruption be harnessed to modernize federal statistics?

Federal statistics, one of the United States’ premier public goods, differ from privately provided data because they are privacy protected, aggregated to address relevant questions for decision-makers, constructed transparently, and widely available without a subscription. The private sector cannot be expected to adequately supply such statistical infrastructure. Yes, some companies collect and aggregate some economic data, such as credit card purchases and payroll information. But without strong underpinnings of a modern, federal information infrastructure, there would be large gaps in nationally consistent, transparent, trustworthy data. Furthermore, most private providers rely on public statistics for their internal analytics, to improve their products. They are among the many data users asking for more from statistical agencies…(More)”.