Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
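The 2030 projection is straightforward compound-growth arithmetic over the reported doubling times. A minimal sketch in Python, taking Colossus (March 2025) as the baseline and assuming a roughly 58-month horizon to early 2030 and ~1×10¹⁵ 16-bit FLOP/s per chip (the horizon and per-chip throughput are illustrative assumptions, not figures from the abstract):

```python
# Back-of-the-envelope extrapolation of the paper's reported doubling trends.
# Baseline (Colossus, March 2025) is from the abstract; the horizon and
# per-chip throughput are illustrative assumptions.

MONTHS = 58                         # assumed horizon: March 2025 -> early 2030

perf_factor = 2 ** (MONTHS / 9)     # performance doubles every 9 months
cost_factor = 2 ** (MONTHS / 12)    # hardware cost doubles every 12 months
power_factor = 2 ** (MONTHS / 12)   # power needs double every 12 months

baseline_flops = 200_000 * 1e15     # assumed ~1e15 16-bit FLOP/s per chip
baseline_cost = 7e9                 # $7B hardware cost
baseline_power_mw = 300             # 300 MW

print(f"2030 performance: {baseline_flops * perf_factor:.1e} FLOP/s")   # ~2e22
print(f"2030 hardware cost: ${baseline_cost * cost_factor / 1e9:.0f}B") # ~$200B
print(f"2030 power: {baseline_power_mw * power_factor / 1e3:.1f} GW")   # ~8.6 GW
```

At those rates, the abstract’s round numbers ($200 billion, 9 GW, 2×10²² FLOP/s) fall out of roughly 4.8 years of compounding on the March 2025 baseline.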
AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future
Article by Akash Kapur: “Step back from the day-to-day flurry surrounding AI, and a global divergence in narratives is becoming increasingly clear. In Silicon Valley, New York, and London, the conversation centers on the long-range pursuit of artificial general intelligence (AGI)—systems that might one day equal or surpass humans at almost everything. This is the moon-shot paradigm, fueled by multi-billion-dollar capital expenditure and almost metaphysical ambition.
In contrast, much of the Global South is converging on something more grounded: the search for near-term, proven use cases that can be deployed with today’s hardware, on limited budgets and bandwidth. Call it Applied AI, or AAI. This quest for applicability—and relevance—is more humble than AGI. Its yardstick for success is more measured, and certainly less existential. Rather than pose profound questions about the nature of consciousness and humanity, Applied AI asks questions like: Does the model fix a real-world problem? Can it run on patchy 4G, a mid-range GPU, or a refurbished phone? What new yield can it bring to farmers or fishermen, or which bureaucratic bottleneck can it cut?
One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: applications and products like these, tailored to specific sectors, are growing fast. In Africa, PlantVillage Nuru helps Kenyan farmers diagnose crop diseases entirely offline; South-Africa-based Lelapa AI is training “small language models” for at least 13 African languages; and Nigeria’s EqualyzAI runs chatbots that are trained to provide Hausa and Yoruba translations for customers…(More)”.
Tech: When Silicon Valley Remakes the World
Book by Olivier Alexandre: “Sometimes only an outsider can show how an industry works—and how that industry works upon the world. In Tech, sociologist Olivier Alexandre takes us on a revealing tour of Silicon Valley’s prominent personalities and vibrant networks to capture the way its denizens live, think, relate, and innovate, and how they shape the very code and conduct of business itself.
Even seasoned observers will gain insight into the industry’s singular milieu from Alexandre’s piercing eye. He spends as much time with Silicon Valley’s major players as with those who fight daily to survive within a system engineered for disruption. Embedded deep within the community, Alexandre accesses rooms shut tight to the public and reports back on the motivations, ambitions, and radical vision guiding tech companies. From the conquest of space to quantum computing, engineers have recast the infinitely large and small. Some scientists predict the end of death and the replacement of human beings with machines. But at what cost? Alexandre sees a shadow hanging over the Valley, jeopardizing its future and the economy made in its image. Critical yet fair, Tech illuminates anew a world of perpetual revolution…(More)”.
A.I. Is Starting to Wear Down Democracy
Article by Steven Lee Myers and Stuart A. Thompson: “Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and, for the first time, officials and experts said, begun to have an impact on election results.
Free and easy to use, A.I. tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.
The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal.
In Romania, a Russian influence operation using A.I. tainted the first round of last year’s presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which A.I. played a decisive role in the outcome. It is unlikely to be the last.
As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.
Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania’s capital, Bucharest, said there was no question that the technology was already “being used for obviously malevolent purposes” to manipulate voters.
“These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,” she said. “What can compete with this?”
In the unusually concentrated wave of elections that took place in 2024, A.I. was used in more than 80 percent of them, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland.
It documented 215 instances of A.I. use in elections that year, based on government statements, research and news reports. Already this year, A.I. has played a role in at least nine more major elections, from Canada to Australia…(More)”.
AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums
Article by Emanuel Maiberg: “The report, titled “Are AI Bots Knocking Cultural Heritage Offline?”, was written by Weinberg of the GLAM-E Lab, a joint initiative between the Centre for Science, Culture and the Law at the University of Exeter and the Engelberg Center on Innovation Law & Policy at NYU Law, which works with smaller cultural institutions and community organizations to build open access capacity and expertise. GLAM is an acronym for galleries, libraries, archives, and museums. The report is based on a survey of 43 institutions with open online resources and collections in Europe, North America, and Oceania. Respondents also shared data and analytics, and some followed up with individual interviews. The data is anonymized so that institutions could share information more freely, and to prevent AI bot operators from undermining their countermeasures.
Of the 43 respondents, 39 said they had experienced a recent increase in traffic. Twenty-seven of those 39 attributed the increase in traffic to AI training data bots, with an additional seven saying the AI bots could be contributing to the increase.
“Multiple respondents compared the behavior of the swarming bots to more traditional online behavior such as Distributed Denial of Service (DDoS) attacks designed to maliciously drive unsustainable levels of traffic to a server, effectively taking it offline,” the report said. “Like a DDoS incident, the swarms quickly overwhelm the collections, knocking servers offline and forcing administrators to scramble to implement countermeasures. As one respondent noted, ‘If they wanted us dead, we’d be dead.’”…(More)”
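The countermeasures the report alludes to are largely the same ones used against DDoS traffic: identify clients making unsustainable bursts of requests and throttle them. A minimal sketch of one common measure, a per-client token-bucket rate limiter; the rate, burst size, and client keying below are illustrative assumptions, not details from the report:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: each client may burst up to `capacity`
    requests, with tokens refilled at `rate` per second thereafter."""

    def __init__(self, rate: float = 2.0, capacity: float = 10.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # client -> tokens left
        self.last = defaultdict(time.monotonic)       # client -> last seen

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client]
        self.last[client] = now
        # Refill in proportion to elapsed time, capped at bucket capacity.
        self.tokens[client] = min(self.capacity,
                                  self.tokens[client] + elapsed * self.rate)
        if self.tokens[client] >= 1.0:
            self.tokens[client] -= 1.0
            return True
        return False   # over budget: e.g. serve HTTP 429 instead of the page

# Usage sketch: a swarm node burns its burst allowance, then gets throttled.
limiter = TokenBucket()
decisions = [limiter.allow("198.51.100.7") for _ in range(15)]
print(decisions)   # roughly ten True values, then False
```

Real deployments layer on more than this (robots.txt, user-agent checks, CDN-level filtering), but the report’s point stands: institutions built for human-scale traffic are now forced to run this kind of defense.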
Sustainable Development Report 2025
Report by the UN Sustainable Development Solutions Network (SDSN): “Ten years after the adoption of the Sustainable Development Goals (SDGs), progress remains alarmingly off-track, with less than 20% of targets projected to be achieved by 2030… The SDR includes the SDG Index and Dashboards, which rank all UN Member States on their performance across the 17 Goals, and this year’s report features a new Index (SDGi), which focuses on 17 headline indicators to track overall SDG progress over time… This year’s SDR highlights five key findings:
The Global Financial Architecture (GFA) must be urgently reformed to finance global public goods and achieve sustainable development. Roughly half the world’s population resides in countries that cannot adequately invest in sustainable development due to unsustainable debt burdens and limited access to affordable, long-term capital. Sustainable development is a high-return investment, yet the GFA continues to direct capital toward high-income countries instead of emerging market and developing economies (EMDEs), which offer stronger growth prospects and higher returns. Global public goods also remain significantly underfinanced. The upcoming FfD4 (the Fourth International Conference on Financing for Development) offers a critical opportunity for UN Member States to reform this system and ensure that international financing flows at scale to EMDEs to achieve sustainable development…
At the global level, SDG progress has stalled; none of the 17 Global Goals are on track, and only 17% of the SDG targets are on track to be achieved by 2030. Conflicts, structural vulnerabilities, and limited fiscal space continue to hinder progress, especially in EMDEs. The five targets showing significant reversal in progress since 2015 are: obesity rate (SDG 2), press freedom (SDG 16), sustainable nitrogen management (SDG 2), the red list index (SDG 15), and the corruption perception index (SDG 16). Conversely, many countries have made notable progress in expanding access to basic services and infrastructure, including mobile broadband use (SDG 9), access to electricity (SDG 7), internet use (SDG 9), the under-5 mortality rate (SDG 3), and neonatal mortality (SDG 3). However, future progress on many of these indicators, including health-related outcomes, is threatened by global tensions and the decline in international development finance.
Barbados leads again in UN-based multilateralism commitment, while the U.S. ranks last. The SDR 2025’s Index of countries’ support to UN-based multilateralism (UN-Mi) ranks countries based on their support for and engagement with the UN system. The top three countries most committed to UN multilateralism are: Barbados (#1), Jamaica (#2), and Trinidad and Tobago (#3). Among G20 nations, Brazil (#25) ranks highest, while Chile (#7) leads among OECD countries. In contrast, the U.S., which recently withdrew from the Paris Climate Agreement and the World Health Organization (WHO) and formally declared its opposition to the SDGs and the 2030 Agenda, ranks last (#193) for the second year in a row…(More)”
The Global A.I. Divide
Article by Adam Satariano and Paul Mozur: “Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.
Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when completed as soon as next year.
Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.
“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”
Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.
The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power”…(More)”.
The war over the peace business
Article by Tekendra Parmar: “At the second annual AI+ Expo in Washington, DC, in early June, war is the word of the day.
As a mix of Beltway bureaucrats, military personnel, and Washington’s consultant class peruse the expansive Walter E. Washington Convention Center, a Palantir booth showcases its latest in data-collection suites for “warfighters.” Lockheed Martin touts the many ways it is implementing AI throughout its weapons systems. On the soundstage, the defense tech darling Mach Industries is selling its newest uncrewed aerial vehicles. “We’re living in a world with great-power competition,” the presenter says. “We can’t rule out the possibility of war — but the best way to prevent a war is deterrence,” he says, flanked by videos of drones flying through what look like the rugged mountains and valleys of Kandahar.
Hosted by the Special Competitive Studies Project, a think tank led by former Google CEO Eric Schmidt, the expo says it seeks to bridge the gap between Silicon Valley entrepreneurs and Washington policymakers to “strengthen” America and its allies’ “competitiveness in critical technologies.”
One floor below, a startup called Anadyr Horizon is making a very different sales pitch, for software that seeks to prevent war rather than fight it: “Peace tech,” as the company’s cofounder Arvid Bell calls it. Dressed in white khakis and a black pinstripe suit jacket with a dove and olive branch pinned to his lapel (a gift from his husband), the former Harvard political scientist begins by noting that Russia’s all-out invasion of Ukraine had come as a surprise to many political scientists. But his AI software, he says, could predict it.
Long the domain of fantasy and science fiction, the idea of forecasting conflict has now become a serious pursuit. In Isaac Asimov’s 1950s “Foundation” series, the main character develops an algorithm that allows him to predict the decline of the Galactic Empire, angering its rulers and forcing him into exile. During the coronavirus pandemic, the US State Department experimented with AI fed with Twitter data to predict “COVID cases” and “violent events.” In its AI audit two years ago, the State Department revealed that it started training AI on “open-source political, social, and economic datasets” to predict “mass civilian killings.” The UN is also said to have experimented with AI to model the war in Gaza…(More)”. See also the Kluz Prize for PeaceTech (Applications Open).
Fixing the US statistical infrastructure
Article by Nancy Potok and Erica L. Groshen: “Official government statistics are critical infrastructure for the information age. Reliable, relevant statistical information helps businesses to invest and flourish; governments at the local, state, and national levels to make critical decisions on policy and public services; and individuals and families to invest in their futures. Yet surrounded by all manner of digitized data, one can still feel inadequately informed. A major driver of this disconnect in the US context is delayed modernization of the federal statistical system. The disconnect will likely worsen in coming months as the administration shrinks statistical agencies’ staffing, terminates programs (notably for health and education statistics), and eliminates unpaid external advisory groups. Amid this upheaval, might the administration’s appetite for disruption be harnessed to modernize federal statistics?
Federal statistics, one of the United States’ premier public goods, differ from privately provided data because they are privacy protected, aggregated to address relevant questions for decision-makers, constructed transparently, and widely available without a subscription. The private sector cannot be expected to adequately supply such statistical infrastructure. Yes, some companies collect and aggregate some economic data, such as credit card purchases and payroll information. But without strong underpinnings of a modern, federal information infrastructure, there would be large gaps in nationally consistent, transparent, trustworthy data. Furthermore, most private providers rely on public statistics for their internal analytics, to improve their products. They are among the many data users asking for more from statistical agencies…(More)”.
A New Paradigm for Fueling AI for the Public Good
Article by Kevin T. Frazier: “Imagine receiving this email in the near future: “Thank you for sharing data with the American Data Collective on May 22, 2025. After first sharing your workout data with SprintAI, a local startup focused on designing shoes for differently abled athletes, your data donation was also sent to an artificial intelligence research cluster hosted by a regional university. Your donation is on its way to accelerate artificial intelligence innovation and support researchers and innovators addressing pressing public needs!”
That is exactly the sort of message you could expect to receive if we made donations of personal data akin to blood donations—a pro-social behavior that may not immediately serve a donor’s individual needs but may nevertheless benefit the whole of the community. This vision of a future where data flow toward the public good is not science fiction—it is a tangible possibility if we address a critical bottleneck faced by innovators today.
Creating the data equivalent of blood banks may not seem like a pressing need or something that people should voluntarily contribute to, given widespread concerns about a few large artificial intelligence (AI) companies using data for profit-driven and, arguably, socially harmful ends. This narrow conception of the AI ecosystem fails to consider the hundreds of AI research initiatives and startups that have a desperate need for high-quality data. I was fortunate enough to meet leaders of those nascent AI efforts at Meta’s Open Source AI Summit in Austin, Texas. For example, I met with Matt Schwartz, who leads a startup that leans on AI to glean more diagnostic information from colonoscopies. I also connected with Edward Chang, a professor of neurological surgery at the University of California, San Francisco Weill Institute for Neurosciences, who relies on AI tools to discover new information on how and why our brains work. I also got to know Corin Wagen, whose startup is helping companies “find better molecules faster.” This is a small sample of the people leveraging AI for objectively good outcomes. They need your help. More specifically, they need your data.
A tragic irony shapes our current data infrastructure. Most of us share mountains of data with massive and profitable private parties—smartwatch companies, diet apps, game developers, and social media companies. Yet, AI labs, academic researchers, and public interest organizations best positioned to leverage our data for the common good are often those facing the most formidable barriers to acquiring the necessary quantity, quality, and diversity of data. Unlike OpenAI, they are not going to use bots to scrape the internet for data. Unlike Google and Meta, they cannot rely on their own social media platforms and search engines to act as perpetual data generators. And, unlike Anthropic, they lack the funds to license data from media outlets. So, while commercial entities amass vast datasets, frequently as a byproduct of consumer services and proprietary data acquisition strategies, mission-driven AI initiatives dedicated to public problems find themselves in a state of chronic data scarcity. This is not merely a hurdle—it is a systemic bottleneck choking off innovation where society needs it most, delaying or even preventing the development of AI tools that could significantly improve lives.
Individuals are, quite rightly, increasingly hesitant to share their personal information, with concerns about privacy, security, and potential misuse being both rampant and frequently justified by past breaches and opaque practices. Yet, in a striking contradiction, troves of deeply personal data are continuously siphoned by app developers, by tech platforms, and, often opaquely, by an extensive network of data brokers. This practice often occurs with minimal transparency and without informed consent concerning the full lifecycle and downstream uses of that data. This lack of transparency extends to how algorithms trained on this data make decisions that can impact individuals’ lives—from loan applications to job prospects—often without clear avenues for recourse or understanding, potentially perpetuating existing societal biases embedded in historical data…(More)”.