Sudden loss of key US satellite data could send hurricane forecasting back ‘decades’


Article by Eric Holthaus: “A critical US atmospheric data collection program will be halted by Monday, giving weather forecasters just days to prepare, according to a public notice sent this week. Scientists who spoke with the Guardian say the change could set hurricane forecasting back “decades”, just as this year’s season ramps up.

In a National Oceanic and Atmospheric Administration (Noaa) message sent on Wednesday to its scientists, the agency said that “due to recent service changes” the Defense Meteorological Satellite Program (DMSP) will “discontinue ingest, processing and distribution of all DMSP data no later than June 30, 2025”.

Due to their unique characteristics and ability to map the entire world twice a day with extremely high resolution, the three DMSP satellites are a primary source of information for scientists to monitor Arctic sea ice and hurricane development. The DMSP partners with Noaa to make weather data collected from the satellites publicly available.

The reasons for the changes, and which agency was driving them, were not immediately clear. Noaa said they would not affect the quality of forecasting.

However, the Guardian spoke with several scientists inside and outside of the US government whose work depends on the DMSP, and all said there are no other US programs that can form an adequate replacement for its data.

“We’re a bit blind now,” said Allison Wing, a hurricane researcher at Florida State University. Wing said the DMSP satellites are the only ones that let scientists see inside the clouds of developing hurricanes, giving them a critical edge in forecasting that now may be jeopardized.

“Before these types of satellites were present, there would often be situations where you’d wake up in the morning and have a big surprise about what the hurricane looked like,” said Wing. “Given increases in hurricane intensity and increasing prevalence towards rapid intensification in recent years, it’s not a good time to have less information.”…(More)”.

AI companies start winning the copyright fight


Article by Blake Montgomery: “…tech companies notched several victories in the fight over their use of copyrighted text to create artificial intelligence products.

Anthropic: A US judge has ruled that the use of books by Anthropic, maker of the Claude chatbot, to train its artificial intelligence system – without the permission of the authors – did not breach copyright law. Judge William Alsup compared the Anthropic model’s use of books to a “reader aspiring to be a writer.”

And the next day, Meta: The US district judge Vince Chhabria, in San Francisco, said in his decision on the Meta case that the authors had not presented enough evidence that the technology company’s AI would cause “market dilution” by flooding the market with work similar to theirs.

The same day that Meta received its favorable ruling, a group of writers sued Microsoft, alleging copyright infringement in the creation of that company’s Megatron text generator. Judging by the rulings in favor of Meta and Anthropic, the authors are facing an uphill battle.

These three cases are skirmishes in the wider legal war over copyrighted media, which rages on. Three weeks ago, Disney and NBCUniversal sued Midjourney, alleging that the company’s namesake AI image generator and forthcoming video generator made illegal use of the studios’ iconic characters like Darth Vader and the Simpson family. The world’s biggest record labels – Sony, Universal and Warner – have sued two companies that make AI-powered music generators, Suno and Udio. On the textual front, the New York Times’ suit against OpenAI and Microsoft is ongoing.

The lawsuits over AI-generated text were filed first, and, as their rulings emerge, the next question in the copyright fight is whether decisions about one type of media will apply to the next.

“The specific media involved in the lawsuit – written works versus images versus videos versus audio – will certainly change the fair-use analysis in each case,” said John Strand, a trademark and copyright attorney with the law firm Wolf Greenfield. “The impact on the market for the copyrighted works is becoming a key factor in the fair-use analysis, and the market for books is different than that for movies.”…(More)”.

Community Engagement Is Crucial for Successful State Data Efforts


Resource by the Data Quality Campaign: “Engaging communities is a critical step toward ensuring that data efforts work for their intended audiences. People, including state policymakers, school leaders, families, college administrators, employers, and the public, should have a say in how their state provides access to education and workforce data. And as state leaders build robust statewide longitudinal data systems (SLDSs) or move other data efforts forward, they must deliberately create consistent opportunities for communities to weigh in. This resource explores how states can meaningfully engage with communities to build trust and improve data efforts by ensuring that systems, tools, and resources are valuable to the people who use them…(More)”.

Why AI hardware needs to be open


Article by Ayah Bdeir: “Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space. 

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, and wearables that will track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence? 

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone…(More)”

Trends in AI Supercomputers


Paper by Konstantin F. Pilz, James Sanders, Robi Rahman, and Lennart Heim: “Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI’s Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve 2×10²² 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness…(More)”.
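The paper’s 2030 projections follow mechanically from its reported doubling times applied to the March 2025 baseline (Colossus). A minimal sketch of that extrapolation is below; the Colossus performance figure of 2×10²⁰ FLOP/s is an assumption inferred here from the paper’s 2030 projection and the nine-month doubling time, not a number stated in the excerpt.

```python
# Sketch: extrapolating the paper's doubling-time trends from the
# March 2025 leading system (xAI's Colossus) forward to 2030.
# Assumed baseline performance: ~2e20 FLOP/s (inferred, see lead-in).

def extrapolate(value, doubling_time_months, months_ahead):
    """Project a quantity forward assuming a constant doubling time."""
    return value * 2 ** (months_ahead / doubling_time_months)

MONTHS = 60  # March 2025 -> early 2030

perf_2030 = extrapolate(2e20, 9, MONTHS)     # performance doubles every 9 months
cost_2030 = extrapolate(7e9, 12, MONTHS)     # hardware cost doubles every year
power_2030 = extrapolate(300e6, 12, MONTHS)  # power need doubles every year

print(f"performance: {perf_2030:.1e} FLOP/s")  # ~2e22, matching the paper
print(f"cost: ${cost_2030 / 1e9:.0f}B")        # ~$224B, close to the stated $200B
print(f"power: {power_2030 / 1e9:.1f} GW")     # ~9.6 GW, close to the stated 9 GW
```

Note that five years at a one-year doubling time is a 32× increase, which is how $7B and 300 MW scale to roughly $200B and 9 GW.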

AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future


Article by Akash Kapur: “Step back from the day-to-day flurry surrounding AI, and a global divergence in narratives is becoming increasingly clear. In Silicon Valley, New York, and London, the conversation centers on the long-range pursuit of artificial general intelligence (AGI)—systems that might one day equal or surpass humans at almost everything. This is the moon-shot paradigm, fueled by multi-billion-dollar capital expenditure and almost metaphysical ambition.

In contrast, much of the Global South is converging on something more grounded: the search for near-term, proven use cases that can be deployed with today’s hardware, and limited budgets and bandwidth. Call it Applied AI, or AAI. This quest for applicability—and relevance—is more humble than AGI. Its yardstick for success is more measured, and certainly less existential. Rather than pose profound questions about the nature of consciousness and humanity, Applied AI asks questions like: Does the model fix a real-world problem? Can it run on patchy 4G, a mid-range GPU, or a refurbished phone? What new yield can it bring to farmers or fishermen, or which bureaucratic bottleneck can it cut?

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of similar applications and products, tailored and designed for specific sectors, are growing fast. In Africa, PlantVillage Nuru helps Kenyan farmers diagnose crop diseases entirely offline; South-Africa-based Lelapa AI is training “small language models” for at least 13 African languages; and Nigeria’s EqualyzAI runs chatbots that are trained to provide Hausa and Yoruba translations for customers…(More)”.

Tech: When Silicon Valley Remakes the World


Book by Olivier Alexandre: “Sometimes only an outsider can show how an industry works—and how that industry works upon the world. In Tech, sociologist Olivier Alexandre takes us on a revealing tour of Silicon Valley’s prominent personalities and vibrant networks to capture the way its denizens live, think, relate, and innovate, and how they shape the very code and conduct of business itself.
 
Even seasoned observers will gain insight into the industry’s singular milieu from Alexandre’s piercing eye. He spends as much time with Silicon Valley’s major players as with those who fight daily to survive within a system engineered for disruption. Embedded deep within the community, Alexandre accesses rooms shut tight to the public and reports back on the motivations, ambitions, and radical vision guiding tech companies. From the conquest of space to quantum computing, engineers have recast the infinitely large and small. Some scientists predict the end of death and the replacement of human beings with machines. But at what cost? Alexandre sees a shadow hanging over the Valley, jeopardizing its future and the economy made in its image. Critical yet fair, Tech illuminates anew a world of perpetual revolution…(More)”.

A.I. Is Starting to Wear Down Democracy


Article by Steven Lee Myers and Stuart A. Thompson: “Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and, for the first time, officials and experts said, begun to have an impact on election results.

Free and easy to use, A.I. tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.

The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal.

In Romania, a Russian influence operation using A.I. tainted the first round of last year’s presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which A.I. played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.

Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania’s capital, Bucharest, said there was no question that the technology was already “being used for obviously malevolent purposes” to manipulate voters.

“These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,” she said. “What can compete with this?”

In the unusually concentrated wave of elections that took place in 2024, A.I. was used in more than 80 percent of them, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland.

It documented 215 instances of A.I. in elections that year, based on government statements, research and news reports. Already this year, A.I. has played a role in at least nine more major elections, from Canada to Australia…(More)”.

AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums


Article by Emanuel Maiberg: “The report, titled “Are AI Bots Knocking Cultural Heritage Offline?” was written by Weinberg of the GLAM-E Lab, a joint initiative between the Centre for Science, Culture and the Law at the University of Exeter and the Engelberg Center on Innovation Law & Policy at NYU Law, which works with smaller cultural institutions and community organizations to build open access capacity and expertise. GLAM is an acronym for galleries, libraries, archives, and museums. The report is based on a survey of 43 institutions with open online resources and collections in Europe, North America, and Oceania. Respondents also shared data and analytics, and some followed up with individual interviews. The data is anonymized so institutions could share information more freely, and to prevent AI bot operators from undermining their countermeasures.  

Of the 43 respondents, 39 said they had experienced a recent increase in traffic. Twenty-seven of those 39 attributed the increase in traffic to AI training data bots, with an additional seven saying the AI bots could be contributing to the increase. 

“Multiple respondents compared the behavior of the swarming bots to more traditional online behavior such as Distributed Denial of Service (DDoS) attacks designed to maliciously drive unsustainable levels of traffic to a server, effectively taking it offline,” the report said. “Like a DDoS incident, the swarms quickly overwhelm the collections, knocking servers offline and forcing administrators to scramble to implement countermeasures. As one respondent noted, ‘If they wanted us dead, we’d be dead.’”…(More)”

The Global A.I. Divide


Article by Adam Satariano and Paul Mozur: “Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.

Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when completed as soon as next year.

Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.

“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”

Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.

The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power.”..(More)”.