Bad data leads to bad policy


Article by Georges-Simon Ulrich: “When the UN created a Statistical Commission in 1946, the world was still recovering from the devastation of the second world war. Then, there was broad consensus that only reliable, internationally comparable data could prevent conflict, combat poverty and anchor global co-operation. Nearly 80 years later, this insight remains just as relevant, but the context has changed dramatically…

This erosion of institutional capacity could not come at a more critical moment. The UN is unable to respond adequately as it is facing a staffing shortfall itself. Due to ongoing austerity measures at the UN, many senior positions remain vacant, and the director of the UN Statistics Division has retired, with no successor appointed. This comes at a time when bold and innovative initiatives — such as a newly envisioned Trusted Data Observatory — are urgently needed to make official statistics more accessible and machine-readable.

Meanwhile, the threat of targeted disinformation is growing. On social media, distorted or manipulated content spreads at unprecedented speed. Emerging tools like AI chatbots exacerbate the problem. These systems rely on web content, not verified data, and are not built to separate truth from falsehood. Making matters worse, many governments cannot currently make their data usable for AI because it is not standardised, not machine-readable, or not openly accessible. The space for sober, evidence-based discourse is shrinking.

This trend undermines public trust in institutions, strips policymaking of its legitimacy, and jeopardises the UN Sustainable Development Goals (SDGs). Without reliable data, governments will be flying blind — or worse: they will be deliberately misled.

When countries lose control of their own data, or cannot integrate it into global decision-making processes, they become bystanders to their own development. Decisions about their economies, societies and environments are then outsourced to AI systems trained on skewed, unrepresentative data. The global south is particularly at risk, with many countries lacking access to quality data infrastructures. In countries such as Ethiopia, unverified information spreading rapidly on social media has fuelled misinformation-driven violence.

The Covid-19 pandemic demonstrated that strong data systems enable better crisis response. To counter these risks, the creation of a global Trusted Data Observatory (TDO) is essential. This UN co-ordinated, democratically governed platform would help catalogue and make accessible trusted data around the world — while fully respecting national sovereignty…(More)”

Cloudflare Introduces Default Blocking of A.I. Data Scrapers


Article by Natallie Rocha: “Data for A.I. systems has become an increasingly contentious issue. OpenAI, Anthropic, Google and other companies building A.I. systems have amassed reams of information from across the internet to train their A.I. models. High-quality data is particularly prized because it helps A.I. models become more proficient in generating accurate answers, videos and images.

But website publishers, authors, news organizations and other content creators have accused A.I. companies of using their material without permission and payment. Last month, Reddit sued Anthropic, saying the start-up had unlawfully used the data of its more than 100 million daily users to train its A.I. systems. In 2023, The New York Times sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.

Some publishers have struck licensing deals with A.I. companies to receive compensation for their content. In May, The Times agreed to license its editorial content to Amazon for use in the tech giant’s A.I. platforms. Axel Springer, Condé Nast and News Corp have also entered into agreements with A.I. companies to receive revenue for the use of their material.

Mark Howard, the chief operating officer of Time, said he welcomed Cloudflare’s move. Data scraping by A.I. companies threatens anyone who creates content, he said, adding that news publishers like Time deserved fair compensation for what they published…(More)”.
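Cloudflare’s filtering rules are not spelled out in this excerpt, but at its simplest, blocking A.I. data scrapers means refusing requests that identify themselves with a known crawler token. Below is a minimal Python sketch of that idea, not Cloudflare’s implementation (which operates at the network edge and uses more signals than the User-Agent header); the crawler names are commonly documented tokens, listed here as assumptions rather than an authoritative registry.

```python
# Minimal sketch of user-agent-based blocking of A.I. crawlers.
# Not Cloudflare's actual mechanism; an illustration of the general idea.

# Commonly documented A.I. crawler tokens (assumed list, not exhaustive).
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent string matches a known A.I. crawler token."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

if __name__ == "__main__":
    # A site could wire this check into its server or middleware and answer
    # matching requests with HTTP 403 instead of the page content.
    print(is_ai_crawler("Mozilla/5.0 (compatible; GPTBot/1.0)"))           # True
    print(is_ai_crawler("Mozilla/5.0 (X11; Linux x86_64) Firefox/126.0"))  # False
```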

Enjoy TikTok Explainers? These Old-Fashioned Diagrams Are A Whole Lot Smarter


Article by Jonathon Keats: “In the aftermath of Hiroshima, many of the scientists who built the atomic bomb changed the way they reckoned time. Their conception of the future was published on the cover of The Bulletin of the Atomic Scientists, which portrayed a clock set at seven minutes to midnight. In subsequent months and years, the clock sometimes advanced. Other times, the hands fell back. With this simple indication, the timepiece tracked the likelihood of nuclear annihilation.

Although few of the scientists who worked on the Manhattan Project are still alive, the Doomsday Clock remains operational, steadfastly translating risk into units of hours and minutes. Over time, the diagram has become iconic, and not only for subscribers to The Bulletin. It’s now so broadly recognizable that we may no longer recognize what makes it radical.

[Image: John Auldjo, Map of Vesuvius showing the direction of the streams of lava in the eruptions from 1631 to 1831, 1832. Exhibition copy from John Auldjo, Sketches of Vesuvius: with Short Accounts of Its Principal Eruptions from the Commencement of the Christian Era to the Present Time (Napoli: George Glass, 1832). Courtesy Biblioteca Nazionale Centrale di Firenze.]

A thrilling new exhibition at the Fondazione Prada brings the Doomsday Clock back into focus. Featuring hundreds of diagrams from the past millennium, ranging from financial charts to maps of volcanic eruptions, the exhibition provides the kind of survey that brings definition to an entire category of visual communication. Each work benefits from its association with others that are manifestly different in form and function…(More)”.

Why AI hardware needs to be open


Article by Ayah Bdeir: “Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space. 

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, wearables that are going to track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence? 

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone…(More)”

5 Ways Cooperatives Can Shape the Future of AI


Article by Trebor Scholz and Stefano Tortorici: “Today, AI development is controlled by a small cadre of firms. Companies like OpenAI, Alphabet, Amazon, Meta, and Microsoft dominate through vast computational resources, massive proprietary datasets, deep pools of technical talent, extractive data practices, low-cost labor, and capital that enables continuous experimentation and rapid deployment. Even open-source challengers like DeepSeek run on vast computational muscle and industrial training pipelines.

This domination brings problems: privacy violation and cost-minimizing labor strategies, high environmental costs from data centers, and evident biases in models that can reinforce discrimination in hiring, healthcare, credit scoring, policing, and beyond. These problems tend to affect the people who are already too often left out. AI’s opaque algorithms don’t just sidestep democratic control and transparency—they shape who gets heard, who’s watched, and who’s quietly pushed aside.

Yet, as companies consider using this technology, there can seem to be few other options; they can feel locked into these compromises.

A different model is taking shape, however, with little fanfare, but with real potential. AI cooperatives—organizations developing or governing AI technologies based on cooperative principles—offer a promising alternative. The cooperative movement, with its global footprint and diversity of models, has been successful in sectors from banking and agriculture to insurance and manufacturing. Cooperative enterprises, which are owned and governed by their members, have long managed infrastructure for the public good.

A handful of AI cooperatives offer early examples of how democratic governance and shared ownership could shape more accountable and community-centered uses of the technology. Most are large agricultural cooperatives that are putting AI to use in their day-to-day operations, such as IFFCO’s DRONAI program (AI for fertilization), FrieslandCampina (dairy quality control), and Fonterra (milk production analytics). Cooperatives must urgently organize to challenge AI’s dominance or remain on the sidelines of critical political and technological developments.

There is undeniably potential here, for both existing cooperatives and companies that might want to partner with them. The $589 billion drop in Nvidia’s market cap that DeepSeek triggered shows how quickly open-source innovation can shift the landscape. But for cooperative AI labs to do more than signal intent, they need public infrastructure, civic partnerships, and serious backing…(More)”.

AGI vs. AAI: Grassroots Ingenuity and Frugal Innovation Will Shape the Future


Article by Akash Kapur: “Step back from the day-to-day flurry surrounding AI, and a global divergence in narratives is becoming increasingly clear. In Silicon Valley, New York, and London, the conversation centers on the long-range pursuit of artificial general intelligence (AGI)—systems that might one day equal or surpass humans at almost everything. This is the moon-shot paradigm, fueled by multi-billion-dollar capital expenditure and almost metaphysical ambition.

In contrast, much of the Global South is converging on something more grounded: the search for near-term, proven use cases that can be deployed with today’s hardware, and limited budgets and bandwidth. Call it Applied AI, or AAI. This quest for applicability—and relevance—is more humble than AGI. Its yardstick for success is more measured, and certainly less existential. Rather than pose profound questions about the nature of consciousness and humanity, Applied AI asks questions like: Does the model fix a real-world problem? Can it run on patchy 4G, a mid-range GPU, or a refurbished phone? What new yield can it bring to farmers or fishermen, or which bureaucratic bottleneck can it cut?

One way to think of AAI is as intelligence that ships. Vernacular chatbots, offline crop-disease detectors, speech-to-text tools for courtrooms: examples of similar applications and products, tailored and designed for specific sectors, are growing fast. In Africa, PlantVillage Nuru helps Kenyan farmers diagnose crop diseases entirely offline; South-Africa-based Lelapa AI is training “small language models” for at least 13 African languages; and Nigeria’s EqualyzAI runs chatbots that are trained to provide Hausa and Yoruba translations for customers…(More)”.

A.I. Is Starting to Wear Down Democracy


Article by Steven Lee Myers and Stuart A. Thompson: “Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and, for the first time, officials and experts said, begun to have an impact on election results.

Free and easy to use, A.I. tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online.

The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal.

In Romania, a Russian influence operation using A.I. tainted the first round of last year’s presidential election, according to government officials. A court there nullified that result, forcing a new vote last month and bringing a new wave of fabrications. It was the first major election in which A.I. played a decisive role in the outcome. It is unlikely to be the last.

As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.

Madalina Botan, a professor at the National University of Political Studies and Public Administration in Romania’s capital, Bucharest, said there was no question that the technology was already “being used for obviously malevolent purposes” to manipulate voters.

“These mechanics are so sophisticated that they truly managed to get a piece of content to go very viral in a very limited amount of time,” she said. “What can compete with this?”

In the unusually concentrated wave of elections that took place in 2024, A.I. was used in more than 80 percent, according to the International Panel on the Information Environment, an independent organization of scientists based in Switzerland.

It documented 215 instances of A.I. in elections that year, based on government statements, research and news reports. Already this year, A.I. has played a role in at least nine more major elections, from Canada to Australia…(More)”.

AI Scraping Bots Are Breaking Open Libraries, Archives, and Museums


Article by Emanuel Maiberg: “The report, titled “Are AI Bots Knocking Cultural Heritage Offline?”, was written by Weinberg of the GLAM-E Lab, a joint initiative between the Centre for Science, Culture and the Law at the University of Exeter and the Engelberg Center on Innovation Law & Policy at NYU Law, which works with smaller cultural institutions and community organizations to build open access capacity and expertise. GLAM is an acronym for galleries, libraries, archives, and museums. The report is based on a survey of 43 institutions with open online resources and collections in Europe, North America, and Oceania. Respondents also shared data and analytics, and some followed up with individual interviews. The data is anonymized both to let institutions share information more freely and to prevent AI bot operators from undermining their countermeasures.

Of the 43 respondents, 39 said they had experienced a recent increase in traffic. Twenty-seven of those 39 attributed the increase in traffic to AI training data bots, with an additional seven saying the AI bots could be contributing to the increase. 

“Multiple respondents compared the behavior of the swarming bots to more traditional online behavior such as Distributed Denial of Service (DDoS) attacks designed to maliciously drive unsustainable levels of traffic to a server, effectively taking it offline,” the report said. “Like a DDoS incident, the swarms quickly overwhelm the collections, knocking servers offline and forcing administrators to scramble to implement countermeasures. As one respondent noted, ‘If they wanted us dead, we’d be dead.’”…(More)”

The Global A.I. Divide


Article by Adam Satariano and Paul Mozur: “Last month, Sam Altman, the chief executive of the artificial intelligence company OpenAI, donned a helmet, work boots and a luminescent high-visibility vest to visit the construction site of the company’s new data center project in Texas.

Bigger than New York’s Central Park, the estimated $60 billion project, which has its own natural gas plant, will be one of the most powerful computing hubs ever created when completed as soon as next year.

Around the same time as Mr. Altman’s visit to Texas, Nicolás Wolovick, a computer science professor at the National University of Córdoba in Argentina, was running what counts as one of his country’s most advanced A.I. computing hubs. It was in a converted room at the university, where wires snaked between aging A.I. chips and server computers.

“Everything is becoming more split,” Dr. Wolovick said. “We are losing.”

Artificial intelligence has created a new digital divide, fracturing the world between nations with the computing power for building cutting-edge A.I. systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush to not be excluded from a technology race that could reorder economies, drive scientific discovery and change the way that people live and work.

The biggest beneficiaries by far are the United States, China and the European Union. Those regions host more than half of the world’s most powerful data centers, which are used for developing the most complex A.I. systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16 percent of nations, have these large facilities filled with microchips and computers, giving them what is known in industry parlance as “compute power”…(More)”.

ChatGPT Has Already Polluted the Internet So Badly That It’s Hobbling Future AI Development


Article by Frank Landymore: “The rapid rise of ChatGPT — and the cavalcade of competitors’ generative models that followed suit — has polluted the internet with so much useless slop that it’s already kneecapping the development of future AI models.

As the AI-generated data clouds the human creations that these models are so heavily dependent on amalgamating, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is itself an ersatz AI creation. 

Repeat this process enough, and AI development begins to resemble a maximalist game of telephone in which not only is the quality of the content being produced diminished, resembling less and less what it’s originally supposed to be replacing, but in which the participants actively become stupider. The industry likes to describe this scenario as AI “model collapse.”

As a consequence, the finite amount of data predating ChatGPT’s rise becomes extremely valuable. In a new feature, The Register likens this to the demand for “low-background steel,” or steel that was produced before the detonation of the first nuclear bombs, starting in July 1945 with the US’s Trinity test.

Just as the explosion of AI chatbots has irreversibly polluted the internet, so did the detonation of the atom bomb release radionuclides and other particulates that have seeped into virtually all steel produced thereafter. That makes modern metals unsuitable for use in some highly sensitive scientific and medical equipment. And so, what’s old is new: a major source of low-background steel, even today, is WW1 and WW2 era battleships, including a huge naval fleet that was scuttled by German Admiral Ludwig von Reuter in 1919…(More)”.
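The article describes model collapse only by analogy. As a toy illustration (a sketch under simplifying assumptions, not how any production model is trained), repeatedly re-estimating a distribution from its own finite samples shows the same dynamic: rare items in the original data drop out of the synthetic copies and can never return.

```python
import numpy as np

# Toy illustration of "model collapse": each generation is "trained" only on
# samples produced by the previous generation. Rare token types that fail to
# appear in a finite sample vanish permanently from every later generation.
rng = np.random.default_rng(0)

VOCAB = 1_000         # distinct token types in the original "human" data
SAMPLE_SIZE = 20_000  # how much training data each generation sees

# Zipf-like distribution: a few common types, a long tail of rare ones.
true_probs = 1.0 / np.arange(1, VOCAB + 1)
true_probs /= true_probs.sum()

probs = true_probs
for generation in range(1, 11):
    sample = rng.choice(VOCAB, size=SAMPLE_SIZE, p=probs)
    counts = np.bincount(sample, minlength=VOCAB)
    probs = counts / counts.sum()  # the next "model" is just these frequencies
    print(f"generation {generation}: {np.count_nonzero(probs)} of {VOCAB} token types survive")
```

Real training pipelines involve far more machinery, but the direction is the same: the tail of genuine human variation thins with every generation trained on synthetic output, which is why data predating ChatGPT is being treated like low-background steel.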