These Startups Are Building Advanced AI Models Without Data Centers


Article by Will Knight: “Researchers have trained a new kind of large language model (LLM) using GPUs dotted across the world and fed private as well as public data—a move that suggests that the dominant way of building artificial intelligence could be disrupted.

Flower AI and Vana, two startups pursuing unconventional approaches to building AI, worked together to create the new model, called Collective-1.

Flower created techniques that allow training to be spread across hundreds of computers connected over the internet. The company’s technology is already used by some firms to train AI models without needing to pool compute resources or data. Vana provided sources of data including private messages from X, Reddit, and Telegram.
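The core idea behind training without pooling data is federated learning: each participant trains locally on its own data and shares only model weights, which a coordinator averages. A minimal sketch of federated averaging follows — all function names and the toy linear model are illustrative, not Flower's actual API:

```python
# Minimal sketch of federated averaging, the family of techniques behind
# frameworks like Flower. Names and the toy model are illustrative only.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Each client trains on its private data; only weights are shared."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    # The server averages the updated weights; raw data never leaves a client.
    return np.mean(updates, axis=0)

# Four clients, each holding its own private dataset drawn from the same task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # twenty communication rounds between server and clients
    w = federated_round(w, clients)
```

The same loop scales in principle to LLM weights shipped over the internet, which is the regime Flower's techniques target; in practice the hard problems are communication cost, stragglers, and heterogeneous data.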

Collective-1 is small by modern standards, with 7 billion parameters—values that combine to give the model its abilities—compared to hundreds of billions for today’s most advanced models, such as those that power programs like ChatGPT, Claude, and Gemini.

Nic Lane, a computer scientist at the University of Cambridge and cofounder of Flower AI, says that the distributed approach promises to scale far beyond the size of Collective-1. Lane adds that Flower AI is partway through training a model with 30 billion parameters using conventional data, and plans to train another model with 100 billion parameters—close to the size offered by industry leaders—later this year. “It could really change the way everyone thinks about AI, so we’re chasing this pretty hard,” Lane says. He says the startup is also incorporating images and audio into training to create multimodal models.

Distributed model-building could also unsettle the power dynamics that have shaped the AI industry…(More)”

Digital Public Infrastructure Could Make a Better Internet


Essay by Akash Kapur: “…The advent of AI has intensified geopolitical rivalries, and with them the risks of fragmentation, exclusion, and hyper-concentration that are already so prevalent. The prospects of a “Splinternet” have never appeared more real. The old dream of a global digital commons seems increasingly quaint; we are living amid what Yanis Varoufakis, the former Greek finance minister, calls “technofeudalism.”

DPI suggests it doesn’t have to be this way. The approach’s emphasis on loosening chokeholds, fostering collaboration, and reclaiming space from monopolies represents an effort to recuperate some of the internet’s original promise. At its most aspirational, DPI offers the potential for a new digital social contract: a rebalancing of public and private interests, a reorientation of the network so that it advances broad social goals even while fostering entrepreneurship and innovation. How fitting it would be if this new model were to emerge not from the entrenched powers that have so long guided the network, but from a handful of nations long confined to the periphery—now determined to take their seats at the table of global technology…(More)”.

How to save a billion dollars


Essay by Ann Lewis: “The persistent pattern of billion-dollar technology modernization failures in government stems not from a lack of good intentions, but from structural misalignments in incentives, expertise, and decision-making authority. When large budgets meet urgency, limited in-house technical capacity, and rigid, compliance-driven procurement processes, projects become over-scoped and detached from the needs of users and mission outcomes. This undermines service delivery, wastes taxpayer dollars, and adds unnecessary risk to critical systems supporting national security and public safety.

We know what causes failure, we know what works, and we’ve proven it before. It isn’t easy and shortcuts don’t work — but success is entirely achievable, and that should be the expectation. The solution is not simply to spend more, or cancel contracts, or fire people, but to fundamentally rethink how public institutions build and manage technology, and rethink how public-private partnerships are structured. Government services underpinned by technology should be funded as ongoing capabilities rather than one-time investments, IT procurement processes should embed experienced technical leadership where key decisions are made, and all implementation projects should adopt iterative, outcomes-driven approaches. 

Proven examples—from VA.gov to SSA’s recent CCaaS success—show that when governments align incentives, prioritize real user needs, and invest in internal capacity, they can build services faster, for less money, and with dramatically better results…(More)”.

Brazil’s AI-powered social security app is wrongly rejecting claims


Article by Gabriel Daros: “Brazil’s social security institute, known as INSS, added AI to its app in 2018 in an effort to cut red tape and speed up claims. The office, known for its long lines and wait times, had around 2 million pending requests for everything from doctor’s appointments to sick pay to pensions to retirement benefits at the time. While the AI-powered tool has since helped process thousands of basic claims, it has also rejected requests from hundreds of people like de Brito — who live in remote areas and have little digital literacy — for minor errors.

The government is right to digitize its systems to improve efficiency, but that has come at a cost, Edjane Rodrigues, secretary for social policies at the National Confederation of Workers in Agriculture, told Rest of World.

“If the government adopts this kind of service to speed up benefits for the people, this is good. We are not against it,” she said. But, particularly among farm workers, claims can be complex because of the nature of their work, she said, referring to cases that require additional paperwork, such as when a piece of land is owned by one individual but worked by a group of families. “There are many peculiarities in agriculture, and rural workers are being especially harmed” by the app, according to Rodrigues.

“Each automated decision is based on specified legal criteria, ensuring that the standards set by the social security legislation are respected,” a spokesperson for INSS told Rest of World. “Automation does not work in an arbitrary manner. Instead, it follows clear rules and regulations, mirroring the expected standards applied in conventional analysis.”

Governments across Latin America have been introducing AI to improve their processes. Last year, Argentina began using ChatGPT to draft court rulings, a move that officials said helped cut legal costs and reduce processing times. Costa Rica has partnered with Microsoft to launch an AI tool to optimize tax data collection and check for fraud in digital tax receipts. El Salvador recently set up an AI lab to develop tools for government services.

But while some of these efforts have delivered promising results, experts have raised concerns about the risk of officials with little tech know-how applying these tools with no transparency or workarounds…(More)”.

Exit to Open


Article by Jim Fruchterman and Steve Francis: “What happens when a nonprofit program or an entire organization needs to shut down? The communities being served, and often society as a whole, are the losers. What if it were possible to mitigate some of that damage by sharing valuable intellectual property assets of the closing effort for longer term benefit? Organizations in these tough circumstances must give serious thought to a responsible exit for their intangible assets.

At the present moment of unparalleled disruption, the entire nonprofit sector is rethinking everything: language to describe their work, funding sources, partnerships, and even their continued existence. Nonprofit programs and entire charities will be closing, or being merged out of existence. Difficult choices are being made. Who will fill the role of witness and archivist to preserve the knowledge of these organizations, their writings, media, software, and data, for those who carry on, either now or in the future?

We believe leaders in these tough days should consider a model we’re calling Exit to Open (E2O) and related exit concepts to safeguard these assets going forward…

Exit to Open (E2O) exploits three elements:

  1. We are in an era where the cost of digital preservation is low; storing a few more bytes for a long time is cheap.
  2. It’s far more effective for an organization’s staff to isolate and archive critical content than an outsider with limited knowledge attempting to do so later.
  3. These resources are of greatest use if there is a human available to interpret them, and a deliberate archival process allows for the identification of these potential interpreters…(More)”.

Hundreds of scholars say U.S. is swiftly heading toward authoritarianism


Article by Frank Langfitt: “A survey of more than 500 political scientists finds that the vast majority think the United States is moving swiftly from liberal democracy toward some form of authoritarianism.

In the benchmark survey, known as Bright Line Watch, U.S.-based professors rate the performance of American democracy on a scale from zero (complete dictatorship) to 100 (perfect democracy). After President Trump’s election in November, scholars gave American democracy a rating of 67. Several weeks into Trump’s second term, that figure plummeted to 55.

“That’s a precipitous drop,” says John Carey, a professor of government at Dartmouth and co-director of Bright Line Watch. “There’s certainly consensus: We’re moving in the wrong direction.”…Not all political scientists view Trump with alarm, but many like Carey who focus on democracy and authoritarianism are deeply troubled by Trump’s attempts to expand executive power over his first several months in office.

“We’ve slid into some form of authoritarianism,” says Steven Levitsky, a professor of government at Harvard, and co-author of How Democracies Die. “It is relatively mild compared to some others. It is certainly reversible, but we are no longer living in a liberal democracy.”…Kim Lane Scheppele, a Princeton sociologist who has spent years tracking Hungary, is also deeply concerned: “We are on a very fast slide into what’s called competitive authoritarianism.”

When these scholars use the term “authoritarianism,” they aren’t talking about a system like China’s, a one-party state with no meaningful elections. Instead, they are referring to something called “competitive authoritarianism,” the kind scholars say they see in countries such as Hungary and Turkey.

In a competitive authoritarian system, a leader comes to power democratically and then erodes the system of checks and balances. Typically, the executive fills the civil service and key appointments — including the prosecutor’s office and judiciary — with loyalists. He or she then attacks the media, universities and nongovernmental organizations to blunt public criticism and tilt the electoral playing field in the ruling party’s favor…(More)”.

How to Survive the A.I. Revolution


Essay by John Cassidy: “It isn’t clear where the term “Luddite” originated. Some accounts trace it to Ned Ludd, a textile worker who reportedly smashed a knitting frame in 1779. Others suggest that it may derive from folk memories of King Ludeca, a ninth-century Anglo-Saxon monarch who died in battle. Whatever the source, many machine breakers identified “General Ludd” as their leader. A couple of weeks after the Rawfolds attack, William Horsfall, another mill owner, was shot dead. A letter sent after Horsfall’s assassination—which hailed “the avenging of the death of the two brave youths who fell at the siege of Rawfolds”—began “By Order of General Ludd.”

The British government, at war with Napoleon, regarded the Luddites as Jacobin insurrectionists and responded with brutal suppression. But this reaction stemmed from a fundamental misinterpretation. Far from being revolutionary, Luddism was a defensive response to the industrial capitalism that was threatening skilled workers’ livelihoods. The Luddites weren’t mindless opponents of technology but had a clear logic to their actions—an essentially conservative one. Since they had no political representation—until 1867, the British voting franchise excluded the vast majority—they concluded that violent protest was their only option. “The burning of Factorys or setting fire to the property of People we know is not right, but Starvation forces Nature to do that which he would not,” one Yorkshire cropper wrote. “We have tried every effort to live by Pawning our Cloaths and Chattles, so we are now on the brink for the last struggle.”

As alarm about artificial intelligence has gone global, so has a fascination with the Luddites. The British podcast “The Ned Ludd Radio Hour” describes itself as “your weekly dose of tech skepticism, cynicism, and absurdism.” Kindred themes are explored in the podcast “This Machine Kills,” co-hosted by the social theorist Jathan Sadowski, whose new book, “The Mechanic and the Luddite,” argues that the fetishization of A.I. and other digital technologies obscures their role in disciplining labor and reinforcing a profit-driven system. “Luddites want technology—the future—to work for all of us,” he told the Guardian. The technology journalist Brian Merchant makes a similar case in “Blood in the Machine: The Origins of the Rebellion Against Big Tech” (2023). Blending a vivid account of the original Luddites with an indictment of contemporary tech giants like Amazon and Uber, Merchant portrays the current wave of automation as part of a centuries-long struggle over labor and power. “Working people are staring down entrepreneurs, tech monopolies, and venture capital firms that are hunting for new forms of labor-saving tech—be it AI, robotics, or software automation—to replace them,” Merchant writes. “They are again faced with losing their jobs to the machine.”…(More)”.

The Future is Coded: How AI is Rewriting the Rules of Decision Theaters


Essay by Mark Esposito and David De Cremer: “…These advances are not happening in isolation on engineers’ laptops; they are increasingly playing out in “decision theaters” – specialized environments (physical or virtual) designed for interactive, collaborative problem-solving. A decision theater is typically a space equipped with high-resolution displays, simulation engines, and data visualization tools where stakeholders can convene to explore complex scenarios. Originally pioneered at institutions like Arizona State University, the concept of a decision theater has gained traction as a way to bring together diverse expertise – economists, scientists, community leaders, government officials, and now AI systems – under one roof. By visualizing possible futures (say, the spread of a wildfire or the regional impact of an economic policy) in an engaging, shared format, these theaters make foresight a participatory exercise rather than an academic one.

In the age of generative AI, decision theaters are evolving into hubs for human-AI collaboration. Picture a scenario where city officials are debating a climate adaptation policy. Inside a decision theater, an AI model might project several climate futures for the city (varying rainfall, extreme heat incidents, flood patterns) on large screens. Stakeholders can literally see the potential impacts on maps and graphs. They can then ask the AI to adjust assumptions – “What if we add more green infrastructure in this district?” – and within seconds, watch a new projection unfold. This real-time interaction allows for an iterative dialogue between human ideas and AI-generated outcomes. Participants can inject local knowledge or voice community values, and the AI will incorporate that input to revise the scenario. The true power of generative AI in a decision theater lies in this collaboration.
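The “what if?” loop the essay describes can be reduced to a very small pattern: a projection function over adjustable assumptions, re-run each time a stakeholder changes one. The sketch below is a toy illustration only — the flood-risk model, its coefficients, and the parameter names are all invented for the example, not any real decision-theater software:

```python
# Toy illustration of a decision-theater "what if?" loop: a projection
# function over adjustable assumptions. The model and all numbers here are
# invented for the sketch, not drawn from any real simulation engine.
def flood_risk_projection(rainfall_mm, green_infrastructure_pct):
    """Crude stand-in for a simulation engine: more rainfall raises flood
    risk; more green infrastructure absorbs runoff and lowers it."""
    base_risk = rainfall_mm / 1000                    # naive rainfall-driven baseline
    mitigation = 0.5 * green_infrastructure_pct / 100 # runoff absorbed by greenery
    return max(0.0, base_risk * (1 - mitigation))

# Baseline scenario, then the stakeholder's adjustment: "What if we add
# more green infrastructure in this district?"
baseline = flood_risk_projection(rainfall_mm=800, green_infrastructure_pct=10)
adjusted = flood_risk_projection(rainfall_mm=800, green_infrastructure_pct=40)
```

In a real decision theater the projection function would be a full simulation (and, increasingly, a generative model), but the interaction pattern — change an assumption, immediately see a new projection — is the same.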

Such interactive environments enhance learning and consensus-building. When stakeholders jointly witness how certain choices lead to undesirable futures (for instance, a policy leading to water shortages in a simulation), it can galvanize agreement on preventative action. Moreover, the theater setup encourages asking “What if?” in a safe sandbox, including ethically fraught questions. Because the visualizations make outcomes concrete, they naturally prompt ethical deliberation: If one scenario shows economic growth but high social inequality, is that future acceptable? If not, how can we tweak inputs to produce a more equitable outcome? In this way, decision theaters embed ethical and social considerations into high-tech planning, ensuring that the focus isn’t just on what is likely or profitable but on what is desirable for communities. This participatory approach helps balance technological possibilities with human values and cultural sensitivities. It’s one thing for an AI to suggest an optimal solution on paper; it’s another to have community representatives in the room, engaging with that suggestion and shaping it to fit local norms and needs.

Equally important, decision theaters democratize foresight. They open up complex decision-making processes to diverse stakeholders, not just technical experts. City planners, elected officials, citizens’ groups, and subject matter specialists can all contribute in real time, aided by AI. This inclusive model guards against the risk of AI becoming an opaque oracle controlled by a few. Instead, the AI’s insights are put on display for all to scrutinize and question. By doing so, the process builds trust in the tools and the decisions that come out of them. When people see that an AI’s recommendation emerged from transparent, interactive exploration – rather than a mysterious black box – they may be more likely to trust and accept the outcome. As one policy observer noted, it’s essential to bring ideas from across sectors and disciplines into these AI-assisted discussions so that solutions “work for people, not just companies.” If designed well, decision theaters operationalize that principle…(More)”.

Who Owns Science?


Article by Lisa Margonelli: “Only a few months into 2025, the scientific enterprise is reeling from a series of shocks—mass firings of the scientific workforce across federal agencies, cuts to federal research budgets, threats to indirect costs for university research, proposals to tax endowments, termination of federal science advisory committees, and research funds to prominent universities held hostage over political conditions. Amid all this, the public has not shown much outrage at—or even interest in—the dismantling of the national research project that they’ve been bankrolling for the past 75 years.

Some evidence of a disconnect from the scientific establishment was visible in confirmation hearings of administration appointees. During his Senate nomination hearing to head the Department of Health and Human Services, Robert F. Kennedy Jr. promised a reorientation of research from infectious disease toward chronic conditions, along with “radical transparency” to rebuild trust in science. While his fans applauded, he insisted that he was not anti-vaccine, declaring, “I am pro-safety.”

But lack of public reaction to funding cuts need not be pinned on distrust of science; it could simply be that few citizens see the $200-billion-per-year, envy-of-the-world scientific enterprise as their own. On March 15, Alabama meteorologist James Spann took to Facebook to narrate the approach of 16 tornadoes in the state, taking note that people didn’t seem to care about the president’s threat to close the National Weather Service. “People say, ‘Well, if they shut it down, I’ll just use my app,’” Spann told Inside Climate News. “Well, where do you think the information on your app comes from? It comes from computer model output that’s run by the National Weather Service.” The public has paid for those models for generations, but only a die-hard weather nerd can find the acronyms for the weather models that signal that investment on these apps…(More)”.

UAE set to use AI to write laws in world first


Article by Chloe Cornish: “The United Arab Emirates aims to use AI to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions.

The plan for what state media called “AI-driven regulation” goes further than anything seen elsewhere, AI researchers said, while noting that details were scant. Other governments are trying to use AI to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data.

“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media.

Ministers last week approved the creation of a new cabinet unit, the Regulatory Intelligence Office, to oversee the legislative AI push. 

Rony Medaglia, a professor at Copenhagen Business School, said the UAE appeared to have an “underlying ambition to basically turn AI into some sort of co-legislator”, and described the plan as “very bold”.

Abu Dhabi has bet heavily on AI and last year opened a dedicated investment vehicle, MGX, which has backed a $30bn BlackRock AI-infrastructure fund among other investments. MGX has also added an AI observer to its own board.

The UAE plans to use AI to track how laws affect the country’s population and economy by creating a massive database of federal and local laws, together with public sector data such as court judgments and government services.

The AI will “regularly suggest updates to our legislation,” Sheikh Mohammad said, according to state media. The government expects AI to speed up lawmaking by 70 per cent, according to the cabinet meeting readout…(More)”