AI is supercharging war. Could it also help broker peace?


Article by Tina Amirtha: “Can we measure what is in our hearts and minds, and could it help us end wars any sooner? These are the questions that consume entrepreneur Shawn Guttman, a Canadian émigré who recently gave up his yearslong teaching position in Israel to accelerate a path to peace—using an algorithm.

Living some 75 miles north of Tel Aviv, Guttman is no stranger to the uncertainties of conflict. Over the past few months, miscalculated drone strikes and imprecisely targeted missiles—some intended for larger cities—have occasionally landed dangerously close to his town, sending him to bomb shelters more than once.

“When something big happens, we can point to it and say, ‘Right, that happened because five years ago we did A, B, and C, and look at its effect,’” he says over Google Meet from his office, following a recent trip to the shelter. Behind him, souvenirs from the 1979 Egypt-Israel and 1994 Israel-Jordan peace treaties are visible. “I’m tired of that perspective.”

The startup he cofounded, Didi, is taking a different approach. Its aim is to analyze data across news outlets, political discourse, and social media to identify opportune moments to broker peace. Inspired by political scientist I. William Zartman’s “ripeness” theory, the algorithm—called the Ripeness Index—is designed to tell negotiators, organizers, diplomats, and nongovernmental organizations (NGOs) exactly when conditions are “ripe” to initiate peace negotiations, build coalitions, or launch grassroots campaigns.
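The pitch is easiest to grasp with a toy example. The sketch below is a purely illustrative rendering of the general idea—a rolling indicator built from media-rhetoric scores—and not Didi’s actual Ripeness Index; the data, threshold, and variable names are all hypothetical.

```python
# Purely illustrative sketch, not Didi's Ripeness Index: track a rolling average
# of hypothetical daily "hardened rhetoric" scores (1.0 = maximally entrenched)
# and flag windows where rhetoric softens enough to suggest a moment worth testing.
from statistics import mean

daily_rhetoric = [0.92, 0.90, 0.88, 0.85, 0.79, 0.74, 0.70, 0.66, 0.61, 0.58]

WINDOW = 3        # days in the rolling average
THRESHOLD = 0.65  # hypothetical cutoff below which conditions are flagged as "ripe"

for day in range(WINDOW - 1, len(daily_rhetoric)):
    window_avg = mean(daily_rhetoric[day - WINDOW + 1 : day + 1])
    if window_avg < THRESHOLD:
        print(f"Day {day}: rolling average {window_avg:.2f} -> flag for negotiators")
```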

During ongoing U.S.-led negotiations over the war in Gaza, both Israel and Hamas have entrenched themselves in opposing bargaining positions. Meanwhile, Israel’s traditional allies, including the U.S., have expressed growing frustration over the war and the dire humanitarian conditions in the enclave, where the threat of famine looms.

In Israel, Didi’s data is already informing grassroots organizations as they strategize which media outlets to target and how to time public actions, such as protests, in coordination with coalition partners. Guttman and his collaborators hope that eventually negotiators will use the model’s insights to help broker lasting peace.

Guttman’s project is part of a rising wave of so-called PeaceTech—a movement using technology to make negotiations more inclusive and data-driven. This includes AI from Hala Systems, which uses satellite imagery and data fusion to monitor ceasefires in Yemen and Ukraine. Another AI startup, Remesh, has been active across the Middle East, helping organizations of all sizes canvass key stakeholders. Its algorithm clusters similar opinions, giving policymakers and mediators a clearer view of public sentiment and division.
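Opinion clustering of this kind can be approximated with standard tools. The sketch below is a minimal illustration of the general technique (TF-IDF vectors plus k-means), not Remesh’s proprietary system; the sample responses and cluster count are invented for the example.

```python
# Minimal illustration of opinion clustering (not Remesh's actual algorithm):
# vectorize short free-text responses with TF-IDF and group similar ones with
# k-means so a mediator can see the main blocs of sentiment at a glance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "We need an immediate ceasefire and humanitarian aid.",
    "Aid corridors should open now, politics can wait.",
    "Security guarantees must come before any talks.",
    "No negotiations until all hostages are released.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print("  -", text)
```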

A range of NGOs and academic researchers have also developed digital tools for peacebuilding. The nonprofit Computational Democracy Project created Pol.is, an open-source platform that enables citizens to crowdsource outcomes to public debates. Meanwhile, the Futures Lab at the Center for Strategic and International Studies built a peace agreement simulator, complete with a chart to track how well each stakeholder’s needs are met.

Guttman knows it’s an uphill battle. In addition to the ethical and privacy concerns of using AI to interpret public sentiment, PeaceTech also faces financial hurdles. These companies must find ways to sustain themselves amid shrinking public funding and a transatlantic surge in defense spending, which has pulled resources away from peacebuilding initiatives.

Still, Guttman and his investors remain undeterred. One way to view the opportunity for PeaceTech is by looking at the economic toll of war. In its Global Peace Index 2024, the Institute for Economics and Peace’s Vision of Humanity platform estimated that economic disruption due to violence and the fear of violence cost the world $19.1 trillion in 2023, or about 13 percent of global GDP. Guttman sees plenty of commercial potential in times of peace as well.

“Can we make billions of dollars,” Guttman asks, “and save the world—and create peace?”…(More)”. See also the Kluz Prize for PeaceTech (Applications Open).

Sentinel Cities for Public Health


Article by Jesse Rothman, Paromita Hore & Andrew McCartor: “In 2017, a New York City health inspector visited the home of a 5-year-old child with an elevated blood lead level. With no sign of lead paint—the usual suspect in such cases—the inspector discovered dangerous levels of lead in a bright yellow container of “Georgian Saffron,” a spice obtained in the family’s home country. It was not the first case associated with lead-containing Georgian spices, and the NYC Health Department shared its findings with authorities in Georgia, which catalyzed a survey of children’s blood lead levels there and led to increased regulatory enforcement and education. Significant declines in spice lead levels in the country have had ripple effects in NYC as well: not only a drop in spice samples from Georgia containing detectable lead, but also a significant reduction in blood lead levels among NYC children of Georgian ancestry.

This wasn’t a lucky break—it was the result of a systematic approach to transform local detection into global impact. Findings from local NYC surveillance are, of course, not limited to Georgian spices. Surveillance activities have identified a variety of lead-containing consumer products from around the world, from cosmetics and medicines to ceramics and other goods. Routinely surveying local stores for lead-containing products has resulted in the removal of over 30,000 hazardous consumer products from NYC store shelves since 2010.

How can we replicate and scale up NYC’s model to address the global crisis of lead poisoning?…(More)”.

The path for AI in poor nations does not need to be paved with billions


Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China… As a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.

More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages, which makes them less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity, and engaging the public.

Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the African Union’s 55 member states and the League of Arab States’ 22 members have not produced an AI strategy. That must change…(More)”.

Indiana Faces a Data Center Backlash


Article by Matthew Zeitlin: “Indiana has power. Indiana has transmission. Indiana has a business-friendly Republican government. Indiana is close to Chicago but — crucially — not in Illinois. All of this has led to a huge surge of data center development in the “Crossroads of America.” It has also led to an upswell of local opposition.

There are almost 30 active data center proposals in Indiana, plus five that have already been rejected in the past year, according to data collected by the environmentalist group Citizens Action Coalition. Google, Amazon, and Meta have all announced projects in the state since the beginning of 2024.

NIPSCO, one of the state’s utilities, has projected 2,600 megawatts’ worth of new load by the middle of the next decade as its base scenario, mostly attributable to “large economic development projects.” In a more aggressive scenario, it sees 3,200 megawatts of new load — that’s three large nuclear reactors’ worth — by 2028 and 8,600 megawatts by 2035. While short of, say, the almost 36,500 megawatts’ worth of load growth planned in Georgia for the next decade, it’s still a vast range of outcomes that requires some kind of advance planning.

That new electricity consumption will likely be powered by fossil fuels. Projected load growth in the state has extended a lifeline to Indiana’s coal-fired power plants, with retirement dates for some of the fleet being pushed out to late in the 2030s. It’s also created a market for new natural gas-fired plants that utilities say are necessary to power the expected new load.

State and local political leaders have greeted these new data center projects with enthusiasm, Ben Inskeep, the program director at CAC, told me. “Economic development is king here,” he said. “That is what all the politicians and regulators say their number one concern is: attracting economic development.”…(More)”.

Technical Tiers: A New Classification Framework for Global AI Workforce Analysis


Report by Siddhi Pal, Catherine Schneider and Ruggero Marino Lazzaroni: “… introduces a novel three-tiered classification system for global AI talent that addresses significant methodological limitations in existing workforce analyses by distinguishing between different skill categories within the existing AI talent pool. By separating non-technical roles (Category 0), technical software development (Category 1), and advanced deep learning specialization (Category 2), our framework enables precise examination of AI workforce dynamics at a pivotal moment in global AI policy.
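In code, the tiering logic can be mocked up as a simple rule-based classifier. The sketch below is only an illustration of the idea, with made-up skill keywords; it is not the report’s actual methodology.

```python
# Illustrative sketch of a three-tier talent classification (hypothetical rules,
# not the report's methodology): Category 2 = deep-learning specialization,
# Category 1 = technical software development, Category 0 = non-technical roles.
DEEP_LEARNING_SKILLS = {"pytorch", "tensorflow", "transformers", "deep learning"}
SOFTWARE_SKILLS = {"python", "java", "c++", "sql", "software engineering"}

def classify_profile(skills: list[str]) -> int:
    """Return the tier (0, 1, or 2) for a profile's self-reported skills."""
    s = {skill.lower() for skill in skills}
    if s & DEEP_LEARNING_SKILLS:
        return 2
    if s & SOFTWARE_SKILLS:
        return 1
    return 0

print(classify_profile(["Product strategy", "AI policy"]))       # -> 0
print(classify_profile(["Python", "SQL"]))                       # -> 1
print(classify_profile(["Python", "PyTorch", "Deep Learning"]))  # -> 2
```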

Through our analysis of a sample of 1.6 million individuals in the AI talent pool across 31 countries, we’ve uncovered clear patterns in technical talent distribution that significantly impact Europe’s AI ambitions. Asian nations hold an advantage in specialized AI expertise, with South Korea (27%), Israel (23%), and Japan (20%) maintaining the highest proportions of Category 2 talent. Within Europe, Poland and Germany stand out as leaders in specialized AI talent. This may be connected to their initiatives to attract tech companies and investments in elite research institutions, though further research is needed to confirm these relationships.

Our data also reveals a shifting landscape of global talent flows. Research shows that countries employing points-based immigration systems attract 1.5 times more high-skilled migrants than those using demand-led approaches. This finding takes on new significance in light of recent geopolitical developments affecting scientific research globally. As restrictive policies and funding cuts create uncertainty for researchers in the United States, one of the big destinations for European AI talent, the way nations position their regulatory environments, scientific freedoms, and research infrastructure will increasingly determine their ability to attract and retain specialized AI talent.

The gender analysis in our study illuminates another dimension of competitive advantage. In contrast with the overall AI talent pool, EU countries lead in female representation in highly technical roles (Category 2), occupying seven of the top ten global rankings. Finland, Czechia, and Italy have the highest proportions of female representation in Category 2 roles globally (39%, 31%, and 28%, respectively). This gender diversity represents not merely a social achievement but a potential strategic asset in AI innovation, particularly as global coalitions increasingly emphasize the importance of diverse perspectives in AI development…(More)”

Hundreds of scholars say U.S. is swiftly heading toward authoritarianism


Article by Frank Langfitt: “A survey of more than 500 political scientists finds that the vast majority think the United States is moving swiftly from liberal democracy toward some form of authoritarianism.

In the benchmark survey, known as Bright Line Watch, U.S.-based professors rate the performance of American democracy on a scale from zero (complete dictatorship) to 100 (perfect democracy). After President Trump’s election in November, scholars gave American democracy a rating of 67. Several weeks into Trump’s second term, that figure plummeted to 55.

“That’s a precipitous drop,” says John Carey, a professor of government at Dartmouth and co-director of Bright Line Watch. “There’s certainly consensus: We’re moving in the wrong direction.”…Not all political scientists view Trump with alarm, but many, like Carey, who focus on democracy and authoritarianism are deeply troubled by Trump’s attempts to expand executive power over his first several months in office.

“We’ve slid into some form of authoritarianism,” says Steven Levitsky, a professor of government at Harvard, and co-author of How Democracies Die. “It is relatively mild compared to some others. It is certainly reversible, but we are no longer living in a liberal democracy.”…Kim Lane Scheppele, a Princeton sociologist who has spent years tracking Hungary, is also deeply concerned: “We are on a very fast slide into what’s called competitive authoritarianism.”

When these scholars use the term “authoritarianism,” they aren’t talking about a system like China’s, a one-party state with no meaningful elections. Instead, they are referring to something called “competitive authoritarianism,” the kind scholars say they see in countries such as Hungary and Turkey.

In a competitive authoritarian system, a leader comes to power democratically and then erodes the system of checks and balances. Typically, the executive fills the civil service and key appointments — including the prosecutor’s office and judiciary — with loyalists. He or she then attacks the media, universities and nongovernmental organizations to blunt public criticism and tilt the electoral playing field in the ruling party’s favor…(More)”.

UAE set to use AI to write laws in world first


Article by Chloe Cornish: “The United Arab Emirates aims to use AI to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions.

The plan for what state media called “AI-driven regulation” goes further than anything seen elsewhere, AI researchers said, while noting that details were scant. Other governments are trying to use AI to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data.

“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media.

Ministers last week approved the creation of a new cabinet unit, the Regulatory Intelligence Office, to oversee the legislative AI push. 

Rony Medaglia, a professor at Copenhagen Business School, said the UAE appeared to have an “underlying ambition to basically turn AI into some sort of co-legislator”, and described the plan as “very bold”.

Abu Dhabi has bet heavily on AI and last year opened a dedicated investment vehicle, MGX, which has backed a $30bn BlackRock AI-infrastructure fund among other investments. MGX has also added an AI observer to its own board.

The UAE plans to use AI to track how laws affect the country’s population and economy by creating a massive database of federal and local laws, together with public sector data such as court judgments and government services.

The AI will “regularly suggest updates to our legislation,” Sheikh Mohammad said, according to state media. The government expects AI to speed up lawmaking by 70 per cent, according to the cabinet meeting readout…(More)”

So You Want to Be a Dissident?


Essay by Julia Angwin and Ami Fields-Meyer: “…Heimans points to an increasingly hostile digital landscape as one barrier to effective grassroots campaigns. At the dawn of the digital era, in the two-thousands, e-mail transformed the field of political organizing, enabling groups like MoveOn.org to mobilize huge campaigns against the Iraq War, and allowing upstart candidates like Howard Dean and Barack Obama to raise money directly from people instead of relying on Party infrastructure. But now everyone’s e-mail inboxes are overflowing. The tech oligarchs who control the social-media platforms are less willing to support progressive activism. Globally, autocrats have more tools to surveil and disrupt digital campaigns. And regular people are burned out on actions that have failed to remedy fundamental problems in society.

It’s not clear what comes next. Heimans hopes that new tactics will be developed, such as, perhaps, a new online platform that would help organizing, or the strengthening of a progressive-media ecosystem that will engage new participants. “Something will emerge that kind of revitalizes the space.”

There’s an oft-told story about Andrei Sakharov, the celebrated twentieth-century Soviet activist. Sakharov made his name working as a physicist on the development of the U.S.S.R.’s hydrogen bomb, at the height of the Cold War, but shot to global prominence after Leonid Brezhnev’s regime punished him for speaking publicly about the dangers of those weapons, and also about Soviet repression.

When an American friend was visiting Sakharov and his wife, the activist Yelena Bonner, in Moscow, the friend referred to Sakharov as a dissident. Bonner corrected him: “My husband is a physicist, not a dissident.”

This is a fundamental tension of building a principled dissident culture—it risks wrapping people up in a kind of negative identity, a cloak of what they are not. The Soviet dissidents understood their work as a struggle to uphold the laws and rights that were enshrined in the Soviet constitution, not as a fight against a regime.

“They were fastidious about everything they did being consistent with Soviet law,” Benjamin Nathans, a history professor at the University of Pennsylvania and the author of a book on Soviet dissidents, said. “I call it radical civil obedience.”

An affirmative vision of what the world should be is the inspiration for many of those who, in these tempestuous early months of Trump 2.0, have taken meaningful risks—acts of American dissent.

Consider Mariann Budde, the Episcopal bishop who used her pulpit before Trump on Inauguration Day to ask the President’s “mercy” for two vulnerable groups for whom he has reserved his most visceral disdain. For her sins, a congressional ally of the President called for the pastor to be “added to the deportation list.”…(More)”.

Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (hence why data is traded for free services and oil still requires cold, hard cash)….(More)”.
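Making that textbook relation explicit (an added clarification, not language from the original post): under perfect competition the market-clearing price equals marginal cost, so a near-zero marginal cost pulls the price of data toward zero while oil keeps a positive price.

$$p^{*} = MC:\qquad MC_{\text{data}} \approx 0 \;\Rightarrow\; p^{*}_{\text{data}} \approx 0, \qquad MC_{\text{oil}} > 0 \;\Rightarrow\; p^{*}_{\text{oil}} > 0$$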