Why PeaceTech must be the next frontier of innovation and investment


Article by Stefaan Verhulst and Artur Kluz: “…amidst this frenzy, a crucial question is being left unasked: Can technology be used not just to win wars, but to prevent them and save people’s lives?

There is an emerging field that dares to pose this question—PeaceTech. It is the use of technology to save human lives, prevent conflict, de-escalate violence, rebuild fractured communities, and secure fragile peace in post-conflict environments.

From early warning systems that predict outbreaks of violence to platforms ensuring aid transparency and mobile tools connecting refugees to services, PeaceTech is real, it works—and it is radically underfunded.

Unlike the vast sums pouring into defense startups, peacebuilding efforts, including PeaceTech organizations and ventures, struggle for scraps. In 2020, the United Nations Secretary-General announced an ambitious goal of raising $1.5 billion in peacebuilding support over seven years. In contrast, private investment in defense tech crossed $34 billion in 2023 alone.

Why is PeaceTech so neglected?

One reason PeaceTech is so neglected is cultural: in the tech world, “peace” can seem abstract or idealistic—soft power in a world of hard tech. In reality, peace is not soft; it is among the hardest, most complex challenges of our time. Peace requires systemic thinking, early intervention, global coordination, and a massive infrastructure of care, trust, and monitoring. Maintaining peace in a hyper-polarized, technologically complex world is a feat of engineering, diplomacy, and foresight.

And it’s a business opportunity. According to the Institute for Economics and Peace, violence costs the global economy over $17 trillion per year—about 13% of global GDP. Even modest improvements in peace would unlock billions in economic value.

Consider the peace dividend from predictive analytics that can help governments or international organizations intervene or mediate before conflict breaks out, or AI-powered verification tools to enforce ceasefires and disinformation controls. PeaceTech, if scaled, could become a multibillion-dollar market—and a critical piece of the security architecture of the future…(More)”. See also Kluz Prize for PeaceTech (Applications Open)

DeepSeek Inside: Origins, Technology, and Impact


Article by Michael A. Cusumano: “The release of DeepSeek V3 and R1 in January 2025 caused steep declines in the stock prices of companies that provide generative artificial intelligence (GenAI) infrastructure technology and datacenter services. These two large language models (LLMs) came from a little-known Chinese startup with approximately 200 employees versus at least 3,500 for industry-leader OpenAI. DeepSeek seemed to have developed this powerful technology much more cheaply than previously thought possible. If true, DeepSeek had the potential to disrupt the economics of the entire GenAI ecosystem and the dominance of U.S. companies ranging from OpenAI to Nvidia.

DeepSeek-R1 defines itself as “an artificial intelligence language model developed by OpenAI, specifically based on the generative pre-trained transformer (GPT) architecture.” Here, DeepSeek acknowledges that the transformer researchers (who published their landmark paper while at Google in 2017) and OpenAI developed its basic technology. Nonetheless, V3 and R1 display impressive skills in neural-network system design, engineering, and optimization, and DeepSeek’s publications provide rare insights into how the technology actually works. This column reviews, for the non-expert reader, what we know about DeepSeek’s origins, technology, and impact so far…(More)”.

The war over the peace business


Article by Tekendra Parmar: “At the second annual AI+ Expo in Washington, DC, in early June, war is the word of the day.

As a mix of Beltway bureaucrats, military personnel, and Washington’s consultant class peruse the expansive Walter E. Washington Convention Center, a Palantir booth showcases its latest in data-collection suites for “warfighters.” Lockheed Martin touts the many ways it is implementing AI throughout its weaponry systems. On the soundstage, the defense tech darling Mach Industries is selling its newest uncrewed aerial vehicles. “We’re living in a world with great-power competition,” the presenter says. “We can’t rule out the possibility of war — but the best way to prevent a war is deterrence,” he says, flanked by videos of drones flying through what looked like the rugged mountains and valleys of Kandahar.

Hosted by the Special Competitive Studies Project, a think tank led by former Google CEO Eric Schmidt, the expo says it seeks to bridge the gap between Silicon Valley entrepreneurs and Washington policymakers to “strengthen” America and its allies’ “competitiveness in critical technologies.”

One floor below, a startup called Anadyr Horizon is making a very different sales pitch, for software that seeks to prevent war rather than fight it: “Peace tech,” as the company’s cofounder Arvid Bell calls it. Dressed in white khakis and a black pinstripe suit jacket with a dove and olive branch pinned to his lapel (a gift from his husband), the former Harvard political scientist begins by noting that Russia’s all-out invasion of Ukraine had come as a surprise to many political scientists. But his AI software, he says, could predict it.

Long the domain of fantasy and science fiction, the idea of forecasting conflict has now become a serious pursuit. In Isaac Asimov’s 1950s “Foundation” series, the main character develops an algorithm that allows him to predict the decline of the Galactic Empire, angering its rulers and forcing him into exile. During the coronavirus pandemic, the US State Department experimented with AI fed with Twitter data to predict “COVID cases” and “violent events.” In its AI audit two years ago, the State Department revealed that it started training AI on “open-source political, social, and economic datasets” to predict “mass civilian killings.” The UN is also said to have experimented with AI to model the war in Gaza…(More)”. See also Kluz Prize for PeaceTech (Applications Open)

AI is supercharging war. Could it also help broker peace?


Article by Tina Amirtha: “Can we measure what is in our hearts and minds, and could it help us end wars any sooner? These are the questions that consume entrepreneur Shawn Guttman, a Canadian émigré who recently gave up his yearslong teaching position in Israel to accelerate a path to peace—using an algorithm.

Living some 75 miles north of Tel Aviv, Guttman is no stranger to the uncertainties of conflict. Over the past few months, miscalculated drone strikes and imprecise missile targets—some intended for larger cities—have occasionally landed dangerously close to his town, sending him to bomb shelters more than once.

“When something big happens, we can point to it and say, ‘Right, that happened because five years ago we did A, B, and C, and look at its effect,’” he says over Google Meet from his office, following a recent trip to the shelter. Behind him, souvenirs from the 1979 Egypt-Israel and 1994 Israel-Jordan peace treaties are visible. “I’m tired of that perspective.”

The startup he cofounded, Didi, is taking a different approach. Its aim is to analyze data across news outlets, political discourse, and social media to identify opportune moments to broker peace. Inspired by political scientist I. William Zartman’s “ripeness” theory, the algorithm—called the Ripeness Index—is designed to tell negotiators, organizers, diplomats, and nongovernmental organizations (NGOs) exactly when conditions are “ripe” to initiate peace negotiations, build coalitions, or launch grassroots campaigns.

During ongoing U.S.-led negotiations over the war in Gaza, both Israel and Hamas have entrenched themselves in opposing bargaining positions. Meanwhile, Israel’s traditional allies, including the U.S., have expressed growing frustration over the war and the dire humanitarian conditions in the enclave, where the threat of famine looms.

In Israel, Didi’s data is already informing grassroots organizations as they strategize which media outlets to target and how to time public actions, such as protests, in coordination with coalition partners. Guttman and his collaborators hope that eventually negotiators will use the model’s insights to help broker lasting peace.

Guttman’s project is part of a rising wave of so-called PeaceTech—a movement using technology to make negotiations more inclusive and data-driven. This includes AI from Hala Systems, which uses satellite imagery and data fusion to monitor ceasefires in Yemen and Ukraine. Another AI startup, Remesh, has been active across the Middle East, helping organizations of all sizes canvas key stakeholders. Its algorithm clusters similar opinions, giving policymakers and mediators a clearer view of public sentiment and division.
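The opinion-clustering step that tools like Remesh perform can be illustrated with a toy sketch. The greedy bag-of-words approach below is an assumption for illustration only, not Remesh’s actual algorithm; the sample opinions and the similarity threshold are likewise invented.

```python
# Illustrative sketch (NOT Remesh's actual method): group similar free-text
# opinions by bag-of-words cosine similarity, then report each cluster's
# size as a rough map of sentiment blocs.
from collections import Counter
import math

def vectorize(text):
    """Turn a short opinion into a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(opinions, threshold=0.3):
    """Greedy single-pass clustering: attach each opinion to the first
    cluster whose seed is similar enough, else start a new cluster."""
    clusters = []  # list of (seed_vector, [member_opinions])
    for text in opinions:
        vec = vectorize(text)
        for seed, members in clusters:
            if cosine(seed, vec) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]

opinions = [
    "ceasefire now and humanitarian aid",
    "we need a ceasefire and more aid",
    "security guarantees must come first",
    "first priority is security guarantees",
]
for group in cluster(opinions):
    print(len(group), "->", group[0])
```

On these four sample opinions the sketch yields two clusters of two, one around a ceasefire-and-aid position and one around security guarantees; production systems would use learned text embeddings rather than raw word counts, but the clustering logic is analogous.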

A range of NGOs and academic researchers have also developed digital tools for peacebuilding. The nonprofit Computational Democracy Project created Pol.is, an open-source platform that enables citizens to crowdsource outcomes to public debates. Meanwhile, the Futures Lab at the Center for Strategic and International Studies built a peace agreement simulator, complete with a chart to track how well each stakeholder’s needs are met.

Guttman knows it’s an uphill battle. In addition to the ethical and privacy concerns of using AI to interpret public sentiment, PeaceTech also faces financial hurdles. These companies must find ways to sustain themselves amid shrinking public funding and a transatlantic surge in defense spending, which has pulled resources away from peacebuilding initiatives.

Still, Guttman and his investors remain undeterred. One way to view the opportunity for PeaceTech is by looking at the economic toll of war. In its Global Peace Index 2024, the Institute for Economics and Peace’s Vision of Humanity platform estimated that economic disruption due to violence and the fear of violence cost the world $19.1 trillion in 2023, or about 13 percent of global GDP. Guttman sees plenty of commercial potential in times of peace as well.

“Can we make billions of dollars,” Guttman asks, “and save the world—and create peace?”…(More)”. See also Kluz Prize for PeaceTech (Applications Open)

Sharing trustworthy AI models with privacy-enhancing technologies


OECD Report: “Privacy-enhancing technologies (PETs) are critical tools for building trust in the collaborative development and sharing of artificial intelligence (AI) models while protecting privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through confidential and minimal use of input data, with technologies like trusted execution environments, federated learning, and secure multi-party computation. The second is enabling the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption. PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. While combining different PETs can help compensate for their individual limitations, balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies, including guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation…(More)”.
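One of the PETs the report names, differential privacy, can be sketched in a few lines: calibrated noise lets an aggregate statistic be shared without exposing any single contributor’s record. The function name, clipping bounds, and epsilon values below are illustrative assumptions, not values from the OECD report.

```python
# Minimal differential-privacy sketch via the Laplace mechanism (parameter
# names and values are illustrative, not from the OECD report).
import math
import random

def dp_mean(values, lower, upper, epsilon):
    """Epsilon-differentially-private mean. Each value is clipped to
    [lower, upper], so the mean's sensitivity is (upper - lower) / n;
    Laplace noise with scale sensitivity / epsilon then masks any one
    individual's contribution."""
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-CDF from a uniform draw.
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_mean + noise

random.seed(0)
salaries = [52_000, 61_000, 58_000, 49_000, 75_000]
print(round(dp_mean(salaries, 0, 100_000, epsilon=1.0)))
```

Smaller epsilon means stronger privacy but noisier answers; this utility-versus-privacy trade-off is exactly the balancing act the report flags as a remaining challenge.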

Understanding the Impacts of Generative AI Use on Children


Primer by The Alan Turing Institute and LEGO Foundation: “There is a growing body of research looking at the potential positive and negative impacts of generative AI and its associated risks. However, there is a lack of research that considers the potential impacts of these technologies on children, even though generative AI is already being deployed within many products and systems that children engage with, from games to educational platforms. Children have particular needs and rights that must be accounted for when designing, developing, and rolling out new technologies, and more focus on children’s rights is needed. While children are the group that may be most impacted by the widespread deployment of generative AI, they are simultaneously the group least represented in decision-making processes relating to the design, development, deployment or governance of AI.

The Alan Turing Institute’s Children and AI and AI for Public Services teams explored the perspectives of children, parents, carers and teachers on generative AI technologies. Their research is guided by the ‘Responsible Innovation in Technology for Children’ (RITEC) framework for digital technology, play and children’s wellbeing established by UNICEF and funded by the LEGO Foundation and seeks to examine the potential impacts of generative AI on children’s wellbeing. The utility of the RITEC framework is that it allows for the qualitative analysis of wellbeing to take place by foregrounding more specific factors such as identity and creativity, which are further explored in each of the work packages.

The project provides unique and much needed insights into impacts of generative AI on children through combining quantitative and qualitative research methods…(More)”.

The Reenchanted World: On finding mystery in the digital age


Essay by Karl Ove Knausgaard: “…When Karl Marx and Friedrich Engels wrote about alienation in the 1840s—that’s nearly two hundred years ago—they were describing workers’ relationship with their work, but the consequences of alienation spread into their analysis to include our relationship to nature and to existence as such. One term they used was “loss of reality.” Society at that time was incomparably more brutal, the machines incomparably coarser, but problems such as economic inequality and environmental destruction have continued into our own time. If anything, alienation as Marx and Engels defined it has only increased.

Or has it? The statement “people are more alienated now than ever before in history” sounds false, like applying an old concept to a new condition. That is not really what we are, is it? If there is something that characterizes our time, isn’t it the exact opposite, that nothing feels alien?

Alienation involves a distance from the world, a lack of connection between it and us. What technology does is compensate for the loss of reality with a substitute. Technology calibrates all differences, fills in every gap and crack with images and voices, bringing everything close to us in order to restore the connection between ourselves and the world. Even the past, which just a few generations ago was lost forever, can be retrieved and brought back…(More)”.

Comparative evaluation of behavioral epidemic models using COVID-19 data


Paper by Nicolò Gozzi, Nicola Perra, and Alessandro Vespignani: “Characterizing the feedback linking human behavior and the transmission of infectious diseases (i.e., behavioral changes) remains a significant challenge in computational and mathematical epidemiology. Existing behavioral epidemic models often lack real-world data calibration and cross-model performance evaluation in both retrospective analysis and forecasting. In this study, we systematically compare the performance of three mechanistic behavioral epidemic models across nine geographies and two modeling tasks during the first wave of COVID-19, using various metrics. The first model, a Data-Driven Behavioral Feedback Model, incorporates behavioral changes by leveraging mobility data to capture variations in contact patterns. The second and third models are Analytical Behavioral Feedback Models, which simulate the feedback loop either through the explicit representation of different behavioral compartments within the population or by utilizing an effective nonlinear force of infection. Our results do not identify a single best model overall, as performance varies based on factors such as data availability, data quality, and the choice of performance metrics. While the Data-Driven Behavioral Feedback Model incorporates substantial real-time behavioral information, the Analytical Compartmental Behavioral Feedback Model often demonstrates superior or equivalent performance in both retrospective fitting and out-of-sample forecasts. Overall, our work offers guidance for future approaches and methodologies to better integrate behavioral changes into the modeling and projection of epidemic dynamics…(More)”.
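The paper’s third model family, which folds behavioral change into an effective nonlinear force of infection, can be illustrated with a toy SIR sketch. The damping form 1/(1 + αI) and all parameter values below are illustrative assumptions, not the paper’s specification.

```python
# Toy SIR model with a behavioral feedback: the force of infection is
# damped nonlinearly as prevalence rises, standing in for risk-averse
# behavior. Functional form and parameters are illustrative only.

def sir_behavioral(beta, gamma, alpha, i0, days, dt=0.1):
    """Forward-Euler SIR (fractions of population) where transmission is
    scaled by 1 / (1 + alpha * I), so higher prevalence I suppresses
    contacts; alpha = 0 recovers the classic SIR model."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        force = beta * i / (1.0 + alpha * i)  # nonlinear force of infection
        new_inf = force * s * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append(i)
    return history

# With feedback (alpha > 0) the epidemic peak is lower than without it.
no_feedback = max(sir_behavioral(0.4, 0.1, alpha=0.0, i0=0.001, days=200))
feedback = max(sir_behavioral(0.4, 0.1, alpha=50.0, i0=0.001, days=200))
print(round(no_feedback, 3), round(feedback, 3))
```

Running the sketch shows the behavioral term flattening the epidemic curve relative to the classic model, which is the qualitative feedback loop the compared models each capture in different ways.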

The Hypocrisy Trap: How Changing What We Criticize Can Improve Our Lives


Book by Michael Hallsworth: “In our increasingly distrusting and polarized nations, accusations of hypocrisy are everywhere. But the strange truth is that our attempts to stamp out hypocrisy often backfire, creating what Michael Hallsworth calls The Hypocrisy Trap. In this groundbreaking book, he shows how our relentless drive to expose inconsistency between words and deeds can actually breed more hypocrisy or, worse, cynicism that corrodes democracy itself.

Through engaging stories and original research, Hallsworth shows that not all hypocrisy is equal. While some forms genuinely destroy trust and create harm, others reflect the inevitable compromises of human nature and complex societies. The Hypocrisy Trap offers practical solutions: ways to increase our own consistency, navigate accusations wisely, and change how we judge others’ actions. Hallsworth shows vividly that we can improve our politics, businesses, and personal relationships if we rethink hypocrisy—soon…(More)”.

Five dimensions of scaling democratic deliberation: With and beyond AI


Paper by Sammy McKinney and Claudia Chwalisz: “In the study and practice of deliberative democracy, academics and practitioners are increasingly exploring the role that Artificial Intelligence (AI) can play in scaling democratic deliberation. From claims by leading deliberative democracy scholars that AI can bring deliberation to the ‘mass’, or ‘global’, scale, to cutting-edge innovations from technologists aiming to support scalability in practice, AI’s role in scaling deliberation is capturing the energy and imagination of many leading thinkers and practitioners.

There are many reasons why people may be interested in ‘scaling deliberation’. One is that there is evidence that deliberation has numerous benefits for the people involved in deliberations – strengthening their individual and collective agency, political efficacy, and trust in one another and in institutions. Another is that the decisions and actions that result are arguably higher-quality and more legitimate. Because the benefits of deliberation are so great, there is significant interest around how we could scale these benefits to as many people and decisions as possible.

Another motivation stems from the view that one weakness of small-scale deliberative processes results from their size. Increasing the sheer numbers involved is perceived as a source of legitimacy for some. Others argue that increasing the numbers will also increase the quality of the outputs and outcomes.

Finally, deliberative processes that are empowered and/or institutionalised are able to shift political power. Many therefore want to replicate the small-scale model of deliberation in more places, with an emphasis on redistributing power and influencing decision-making.

When we consider how to leverage technology for deliberation, we emphasise that we should not lose sight of the first-order goals of strengthening collective agency. Today there are deep geo-political shifts; in many places, there is a movement towards authoritarian measures, a weakening of civil society, and attacks on basic rights and freedoms. We see the debate about how to ‘scale deliberation’ through this political lens, where our goals are focused on how we can enable a citizenry that is resilient to the forces of autocracy – one that feels and is more powerful and connected, where people feel heard and empathise with others, where citizens have stronger interpersonal and societal trust, and where public decisions have greater legitimacy and better alignment with collective values…(More)”