Why PeaceTech must be the next frontier of innovation and investment


Article by Stefaan Verhulst and Artur Kluz: “…amidst this frenzy, a crucial question is being left unasked: Can technology be used not just to win wars, but to prevent them and save people’s lives?

There is an emerging field that dares to pose this question—PeaceTech. It is the use of technology to save human lives, prevent conflict, de-escalate violence, rebuild fractured communities, and secure fragile peace in post-conflict environments.

From early warning systems that predict outbreaks of violence, to platforms ensuring aid transparency, and mobile tools connecting refugees to services: PeaceTech is real, it works—and it is radically underfunded.

Unlike the vast sums pouring into defense startups, peacebuilding efforts, including PeaceTech organizations and ventures, struggle for scraps. In 2020, the United Nations Secretary-General announced an ambitious goal of raising $1.5 billion in peacebuilding support over seven years. In contrast, private investment in defense tech crossed $34 billion in 2023 alone.

Why is PeaceTech so neglected?

One reason PeaceTech is so neglected is cultural: in the tech world, “peace” can seem abstract or idealistic—soft power in a world of hard tech. In reality, peace is not soft; it is among the hardest, most complex challenges of our time. Peace requires systemic thinking, early intervention, global coordination, and a massive infrastructure of care, trust, and monitoring. Maintaining peace in a hyper-polarized, technologically complex world is a feat of engineering, diplomacy, and foresight.

And it’s a business opportunity. According to the Institute for Economics and Peace, violence costs the global economy over $17 trillion per year—about 13% of global GDP. Even modest improvements in peace would unlock billions in economic value.

Consider the peace dividend from predictive analytics that can help governments or international organizations intervene or mediate before conflict breaks out, or AI-powered verification tools to enforce ceasefires and disinformation controls. PeaceTech, if scaled, could become a multibillion-dollar market—and a critical piece of the security architecture of the future…(More)”.

Sharing trustworthy AI models with privacy-enhancing technologies


OECD Report: “Privacy-enhancing technologies (PETs) are critical tools for building trust in the collaborative development and sharing of artificial intelligence (AI) models while protecting privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through confidential and minimal use of input data, with technologies like trusted execution environments, federated learning, and secure multi-party computation. The second is enabling the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption. PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. While combining different PETs can help compensate for their individual limitations, balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies, including guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation…(More)”.
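
To make the first type of use case concrete, here is a minimal sketch of federated averaging combined with noisy model updates, in the spirit of the federated learning and differential privacy tools the report names. Everything in it is an illustrative assumption rather than anything specified in the report: the linear model, the three simulated clients, and the fixed Gaussian noise scale, which stands in for a properly calibrated differential-privacy mechanism.

```python
# Minimal sketch: federated averaging with noised updates.
# All models, data, and parameters below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def noised(update, scale=0.01):
    """Gaussian noise on a shared update (stand-in for a calibrated DP mechanism)."""
    return update + rng.normal(0, scale, size=update.shape)

# Three clients, each holding private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(100):
    # Each client trains locally; only a noised model update leaves the client.
    local_updates = [noised(local_sgd_step(w_global, X, y)) for X, y in clients]
    w_global = np.mean(local_updates, axis=0)  # server-side federated averaging

print(w_global)  # approaches true_w, yet no client ever shared raw data
```

The design point is the minimal-input-data principle the report highlights: raw data never leaves a client, only model updates are shared, and even those are perturbed before the server aggregates them.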

2025 State of the Digital Decade


Report by The European Commission: “…assessed the EU’s progress along the four target areas for the EU’s digital transformation by 2030, highlighting achievements and gaps in the areas of digital infrastructure, digitalisation of businesses, digital skills, and digitalisation of public services.

The report shows that although there are certain advancements, the rollout of connectivity infrastructure, such as fibre and 5G stand-alone networks, is still lagging. More companies are adopting Artificial Intelligence (AI), cloud and big data, but adoption needs to accelerate. Just over half of Europeans (55.6%) have a basic level of digital skills, while the availability of ICT specialists with advanced skills remains low, with a stark gender divide, hindering progress in key sectors such as cybersecurity and AI. In 2024, the EU made steady progress in digitalising key public services, but a substantial portion of governmental digital infrastructure continues to depend on service providers outside the EU.

The data shows persistent challenges, such as fragmented markets, overly complex regulations, security risks, and strategic dependence. Further public and private investment and easier access to venture capital for EU companies would accelerate innovation and scaling up…(More)”.

Comparative evaluation of behavioral epidemic models using COVID-19 data


Paper by Nicolò Gozzi, Nicola Perra, and Alessandro Vespignani: “Characterizing the feedback linking human behavior and the transmission of infectious diseases (i.e., behavioral changes) remains a significant challenge in computational and mathematical epidemiology. Existing behavioral epidemic models often lack real-world data calibration and cross-model performance evaluation in both retrospective analysis and forecasting. In this study, we systematically compare the performance of three mechanistic behavioral epidemic models across nine geographies and two modeling tasks during the first wave of COVID-19, using various metrics. The first model, a Data-Driven Behavioral Feedback Model, incorporates behavioral changes by leveraging mobility data to capture variations in contact patterns. The second and third models are Analytical Behavioral Feedback Models, which simulate the feedback loop either through the explicit representation of different behavioral compartments within the population or by utilizing an effective nonlinear force of infection. Our results do not identify a single best model overall, as performance varies based on factors such as data availability, data quality, and the choice of performance metrics. While the Data-Driven Behavioral Feedback Model incorporates substantial real-time behavioral information, the Analytical Compartmental Behavioral Feedback Model often demonstrates superior or equivalent performance in both retrospective fitting and out-of-sample forecasts. Overall, our work offers guidance for future approaches and methodologies to better integrate behavioral changes into the modeling and projection of epidemic dynamics…(More)”.
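
As a toy illustration of the third model family described above, the sketch below implements an SIR model with an effective nonlinear force of infection that dampens transmission as prevalence rises, mimicking spontaneous behavioral change. It is not one of the paper's calibrated models: the saturating damping form and every parameter value are assumptions chosen for illustration.

```python
# Toy SIR model with a nonlinear force of infection (illustrative only).
N = 1_000_000            # population size (assumed)
beta, gamma = 0.35, 0.1  # transmission and recovery rates (assumed)
alpha = 50.0             # behavioral sensitivity to prevalence (assumed)
S, I, R = N - 100.0, 100.0, 0.0
dt, days = 0.1, 300

for _ in range(int(days / dt)):
    # Effective force of infection: contacts drop as prevalence I/N grows.
    lam = beta * (I / N) / (1 + alpha * I / N)
    new_inf = lam * S * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"Final attack rate: {R / N:.1%}")
```

In the data-driven variant the paper compares against, the damping factor would instead come from empirical mobility data modulating contact patterns over time.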

The Hypocrisy Trap: How Changing What We Criticize Can Improve Our Lives


Book by Michael Hallsworth: “In our increasingly distrustful and polarized nations, accusations of hypocrisy are everywhere. But the strange truth is that our attempts to stamp out hypocrisy often backfire, creating what Michael Hallsworth calls The Hypocrisy Trap. In this groundbreaking book, he shows how our relentless drive to expose inconsistency between words and deeds can actually breed more hypocrisy or, worse, cynicism that corrodes democracy itself.

Through engaging stories and original research, Hallsworth shows that not all hypocrisy is equal. While some forms genuinely destroy trust and create harm, others reflect the inevitable compromises of human nature and complex societies. The Hypocrisy Trap offers practical solutions: ways to increase our own consistency, navigate accusations wisely, and change how we judge others’ actions. Hallsworth shows vividly that we can improve our politics, businesses, and personal relationships if we rethink hypocrisy—soon…(More)”.

Five dimensions of scaling democratic deliberation: With and beyond AI


Paper by Sammy McKinney and Claudia Chwalisz: “In the study and practice of deliberative democracy, academics and practitioners are increasingly exploring the role that Artificial Intelligence (AI) can play in scaling democratic deliberation. From claims by leading deliberative democracy scholars that AI can bring deliberation to the ‘mass’ or ‘global’ scale, to cutting-edge innovations from technologists aiming to support scalability in practice, AI’s role in scaling deliberation is capturing the energy and imagination of many leading thinkers and practitioners.

There are many reasons why people may be interested in ‘scaling deliberation’. One is the evidence that deliberation has numerous benefits for the people involved – strengthening their individual and collective agency, political efficacy, and trust in one another and in institutions. Another is that the decisions and actions that result are arguably higher-quality and more legitimate. Because these benefits are so great, there is significant interest in how they could be extended to as many people and decisions as possible.

Another motivation stems from the view that one weakness of small-scale deliberative processes is precisely their size. For some, increasing the sheer numbers involved is a source of legitimacy; others argue that larger numbers will also increase the quality of the outputs and outcomes.

Finally, deliberative processes that are empowered and/or institutionalised are able to shift political power. Many therefore want to replicate the small-scale model of deliberation in more places, with an emphasis on redistributing power and influencing decision-making.

When we consider how to leverage technology for deliberation, we emphasise that we should not lose sight of the first-order goal of strengthening collective agency. Today there are deep geopolitical shifts; in many places, there is a movement towards authoritarian measures, a weakening of civil society, and attacks on basic rights and freedoms. We see the debate about how to ‘scale deliberation’ through this political lens, where our goals are focused on how we can enable a citizenry that is resilient to the forces of autocracy – one that feels and is more powerful and connected, where people feel heard and empathise with others, where citizens have stronger interpersonal and societal trust, and where public decisions have greater legitimacy and better alignment with collective values…(More)”.

Generative AI Outlook Report


Outlook report, prepared by the European Commission’s Joint Research Centre (JRC): “…examines the transformative role of Generative AI (GenAI) with a specific emphasis on the European Union. It highlights the potential of GenAI for innovation, productivity, and societal change. GenAI is a disruptive technology due to its capability of producing human-like content at an unprecedented scale. As such, it holds multiple opportunities for advancements across various sectors, including healthcare, education, science, and creative industries. At the same time, GenAI also presents significant challenges, including the potential to amplify misinformation and bias, disrupt labour markets, and raise privacy concerns. All of these issues are cross-cutting, and the rapid development of GenAI therefore requires a multidisciplinary approach to fully understand its implications. Against this backdrop, the Outlook report begins with an overview of the technological aspects of GenAI, detailing its current capabilities and outlining emerging trends. It then focuses on economic implications, examining how GenAI can transform industry dynamics and necessitate adaptation of skills and strategies. The societal impact of GenAI is also addressed, with a focus on both the opportunities for inclusivity and the risks of bias and over-reliance. Considering these challenges, the regulatory framework section outlines the EU’s current legislative framework, such as the AI Act and horizontal data legislation, to promote trustworthy and transparent AI practices. Finally, sector-specific ‘deep dives’ examine the opportunities and challenges that GenAI presents. This section underscores the need for careful management and strategic policy interventions to maximize its potential benefits while mitigating the risks. The report concludes that GenAI has the potential to bring significant social and economic impact in the EU, and that a comprehensive and nuanced policy approach is needed to navigate the challenges and opportunities while ensuring that technological developments are fully aligned with democratic values and the EU legal framework…(More)”.

AI alone cannot solve the productivity puzzle


Article by Carl Benedikt Frey: “Each time fears of AI-driven job losses flare up, optimists reassure us that artificial intelligence is a productivity tool that will help both workers and the economy. Microsoft chief Satya Nadella thinks autonomous AI agents will allow users to name their goal while the software plans, executes and learns across every system. A dream tool — if efficiency alone were enough to solve the productivity problem.

History says it is not. Over the past half-century we have filled offices and pockets with ever-faster computers, yet labour-productivity growth in advanced economies has slowed from roughly 2 per cent a year in the 1990s to about 0.8 per cent in the past decade. Even China’s once-soaring output per worker has stalled.

The shotgun marriage of the computer and the internet promised more than enhanced office efficiency — it envisioned a golden age of discovery. By placing the world’s knowledge in front of everyone and linking global talent, breakthroughs should have multiplied. Yet research productivity has sagged. The average scientist now produces fewer breakthrough ideas per dollar than their 1960s counterpart.

What went wrong? As economist Gary Becker once noted, parents face a quality-versus-quantity trade-off: the more children they have, the less they can invest in each child. The same might be said for innovation.

Large-scale studies of inventive output confirm the result: researchers juggling more projects are less likely to deliver breakthrough innovations. Over recent decades, scientific papers and patents have become increasingly incremental. History’s greats understood why. Isaac Newton kept a single problem “constantly before me . . . till the first dawnings open slowly, by little and little, into a full and clear light”. Steve Jobs concurred: “Innovation is saying no to a thousand things.”

Human ingenuity thrives where precedent is thin. Had the 19th century focused solely on better looms and ploughs, we would enjoy cheap cloth and abundant grain — but there would be no antibiotics, jet engines or rockets. Economic miracles stem from discovery, not repeating tasks at greater speed.

Large language models gravitate towards the statistical consensus. A model trained before Galileo would have parroted a geocentric universe; fed 19th-century texts, it would have proved human flight impossible before the Wright brothers succeeded. A recent Nature review found that while LLMs lightened routine scientific chores, the decisive leaps of insight still belonged to humans. Even Demis Hassabis, whose team at Google DeepMind produced AlphaFold — a model that can predict the shape of a protein and is arguably AI’s most celebrated scientific feat so far — admits that achieving genuine artificial general intelligence (systems that can match or surpass humans across the full spectrum of cognitive tasks) may require “several more innovations”…(More)”.

Manipulation: What It Is, Why It’s Bad, What to Do About It


Book by Cass Sunstein: “New technologies are offering companies, politicians, and others unprecedented opportunity to manipulate us. Sometimes we are given the illusion of power – of freedom – through choice, yet the game is rigged, pushing us in specific directions that lead to less wealth, worse health, and weaker democracy. In Manipulation, nudge theory pioneer and New York Times bestselling author Cass Sunstein offers a new definition of manipulation for the digital age, explains why it is wrong, and shows what we can do about it. He reveals how manipulation compromises freedom and personal agency, while threatening to reduce our well-being; he explains the difference between manipulation and unobjectionable forms of influence, including ‘nudges’; and he lifts the lid on online manipulation and manipulation by artificial intelligence, algorithms, and generative AI, as well as threats posed by deepfakes, social media, and ‘dark patterns,’ which can trick people into giving up time and money. Drawing on decades of groundbreaking research in behavioral science, this landmark book outlines steps we can take to counteract manipulation in our daily lives and offers guidance to protect consumers, investors, and workers…(More)”.

Participatory Approaches to Responsible Data Reuse and Establishing a Social License


Chapter by Stefaan Verhulst, Andrew J. Zahuranec & Adam Zable in Global Public Goods Communication (edited by Sónia Pedro Sebastião and Anne-Marie Cotton): “… examines innovative participatory processes for establishing a social license for reusing data as a global public good. While data reuse creates societal value, it can raise concerns and reinforce power imbalances when individuals and communities lack agency over how their data is reused. To address this, the chapter explores participatory approaches that go beyond traditional consent mechanisms. By engaging data subjects and stakeholders, these approaches aim to build trust and ensure data reuse benefits all parties involved.

The chapter presents case studies of participatory approaches to data reuse from various sectors. This includes The GovLab’s New York City “Data Assembly,” which engaged citizens to set conditions for reusing cell phone data during the COVID-19 response. These examples highlight both the potential and challenges of citizen engagement, such as the need to invest in data literacy and other resources to support meaningful public input. The chapter concludes by considering whether participatory processes for data reuse can foster digital self-determination…(More)”.