Stefaan Verhulst
Article by Phil Willon: “California Gov. Gavin Newsom on Sunday announced a new digital democracy initiative that will attempt to connect residents directly with government officials in times of disaster and allow them to express their concerns about matters affecting their day-to-day lives.
The web-based initiative, called Engaged California, will go live with a focus on aiding victims of the deadly wildfires in Pacific Palisades and Altadena who are struggling to recover. For example, comments shared via the online forum could potentially prompt government action regarding insurance coverage, building standards or efforts to require utilities to bury power lines underground.
In a written statement, Newsom described the pilot program as “a town hall for the modern era — where Californians share their perspectives, concerns, and ideas geared toward finding real solutions.”
“We’re starting this effort by more directly involving Californians in the LA firestorm response and recovery,” he added. “As we recover, reimagine, and rebuild Los Angeles, we will do it together.”
The Democrat’s administration has ambitious plans for the effort that go far beyond the wildfires. Engaged California is modeled after a program in Taiwan that became an essential bridge between the public and the government at the height of the COVID-19 pandemic. The Taiwanese government has relied on it to combat online political disinformation as well…(More)”.
Article by Anirudh Suri: “This paper explores the question of whether India specifically will be able to compete and lead in AI or whether it will remain relegated to a minor role in this global competition. The paper argues that if India is to meet its larger stated ambition of becoming a global leader in AI, it will need to fill significant gaps in at least three areas urgently: talent, data, and research. Putting these three missing pieces in place can help position India extremely well to compete in the global AI race.
India’s national AI mission (NAIM), also known as the IndiaAI Mission, was launched in 2024 and rightly notes that success in the AI race requires multiple pieces of the AI puzzle to be in place. Accordingly, it has laid out a plan across seven elements of the “AI stack”: computing/AI infrastructure, data, talent, research and development (R&D), capital, algorithms, and applications.
However, the focus thus far has in practice been on only two elements: ensuring the availability of AI-focused hardware/compute and, to some extent, building Indic language models. India has not paid enough attention to, acted on, or put significant resources behind three other key enabling elements of AI competitiveness, namely data, talent, and R&D…(More)”.
OECD Report: “This report examines how actors in Portugal, Spain and the Netherlands interact and work together to contribute to the development of emerging technologies for citizen participation. Through in-depth research and analysis of actors’ motivations, experiences, challenges, and enablers in this nascent but promising field, this paper presents a unique cross-national perspective on innovation ecosystems for citizen participation using emerging technology. It includes lessons and concrete proposals for policymakers, innovators, and researchers seeking to develop technology-based citizen participation initiatives…(More)”.
Paper by Masanori Arita: “There are ethical, legal, and governance challenges surrounding data, particularly in the context of digital sequence information (DSI) on genetic resources. I focus on the shift in the international framework, as exemplified by the CBD-COP15 decision on benefit-sharing from DSI and discuss the growing significance of data sovereignty in the age of AI and synthetic biology. Using the example of the COVID-19 pandemic, the tension between open science principles and data control rights is explained. This opinion also highlights the importance of inclusive and equitable data sharing frameworks that respect both privacy and sovereign data rights, stressing the need for international cooperation and equitable access to data to reduce global inequalities in scientific and technological advancement…(More)”.
Article by Phanish Puranam: “When Google’s CEO Sundar Pichai recently revealed that 25 percent of the company’s software is now machine-generated, it underscored how quickly artificial intelligence is reshaping the workplace.
What does this mean for how we organise and manage? Will there still be room for humans in tomorrow’s organisations? And what might their work conditions look like? I tackle these questions in my new book “Re-Humanize: How to Build Human-Centric Organizations in the Age of Algorithms”.
The answers are not a given. They will depend on what we choose to do – what kinds of organisations we design. I make the case that successful organisation designs will have to pursue both goal-centricity (i.e. achieving objectives) and human-centricity (i.e. creating social environments that people find attractive). A myopic focus on only one or the other will not bode well for us.
The dual purpose of organisations
Why focus on organisations at a time when technology seems to be making such exciting strides? This was the very first question that INSEAD alumna Joanna Gordon asked me in a recent digital@INSEAD webinar.
My answer: Homo sapiens’ most impressive accomplishments, from building the pyramids to developing Covid-19 vaccines, were not individual achievements. They were possible only because many people worked together effectively. “How to organise groups to attain goals” is our oldest general-purpose technology (GPT!).
But there is more. To humans, organisations don’t just help accomplish goals. We are a species that has evolved to survive and thrive in groups, and organisations (i.e. groups with goals) are the natural habitat of Homo sapiens. They provide us with a sense of community and, as research has shown, help us strike a balance between our needs for social connection, individual autonomy and feeling capable and effective…(More)”.
Book by Rob Kitchin: “Critical Data Studies has come of age as a vibrant, interdisciplinary field of study. Taking data as its primary analytical focus, the field theorises the nature of data; examines how data are produced, managed, governed and shared; investigates how they are used to make sense of the world and to perform practical action; and explores whose agenda data-driven systems serve.
This book is the first comprehensive A-Z guide to the concepts and methods of Critical Data Studies, providing succinct definitions and descriptions of over 400 key terms, along with suggested further reading. The book enables readers to quickly navigate and improve their comprehension of the field, while also acting as a guide for discovering ideas and methods that will be of value in their own studies…(More)”
Chapter by Philipp Hacker, Andreas Engel, Sarah Hammer and Brent Mittelstadt: “… introduces The Oxford Handbook of the Foundations and Regulation of Generative AI, outlining the key themes and questions surrounding the technical development, regulatory governance, and societal implications of generative AI. It highlights the historical context of generative AI, distinguishes it from traditional AI, and explores its diverse applications across multiple domains, including text, images, music, and scientific discovery. The discussion critically assesses whether generative AI represents a paradigm shift or a temporary hype. Furthermore, the chapter extensively surveys both emerging and established regulatory frameworks, including the EU AI Act, the GDPR, privacy and personality rights, and copyright, as well as global legal responses. We conclude that, for now, the “Old Guard” of legal frameworks regulates generative AI more tightly and effectively than the “Newcomers,” but that may change as the new laws fully kick in. The chapter concludes by mapping the structure of the Handbook…(More)”
Paper by Sara Marcucci and Stefaan Verhulst: “The increasing complexity of global challenges, such as climate change, public health crises, and socioeconomic inequalities, underscores the need for a more sophisticated and adaptive policymaking approach. Evidence-Informed Decision-Making (EIDM) has emerged as a critical framework, leveraging data and research to guide policy design, implementation, and impact assessment. However, traditional evidence-based approaches, such as reliance on Randomized Controlled Trials (RCTs) and systematic reviews, face limitations, including resource intensity, contextual constraints, and difficulty in addressing real-time challenges. Artificial Intelligence offers transformative potential to enhance EIDM by enabling large-scale data analysis, pattern recognition, predictive modeling, and stakeholder engagement across the policy cycle. While generative AI has attracted significant attention, this paper emphasizes the broader spectrum of AI applications beyond generative AI, such as natural language processing (NLP), decision trees, and basic machine learning algorithms, which continue to play a critical role in evidence-informed policymaking. These models, often more transparent and resource-efficient, remain highly relevant in supporting data analysis, policy simulations, and decision support.
This paper explores AI’s role in three key phases of the policy cycle: (1) problem identification, where AI can support issue framing, trend detection, and scenario creation; (2) policy design, where AI-driven simulations and decision-support tools can improve solution alignment with real-world contexts; and (3) policy implementation and impact assessment, where AI can enhance monitoring, evaluation, and adaptive decision-making. Despite its promise, AI adoption in policymaking remains limited due to challenges such as algorithmic bias, lack of explainability, resource demands, and ethical concerns related to data privacy and environmental impact. To ensure responsible and effective AI integration, this paper highlights key recommendations: prioritizing augmentation over automation, embedding human oversight throughout AI-driven processes, facilitating policy iteration, and combining AI with participatory governance models…(More)”.
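To make the paper’s point about transparent, resource-efficient models concrete, here is a minimal, hypothetical sketch of the kind of decision-support model it alludes to: a one-level decision tree (a “stump”) fit to synthetic district-level data. The indicator, labels, and threshold search are illustrative assumptions for this digest, not taken from the paper; the point is that the resulting rule ("flag a district when the indicator crosses a threshold") is fully inspectable by a policymaker.

```python
def fit_stump(values, labels):
    """Find the single threshold on one indicator that best separates the labels.

    Returns the threshold t maximising the number of samples where
    (value >= t) matches the label. The exhaustive search is O(n^2),
    which is fine for illustration-sized data.
    """
    best_threshold, best_correct = None, -1
    for t in sorted(set(values)):
        correct = sum((x >= t) == y for x, y in zip(values, labels))
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

# Synthetic district-level indicator (e.g. a pollution index) and whether,
# in hindsight, an intervention was warranted (1) or not (0).
index = [12, 45, 30, 80, 55, 20, 70, 15]
flag = [0, 1, 0, 1, 1, 0, 1, 0]

threshold = fit_stump(index, flag)
predict = lambda x: int(x >= threshold)
print(threshold)                      # the learned, human-readable cut-off
print([predict(x) for x in index])    # predictions under that single rule
```

Unlike an opaque model, the entire fitted "policy" here is one number, which is exactly the transparency property the paper credits to simple models such as decision trees.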

Article by Nithya Ramanathan & Jim Fruchterman: “Recent milestones in generative AI have sent nonprofits, social enterprises, and funders alike scrambling to understand how these innovations can be harnessed for global good. Along with this enthusiasm, there is also warranted concern that AI will greatly increase the digital divide and fail to improve the lives of 90 percent of the people on our planet. The current focus on funding AI intelligently and strategically in the social sector is critical, and it will help ensure that money has the largest impact.
So how can the social sector meet the current moment?
AI is already good at a lot of things. Plenty of social impact organizations are using AI right now, with positive results. Great resources exist for developing a useful understanding of the current landscape and how existing AI tech can serve your mission, including this report from Stanford HAI and Project Evident and this AI Treasure Map for Nonprofits from Tech Matters.
While some tech-for-good companies are creating AI and thriving—Digital Green, Khan Academy, and Jacaranda Health, among many others—most social sector organizations are not ready to build AI solutions. But even organizations that don’t have AI on their radar need to be thinking about how to address one of the biggest challenges to harnessing AI to solve social sector problems: insufficient data…(More)”.
Announcement by Alex Diaz: “Floods continue to devastate communities worldwide, and many are pursuing advancements in AI-driven flood forecasting, enabling faster, more efficient detection and response. Over the past few years, Google Research has focused on harnessing AI modeling and satellite imagery to dramatically improve the reliability of flood forecasting — while working with partners to expand coverage for people in vulnerable communities around the world.
Today, we’re rolling out new advanced features in Flood Hub designed to allow experts to understand flood risk in a given region via inundation history maps, and to understand how a given flood forecast on Flood Hub might propagate throughout a river basin. With the inundation history maps, Flood Hub expert users can view flood risk areas in high resolution on the map even in the absence of a current flood event. This is useful in cases where our flood forecasting does not include real-time inundation maps, or for the pre-planning of humanitarian work. You can find more explanations about the inundation history maps and more in the Flood Hub Help Center…(More)”.