Stefaan Verhulst
Article by Lauren Leek: “I needed a restaurant recommendation, so I did what every normal person would do: I scraped every single restaurant in Greater London and built a machine-learning model.
It started as a very reasonable problem. I was tired of doom-scrolling Google Maps, trying to disentangle genuinely good food from whatever the algorithm had decided to push at me that day. Somewhere along the way, the project stopped being about dinner and became about something slightly more unhinged: how digital platforms quietly redistribute economic survival across cities.
Because once you start looking at London’s restaurant scene through data, you stop seeing all those cute independents and hot new openings. You start seeing an algorithmic market – one where visibility compounds, demand snowballs, and who gets to survive is increasingly decided by code.
Google Maps Is Not a Directory. It’s a Market Maker.
The public story of Google Maps is that it passively reflects “what people like.” More stars, more reviews, better food. But that framing obscures how the platform actually operates. Google Maps is not just indexing demand – it is actively organising it through a ranking system built on a small number of core signals that Google itself has publicly acknowledged: relevance, distance, and prominence.
“Relevance” is inferred from text matching between your search query and business metadata. “Distance” is purely spatial. But “prominence” is where the political economy begins. Google defines prominence using signals such as review volume, review velocity, average rating, brand recognition, and broader web visibility. In other words, it is not just what people think of a place – it is how often people interact with it, talk about it, and already recognise it…(More)”.
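Google has never published the full formula, but the compounding dynamic described above can be sketched as a toy scoring function. Every weight, field name, and transformation below is an illustrative assumption, not Google's actual model:

```python
from dataclasses import dataclass
from math import log1p

@dataclass
class Listing:
    name: str
    text_match: float   # "relevance": query/metadata match, 0..1
    distance_km: float  # "distance": purely spatial
    reviews: int        # a "prominence" signal: review volume
    rating: float       # another prominence signal: average rating, 1..5

def rank_score(l: Listing) -> float:
    """Toy composite of the three publicly acknowledged signals."""
    relevance = l.text_match
    proximity = 1.0 / (1.0 + l.distance_km)
    # Prominence compounds: log-scaled review volume times rating,
    # so already-visible places keep pulling further ahead.
    prominence = log1p(l.reviews) * (l.rating / 5.0)
    return relevance * proximity * prominence

listings = [
    Listing("Hot new independent", 0.9, 1.0, 40, 4.8),
    Listing("Established chain", 0.9, 1.0, 4000, 4.2),
]
ranked = sorted(listings, key=rank_score, reverse=True)
```

Because prominence grows with interaction volume, the toy model reproduces the snowball the article describes: two equally relevant, equally close places diverge purely on how visible they already are.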
Article by Cosima Lenz, Stefaan Verhulst, and Roshni Singh: “Women’s health has long been constrained not simply by a lack of research or investment, but by the absence of a clear, collectively defined set of priorities. Across the field, the most urgent questions – those that reflect women’s lived experiences, diverse needs and evolving health challenges – are too often unidentified, under-articulated or overshadowed by legacy agendas. Consequently, decision-makers struggle to allocate resources effectively, researchers work without a shared compass and innovation efforts risk overlooking the areas of greatest impact.
That’s why identifying and prioritising the questions that matter is essential. Questions shape what gets studied, what gets measured and whose needs are addressed. Yet historically, these questions have been selected in fragmented or opaque ways, driven by the interests of a limited set of stakeholders. This has left major evidence gaps, particularly in areas that disproportionately affect women, and has perpetuated inconsistencies in diagnosis, treatment and care…
The GovLab’s 100 Questions Initiative is a global, multi-phase process that seeks to collectively identify and prioritise the ten most consequential questions in a given field. In partnership with CEPS, and with support from the Gates-funded Research & Innovation (R&I) project, this methodology was applied to women’s health innovation to define the Top 10 Questions guiding future research and innovation.
The process combined topic mapping with experts across research, policy, technology and advocacy, followed by collecting and refining candidate questions to address gaps in evidence, practice and lived experience. More than 70 global domain and data experts contributed, followed by a public voting phase that prioritised questions seen as both urgent and actionable. The full methodology is detailed in a pre-publication report on SSRN.
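The initiative's actual scoring rubric is detailed in the SSRN report; as a loose sketch only, a voting phase that weighs how "urgent and actionable" each candidate question is could be tallied like this (the questions, scales, and aggregation rule here are invented for illustration):

```python
# Toy tally: each ballot scores a candidate question on urgency and
# actionability (1-5); questions are ranked by the product of the means.
from collections import defaultdict
from statistics import mean

ballots = [
    # (question, urgency, actionability) -- placeholder examples
    ("Why are cardiac symptoms in women under-diagnosed?", 5, 4),
    ("Why are cardiac symptoms in women under-diagnosed?", 4, 5),
    ("What evidence gaps persist in menopause care?", 5, 3),
]

scores = defaultdict(list)
for question, urgency, actionability in ballots:
    scores[question].append((urgency, actionability))

def priority(votes):
    """Combine both dimensions so a question must score well on each."""
    return mean(u for u, _ in votes) * mean(a for _, a in votes)

top = sorted(scores, key=lambda q: priority(scores[q]), reverse=True)
```

Multiplying the two means (rather than summing) penalises questions that are urgent but not actionable, or vice versa, which matches the stated aim of surfacing questions that are both.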
The Top 10 Questions are diagnostic tools, revealing evidence gaps, system failures and persistent assumptions shaping women’s health. By asking better questions, the initiative creates the conditions for more relevant research and improved outcomes…(More)”.
Article by Anirudh Dinesh: “Only last week, the state government of Madhya Pradesh signed a partnership with Digital India’s Bhashini Division (DIBD) to integrate multilingual AI tools across the state’s digital governance platforms. The agreement, formalized at a regional AI conference in the capital city Bhopal, aims to enable citizens to interact with public services in their own languages rather than defaulting to English or Hindi.
While most mainstream AI systems are trained primarily on English and optimized for English-speaking contexts, the Bhashini division is building infrastructure specifically designed for linguistic diversity – treating language access not as an afterthought but as foundational to building inclusive AI. This is part of a broader movement in India around building Digital Public Infrastructure (DPI) shaped by openness, accessibility, and inclusion.
Bhashini, short for “BHASHa INterface for India”, was launched by Prime Minister Modi in 2022. It is a platform that treats language access as public infrastructure exposed through standardized APIs, combining translation across all 22 constitutionally recognized Indian languages, speech recognition, and text-to-speech tools that developers, governments, and nonprofits can use freely.
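The exact interface is documented on the platform itself; the sketch below shows only the general shape of a call to a translation service of this kind. The payload structure, field names, and language codes are hypothetical placeholders, not Bhashini's documented schema:

```python
# Hypothetical request builder for a Bhashini-style translation API.
# All field names and language codes below are illustrative assumptions,
# not the platform's documented interface.
import json

def build_translation_request(text: str, source_lang: str, target_lang: str) -> str:
    """Assemble a JSON payload for a hypothetical translation task."""
    payload = {
        "task": "translation",
        "input": [{"source": text}],
        "config": {
            "language": {
                "sourceLanguage": source_lang,  # e.g. "en"
                "targetLanguage": target_lang,  # e.g. "kok" (Konkani)
            }
        },
    }
    return json.dumps(payload)

req = build_translation_request("Where is the nearest clinic?", "en", "kok")
```

The point of a standardized payload like this is that a developer, government department, or nonprofit can swap the target language without changing application code, which is what makes the 22-language coverage usable as shared infrastructure.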

Crowdsourced Translations: BhashaDaan
Modern AI learns from vast quantities of data. But for many Indian languages, such as Konkani and Bodo, that data does not exist digitally. BhashaDaan means “Language Donation”…(More)”.
Report by OneData: “Data is the lifeblood of the 21st-century economy: powering decisions, fueling economic growth, shaping algorithms, and spurring scientific innovation.
But vast inequalities have emerged, and not just in financial wealth. Access to data—and the ability to extract meaning from it—are deeply unequal. As is the frequency, quality, and availability of answers to basic questions in large parts of the world. Take the world’s public finance data, for instance. It is fragmented and often outdated, hard to find and use, and locked in formats and products that don’t reach the people who need them. As a result, the “last mile” from data to usable insight is weak, forcing decision-makers and advocates to frequently rely on partial evidence, outdated snapshots, or anecdotes.
A growing set of initiatives point to a different model, one that moves away from static reports toward a living evidence infrastructure, with shared data backbones, reusable analytical workflows, and applications designed around real policy tasks. This paper describes the problem and this emerging approach, and highlights ONE Data as one example of how organisations are attempting to reduce friction between credible evidence and day-to-day policy, media, and advocacy work…(More)”.
Press Release (Canada): “The People’s Consultation on AI launches today: a collaborative civil society initiative created in response to the federal government’s failure to provide a meaningful consultation process during the development of a national AI strategy in October 2025.
In 2025, the government hastily assembled a task force to develop its national AI strategy. The group was heavily skewed towards industry, with very few participants able to speak to the broader ethical, social and political implications of the technology.
An accompanying public consultation allowed just 30 days for feedback, preventing those most impacted by AI from participating effectively. The government’s consultation survey questions prioritized economic benefits over AI’s many negative impacts, and responses are now being assessed by AI rather than by public officials.
These and other shortcomings were detailed in an open letter signed by over 160 civil society organizations and experts in October 2025, protesting the government’s “national sprint” on AI and documenting the many negative impacts that are already occurring as AI technologies become embedded in every aspect of Canadian society.
The People’s Consultation on AI offers a meaningful alternative. Beginning today, public interest groups, academics, impacted communities, and people across Canada have a genuine opportunity to have their say on whether–and how–AI should be adopted and governed in Canada.
Designed for broad participation, the People’s Consultation on AI welcomes everything from the results of neighbourhood discussions about AI’s everyday impacts to in-depth expert analyses. The consultation website provides resources on current implications of AI alongside general guidance and community facilitation tools to help people craft submissions collaboratively…(More)”.
Paper by Eszter Kovács Szitkay, Dániel Oross, and Alexandra Kiss: “The institutionalization of democratic innovations has been the focus of considerable debate in academic literature, particularly regarding whether it is necessary and, if so, what form it should optimally take. However, the present research—which uses the concept of institutions of citizen participation (ICPs) instead of democratic innovation due to its greater applicability in this context—goes one step further. Beyond the scrutiny of institutionalization, it also examines the democratic quality of ICPs. It argues that institutionalization alone does not guarantee the effective functioning of the related institutions. Hence, the article examines the institutionalization of ICPs in Hungary, evaluating its degree, impact, and potential in an illiberal and centralized environment by posing the following research question: How does the degree of institutionalization affect the quality of ICPs in a hybrid regime? The methodology is built on document analysis and applies a three-step assessment framework consisting of an institutionalization assessment of Hungarian ICPs, the use of an evaluation framework developed for a quality analysis, and, lastly, an analysis of the correlation between the degree of fulfillment of the institutionalization criteria and the impact on policy-making. Being embedded in the context of Hungary, the article defines the contours of how ICPs operate and have effects in a backsliding democracy. The article assesses five Hungarian ICPs, including open primaries, referenda, national consultations, public hearings, and citizens’ assemblies. The findings demonstrate that institutionalization in itself is not sufficient to ensure the quality of these institutions, and provide insight into the functioning of Hungary’s hybrid regime, which is based on the logic of “ruling by cheating”…(More)”.
Book edited by Mike Cundall and Liz Sills: “… an introductory, interdisciplinary project that brings together authors from a wide variety of disciplines to discuss and explore issues surrounding the popular culture phenomenon known as memes. This anthology looks at the cultural, ethical, philosophical, and societal influence that memes have. Questions as to the nature of memes as art, the rhetorical influence of memes, how memes operate at a psychological and cultural level, and why memes are important across a variety of areas in our lives are discussed in various ways throughout the chapters. Attention has been paid by the authors to avoid jargon and make their text as accessible as possible, encouraging readers to explore and think further on the issues. The text includes vocabulary terms and learning objectives for easy navigation…(More)”.
Book by Richard R. Khan: “…Crafted with precision and purpose, each entry goes beyond a definition by linking concepts, visuals, and real-world applications to show how the pieces fit. From core AI and machine learning to quantum computing, robotics, IoT, cloud, cybersecurity, XR, and generative AI, this expanded edition illuminates the intersections that matter most. It also foregrounds ethics, governance, and bias, helping readers engage with innovation responsibly and with confidence.
Whether you’re a student beginning your journey, a professional seeking an edge, a policymaker weighing impact, or a curious mind eager to understand what’s next, this glossary is your essential companion. Read it cover to cover, browse by domain, or consult it in the moment. You’ll find clear explanations, cross-references, and context that accelerate learning and decision-making…(More)”.
Report by the Office for Statistics Regulation: “This report asks: what is trust? Who trusts? And how do you build trust? These questions help us to understand whether the public trusts those involved in the production and communication of official statistics, and what accounts for different levels of trust. These questions are answered through synthesising existing literature, supported by primary analysis (described in Methodology). Thereafter, it concludes with a series of practical recommendations which can be adopted to increase levels of trust, improve trustworthiness and contribute to the overall vision of ensuring that official statistics serve the public good.
This report investigates levels of trust and draws together evidence exploring influencing factors. As the literature and existing studies focusing explicitly on the topic of trust in “official statistics” are relatively sparse – with obvious exceptions including the Public Confidence in Official Statistics (PCOS) survey and a small collation of commissioned surveys dedicated to this theme – this review adopts a wider approach which analyses levels of public trust more broadly. It considers studies which explore levels of public trust in actors and objects involved in the production or communication of official statistics. This includes the government; the Civil Service; scientists and experts; journalists and the media; research on communication platforms; and evidence more broadly.
From this broader approach to exploring trust, readers are provided with an overall picture of public trust levels. To support this aim, this review adopts a cross-disciplinary outlook drawing on psychological, sociological and political accounts of trust, and considers a range of models developed within these fields…(More)”.
Report by UNU: “…examines how artificial intelligence (AI) can be used to address some of the most pressing challenges facing humanity and the planet, including climate change, humanitarian crises, food insecurity and gaps in access to health and education. Drawing on nearly a decade of work under the International Telecommunication Union (ITU)–led AI for Good platform, the report focuses on three AI domains that have demonstrated particular relevance for the public good: robotics, geospatial artificial intelligence (GeoAI) and AI for communications networks.
Across applications ranging from healthcare and telemedicine to disaster response, biodiversity conservation and energy use optimization, the report documents how AI systems are already being applied to improve early warning, decision-making and service delivery, including in low-resource and crisis-affected settings. These applications illustrate how AI technologies beyond generative AI, when embedded in physical systems, spatial analysis and digital infrastructure, can support human well-being and planetary health.
At the same time, the report emphasizes that the benefits of AI are not automatic. Without appropriate governance, investment and capacity, AI systems risk reinforcing existing inequalities, exacerbating environmental pressures and undermining trust. Based on the analysis of case studies and consultations with experts from the AI for Good community, the report identifies five interrelated pathways that are critical for creating an enabling environment for AI for human and planetary well-being:
- Data quality, access and governance: Strengthening access to high-quality, representative and well-governed data, particularly geospatial data, to reduce bias, exclusion and fragmented decision-making.
- Digital infrastructure and access: Investing in inclusive digital infrastructure, including broadband connectivity, compute capacity and interoperable systems, to address persistent digital divides.
- AI literacy and talent: Expanding digital literacy, skills development and talent pipelines to enable institutions and societies to effectively deploy, interpret and govern AI systems.
- Responsible AI policy: Embedding safeguards related to human rights, privacy, cybersecurity, physical safety, labour impacts and environmental sustainability throughout the AI lifecycle.
- Digital ecosystem development: Fostering partnerships across governments, the United Nations system, industry, academia and civil society to translate innovation into scalable and durable public-interest outcomes.
Together, these pathways provide a practical framework for moving from isolated pilot projects towards the responsible and systemic deployment of AI systems in support of the Sustainable Development Goals. The report concludes that shaping the trajectory of AI through deliberate, inclusive and human-centred policymaking will be essential to ensuring that rapid technological advances translate into meaningful and lasting benefits for people and the planet…(More)”.