Stefaan Verhulst
Paper by Maria Carmen Lemos et al: “Impacts of climate change, such as flooding, drought, and fires, are already affecting millions of people worldwide. To mitigate and adapt to these impacts, we need climate information that: 1) is usable and used by decision-makers, and 2) is disseminated rapidly and widely, that is, information that can be scaled up. We propose three ways to accelerate usable climate knowledge through the collaboration between scientists and potential users: 1) increasing the number and diversity of people cocreating climate information that they trust and can use, 2) disseminating climate information that can be widely available to many decision-makers (e.g., through the internet), and 3) collaborating with decision-makers that make decisions that affect the public (e.g., water managers, city planners)…(More)”.
Article by Natasha Joshi: “…But what do we stand to lose when we privilege data science over human understanding?
C Thi Nguyen explains this through ‘value capture’. It is the process by which “our deepest values get captured by institutional metrics and then become diluted or twisted as a result. Academics aim at citation rates instead of real understanding; journalists aim for numbers of clicks instead of newsworthiness. In value capture, we outsource our values to large-scale institutions. Then all these impersonal, decontextualizing, de-expertizing filters get imported into our core values. And once we internalize those impersonalized values as our own, we won’t even notice what we’re overlooking.”
One such thing being overlooked is care.
Interpersonal caregiving makes no sense from a market lens. The person with power and resources voluntarily expends them to further another person’s well-being and goals. The whole idea of care is oceanic and hard to wrap one’s head around. ‘Head’ being the operative word, because we are trying to understand care with our brains, when it really exists in our bodies and is often performed by our bodies.
Data tools have only inferior ways of measuring care and, by extension, of designing spaces and society for it.
Outside of specific, entangled relationships of care, humans also have an amorphous ability to feel that they are part of a larger whole. We are affiliated to humanity, the planet, and indeed the universe, and feel it in our bones rather than know it to be true in any objective way.
We see micro-entrepreneurs, inventors, climate stewards, and scores of people, both rich and poor and across circumstances, who engage in collective care to make the world a better place. This kind of pro-sociality doesn’t always show up in ways that are tangible, immediate, or measurable.
Datavism, which we seem to have learned from the bazaar, has convinced capital allocators that the impact of social programmes can and should be expressed arithmetically. And, based on those calculations, acts of care can be deemed successful or unsuccessful…(More)”.
WEF Paper: “AI is transforming strategic foresight, the field where experts explore plausible futures and develop strategies to help organizations, governments and others prepare for events to come. This paper from the World Economic Forum, in collaboration with the OECD, explores how AI is reshaping this field.
Drawing on insights from 167 foresight experts in 55 countries, it reveals that experts value AI primarily for saving time and streamlining their work by handling repetitive and labour-intensive tasks. However, many respondents express concerns about the quality and trustworthiness of AI-generated content, noting its proclivity to hallucinations, and its limited capacity for inductive reasoning, as it draws from existing knowledge and struggles to embrace the forward-looking perspectives needed for strategic foresight. The paper suggests how to take advantage of AI where it can augment foresight, while limiting its pitfalls…(More)”.
Book by Ryan Calo: “Technology exerts a profound influence on contemporary society, shaping not just the tools we use but the environments in which we live. Law, uniquely among social forces, is positioned to guide and constrain the social fact of technology in the service of human flourishing. Yet, technology has proven disorienting to law: it presents itself as inevitable, makes a shell game of human responsibility, and daunts regulation. Drawing lessons from communities that critically assess emerging technologies, this book challenges the reflexive acceptance of innovation and critiques the widespread belief that technology is inevitable or ungovernable. It calls for a methodical, coherent approach to the legal analysis of technology—one capable of resisting technology’s disorienting qualities—thus equipping law to meet the demands of an increasingly technology-mediated world while helping to unify the field of law and technology itself…(More)”.
Article by Matt Prewitt: “…Markets have always required some form of protectionist intervention — like intellectual property law — to help foster innovation. In recent years, startups have innovated because of a rich symbiosis with tech giants and their monopoly-seeking investors. Startups are indeed hungry, but their hunger is not to serve consumer needs or the national interest; it is to join the happy ranks of the monopolists. The nature of technological innovation is that competitive markets, without being “managed,” do not inspire it.
Today, this may sound bizarre, heterodox and jarring. But it was once fairly mainstream opinion. In the middle part of the 20th century, many of America’s most celebrated economic minds taught that competitive markets cause technological progress to stagnate. During the neoliberal era that followed, from the 1980s to the 2010s, this idea was largely forgotten and pushed to the margins of politics and academia. But it never lost its kernel of truth…
Where to from here? Both sides of the battered American center must first face their mistakes. This will be painful, not only because their errors have resulted in profound and long-term mis-governance, but also because their blind spots are deeply entangled with old, hard-to-kick ideological habits. The center-left’s sunny techno-utopianism traces its roots back to the rationalism of the French Revolution, via Karl Marx and early 20th-century progressives. The center-right’s fervent market fundamentalism is equally a relic of bygone eras, reflecting the thought of Friedrich Hayek and Milton Friedman — idea-warriors who pitched competitive markets as a cure-all largely to one-up the utopian promises of their techno-optimistic progressive foes. Thus, today, center-right and center-left thinking both feel like artifacts from yesterday’s ideological trenches. A new form of centrism that wants to speak to 2026 would need to thoroughly clear the decks…(More)”
Article by Sarah Perez: “In a blog post, the Wikimedia Foundation, the organization that runs the popular online encyclopedia, called on AI developers to use its content “responsibly” by ensuring its contributions are properly attributed and that content is accessed through its paid product, the Wikimedia Enterprise platform.
The opt-in, paid product allows companies to use Wikipedia’s content at scale without “severely taxing Wikipedia’s servers,” the Wikimedia Foundation blog post explains. In addition, the product’s paid nature allows AI companies to support the organization’s nonprofit mission.
While the post doesn’t go so far as to threaten penalties or any sort of legal action for use of its material through scraping, Wikipedia recently noted that AI bots had been scraping its website while trying to appear human. After updating its bot-detection systems, the organization found that its unusually high traffic in May and June had come from AI bots that were trying to “evade detection.” Meanwhile, it said that “human page views” had declined 8% year-over-year.
Now Wikipedia is laying out its guidelines for AI developers and providers, saying that generative AI developers should provide attribution to give credit to the human contributors whose content it uses to create its outputs…(More)”.
Paper by Wenlan Zhang et al: “Big data has emerged as a critical instrument for urban planning, but it is limited by reliability and representativeness. Its availability varies across space, time, and sociodemographics, especially in the Global South. This results in the emergence of digitally invisible groups: individuals who are unable to contribute to and benefit from data-informed decisions, thus exacerbating existing inequalities and further marginalising populations already at risk of exclusion. This study presents an example application, i.e., land use classification in a developing country, combining traditional geospatial data (satellite imagery, night lights, building footprints) with big data (geotagged Twitter, street view imagery), and compares random forest classification results across different data solutions using multiple evaluation methods. Results show that residents of informal settlements in Nairobi are underrepresented in geotagged Twitter data, and that inaccessible neighbourhoods lack street view imagery. Relying on a single data source does not improve urban mapping; biases can render some groups digitally invisible. Integrating multiple data sources is likely to mitigate these gaps by capturing different people and places, but this should be guided by a systematic, class-wise comparison rather than indiscriminate aggregation. We recommend primary data collection and participatory mapping to address blind spots and ensure these communities are represented in data-driven decision-making…(More)”.
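The class-wise comparison the paper calls for can be sketched in a few lines. This is not the authors' code: the feature groups, the synthetic data, and the two-class labels below are all hypothetical stand-ins, meant only to illustrate how one might train a random forest on different data-source combinations and compare per-class scores rather than a single aggregate.

```python
# Illustrative sketch only: comparing land-use classification across
# data-source combinations with class-wise scores. All feature names
# and the synthetic data are hypothetical stand-ins, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 600

# Hypothetical feature groups: traditional geospatial data vs. social big data.
features = {
    "satellite": rng.normal(size=(n, 3)),     # e.g. spectral bands
    "night_lights": rng.normal(size=(n, 1)),  # e.g. radiance
    "twitter": rng.normal(size=(n, 2)),       # e.g. geotagged activity density
}
# Synthetic land-use labels loosely tied to one satellite feature.
y = (features["satellite"][:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

def evaluate(sources):
    """Train a random forest on the chosen sources; return per-class F1."""
    X = np.hstack([features[s] for s in sources])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average=None)  # one score per class

# Compare single sources against an integrated combination, class by class.
for combo in [["satellite"], ["twitter"], ["satellite", "night_lights", "twitter"]]:
    print("+".join(combo), evaluate(combo))
```

The point of `average=None` is that it exposes each land-use class separately, so a source that looks adequate on aggregate accuracy can still be seen to fail for the classes, and hence the communities, it underrepresents.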
OECD: “People in Latin America and the Caribbean are more optimistic than the OECD average about their governments’ ability to tackle complex global challenges, even as overall levels of trust in government remain lower, according to a new OECD report.
OECD Survey on Drivers of Trust in Public Institutions in Latin America and the Caribbean: 2025 Results is the first regional initiative conducted under the Global Trust Survey Project. Covering ten Latin American and Caribbean countries*, the survey explores participants’ experiences with and expectations of their governments across key areas such as reliability, responsiveness, ability to manage long-term and global challenges, integrity, fairness, and openness.
Across the ten countries surveyed: on average, 35% of people express high or moderately high trust in the national government, while around half (48%) report low or no trust. Public trust varies significantly across countries and institutions. The armed forces, police, and media are more trusted than the judiciary, the civil service, legislatures and political parties…
Trust also varies across population groups, with trust in public institutions lower among those with financial, security and discrimination concerns, and among women and younger people. Perceptions of political voice and partisanship are more strongly associated with trust gaps than socio-economic and demographic characteristics.
People who feel they have a say and the government listens to them are three times more likely to trust their government than those who do not. Currently, only 25% of respondents feel they have a say in government decisions, and just 36% believe national governments are held accountable by legislatures…(More)”.
Book by Slavko Splichal: “…explores the evolving nature of publicness in the era of digital communication and social media saturation, arguing that the rise of the “gig public” represents a new paradigm that challenges the traditional conceptualization of the public in shaping social and political change. The gig public departs from traditional notions of publicness and the public, characterized by individuals’ spontaneous and less-structured engagement in public discourse. This engagement is often hampered by challenges in fostering sustained interaction and depth of discussion, due to the ephemeral nature of online interactions.
In particular, this monograph highlights the importance of customs, negotiations, and contracts that complement the normatively privileged public reasoning in public domains. It examines the transformations in the multifaceted nature of the public and its interrelationship with other social structures amid the shifting boundaries between public and private domains. In addition, it explores the evolution of conceptualizations of publicness and related concepts within critical theory, illustrating how contemporary shifts are redefining civic engagement and the essence of public life in a rapidly changing world. From these perspectives, the study is structured around three primary focal points: First, it analyzes how new information technologies and AI have altered human interactions within the public sphere. Second, it examines the impact of capitalist economic dynamics and governmentality strategies on reshaping the public realm, fundamentally altering the essence of the public and its democratic potential. Third, it explores how habitual and routine practices traditionally associated with the private sphere are now influencing the ongoing evolution of publicness…(More)”.
Paper by Yael Borofsky et al: “Infrastructure inequities define modern cities. This Perspective reflects the viewpoint of a transdisciplinary group of co-authors working to advance infrastructural equity in low-income urban contexts. We argue that methodological silos and data fragmentation undermine the creation of a knowledge base to support coordinated action across diverse actors. As technological advances make it possible to ‘see’ informal settlements without engaging residents, our agenda advocates for (1) the integration of diverse methodological and epistemological traditions; (2) a focus on research that informs context-specific action; and (3) a commitment to ethical standards that center affected communities in efforts to improve infrastructure access…(More)”.