Stefaan Verhulst
Essay by Patrick K. Lin: “Before the 1870s, retail goods rarely carried fixed prices. Instead, haggling was the norm. Customers and store clerks engaged in a song and dance, testing the other’s economic limits. Then, on the eve of the Philadelphia World’s Fair, businessman John Wanamaker transformed an abandoned railroad station into the Grand Depot, one of the first department stores in the United States. At the grand opening, each item in the sprawling store was affixed with a conspicuous label declaring a non-negotiable price. When millions came to the city for the fair, many had their first encounter with fixed price tags. The elimination of haggling saved both customers and clerks time, making the market significantly more efficient. Fair visitors brought the idea of the price tag home with them. Soon, businesses around the world adopted fixed prices and price transparency.
One hundred and fifty years later, the datafication of the economy is causing the retail experience to regress to a form of variable pricing far more coercive than the haggling of the past. With online shopping, social media, and data collection, modern corporations have access to more information than ever before. Retailers can view your purchase history, location, personal demographics, and much more. This has enabled businesses across a variety of sectors to engage in surveillance pricing—the practice of extracting and exploiting personal information in order to charge customers different prices for the same product or service. Today, variable pricing is back, but this time the seller knows everything about you.
The viability of surveillance pricing—its profitability, ubiquity, and exploitative nature—hinges on the presence of market failures. Severe information asymmetries are perhaps the most insidious. While corporations have access to data brokers, online behavioral advertising, and algorithms that can adjust prices in real time, consumers are more disempowered than ever…(More)”.
Paper by Alexandros Sagkriotis: “Real-world data (RWD) and real-world evidence (RWE) are increasingly used to inform regulatory decisions, health technology assessment, and health system planning. However, patients whose data underpin these activities often experience limited transparency or benefit when their information is monetised. While regulatory and HTA frameworks emphasise methodological rigour and analytical transparency, they provide limited guidance on fairness, reciprocity, and legitimacy from a patient perspective. This Policy Brief examines this governance gap and argues that evidence integrity must extend beyond technical standards to include ethical stewardship and public trust. Drawing on policy contexts from the UK, the EU, and North America, it proposes five pragmatic safeguards to strengthen transparency, accountability, and patient-centred governance in secondary data use, supporting the sustainability and legitimacy of RWE infrastructures as data initiatives expand…(More)”.
Article about Edward Bellamy: “When American author and journalist Edward Bellamy published his utopian novel Looking Backward: 2000–1887 in 1888, he didn’t know that it would be one of the best-selling books of the era; that it would inspire political groups around the world; or that it would influence the thinking of some of the most prominent intellectuals of the time.
All this he didn’t know. But he did know that the slums, sweatshops, and unsafe factories he observed as America industrialized in the second half of the nineteenth century, alongside the skyrocketing wealth of a handful of men, couldn’t represent the pinnacle of human society; there had to be something better.
One of the fundamental ways we misperceive the world is by believing that the way things are is the way they have to be; that the world as it is today reflects the natural order of things. Looking Backward was Bellamy’s attempt to help people avoid falling into this cognitive quicksand. Because if the way things are is the way they have to be, what use is there trying to change them? Or, even if change is possible, you’re constrained by the “nature of things,” so there is only so much you can do. Through the novel, Bellamy imagined what the world could be in the year 2000, if humans realized their rational and moral potential, and hoped to inspire readers to work toward achieving it.
The lead character in Looking Backward is Julian West, a well-to-do thirty-something living in Boston who falls into a deep sleep in 1887 and wakes up over a century later in the year 2000. When West comes to, he discovers a utopian society, free from war and economic and social injustice, and full of community and solidarity. As the novel unfurls, West learns chapter by chapter how society has been organized in order to achieve this.
There is a guaranteed income (similar to a universal basic income); work is tied to motivation and duty as opposed to external incentives; and the good life is found through relationships rather than material consumption. And of course West is tasked with explaining to those in 2000 what things were like in the nineteenth century—they find it hard to believe society could have ever tolerated such inequality and injustice. (And like any good science fiction author, Bellamy was the first to dream up several inventions, including the clock radio and the idea of a payment card—there is a monument to the credit card in Russia with Bellamy’s name on it.)…(More)”.
Article by Shihab Jamal: “…Hemmings is part of a vast number of research-support specialists working at scientific institutions around the world, often in the shadows. As the ‘stagehands’ of science, they are mostly invisible to the audience but essential to the show. Even though their fingerprints are all over many data sets, they’re rarely recognized as full contributors on projects, as co-authors and in other ways that are formally rewarded by the scientific establishment. In publications, they often appear only in a short phrase in the acknowledgements section. Their hidden labour and expertise can be difficult to measure.
Simon Hettrick, the chair of the Hidden REF initiative, a campaign launched at the University of Southampton, UK, in 2020 to highlight and celebrate these crucial roles, says that “this lack of recognition translates into significant difficulties for people in these roles: in getting support, finding positions and progressing their careers”.
Nature’s careers team interviewed research-support professionals to hear how their work helps to shape the course of modern science, and how recognition — or the lack of it — has influenced their careers…(More)”.
Paper by Leo Ferres and Laetitia Gauvin: “Call detail records (CDR) from mobile phone networks are widely used to study human mobility; however, CDR data from a single mobile operator are inherently biased because the observed users do not mirror the population distribution. Using data from a major Chilean carrier in Santiago, we observe that the user base is skewed by socioeconomic group, so aggregate metrics such as the radius of gyration are distorted by the population that is actually observed.
To correct this sampling bias, we apply multilevel regression and poststratification (MRP), a method that is not yet standard for CDR-based mobility studies. We fit a Bayesian multilevel model for individual mobility using socioeconomic status, gender, and geography, with partial pooling across comunas, and then poststratify the predictions to match census demographics. This approach reduces the naive CDR estimate of average radius of gyration by about 17%.
Importantly, a version of the model that uses only geographic information still captures much of the bias, showing that MRP can be useful even when the socioeconomic composition of users is not fully known, as long as socioeconomic groups follow distinct spatial patterns. This example demonstrates how MRP can provide a principled correction for non-representative CDR-derived mobility estimates, rather than treating the carrier sample as if it were a random population sample…(More)”.
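To make the correction concrete, here is a minimal sketch of the poststratification step alone, assuming cell-level predictions of the radius of gyration are already available from a fitted multilevel model. The comuna names, cell structure, and numbers are hypothetical illustrations, not the paper's data, and the cells are simplified to socioeconomic group and comuna only.

```python
import pandas as pd

# Hypothetical cell-level predictions from a fitted multilevel model:
# one row per (comuna, socioeconomic group) cell, simplified from the
# paper's SES x gender x geography cells.
predictions = pd.DataFrame({
    "comuna": ["Santiago", "Santiago", "Providencia", "Providencia"],
    "ses":    ["low", "high", "low", "high"],
    "rog_km": [4.2, 6.1, 3.8, 5.5],  # model-predicted radius of gyration (km)
})

# Hypothetical census counts for the same cells: the poststratification frame.
census = pd.DataFrame({
    "comuna": ["Santiago", "Santiago", "Providencia", "Providencia"],
    "ses":    ["low", "high", "low", "high"],
    "pop":    [120_000, 40_000, 30_000, 90_000],
})

cells = predictions.merge(census, on=["comuna", "ses"])

# A naive CDR estimate weights each cell by its observed users, which is
# exactly where the sampling bias enters; poststratification instead
# weights each cell's prediction by its census population share.
mrp_estimate = (cells["rog_km"] * cells["pop"]).sum() / cells["pop"].sum()
print(f"Poststratified mean radius of gyration: {mrp_estimate:.2f} km")
```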
UNESCO Report: “Artificial intelligence (AI) is rapidly being embedded across companies’ products, services and internal operations, yet governance and disclosure are not evolving at the same speed. This report looks at corporate practice in the context of the emerging responsible AI regulatory landscape and analyses publicly available data collected by the Thomson Reuters Foundation’s AI Company Data Initiative, the largest global dataset of corporate responsible AI disclosures. As privately developed or deployed AI systems shape more of daily life, transparency must move beyond technical descriptions to show how accountability works—including who makes decisions, how ethical issues are escalated, and what remediation paths exist when things go wrong. Clear responsibility for harms or breaches should be identifiable in practice, not just in principle. Just as we expect openness and accountability from government, it is important that the private sector meets comparable transparency standards for AI that affects the public…(More)”.
Paper by Thibault Schrepel: “A digital brain, as coined by Andrej Karpathy, is a personal knowledge infrastructure built from documents you trust. It maps connections between sources, surfaces patterns and inconsistencies. It generates answers on demand with references to the underlying material. The more it is used, the more connections it builds. The output is a private, queryable Wikipedia. On demand, the system generates wiki pages on any theme covered by the corpus, and each new page feeds back into the knowledge base.
This guide documents a pipeline for building such a system from any document corpus, for research purposes. The pipeline adapts Karpathy’s methodology and adds six research-specific contributions. (1) A schema design procedure encodes authority hierarchies and surfaces gaps in the literature. (2) A centrality-weighted wiki generation procedure anchors each article around the most-connected sources. (3) A six-step research protocol produces hypotheses rather than retrieved information. (4) Claim-level extraction moves the unit of analysis from documents to propositions, which makes visible the incompatibilities that document-level graphs hide. (5) A persistent hypothesis register stores every query-generated conjecture and re-tests it as the corpus grows. (6) A complexity-theoretic diagnostic layer measures the graph’s network properties and reports how the field is structured. The pipeline is field-neutral. The two implementations documented here, one academic research corpus and one European Commission decisions dataset, are illustrative examples…(More)”.
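As a rough illustration of the centrality-weighted anchoring idea in contribution (2), the sketch below ranks a theme's sources by degree centrality over a source graph and returns the most-connected ones as page anchors. The graph, node labels, and the anchor_sources function are hypothetical stand-ins, not Schrepel's implementation, which may use a different graph construction or centrality measure.

```python
import networkx as nx

# Hypothetical corpus graph: nodes are trusted sources, edges connect
# sources that share claims or cross-reference each other.
G = nx.Graph()
G.add_edges_from([
    ("Reg_2022/1925", "Case_T-612/17"),
    ("Reg_2022/1925", "Commission_Guidelines"),
    ("Case_T-612/17", "Commission_Guidelines"),
    ("Working_Paper_A", "Commission_Guidelines"),
    ("Working_Paper_B", "Working_Paper_A"),
])

def anchor_sources(graph, theme_nodes, k=3):
    """Rank a theme's sources by degree centrality and return the top-k
    most-connected ones to anchor the generated wiki page around."""
    centrality = nx.degree_centrality(graph)
    ranked = sorted(theme_nodes, key=lambda n: centrality[n], reverse=True)
    return ranked[:k]

theme = ["Reg_2022/1925", "Case_T-612/17", "Working_Paper_B"]
print(anchor_sources(G, theme, k=2))
# A page generator would then treat these anchors as primary citations.
```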
Paper by Teodora Lalova-Spinks et al: “The reuse of health data is critical for advancing health research, yet it raises complex ethical, legal, and societal challenges. In the European Union, the recently adopted European Health Data Space (EHDS) aims to harmonize access to and reuse of health data for research and innovation, while safeguarding individual rights. However, questions remain about what patients value in data reuse and how their values can be embedded in governance frameworks. Belgium, with its strong research tradition and central role in EU policymaking, offers an important testbed for these questions…(More)”.
Article by Anusha Krishnan: “What does a global map of plant life look like, and what happens when the data behind it is incomplete?
A recent study published in Nature Communications in January 2026 describes such a map, built from field surveys, Earth observation systems, and millions of observations recorded by citizen scientists around the world.
This map now offers one of the most in-depth views of how plants function across ecosystems. However, the map also exposes something else: large, persistent gaps in the data that scientists rely on to understand the Earth’s vegetation, which means that much of the world’s plant life is still poorly documented.
The study used 31 plant traits such as size, growth strategy, leaf characteristics, wood density, reproductive traits, and resource use to outline a global ‘plant economics’ spectrum. These characteristics, also known as functional traits, can help us understand how plant strategies change in response to climate and ecosystem stress.
Currently, most global biodiversity data only tell us what species are found where; they don’t tell us what roles those species play in carbon storage and ecosystem dynamics. Mapping these traits on a global scale reveals a spectrum of characteristics, from fast-growing, nutrient-hungry plants to slow-growing, stress-tolerant ones, and shows how these traits support plant growth, survival, adaptation, and persistence in an ever-changing world. This is especially important for informing models of energy, nutrient, and water cycles, which are increasingly being used to plan infrastructure, agricultural, and energy strategies in a world faced with climate change.
The researchers used a combination of data from detailed field surveys collected by scientists, millions of observations from citizen scientists, and environmental information derived from satellites and climate records to create this global plant trait map.
They then used machine-learning models to link the plant traits with environmental conditions such as temperature, rainfall, and soil properties, predicting plant traits in places where direct measurements were unavailable. The models were trained using three approaches: scientific surveys only, citizen science only, and both combined…(More)”.
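A minimal sketch of this gap-filling step, assuming tabular training data that pairs trait measurements with co-located climate and soil covariates. The feature names, coefficients, and data below are hypothetical, and the study's actual models and trait set will differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: environmental covariates at sites where a
# trait (say, wood density in g/cm^3) was actually measured in the field.
n = 500
X_train = np.column_stack([
    rng.uniform(-5, 30, n),     # mean annual temperature (degrees C)
    rng.uniform(100, 3000, n),  # annual rainfall (mm)
    rng.uniform(3.5, 8.5, n),   # soil pH
])
# Hypothetical trait values carrying some environmental signal plus noise.
y_train = (0.4 + 0.01 * X_train[:, 0] - 0.0001 * X_train[:, 1]
           + rng.normal(0, 0.05, n))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predict the trait for unsampled grid cells described only by their
# satellite- and climate-derived covariates: the gap-filling step.
X_grid = np.array([[22.0, 1800.0, 5.6], [4.0, 400.0, 7.2]])
print(model.predict(X_grid))
```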
Resource by the AI & Democracy Foundation: “… is intended to track the capabilities, research questions, and product gaps that stand between us and deliberative democratic systems that can handle the challenges posed by AI advances.
This map builds on Democratic System Cards (ICML 2025) by providing a concrete path toward improving each of the core dimensions underlying the quality of democratic processes. The current version is particularly intended to accelerate efforts focused on improving representative deliberative democratic processes. It provides a map of critical ‘democratic capabilities’ across each dimension and supports prioritization of what to research, fund, build, and apply in order to have the most impact.
Our ultimate goal is that key actors making consequential decisions—especially on AI—have access to processes that are sufficiently high quality (e.g., representative, informed, substantive, deliberative, robust, and legible), whether they are governments, corporations, or transnational institutions. The deliberative processes and systems they employ will vary depending on their purpose and context, and we need the toolbox and capabilities necessary to work across those contexts…(More)”.