Stefaan Verhulst
Essay by Matt Duffy: “James C. Scott writes in Seeing Like a State that governance requires simplification and compression to understand the facts of its world. It generates reductive artifacts that enable the state to grasp what it is governing. These artifacts are important proxies for local and tacit knowledge. If a state measures grain production and grain stores, it doesn’t need to understand how to work the land. The measures are a compressed but manageable proxy for productivity, land value, worker skill, and more.
“Data-driven” governance is not new. Rome ran censuses, tracked taxable property, and knew who was eligible for conscription. Ultimately, every civilization is data-driven. What changes is the algorithm that processes the data. Sometimes it’s the local chief’s gut instinct. Sometimes it’s a massive bureaucracy synthesizing reports and modern data streams into executive action.
And in every society, the leaders processing that data are ultimately beholden to sentiment. Sentiment is not necessarily opinion polls; it’s the actual mood of the citizenry. It’s obvious in a democracy, but Hume tells us it’s true of autocracy as well. Viktor Orbán just lost an election in Hungary despite sixteen years of tilting the playing field in his favor. Scott Alexander recently made the point clearly: modern autocrats calibrate fraud, coercion, and institutional meddling to what the public and key elites will bear. Sentiment is the ceiling every ruler operates under. It’s also incredibly difficult to measure directly, which is why governments build elaborate information channels to approximate it. They track resources, behaviors, and a suite of outcomes as proxies for the mood that ultimately determines their legitimacy.
But despite every government’s efforts to turn information into effective action, every great society has eventually declined. There are other causes, but one consistent driver is that every declining society loses some connection with and control over its citizenry. Formalized information channels fail. Governments falter when information is corrupted. This is easiest to see at the level of metrics: our consistent, repeatable measurements of what’s happening in the world. Every metric has something like a half-life. From the moment a metric is adopted, its relationship with the underlying condition it seeks to quantify erodes.
Formalizing a metric creates a new condition in the world. It alters incentives, changing the behavior of the people within the process it is measuring. It narrows the focus of governments and other organizations, to the detriment of other information that could be considered. And once a metric starts decaying, it is impossible to right the ship without redefining the metric or adopting a new one entirely. Such adjustments happen, but institutions are generally slow to make these changes, often in order to preserve longitudinal comparisons…(More)”.
Book by Christian Sandvig et al.: “Our lives are increasingly governed by automated systems influencing everything from medical care to policing to employment opportunities, but researchers and investigative journalists have shown that AI systems regularly get things wrong.
Auditing AI is a first-of-its-kind exploration of why and how to audit artificial intelligence systems. It offers a simple roadmap for using AI audits to make product and policy changes that benefit companies and the public alike. The book aims to convince readers that AI systems should be subject to robust audits that protect all of us from their dangers. Readers will come away with an understanding of what an AI audit is, why AI audits are important, the key components of an audit that follows best practices, how to interpret an audit, and the options available for acting on an audit’s results.
The book is organized around canonical examples: from AI-powered drones mistakenly targeting civilians in conflict areas, to false arrests triggered by facial recognition systems that misidentified people with dark skin tones, to HR hiring software that prefers men. It explains these definitive cases of AI decision-making gone wrong and then highlights specific audits that have led to concrete changes in government policy and corporate practice…(More)”.
Book edited by Elisabeth B. Reynolds: “A new world order is emerging, and within it, US priorities are shifting. A reconfiguration of global supply chains. The redrawing of geopolitical lines and alliances with increasing threats of conflict. A rise in weather-related disasters. And the emergence of transformative technologies. All these factors are converging to create an environment filled with uncertainty and change—but also possibility.
For the country to flourish as well as defend and secure its interests, it must build on its decades of experience in developing frontier technologies and globally competitive industries through investments in priority technologies for the twenty-first century. This volume, edited by Elisabeth Reynolds, presents a high-level introduction to some of the key areas where the United States must excel and lead in the coming decades to ensure both national and economic security. The book provides an overview of six key priority technologies—critical minerals, semiconductors, biomanufacturing, quantum computing, drones, and advanced manufacturing—needed to build the innovation and industrial ecosystems that will keep the US secure and drive shared prosperity…(More)”.
Textbook by Alan Garfinkel and Yina Guo: “…introduces statistics to beginning students in a distinctly original and non-traditional way. It assumes minimal mathematical or statistical background, yet offers substantial depth that will also engage experienced practitioners. Motivated by the growing call to move beyond the statistical practices and concepts that contributed to the current “reproducibility crisis,” the book encourages readers to rethink what statistics is, how it is used, and how it should be taught. Instead of memorizing formulas that were derived as approximations under unrealistic assumptions, readers can use modern computing to simulate scenarios thousands of times in seconds and simply count outcomes.
Taking this computational approach as fundamental, the book provides thorough coverage of the material, including describing and presenting data, two-group and multi-group comparisons, correlation, regression, statistical power and Bayesianism, deliberately forgoing many standard techniques in favor of simulation-based methods. This philosophy is gaining momentum…(More)”.
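To make the book’s simulation-first approach concrete, here is a minimal sketch of the kind of procedure it favors for a two-group comparison: a permutation test that reshuffles group labels thousands of times and simply counts how often the shuffled difference is at least as large as the observed one. The data and function names below are illustrative and not taken from the textbook.

```python
import numpy as np

def permutation_test(group_a, group_b, n_simulations=10_000, rng=None):
    """Two-group comparison by simulation: shuffle labels, count outcomes.

    Returns the fraction of shuffled datasets whose difference in means is
    at least as extreme as the observed difference (a two-sided p-value).
    """
    rng = np.random.default_rng() if rng is None else rng
    observed = np.mean(group_a) - np.mean(group_b)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)

    count = 0
    for _ in range(n_simulations):
        rng.shuffle(pooled)                          # re-deal the labels at random
        simulated = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(simulated) >= abs(observed):          # as extreme as what we saw?
            count += 1
    return count / n_simulations

# Made-up measurements for two groups, for illustration only
treatment = np.array([12.1, 14.3, 11.8, 15.0, 13.6, 14.9])
control = np.array([10.2, 11.9, 12.4, 10.8, 11.1, 12.0])
print(permutation_test(treatment, control))
```

Nothing in the sketch relies on normality assumptions or derived formulas; the null distribution is simply the pile of reshuffled differences, which is the counting-based reasoning the book builds on.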
Report by IPPR: “The public are understandably worried about AI and, so far, governments have struggled to articulate a clear vision for what it would mean for AI to go well.
Governments must stand ready to both protect people from the risks of AI and deliberately steer this transformation towards public value. But policy has, so far, been too timid to do so.
In this report we draw reflections from IPPR’s work so far on AI policy and highlight next steps, with recommendations for European governments seeking to demonstrate that they are intervening ambitiously in their citizens’ interests. We also introduce a how-to guide for directing AI to public value, identifying priority policies for the near term…(More)”.
Paper by Lexin Zhou et al.: “Ensuring safe and effective use of artificial intelligence (AI) requires understanding and anticipating its performance on new tasks, from advanced scientific challenges to transformed workplace activities. So far, benchmarking has guided progress in AI but has offered limited explanatory and predictive power for general-purpose AI systems, attributed to limited transferability across specific tasks. Here we introduce general scales for AI evaluation that elicit demand profiles explaining what capabilities common AI benchmarks truly measure, extract ability profiles quantifying the general strengths and limits of AI systems, and robustly predict AI performance for new task instances. Our fully automated methodology builds on 18 rubrics, capturing a broad range of cognitive and intellectual demands, which place different task instances on the same general scales, illustrated on 15 large language models (LLMs) and 63 tasks. Both the demand and the ability profiles on these scales bring new insights, such as construct validity through benchmark sensitivity and specificity, and explain conflicting claims about whether AI has reasoning capabilities. Ultimately, high predictive power at the instance level becomes possible using the general scales, providing superior estimates over strong black-box baseline predictors, especially in out-of-distribution settings (new tasks and benchmarks). The scales, rubrics, battery, techniques and results presented here constitute a solid foundation for a science of AI evaluation, underpinning the reliable deployment of AI in the years ahead…(More)”.
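The abstract stays at a high level, but the core idea of placing task demands and system abilities on shared scales can be illustrated with an IRT-style sketch in which success on an instance becomes likelier as a model’s ability exceeds the instance’s demand on each dimension. Everything below (the three placeholder dimensions, the logistic link, and the rule that every dimension must be met) is an assumption for illustration, not the authors’ actual methodology or rubrics.

```python
import math

# Placeholder dimensions; the paper uses 18 rubrics, these names are hypothetical.
DIMENSIONS = ["reasoning", "knowledge", "attention"]

def predicted_success(ability: dict, demand: dict, slope: float = 1.0) -> float:
    """Sketch of instance-level prediction: per-dimension logistic chance that
    ability meets demand, combined by assuming every dimension must be met."""
    p = 1.0
    for dim in DIMENSIONS:
        margin = ability[dim] - demand[dim]            # ability minus demand
        p *= 1.0 / (1.0 + math.exp(-slope * margin))   # logistic link per dimension
    return p

# Illustrative profiles on a shared 0-10 scale (made-up numbers)
model_ability = {"reasoning": 6.5, "knowledge": 8.0, "attention": 5.0}
instance_demand = {"reasoning": 5.0, "knowledge": 4.0, "attention": 7.0}
print(round(predicted_success(model_ability, instance_demand), 2))
```

In a sketch like this, the appeal of general scales is that an ability profile estimated once per model can be reused to predict success on any new instance whose demand profile has been scored, including instances from benchmarks the model was never evaluated on.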
Essay by Patrick K. Lin: “Before the 1870s, retail goods rarely carried fixed prices. Instead, haggling was the norm. Customers and store clerks engaged in a song and dance, testing the other’s economic limits. Then, on the eve of the Philadelphia World’s Fair, businessman John Wanamaker transformed an abandoned railroad station into the Grand Depot, one of the first department stores in the United States. At the grand opening, each item in the sprawling store was affixed with a conspicuous label declaring a non-negotiable price. When millions came to the city for the fair, many had their first encounter with fixed price tags. The elimination of haggling saved both customers and clerks time, making the market significantly more efficient. Fair visitors brought the idea of the price tag home with them. Soon, businesses around the world adopted fixed prices and price transparency.
One hundred and fifty years later, the datafication of the economy is causing the retail experience to regress to a form of variable pricing far more coercive than the haggling of the past. With online shopping, social media, and data collection, modern corporations have access to more information than ever before. Retailers can view your purchase history, location, personal demographics, and much more. This has enabled businesses across a variety of sectors to engage in surveillance pricing—the practice of extracting and exploiting personal information in order to charge customers different prices for the same product or service. Today, variable pricing is back, but this time the seller knows everything about you.
The viability of surveillance pricing—its profitability, ubiquity, and exploitative nature—hinges on the presence of market failures. Severe information asymmetries are perhaps the most insidious. While corporations have access to data brokers, online behavioral advertising, and algorithms that can adjust prices in real time, consumers are more disempowered than ever…(More)”.
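As a purely illustrative sketch of the mechanism the essay describes, and not any real retailer’s system, a surveillance-pricing rule starts from the same listed price and adjusts it per shopper using collected signals such as purchase history, location, and inferred urgency; every signal and coefficient below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ShopperProfile:
    # Hypothetical signals a retailer might hold about a shopper
    past_purchases_of_item: int   # prior purchases in this product category
    affluent_zip_code: bool       # location-derived income proxy
    browsing_urgency: float       # 0..1, e.g. repeated views of the same item

def surveillance_price(list_price: float, shopper: ShopperProfile) -> float:
    """Illustrative personalized price: same product, different price per shopper."""
    multiplier = 1.0
    multiplier += 0.05 * min(shopper.past_purchases_of_item, 4)  # loyalty read as willingness to pay
    multiplier += 0.10 if shopper.affluent_zip_code else -0.05   # income proxy from location
    multiplier += 0.15 * shopper.browsing_urgency                # inferred urgency
    return round(list_price * multiplier, 2)

print(surveillance_price(20.00, ShopperProfile(3, True, 0.8)))   # 27.4
print(surveillance_price(20.00, ShopperProfile(0, False, 0.1)))  # 19.3
```

The information asymmetry the essay highlights is visible even in this toy version: the seller observes every input to the multiplier, while the shopper sees only the final number.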
Paper by Alexandros Sagkriotis: “Real-world data (RWD) and real-world evidence (RWE) are increasingly used to inform regulatory decisions, health technology assessment, and health system planning. However, patients whose data underpin these activities often experience limited transparency or benefit when their information is monetised. While regulatory and HTA frameworks emphasise methodological rigor and analytical transparency, they provide limited guidance on fairness, reciprocity, and legitimacy from a patient perspective. This Policy Brief examines this governance gap and argues that evidence integrity must extend beyond technical standards to include ethical stewardship and public trust. Drawing on policy contexts from the UK, EU, and North America, it proposes five pragmatic safeguards to strengthen transparency, accountability, and patient-centred governance in secondary data use, supporting the sustainability and legitimacy of RWE infrastructures as data initiatives expand…(More)”.
Article about Edward Bellamy: “When American author and journalist Edward Bellamy published his utopian novel Looking Backward: 2000–1887 in 1888, he didn’t know that it would be one of the best-selling books of the era; that it would inspire political groups around the world; or that it would influence the thinking of some of the most prominent intellectuals of the time.
All this he didn’t know. But he did know that the slums, sweatshops, and unsafe factories he observed as America industrialized in the second half of the nineteenth century, alongside the skyrocketing wealth of a handful of men, couldn’t represent the pinnacle of human society; there had to be something better.
One of the fundamental ways we misperceive the world is by believing that the way things are is the way they have to be; that the world as it is today reflects the natural order of things. Looking Backward was Bellamy’s attempt to help people avoid falling into this cognitive quicksand. Because if the way things are is the way they have to be, what use is there in trying to change them? Or, even if change is possible, you’re constrained by the “nature of things,” so there is only so much you can do. Through the novel, Bellamy imagined what the world could be in the year 2000, if humans realized their rational and moral potential, and hoped to inspire readers to work toward achieving it.
The lead character in Looking Backward is Julian West, a well-to-do thirty-something living in Boston who falls into a deep sleep in 1887 and wakes up over a century later in the year 2000. When West comes to, he discovers a utopian society, free from war and economic and social injustice, and full of community and solidarity. As the novel unfurls, West learns chapter by chapter how society has been organized in order to achieve this.
There is a guaranteed income (similar to a universal basic income); work is tied to motivation and duty as opposed to external incentives; and the good life is found through relationships rather than material consumption. And of course West is tasked with explaining to those in 2000 what things were like in the nineteenth century—they find it hard to believe society could have ever tolerated such inequality and injustice. (And like any good science fiction author, Bellamy dreamed up several inventions ahead of their time, including the clock radio and the idea of a payment card—there is a monument to the credit card in Russia with Bellamy’s name on it.)…(More)”.
Article by Shihab Jamal: “…Hemmings is one of a vast number of research-support specialists working at scientific institutions around the world, often in the shadows. As the ‘stagehands’ of science, they are mostly invisible to the audience but essential to the show. Even though their fingerprints are all over many data sets, they’re rarely recognized as full contributors on projects, as co-authors or in other ways that are formally rewarded by the scientific establishment. In publications, they often appear only in a short phrase in the acknowledgements section. Their hidden labour and expertise can be difficult to measure.
Simon Hettrick, the chair of the Hidden REF initiative, a campaign launched at the University of Southampton, UK, in 2020 to highlight and celebrate these crucial roles, says that “this lack of recognition translates into significant difficulties for people in these roles: in getting support, finding positions and progressing their careers”.
Nature’s careers team interviewed research-support professionals to hear how their work helps to shape the course of modern science, and how recognition — or the lack of it — has influenced their careers…(More)”.