Stefaan Verhulst
Special issue compiled and edited by Cathal O’Madagain, Sarah Alami, Monique Borgerhoff Mulder, Edmond Seabright, José Segovia Martin, James Winters and Andrew Whiten: “Collective intelligence is the ability of groups to solve problems and make decisions more effectively than their individual members can. The phenomenon appears across the natural world. We see it when shoals of fish decide as a group which direction to travel, and in the elaborate mound systems built by ants through the decentralized activity of thousands of individuals. In humans, collective intelligence is exhibited in the accumulation of knowledge transmitted across generations, and in procedures such as majority voting, used to decide questions for a group. This theme issue brings together scholars from multiple disciplines to explore the evolutionary origins of collective intelligence, its role in contemporary societies, and how emerging technologies may reshape it in the future…(More)”.
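The claim that majorities can outperform their members has a classic formal core in Condorcet's jury theorem. As a toy illustration (mine, not the theme issue's), the sketch below assumes each voter is independently correct with probability p > 0.5 and counts how often the majority is right:

```python
# Toy illustration of the Condorcet jury theorem: if each voter is
# independently correct with probability p > 0.5, a simple majority
# is correct more often than any single voter.
import random

def majority_accuracy(p: float, n_voters: int, trials: int = 20_000) -> float:
    """Estimate how often a majority vote picks the correct answer."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

for n in (1, 11, 101):
    print(f"{n:>3} voters at p=0.6 -> majority correct ~{majority_accuracy(0.6, n):.2f}")
```

With p = 0.6, accuracy climbs from 0.60 for a lone voter to roughly 0.75 for eleven and near 0.98 for a hundred and one, one simple mechanism behind the group-level intelligence the issue explores.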
Book by Roland Betancourt: “When Disneyland opened to the public in 1955, it demystified the hidden world of factory automation through its extraordinary new attractions. In this fascinating book, Roland Betancourt tells the story of how the visionary engineers and designers at Disney transformed the technologies of the postwar assembly line into an entertainment experience unlike anything the world had ever seen.
Disneyland and the Rise of Automation traces the origins and evolution of these technical innovations during the theme park’s first three decades in operation, exploring how engineers reimagined the systems and machines of industrial manufacturing and the military. The magnetic tape used to test ballistic missiles was repurposed to animate the talking macaws in the Enchanted Tiki Room. Programmable Logic Controllers, widely used on automotive assembly lines, brought to life the spectacular rides of the Matterhorn Bobsleds and Space Mountain. Betancourt shows how these and other attractions helped to allay fears about automation and job displacement in 1950s America. Along the way, he situates Disneyland’s remarkable creations within a broader history of the technologies that increasingly order and construct the world around us, from the Fordist factory to artificial intelligence.
Essential reading for anyone interested in engineering, corporate histories, or popular culture, Disneyland and the Rise of Automation invites us to consider how technology and the logic of automation become integrated into our lives through entertainment…(More)”.
Essay by Matt Duffy: “James C. Scott writes in Seeing Like a State that governance requires simplification and compression to understand the facts of its world. It generates reductive artifacts that enable the state to grasp what it is governing. These artifacts are important proxies for local and tacit knowledge. If a state measures grain production and grain stores, it doesn’t need to understand how to work the land. The measures are a compressed but manageable proxy for productivity, land value, worker skill, and more.
“Data-driven” governance is not new. Rome ran censuses, tracked taxable property, and knew who was eligible for conscription. Ultimately, every civilization is data-driven. What changes is the algorithm that processes the data. Sometimes it’s the local chief’s gut instinct. Sometimes a massive bureaucracy synthesizing reports and modern data streams into executive action.
And in every society, the leaders processing that data are ultimately beholden to sentiment. Sentiment is not necessarily opinion polls; it’s the actual mood of the citizenry. That’s obvious in a democracy, but Hume tells us it’s true of autocracy as well. Viktor Orbán just lost an election in Hungary despite sixteen years of tilting the playing field in his favor. Scott Alexander recently made the point clearly: modern autocrats calibrate fraud, coercion, and institutional meddling to what the public and key elites will bear. Sentiment is the ceiling every ruler operates under. It’s also incredibly difficult to measure directly, which is why governments build elaborate information channels to approximate it. They track resources, behaviors, and a suite of outcomes as proxies for the mood that ultimately determines their legitimacy.
But despite every government’s great efforts to convert information into effective action, every great society has eventually declined. There are other causes, but one consistent driver is that every declining society loses some connection with and control over its citizenry. Formalized information channels fail. Governments falter when information is corrupted. This is easiest to see at the level of metrics, our consistent, repeatable measurements of what’s happening in the world. Every metric has something like a half-life. From the moment a metric is adopted, its relationship with the underlying condition it seeks to quantify erodes.
Formalizing a metric generates a new state of the world. It alters incentives, changing the behavior of the people within the process it measures. It narrows the focus of governments and other organizations, to the detriment of other information that could be considered. And once a metric starts decaying, it is impossible to right the ship without redefining the metric or adopting a new one entirely. Such adjustments happen, but institutions are generally slow to make them, often in order to keep longitudinal comparisons intact…(More)”.
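Duffy's half-life metaphor is easy to make concrete. The toy simulation below is my own illustration, not the essay's: it assumes that each period a little more effort shifts from the underlying condition to score-chasing, and shows the metric's correlation with reality eroding.

```python
# Toy model of metric decay: as gaming compounds, the measured score
# correlates less and less with the underlying condition.
import random

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate_metric_decay(periods=10, population=1000, gaming_rate=0.15):
    for t in range(periods):
        gamed_share = 1 - (1 - gaming_rate) ** t   # gaming compounds over time
        real  = [random.gauss(1.0, 0.2) for _ in range(population)]
        gamed = [random.gauss(1.0, 0.2) for _ in range(population)]
        metric = [(1 - gamed_share) * r + gamed_share * g
                  for r, g in zip(real, gamed)]
        print(f"period {t}: corr(metric, reality) = {pearson(metric, real):.2f}")

simulate_metric_decay()
```

The correlation starts at 1.0 and drifts steadily toward noise, even though nothing about the measurement procedure itself has changed, only the behavior of the people being measured.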
Book by Christian Sandvig et al: “Our lives are increasingly governed by automated systems influencing everything from medical care to policing to employment opportunities, but researchers and investigative journalists have proven that AI systems regularly get things wrong.
Auditing AI is a first-of-its-kind exploration of why and how to audit artificial intelligence systems. It offers a simple roadmap for using AI audits to make product and policy changes that benefit companies and the public alike. The book aims to convince readers that AI systems should be subject to robust audits to protect all of us from the dangers of these systems. Readers will come away with an understanding of what an AI audit is, why AI audits are important, the key components of an audit that follows best practices, how to interpret an audit, and the options for acting on an audit’s results.
The book is organized around canonical examples: from AI-powered drones mistakenly targeting civilians in conflict areas, to false arrests triggered by facial recognition systems that misidentified people with dark skin tones, to HR hiring software that prefers men. It explains these definitive cases of AI decision-making gone wrong and then highlights specific audits that have led to concrete changes in government policy and corporate practice…(More)”.
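For a sense of what the components of an audit can look like in practice, here is a minimal sketch of one widely used disparity measure, the four-fifths rule on selection rates, applied to the hiring scenario the blurb mentions. The data and numbers are hypothetical, and the check is a standard technique rather than necessarily the book's own method:

```python
# Minimal disparity audit: compare per-group selection rates from a
# model's decisions and flag groups below 80% of the best-treated
# group's rate (the "four-fifths rule" heuristic).
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> rate per group."""
    totals, hits = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += selected
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Return (ratio to best rate, passes 80% threshold) per group."""
    best = max(rates.values())
    return {g: (r / best, r / best >= 0.8) for g, r in rates.items()}

# Hypothetical audit sample: (applicant group, model said "hire"?)
sample = [("men", 1)] * 60 + [("men", 0)] * 40 \
       + [("women", 1)] * 35 + [("women", 0)] * 65

rates = selection_rates(sample)
print(rates)                      # {'men': 0.6, 'women': 0.35}
print(four_fifths_check(rates))   # women at ~0.58 of men's rate -> flagged
```

A flagged ratio is evidence for follow-up, not a verdict on its own; interpreting the result and choosing a response is exactly the ground the book covers.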
Book edited by Elisabeth B. Reynolds: “A new world order is emerging, and within it, US priorities are shifting. A reconfiguration of global supply chains. The redrawing of geopolitical lines and alliances with increasing threats of conflict. A rise in weather-related disasters. And the emergence of transformative technologies. All these factors are converging to create an environment filled with uncertainty and change—but also possibility.
For the country to flourish as well as defend and secure its interests, it must build on its decades of experience in developing frontier technologies and globally competitive industries through investments in priority technologies for the twenty-first century. This volume, edited by Elisabeth Reynolds, presents a high-level introduction to some of the key areas where the United States must excel and lead in the coming decades to ensure both national and economic security. The book provides an overview of six key priority technologies—critical minerals, semiconductors, biomanufacturing, quantum computing, drones, and advanced manufacturing—needed to build the innovation and industrial ecosystems that will keep the US secure and drive shared prosperity…(More)”.
Textbook by Alan Garfinkel and Yina Guo: “…introduces statistics to beginning students in a distinctly original and non-traditional way. It assumes minimal mathematical or statistical background, yet offers substantial depth that will also engage experienced practitioners. Motivated by the growing call to move beyond the statistical practices and concepts that contributed to the current “reproducibility crisis,” the book encourages readers to rethink what statistics is, how it is used, and how it should be taught. Instead of asking students to memorize formulas that were derived as approximations under unrealistic assumptions, it leverages modern computing to simulate scenarios thousands of times in seconds and simply count outcomes.
Taking this computational approach as fundamental, the book provides thorough coverage of the material, including describing and presenting data, two-group and multi-group comparisons, correlation, regression, statistical power and Bayesianism, deliberately forgoing many standard techniques in favor of simulation-based methods. This philosophy is gaining momentum…(More)”.
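To illustrate the simulation-first philosophy the blurb describes, here is a toy two-group comparison done as a permutation test: shuffle the group labels thousands of times and simply count outcomes. The data are invented for the example, and the technique is a standard one rather than a passage from the book:

```python
# Permutation test for a two-group comparison: no t-distribution and
# no normality assumption, just relabel the data at random many times
# and count how often the difference is as large as the one observed.
import random

def permutation_p_value(a, b, n_shuffles=10_000):
    """P(mean difference this large arising if labels were random)."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_shuffles):
        random.shuffle(pooled)
        new_a, new_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(new_a) / len(new_a) - sum(new_b) / len(new_b))
        extreme += diff >= observed
    return extreme / n_shuffles

treatment = [23, 30, 28, 35, 31, 27, 33]
control   = [20, 24, 22, 26, 25, 21, 23]
print(f"p ≈ {permutation_p_value(treatment, control):.4f}")
```

The same shuffle-and-count pattern extends naturally to the multi-group comparisons, correlations, and regressions the book covers.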
Report by IPPR: “The public are understandably worried about AI and, so far, governments have struggled to articulate a clear vision for what it would mean for AI to go well.
Governments must stand ready to both protect people from the risks of AI and deliberately steer this transformation towards public value. But policy has, so far, been too timid to do so.
In this report we draw reflections from IPPR’s work so far on AI policy and highlight next steps, with recommendations for European governments seeking to demonstrate that they are intervening ambitiously in their citizens’ interests. We also introduce a how-to guide for directing AI to public value, identifying priority policies for the near term…(More)”.
Paper by Lexin Zhou et al: “Ensuring safe and effective use of artificial intelligence (AI) requires understanding and anticipating its performance on new tasks, from advanced scientific challenges to transformed workplace activities. So far, benchmarking has guided progress in AI but has offered limited explanatory and predictive power for general-purpose AI systems, attributed to limited transferability across specific tasks. Here we introduce general scales for AI evaluation that elicit demand profiles explaining what capabilities common AI benchmarks truly measure, extract ability profiles quantifying the general strengths and limits of AI systems and robustly predict AI performance for new task instances. Our fully automated methodology builds on 18 rubrics, capturing a broad range of cognitive and intellectual demands, which place different task instances on the same general scales, illustrated on 15 large language models (LLMs) and 63 tasks. Both the demand and the ability profiles on these scales bring new insights such as construct validity through benchmark sensitivity and specificity and explain conflicting claims about whether AI has reasoning capabilities. Ultimately, high predictive power at the instance level becomes possible using the general scales, providing superior estimates over strong black-box baseline predictors, especially in out-of-distribution settings (new tasks and benchmarks). The scales, rubrics, battery, techniques and results presented here constitute a solid foundation for a science of AI evaluation, underpinning the reliable deployment of AI in the years ahead…(More)”
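A schematic reading of the paper's pipeline can be sketched in a few lines. Everything below (the rubric names, the scale values, and the logistic-of-worst-margin rule) is my hypothetical simplification of what "demand profiles" and "ability profiles" could look like, not the authors' actual methodology:

```python
# Schematic sketch: place a task's demands and a model's abilities on
# shared scales, then predict instance-level success from the margin
# on the most demanding dimension.
import math

def predict_success(demands: dict, abilities: dict, slope: float = 1.5) -> float:
    """Logistic prediction from the tightest ability-minus-demand margin."""
    margins = [abilities[r] - demands[r] for r in demands]
    bottleneck = min(margins)  # assume the hardest demand dominates
    return 1 / (1 + math.exp(-slope * bottleneck))

# Hypothetical demand profile for one benchmark item (0-5 scales)
task = {"reasoning": 4, "knowledge": 2, "metacognition": 3}
# Hypothetical ability profile for one LLM on the same scales
model = {"reasoning": 3.2, "knowledge": 4.5, "metacognition": 3.1}

print(f"predicted P(success) ≈ {predict_success(task, model):.2f}")
```

In the paper itself the mapping from profiles to predictions is learned from results across 15 LLMs and 63 tasks; the fixed slope and min-margin rule here merely stand in for that fitted relationship.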
Essay by Patrick K. Lin: “Before the 1870s, retail goods rarely carried fixed prices. Instead, haggling was the norm. Customers and store clerks engaged in a song and dance, testing the other’s economic limits. Then, on the eve of the Philadelphia World’s Fair, businessman John Wanamaker transformed an abandoned railroad station into the Grand Depot, one of the first department stores in the United States. At the grand opening, each item in the sprawling store was affixed with a conspicuous label declaring a non-negotiable price. When millions came to the city for the fair, many had their first encounter with fixed price tags. The elimination of haggling saved both customers and clerks time, making the market significantly more efficient. Fair visitors brought the idea of the price tag home with them. Soon, businesses around the world adopted fixed prices and price transparency.
One hundred and fifty years later, the datafication of the economy is causing the retail experience to regress to a form of variable pricing far more coercive than the haggling of the past. With online shopping, social media, and data collection, modern corporations have access to more information than ever before. Retailers can view your purchase history, location, personal demographics, and much more. This has enabled businesses across a variety of sectors to engage in surveillance pricing—the practice of extracting and exploiting personal information in order to charge customers different prices for the same product or service. Today, variable pricing is back, but this time the seller knows everything about you.
The viability of surveillance pricing—its profitability, ubiquity, and exploitative nature—hinges on the presence of market failures. Severe information asymmetries are perhaps the most insidious. While corporations have access to data brokers, online behavioral advertising, and algorithms that can adjust prices in real time, consumers are more disempowered than ever…(More)”.
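Mechanically, surveillance pricing can be as simple as a few profile-driven multipliers. The sketch below is a deliberately crude illustration with invented signals and markups, not a description of any real retailer's system:

```python
# Crude illustration of surveillance pricing: the same item gets a
# different quote depending on what the seller infers about the buyer.
def quoted_price(base_price: float, profile: dict) -> float:
    """Adjust a base price using a hypothetical buyer profile."""
    price = base_price
    if profile.get("device") == "new_flagship_phone":
        price *= 1.10            # proxy for higher willingness to pay
    if profile.get("zip_median_income", 0) > 100_000:
        price *= 1.08            # neighborhood income as a pricing signal
    if profile.get("searched_item_recently"):
        price *= 1.05            # urgency inferred from browsing history
    return round(price, 2)

buyer_a = {"device": "new_flagship_phone", "zip_median_income": 120_000,
           "searched_item_recently": True}
buyer_b = {"device": "old_budget_phone", "zip_median_income": 45_000}

print(quoted_price(50.00, buyer_a))  # 62.37 -> same product, higher quote
print(quoted_price(50.00, buyer_b))  # 50.0
```

The asymmetry Lin describes is visible even in this toy: the seller sees every input to the function, while the buyer sees only the final number.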
Paper by Alexandros Sagkriotis: “Real-world data (RWD) and real-world evidence (RWE) are increasingly used to inform regulatory decisions, health technology assessment, and health system planning. However, patients whose data underpin these activities often experience limited transparency or benefit when their information is monetised. While regulatory and HTA frameworks emphasise methodological rigor and analytical transparency, they provide limited guidance on fairness, reciprocity, and legitimacy from a patient perspective. This Policy Brief examines this governance gap and argues that evidence integrity must extend beyond technical standards to include ethical stewardship and public trust. Drawing on policy contexts from the UK, the EU, and North America, it proposes five pragmatic safeguards to strengthen transparency, accountability, and patient-centred governance in secondary data use, supporting the sustainability and legitimacy of RWE infrastructures as data initiatives expand…(More)”.