Stefaan Verhulst
Open Access Book by Bart van der Sloot: “Commonly attributed to the oracle of Delphi, the classic Greek maxim “Know thyself” is in fact much older. Although emblematic, its meaning is by no means unambiguous, even within Greek antiquity. Originally, it served as a call to humility: know your limits, know your place. One was cautioned not to think too highly of oneself, but to recognise the self as a minuscule fragment subject to the caprices of the gods, constrained by societal rules and bodily needs. Later, especially through the work of Plato, the maxim took on a different—perhaps even antithetical—meaning, shifting towards self-knowledge and introspection.1 While never entirely shedding its original undertone, it increasingly came to symbolise humanity’s intellectual capacity for self-understanding and self-correction. In this guise, it was revitalised during the Enlightenment as a credo of individual autonomy.
This book is about both sides of that coin. It explores the myriad ways in which humans are shaped by society, bound by personal histories, and constrained by physical and cognitive limitations. As will become evident, man is an ambivalent being—often far more fragile than his intellectual bravado would suggest. If there is a universal human experience, it is one of limitation, misjudgement, and failure. At the same time, like no other creature, man possesses the unique capacity for self-reflection and self-improvement. Humanity, for better or worse, has managed to bring vast stretches of the universe under its dominion, perpetually striving to refine and master itself. In modern Western culture, autonomy stands perhaps as the ultimate ideal, and being autonomous as the highest of all aspirations.
Technology can deepen this rift—who has not felt a tinge of shame upon viewing their weekly screen time?—but it also amplifies our capacities, granting us ever greater control over ourselves and our environments, if only through the gentle prodding of fitness apps reminding us that we have yet to meet our daily step count. Yet while it augments self-regulation, it also grants technology companies unprecedented insight into our inner conflicts,…(More)”.
Article by Michelle Goldberg: “When I saw “Data,” a zippy Off Broadway play about the ethical crises of employees at a Palantir-like A.I. company, last month, I was struck by its prescience. It’s about a brilliant, conflicted computer programmer pulled into a secret project — stop reading here if you want to avoid spoilers — to win a Department of Homeland Security contract for a database tracking immigrants. A brisk theatrical thriller, the play perfectly captures the slick, grandiose language with which tech titans justify their potentially totalitarian projects to the public and perhaps to themselves.
“Data is the language of our time,” says a data analytics manager named Alex, sounding a lot like the Palantir chief Alex Karp. “And like all languages, its narratives will be written by the victors. So if those fluent in the language don’t help democracy flourish, we hurt it. And if we don’t win this contract, someone else less fluent will.”
I’m always on the lookout for art that tries to make sense of our careening, crisis-ridden political moment, and found the play invigorating. But over the last two weeks, as events in the real world have come to echo some of the plot points in “Data,” it’s started to seem almost prophetic.
Its protagonist, Maneesh, has created an algorithm with frighteningly accurate predictive powers. When I saw the play, I had no idea whether such technology was really on the horizon. But this week, The Atlantic reported on Mantic, a start-up whose A.I. engine outperforms many of the best human forecasters across domains from politics to sports to entertainment…(More)”.
Paper by Daron Acemoglu, Dingwen Kong & Asuman Ozdaglar: “We study how generative AI, and in particular agentic AI, shapes human learning incentives and the long-run evolution of society’s information ecosystem. We build a dynamic model of learning and decision-making in which successful decisions require combining shared, community-level general knowledge with individual-level, context-specific knowledge; these two inputs are complements. Learning exhibits economies of scope: costly human effort jointly produces a private signal about the individual’s own context and a “thin” public signal that accumulates into the community’s stock of general knowledge, generating a learning externality. Agentic AI delivers context-specific recommendations that substitute for human effort. By contrast, a richer stock of general knowledge complements human effort by raising its marginal return. The model highlights a sharp dynamic tension: while agentic AI can improve contemporaneous decision quality, it can also erode the learning incentives that sustain long-run collective knowledge. When human effort is sufficiently elastic and agentic recommendations exceed an accuracy threshold, the economy can tip into a knowledge-collapse steady state in which general knowledge ultimately vanishes, despite high-quality personalized advice. Welfare is generally non-monotone in agentic accuracy, implying an interior, welfare-maximizing level of agentic precision and motivating information-design regulations. In contrast, greater aggregation capacity for general knowledge—meaning more effective sharing and pooling of human-generated general knowledge—unambiguously raises welfare and increases resilience to knowledge collapse…(More)”.
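The tipping mechanism the abstract describes can be made concrete with a toy simulation. Everything below — the linear effort rule, the effort cap, and the parameters `beta`, `gamma`, `delta`, and `eta` — is an illustrative assumption of mine, not the paper’s actual specification; it merely sketches how complementarity (effort rising with the knowledge stock) combined with substitution (effort falling with AI accuracy) can produce a knowledge-collapse steady state.

```python
# Illustrative toy dynamics only: functional forms and parameters are
# assumptions for intuition, not the model in the paper.
# G is the stock of general knowledge; human effort is complementary to G
# (the beta * G term) and substituted away by agentic-AI accuracy a
# (the gamma * a term).

def simulate(a, G0=1.0, beta=0.6, gamma=0.5, delta=0.2, eta=0.5, steps=200):
    """Long-run general-knowledge stock G under agentic-AI accuracy a."""
    G = G0
    for _ in range(steps):
        # Elastic effort choice: rises with G, falls with AI accuracy, capped at 1.
        effort = min(1.0, max(0.0, beta * G - gamma * a))
        # Knowledge depreciates and is replenished by the "thin" public
        # by-product of individual effort.
        G = (1 - delta) * G + eta * effort
    return G

sustained = simulate(a=0.2)   # modest accuracy: effort persists, G settles near eta/delta
collapsed = simulate(a=1.5)   # accuracy past the threshold: effort dries up, G decays to 0
```

Starting from the same knowledge stock, accuracy below the threshold sustains a positive steady state, while accuracy above it drives effort to zero and the stock decays toward nothing — the non-monotone welfare result follows because better advice today buys worse knowledge tomorrow.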
Article by Dashun Wang: “In the early 1980s, Apple co-founder Steve Jobs described the computer as “a bicycle for our minds”. He was inspired by a Scientific American graphic he’d encountered as a boy, showing that a human on a bicycle is more energy-efficient than any animal1. The metaphor captured the promise of personal computing: tools that enable people to go further and faster with less effort. But the deeper brilliance of bicycles lies in what they do not do: they do not mimic human biology, nor any form found in nature. The bicycle reimagined motion entirely.
By comparison, I propose that artificial-intelligence agents are aeroplanes for the mind — they can speed things up for humans even more than bicycles do, but they are harder to control and the consequences of mistakes can be huge. And scientists are particularly poised to benefit from these tools. Scientific research is, at its core, a journey into the unknown. Yet working in new terrains brings unexpected challenges and frequent failures…(More)”.
Paper by Haydn Belfield: “The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks drawing upon the literature on epistemic communities, and worker organising and bargaining; and explore what they imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture, and high bargaining power due to the high demand for a limited supply of AI talent. Both are crucial to the future of AI activism and worthy of sustained attention…(More)”.
Paper by Amirmohammad Ghavimi: “Collective memory—closely related to, yet distinct from, social memory—plays a significant role in guiding the sustainable transition of cities. Multiple qualitative, quantitative, and mixed methods have been employed to investigate collective memory; however, there remains a need to spatially map it for each city to provide decision-makers with a clear, quantitative guide. Such mapping can help preserve and strengthen a city’s collective memory, thereby informing future urban development. This study examines the urban dimension of collective memory—collective urban memory (CUM)—by mapping its tangible, physical aspects through a facilitated Public Participation GIS (PPGIS) approach within a citizen science framework. Due to challenges in encouraging public use of the mobile GIS application QField, we adopted a facilitated PPGIS approach, whereby trained interviewers assisted participants in the data collection process. Results from Oldenburg, Germany, identified several significant urban locations that play key roles in the city’s CUM. Notably, certain places are mentioned disproportionately by different age groups, while a common core set of tangible landmarks emerges across the population. These findings highlight the value of mapping CUM to support culturally sensitive and sustainable city planning…(More)”.
Paper by Zachary Catanzaro: “Judges now consult ChatGPT about what statutes mean. The scholarly response treats this as a reliability problem. Reliability is beside the point. LLMs generate text by predicting probable token sequences, manipulating symbols without accessing what those symbols mean. But syntax cannot generate semantics. Computational legal interpretation does not fail because the technology is immature. It fails because it is a category error. A theory that fixes meaning in historical usage and treats interpretation as empirical recovery cannot resist algorithms that measure historical usage patterns. The progression from dictionaries to corpus databases to generative models follows originalism’s empirical commitments to their logical end. AI-generated content saturates the corpora on which future models train, and the resulting degradation eliminates marginal claims first: those upon which life and liberty depend. Computational methods did not contaminate originalist interpretation. Originalism was already a jurisprudence that simulated meaning while discarding the semantic content that interpretation requires. The machines simply made the method hyperreal…(More)”.
Chapter by Maria Michali, Amalia Kallergi, Eva Paraschou, Laurens Landeweerd, Steffi Friedrichs, Athina Vakali & George Gaskell: “…examines three historical case studies: (1) ‘genetic modification to genome editing’; (2) ‘controversies over climate science’, and (3) ‘artificial intelligence in social media’. On this basis it develops an understanding of how public trust and confidence in science, technology, and innovation (STI) can be gained, maintained, or lost. This leads to practical recommendations for ethical and societally sustainable STI. There are both intuitive and evidenced warrants for trust in science. Intuitive warrants arise when innovation creates an immediate sense of familiarity, making the future feel like a natural continuation of the past. Evidenced warrants occur when science generates new insights or produces technologies that benefit individuals or society. However, trust in science may be undermined by scientific fraud, the dismissal of public concerns about innovations that challenge societal values, and the populist rejection of science, often accompanied by conspiracy theories. Building and maintaining public trust in STI is a multifaceted challenge that requires coordinated efforts from scientists, research institutions, funding bodies, regulators, and democratic governance processes. A commitment to transparency, proactive engagement with public concerns, risk assessment and mitigation, responsible communication, and strong regulatory frameworks is essential for navigating the complexities of technological advancement and ensuring public trust…(More)”.
Report by Data Quality Campaign: “Statewide longitudinal data systems (SLDSs) have enormous potential to be used to improve education and workforce outcomes, but not every system is designed to work in the same way. Even two “good” SLDSs may not look the same because they can be designed with different goals, or functions, in mind: public reports and dashboards, research and analytics, and support for individuals. All three functions are essential and address different people’s data access needs. Each function requires different considerations for infrastructure, data governance, legal frameworks, and ongoing investments.
This brief explores how policymakers can purposefully shape the design of their state’s SLDS to effectively support any or all of the three functions. When a system’s function is aligned with the intended users, required infrastructure, appropriate governance structure, and intended data uses, the system can effectively enable access to data that people need to make education and workforce decisions…(More)”.
Article by Vivian Liu: “A group of teenagers is standing in front of a room of residents, government officials, and organizations. They are presenting findings from data they helped to collect. Their work speaks to urgent challenges in their communities, including displacement, air pollution, extreme heat, and lack of community spaces. For some of the teenagers, this is their first experience collecting data and contributing to solutions in their own communities.
Engaging teenagers and young adults in data collection, analysis, and dissemination improves the quality of the results, provides better information for policy and program responses, and supports the next generation of leaders.
In this fifth blog post in our Equity in Action series, we explore how four local organizations that received grants from the Local Data for Equitable Communities program are training and partnering with youth to be the voices shaping community-informed solutions…(More)”.