Stefaan Verhulst
Article by David Adam: “Attached to the Very Large Telescope in Chile, the Multi Unit Spectroscopic Explorer (MUSE) allows researchers to probe the most distant galaxies. It’s a popular instrument: for its next observing session, from October to April, scientists have applied for more than 3,000 hours of observation time. That’s a problem. Even though it’s dubbed a cosmic time machine, not even MUSE can squeeze 379 nights of work into just seven months.
The European Southern Observatory (ESO), which runs the Chile telescope, usually asks panels of experts to select the worthiest proposals. But as the number of requests has soared, so has the burden on the scientists asked to grade them.
“The load was simply unbearable,” says astronomer Nando Patat at ESO’s Observing Programmes Office in Garching, Germany. So, in 2022, ESO passed the work back to the applicants. Teams that want observing time must also assess related applications from rival groups.
The change is one increasingly popular answer to the labour crisis engulfing peer review — the process by which grant applications and research manuscripts are assessed and filtered by specialists before a final decision is made about funding or publication.
With the number of scholarly papers rising each year, publishers and editors complain that it’s getting harder to get everything reviewed. And some funding bodies, such as ESO, are struggling to find reviewers.
As pressure on the system grows, many researchers point to low-quality or error-strewn research appearing in journals as an indictment of peer-review systems that are failing to uphold rigour. Others complain that clunky grant-review systems are preventing exciting research ideas from being funded…(More)”.
Article by Sophia Fox-Sowell: “Illinois Gov. JB Pritzker last Friday signed a bill into law banning the use of artificial intelligence to provide mental health services, aiming to protect residents from potentially harmful advice.
Known as the Wellness and Oversight for Psychological Resources Act, the law prohibits AI systems from delivering therapeutic treatment or making clinical decisions. The legislation still allows AI tools to be used in administrative roles, such as scheduling or note-taking, but draws a clear boundary around direct patient care.
Companies or individuals found to be in violation could face $10,000 in fines, enforced by the Illinois Department of Financial and Professional Regulation.
“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” Mario Treto, Jr., Illinois’ financial regulation secretary, said in a press release. “This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”
The new legislation is a response to growing concerns over the use of AI in sensitive areas like health care. The Washington Post reported last May that an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict.
Last year, the Illinois House Health Care Licenses and Insurance Committees held a joint hearing on AI in health insurance in which legislators and experts warned that AI systems lack the empathy, accountability or clinical oversight necessary for safe mental health treatment…(More)”.
Report by the National Academies of Sciences, Engineering, and Medicine: “As the artificial intelligence (AI) landscape rapidly evolves, many state and local governments are exploring how to use these technologies to enhance public services and governance. Alongside the potential to improve efficiency, responsiveness, and decision-making, AI adoption also brings challenges including concerns about privacy, bias, transparency, public trust, and long-term oversight. This guidance is intended for those involved in shaping, implementing, or managing AI in state and local government. By following structured, evidence-informed strategies, governments can integrate AI tools responsibly and in ways that reflect community values and institutional goals…(More)”.
Essay by Henry Farrell and Hahrie Han: “Could existing democratic institutions and processes be improved by AI? A burgeoning body of scholarship asks how AI-driven machine learning can improve—or even replace—democratic institutions that aggregate opinions and beliefs (Ovadya 2023, Jungherr 2023).
This literature makes strong but often unstated assumptions about how democracy works and where it can go wrong, creating a tacit paradigm that guides scholars to focus on some questions, problems, and hypotheses at the expense of others. As one of us has argued together with co-authors in the past:
Paradigms guide action. Particularly in moments of crisis, those paradigms—or cohered sets of assumptions about ourselves, each other, and the world around us—shape the intentions we develop, the solutions we imagine, and, ultimately, the actions we choose. What happens when the paradigms we carry are limited or, worse, wrong? … [While paradigms] illuminate possibilities for change, they also constrain where we look. The wrong paradigm leads us to misread situations, overlook opportunities, and pursue the wrong solutions. (Vallone et al 2023)
In this paper, we argue that the paradigm of democracy driving existing scholarship on its relationship to AI highlights the wrong questions. The essay describes this broad paradigm—which emphasizes the benefits of deliberation and sortition—and explains why it is insufficient for understanding or acting in a healthy democracy. We argue that we should instead focus on enduring democratic publics and how they shape collective behavior. That would raise very different questions: How might AI reshape these publics and the feedback loops that they depend on? Will this contribute to democratic stability or undermine it? Such questions would underpin a broader and different research agenda on AI and democracy than the one we have today…(More)”.
Paper by Kiran Tomlinson et al: “Given the rapid adoption of generative AI and its potential to impact a wide range of tasks, understanding the effects of AI on the economy is one of society’s most important questions. In this work, we take a step toward that goal by analyzing the work activities people do with AI, how successfully and broadly those activities are done, and combine that with data on what occupations do those activities. We analyze a dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot, a publicly available generative AI system. We find the most common work activities people seek AI assistance for involve gathering information and writing, while the most common activities that AI itself is performing are providing information and assistance, writing, teaching, and advising. Combining these activity classifications with measurements of task success and scope of impact, we compute an AI applicability score for each occupation. We find the highest AI applicability scores for knowledge work occupation groups such as computer and mathematical, and office and administrative support, as well as occupations such as sales whose work activities involve providing and communicating information. Additionally, we characterize the types of work activities performed most successfully, how wage and education correlate with AI applicability, and how real-world usage compares to predictions of occupational AI impact…(More)”.
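The scoring step lends itself to a short sketch. The Python snippet below renders an occupation-level AI applicability score as a simple mean, over an occupation's work activities, of usage share times success rate times scope of impact. The weighting scheme, field names, and numbers are invented stand-ins for illustration, not the authors' published formula.

```python
# Illustrative only: a toy occupation-level "AI applicability score".
# The combination rule (mean of usage * success * scope) is a
# hypothetical stand-in, not the formula from the Tomlinson et al. paper.

from dataclasses import dataclass

@dataclass
class ActivityStats:
    usage_share: float   # fraction of AI conversations touching this activity
    success_rate: float  # measured task-completion rate, 0 to 1
    scope: float         # breadth of the activity affected, 0 to 1

def applicability_score(activities: list[ActivityStats]) -> float:
    """Average activity-level impact across an occupation's activities."""
    if not activities:
        return 0.0
    return sum(a.usage_share * a.success_rate * a.scope
               for a in activities) / len(activities)

# Invented example: an office-support occupation with two work activities.
office_support = [
    ActivityStats(usage_share=0.30, success_rate=0.80, scope=0.60),  # writing
    ActivityStats(usage_share=0.25, success_rate=0.75, scope=0.50),  # gathering information
]
print(f"AI applicability: {applicability_score(office_support):.3f}")
```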
Book edited by Ganna Pogrebna and Thomas T. Hills: “…offers an essential exploration of how behavioural science and data science converge to study, predict, and explain human, algorithmic, and systemic behaviours. Bringing together scholars from psychology, economics, computer science, engineering, and philosophy, the Handbook presents interdisciplinary perspectives on emerging methods, ethical dilemmas, and real-world applications. Organised into modular parts—Human Behaviour, Algorithmic Behaviour, Systems and Culture, and Applications—it provides readers with a comprehensive, flexible map of the field. Covering topics from cognitive modelling to explainable AI, and from social network analysis to ethics of large language models, the Handbook reflects on both technical innovations and the societal impact of behavioural data, and reinforces concepts in online supplementary materials and videos. The book is an indispensable resource for researchers, students, practitioners, and policymakers who seek to engage critically and constructively with behavioural data in an increasingly digital and algorithmically mediated world…(More)”.
Article by Ricardo Hausmann: “Many of today’s most urgent global challenges, from stagnant growth to climate change, require ambitious, innovative policies. Yet economics has shifted away from creative problem solving toward a narrow approach that is incapable of devising practical solutions to complex, real-world problems…. Should the world have dentists or lawyers? Obviously, it needs both, given that each profession serves different purposes. But when it comes to economics, the question is more complicated, because the field confronts an internal identity crisis over what kind of economists it should produce: policy architects or program auditors.
The distinction matters beyond the halls of academia. Auditors are methodical rule-followers. They arrive with checklists, verify compliance, and flag deviations from established norms. Their work is careful, precise, and fundamentally conservative; it focuses on ensuring that systems function according to predetermined standards, rather than imagining new possibilities.
Architects, on the other hand, are creative problem solvers. They must reconcile competing goals and address complex spatial, material, and financial constraints. Their work is inherently innovative – they envision what does not yet exist.
These professional archetypes attract different personalities and sensibilities, and they require different skill sets. Yet over time, economics has increasingly abandoned the architect’s mindset in favor of the auditor’s, changing not just who enters the field but also what they seek to accomplish.
This shift can be traced to a common misinterpretation of Kenneth Arrow and Gérard Debreu’s first fundamental theorem of welfare economics, which asserts that, in the absence of market failures, free markets lead to efficient outcomes. While Arrow himself believed that market failures were pervasive, the theorem fostered a defensive stance within the field: if markets usually work, then economists’ job is to protect them from interference…(More)”.
Article by Debora Price: “In an age of contested facts, polarised public discourse and eroded trust in institutions, the preservation of data and its independent governance are not technical details. They are foundational to democracy, social understanding, and the pursuit of knowledge. They form the basis of sound decision-making across policy, economics, industry and society….
In recent months, developments in the United States have sent a chill through the global data community: cuts, political interference, and a climate of uncertainty around national statistical services. While many have heard about the sudden withdrawal of billions of dollars of federal funding for science, and attacks on the National Science Foundation, there has been far less public visibility of the parallel loss of globally important data from archives.
In spring 2025, the BBC ran the headline “Inside the desperate rush to save decades of US scientific data from deletion” and the Financial Times “The White House War on Federal Statistics”. This was the subject of Anna Britten’s editorial in the May edition of Significance, the official magazine of the Royal Statistical Society in the UK. She raises the alarm about the unexplained removal of datasets from Data.gov, stating that “it remains unclear at the time of writing whether they have been permanently deleted”. She cites staffing losses and terminations at key statistical agencies, and the disbanding of critical scientific advisory committees…
It is easy to take data archives for granted, especially when they are working well. In the UK, amongst other well-supported data services, the UK Data Archive and UK Data Service have for nearly six decades quietly and expertly ensured that population, social and economic data of national importance — from the Census to the British Social Attitudes Survey, the Labour Force Survey to the Family Resources Survey, Understanding Society, the renowned Cohort Studies, and countless others — are actively preserved, curated, and made available for re-use. These are not merely data files. They are collective memory, social history, and the evidence base upon which we build policy and research…(More)”.
Article by Isobel Moure, Tim O’Reilly and Ilan Strauss: “Can we head off AI monopolies before they harden? As AI models become commoditized, incumbent Big Tech platforms are racing to rebuild their moats at the application layer, around context: the sticky user- and project-level data that makes AI applications genuinely useful. With the right context-aware AI applications, each additional user-chatbot conversation, file upload, or coding interaction improves results; better results attract more users; and more users mean more data. This context flywheel — a rich, structured user- and project-data layer — can drive up switching costs, creating a lock-in effect that effectively traps accumulated data within the platform.
Protocols prevent lock-in. We argue that open protocols — exemplified by Anthropic’s Model Context Protocol (MCP) — serve as a powerful rulebook, helping to keep API-exposed context fluid and to prevent Big Tech from using data lock-in to extend their monopoly power. However, as an API wrapper, MCP can access only what a particular service (such as GitHub or Slack) happens to expose through its API.
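To ground that claim, here is a minimal sketch of an MCP server using the FastMCP helper from Anthropic's official Python SDK (the mcp package); the server name, tool, and backend data are hypothetical, and SDK details may differ between releases. It illustrates the access limit just described: the tool can only relay whatever the wrapped service's API chooses to return.

```python
# Hypothetical MCP server exposing a single tool over stdio.
# Uses FastMCP from the official Python SDK ("mcp" package); names and
# signatures follow its early releases and may have changed since.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issue-tracker")  # invented server name

@mcp.tool()
def list_open_issues(repo: str) -> list[str]:
    """Return open issues for a repository.

    A real server would call the tracker's REST API here; whatever that
    API withholds never reaches any MCP client, which is exactly the
    access limit the article describes.
    """
    fake_backend = {"example/repo": ["#12 flaky test", "#15 docs typo"]}
    return fake_backend.get(repo, [])

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so any MCP client can attach
```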
To fully enable open, healthy, and competitive AI markets, we need complementary measures that ensure protocols can access the full spectrum of user context, including through:
1. Guaranteed access, for authorized developers, to user-owned data, through open APIs at major platforms.
2. Portable memory that separates a user’s agentic memory from specific applications (a sketch of one possible format follows this list).
3. Guardrails governing how AI services can leverage user data.
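As a sketch of the second item, the snippet below shows one way portable memory could work: the user's agentic memory lives in an application-neutral record that can be exported from one assistant and imported into another. The schema is invented for illustration; no such standard exists yet.

```python
# Hypothetical "portable memory" format: app-neutral JSON that travels
# with the user rather than staying locked inside one AI application.

import json
from dataclasses import dataclass, asdict

@dataclass
class MemoryItem:
    topic: str       # e.g. "preferences" or "current project"
    content: str     # the remembered fact, in plain text
    source_app: str  # the application that originally recorded it

def export_memory(items: list[MemoryItem]) -> str:
    """Serialize memory to a portable JSON document."""
    return json.dumps([asdict(i) for i in items], indent=2)

def import_memory(blob: str) -> list[MemoryItem]:
    """Rehydrate the same memory inside a different application."""
    return [MemoryItem(**i) for i in json.loads(blob)]

# A user carries a preference from one assistant into a competitor's:
exported = export_memory(
    [MemoryItem("preferences", "prefers metric units", "assistant-a")]
)
print(import_memory(exported)[0].content)  # -> prefers metric units
```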
Drawing on the example of open-banking regulations, we show that security and data standards are required for any of these proposals to be realized.
Architecting an open, interoperable AI stack through the protocol layer is about supporting broad value creation rather than value capture by a few firms. Policy efforts such as the EU’s General-Purpose AI Code of Practice do matter, but ultimately it is software architecture that most immediately and decisively shapes market outcomes. Protocols — the shared standards that let different systems communicate with one another — function as a deeper de facto law, enabling independent, decentralized, and secure action in digital markets…(More)”.
Article by Michelle Nichols: “A United Nations report seeking ways to improve efficiency and cut costs has revealed: U.N. reports are not widely read.
U.N. Secretary-General Antonio Guterres briefed countries on Friday on the report, produced by his UN80 reform initiative, which focused on how U.N. staff implement thousands of mandates given to them by bodies like the General Assembly or Security Council. He said that last year the U.N. system supported 27,000 meetings involving 240 bodies, and the U.N. secretariat produced 1,100 reports, a 20% increase since 1990. “The sheer number of meetings and reports is pushing the system – and all of us – to the breaking point,” Guterres said.
“Many of these reports are not widely read,” he said. “The top 5% of reports are downloaded over 5,500 times, while one in five reports receives fewer than 1,000 downloads. And downloading doesn’t necessarily mean reading.”…(More)”.