Stefaan Verhulst
Paper by Ida Kubiszewski: “To achieve sustainable wellbeing for both humanity and the rest of nature, we must shift from a narrow focus on Gross Domestic Product (GDP) to a broader understanding and measurement of sustainable wellbeing and prosperity within the planetary boundaries. Several hundred alternative indicators have been proposed to replace GDP, but their variety and lack of consensus have allowed GDP to retain its privileged status. What is needed now is broad agreement on shifting beyond GDP. We conducted a systematic literature review of existing alternative indicators and identified over 200 across multiple spatial scales. Using these indicators, we built a database to compare their similarities and differences. While the terminology for describing the components of wellbeing varied greatly, there was a surprising degree of agreement on the core concepts and elements. We applied semantic modelling to estimate the degree of similarity among the indicators’ components and identified those that represented a broad synthesis. Results show that indicators with around 20 components capture a large share of the overall similarity across the indicators in the dataset. Beyond 20 components, adding components yields diminishing returns in similarity. Based on this, we created a 20-component indicator to serve as a model for building consensus and mapped its relationship to several well-known alternative indicators. We aim for this database and synthesis to support broad stakeholder engagement toward the consensus we need to move beyond GDP…(More)”.
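As a rough illustration of the kind of semantic-similarity analysis the abstract describes, the sketch below embeds component descriptions with a sentence-embedding model and greedily selects components by how much of the overall similarity they cover. The model choice, component list, and coverage measure are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch only: estimate pairwise semantic similarity between
# wellbeing-indicator components and track how much similarity a small
# "synthesis" set of components covers. The paper's actual semantic
# modelling pipeline is not described in the excerpt above.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical component descriptions drawn from common wellbeing themes.
components = [
    "access to quality education",
    "physical and mental health",
    "income and material living standards",
    "clean air, water, and a stable climate",
    "social connection and community",
    "political voice and good governance",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
embeddings = model.encode(components)
similarity = cosine_similarity(embeddings)        # n x n similarity matrix

def coverage(selected):
    """Mean best-match similarity of every component to the selected set."""
    return similarity[:, selected].max(axis=1).mean()

# Greedy selection: at each step add the component that raises coverage most,
# which makes the diminishing returns beyond a certain set size visible.
selected, remaining = [], list(range(len(components)))
while remaining:
    best = max(remaining, key=lambda i: coverage(selected + [i]))
    selected.append(best)
    remaining.remove(best)
    print(f"{len(selected):2d} components -> coverage {coverage(selected):.3f}")
```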
Paper by Chiara Gallese et al: “Research has shown how data sets convey social bias in Artificial Intelligence systems, especially those based on machine learning. A biased data set is not representative of reality and might contribute to perpetuating societal biases within the model. To tackle this problem, it is important to understand how to avoid biases, errors, and unethical practices while creating the data sets. In order to provide guidance for the use of data sets in contexts of critical decision-making, such as health decisions, we identified six fundamental data set features (balance, numerosity, unevenness, compliance, quality, incompleteness) that could affect model fairness. These features were the foundation for the FanFAIR framework.
We extended the FanFAIR framework for the semi-automated evaluation of fairness in data sets by combining statistical information on data with qualitative features. In particular, we present an improved version of FanFAIR which introduces novel outlier detection capabilities working in a multivariate fashion, using two state-of-the-art methods: the Empirical Cumulative-distribution Outlier Detection (ECOD) and Isolation Forest. We also introduce a novel metric for data set balance, based on an entropy measure.
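A minimal sketch of how the pieces named above could fit together: multivariate outlier detection with ECOD (via the pyod library) and Isolation Forest (scikit-learn), plus a normalised-entropy balance metric. The contamination rate, normalisation, and thresholds are assumptions; the excerpt does not specify FanFAIR's actual settings.

```python
# Sketch of the outlier checks and entropy-based balance metric named above.
# FanFAIR's actual thresholds and score definitions are not given in the
# excerpt, so the contamination rate and normalisation below are assumptions.
import numpy as np
from pyod.models.ecod import ECOD              # Empirical Cumulative-distribution Outlier Detection
from sklearn.ensemble import IsolationForest   # Isolation Forest

def outlier_fractions(X, contamination=0.05):
    """Fraction of rows flagged as multivariate outliers by each detector."""
    ecod = ECOD(contamination=contamination).fit(X)
    iforest = IsolationForest(contamination=contamination, random_state=0).fit(X)
    ecod_frac = float(ecod.labels_.mean())                    # pyod labels: 1 = outlier
    iforest_frac = float((iforest.predict(X) == -1).mean())   # sklearn: -1 = outlier
    return ecod_frac, iforest_frac

def balance_entropy(labels):
    """Normalised Shannon entropy of the class distribution (1 = perfectly balanced)."""
    _, counts = np.unique(labels, return_counts=True)
    if len(counts) < 2:
        return 0.0
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(len(p)))

# Toy data set: 500 samples, 4 numeric features, imbalanced binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = rng.choice([0, 1], size=500, p=[0.8, 0.2])
print("outlier fractions (ECOD, iForest):", outlier_fractions(X))
print("balance (normalised entropy):", round(balance_entropy(y), 3))
```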
We addressed the issue of how much (un)fairness can be included in a data set used for machine learning research, focusing on classification tasks. We developed a rule-based approach based on fuzzy logic that combines these characteristics into a single score and enables a semi-automatic evaluation of a data set in algorithmic fairness research. Our tool produces a detailed visual report about the fairness of the data set. We show the effectiveness of FanFAIR by applying the method to two open data sets…(More)”.
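To make the rule-based aggregation concrete, here is a hand-rolled fuzzy-logic sketch that combines per-feature scores into a single fairness score. The membership functions, rules, and defuzzification are illustrative assumptions, not FanFAIR's published rule base.

```python
# Hand-rolled fuzzy-logic sketch of combining data set features into one
# fairness score. The membership functions, rules, and defuzzification are
# illustrative assumptions, not FanFAIR's published rule base.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_fairness(features):
    """features: dict of scores in [0, 1] for the six FanFAIR characteristics."""
    good = {k: tri(v, 0.4, 1.0, 1.6) for k, v in features.items()}   # "feature is good"
    poor = {k: tri(v, -0.6, 0.0, 0.6) for k, v in features.items()}  # "feature is poor"
    # Illustrative rules (min acts as AND, max as OR):
    #   R1: balance is good AND quality is good          -> fairness is high
    #   R2: compliance is poor OR incompleteness is poor -> fairness is low
    r_high = min(good["balance"], good["quality"])
    r_low = max(poor["compliance"], poor["incompleteness"])
    # Defuzzify as a weighted average of the rule activations.
    return r_high / max(r_high + r_low, 1e-9)

score = fuzzy_fairness({
    "balance": 0.9, "numerosity": 0.8, "unevenness": 0.7,
    "compliance": 0.95, "quality": 0.85, "incompleteness": 0.9,
})
print(f"overall fairness score: {score:.2f}")
```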
Report by CIPL: “Without these data categories, organizations may be unable to uncover disparities in how AI models perform across different demographic groups, making it impossible to ensure fairness and equal benefits of AI across all communities. For instance, to ensure that a bank’s AI system for assessing whether a customer is creditworthy enough for a mortgage does not disproportionately deny mortgages to people of a certain ethnicity, the developer of the AI system needs to be able to distinguish the ethnicity of the people about whom its AI system makes decisions. Regulators such as the UK’s ICO acknowledge that sensitive data may be necessary to assess discrimination risks, evaluate model performance, and retrain models accordingly. The categorical restrictions many data protection laws place on sensitive data processing, such as requiring specific consent, coupled with an increasingly broad interpretation of the concept of sensitive data, can leave organizations unable to include sensitive data in AI training datasets where such consent is not obtainable, to the detriment of the model’s performance…(More)”.
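As a concrete illustration of why the sensitive attribute is needed, here is a minimal sketch (with purely invented data and an assumed threshold, not taken from the CIPL report) of the disparity check a developer could not run without the ethnicity column.

```python
# Invented toy example of the disparity check that requires the sensitive
# attribute: without the ethnicity column, the gap in approval rates below
# simply could not be measured. Data and threshold are illustrative.
import pandas as pd

decisions = pd.DataFrame({
    "ethnicity": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":  [1,   1,   0,   1,   0,   0,   1,   0],
})

rates = decisions.groupby("ethnicity")["approved"].mean()
disparity = rates.max() - rates.min()
print(rates.to_dict())
print(f"approval-rate disparity: {disparity:.2f}")
# A threshold such as 0.2 (an assumption, not from the CIPL report) could
# trigger a closer review of the model and its training data.
```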
Article by Sara Frueh: “State and local governments around the U.S. are harnessing AI for a range of applications, such as translating public meetings into multiple languages in real time to allow broader participation or using chatbots to deliver services to the public.
While AI systems can offer benefits to agencies and the people they serve, the technology can also be harmful if misapplied. In one high-profile example, around 40,000 Michigan residents were wrongly accused of unemployment insurance fraud based on a state AI system with a faulty algorithm and inadequate human oversight.
“We have to think about a lot of AI systems as potentially useful and quite often unreliable, and treat them as such,” said Suresh Venkatasubramanian of Brown University, co-author of a recent National Academies rapid expert consultation on AI use by state and local governments.
He urged state and local leaders to avoid extreme hype about AI, both its promise and its dangers, and instead to take a careful, experimental approach. “We have to embrace an ethos of experimentation and sandboxing, where we can understand how they work in our specific contexts.”
Venkatasubramanian spoke at a National Academies webinar that explored the report’s recommendations and other AI-related resources for state and local governments. He was joined by fellow co-author Nathan McNeese of Clemson University, and Leila Doty, a privacy and AI analyst for the city of San José, California, along with Kate Stoll of the American Association for the Advancement of Science, who moderated the session.
In considering whether to implement AI, McNeese advised state and city agencies to start by asking, “What’s the problem?” or “What’s the aspect of the organization that we want to enhance?”
“You do not want to introduce AI if there is not a specific need,” said McNeese. “You don’t want to implement AI because everyone else is.”
The point was seconded by Venkatasubramanian. “If you have a problem that needs to be solved, figure out what people need to solve it,” he said. “Maybe AI can be a part of it, maybe not. Don’t start by asking, ‘How can we bring AI to this?’ That way leads to problems.”
When AI is used, the report urges a human-centered approach to designing it — one that takes people’s needs, wants, and motivations into account, explained McNeese.
Those who have domain expertise — employees who provide services of value to the public — should be involved in determining where AI tools might and might not be useful, said Venkatasubramanian. “It is really, really important to empower the people who have the expertise to understand the domain,” he stressed…(More)”.
Article by Clarisse Girot, Limor Shmerling Magazanik and Eric Sutherland: “Every day, healthcare and health research systems around the world generate vast amounts of data that could profoundly transform how we manage public health and understand, prevent, and treat diseases. From electronic health records and genomic sequences to imaging studies and population health statistics, responsible use of this information could accelerate medical research and fuel medical breakthroughs.
The core challenge for governments is to strike a balance between using health data for public good and safeguarding individual rights like privacy and autonomy. This requires distinct types of legal and technical protections, depending on how the data are collected and used.
The OECD Recommendation on Health Data Governance encourages countries to make health data available for public interest uses such as research and innovation, while safeguarding individual privacy and freedoms. The level and type of safeguards that are needed depend on how the data are collected and used. In clinical research, where patients participate directly in trials or medical studies, a key requirement is informed consent, recognising the importance of individuals being able to exercise their right to make autonomous decisions…(More)”.
Article by Nicholas Andreou, Philipp Essl & Jeremy Rogers: “Impact assessment, either pre-investment or post-investment, is a critical component of robust impact measurement and management (IMM). As many social and environmental issues worsen, high-quality data and insights are needed, more than ever, to effectively allocate resources to solutions that address these challenges. However, impact assessment is often a resource-intensive and difficult process to do well. Artificial Intelligence (AI) is an exciting umbrella of technologies that has the potential to transform how investors think about IMM (for example, in deep listening).
Our firm, Better Society Capital (BSC), is an impact fund-of-funds with a mandate to build the UK impact investing market. We have spent many years developing our IMM toolkit and processes, and we were recently placed on the Bluemark Leaderboard as having top-quartile scores across the eight categories of the Operating Principles of Impact Management (Bluemark is a leading impact management verification company, and the operating principles are a recognized framework outlining impact management best practices for impact investors). We are interested in how using AI alongside existing processes and judgment can bring additional insight, so we decided to run an experiment using our own portfolio to test the question: Can AI give investors the impact assessment rigor they crave at the speed they need?…(More)”.
Article by Daniel Innerarity and Fabrizio Tassinari: “When Albanian Prime Minister Edi Rama recently announced his new cabinet, it was not his choice of finance minister or foreign minister that gained the most attention. The biggest news was Rama’s appointment of an AI-powered bot as the new minister of public procurement.
“Diella” will oversee and allocate all public tenders that the government assigns to private firms. “[It] is the first member of government who is not physically present, but virtually created by artificial intelligence,” Rama declared. She will help make Albania “a country where public procurement is 100% corruption-free.”
At once evocative and provocative, the move reminds us that those who place the greatest hope in technology tend to be among those with the least confidence in human nature. But more to the point, the appointment of Diella is evidence that the supposed cure for whatever ails democracy is increasingly taking the form of digital authoritarianism. Such interventions might appeal to Silicon Valley oligarchs, but democrats everywhere should be alarmed.
The conceptual basis for an AI minister lies in how technophiles imagine humanity’s relationship with the future. “Techno-solutionists” treat political problems that normally require deliberation as if they were engineering challenges that could be resolved purely through technical means. As we saw in the United States during Elon Musk’s brief stint at the helm of DOGE (the Department of Government Efficiency), technology is offered as a substitute for politics and political decision-making.
The implication of AI-administered governance is that democracy will become redundant. Digital technocracy consists of technology developers claiming the authority to decide on the rules we must abide by and thus the conditions under which we will live. The checks and balances defended by Locke, Montesquieu, and America’s founders become obstacles to efficient decision-making. Why bother with such institutions when we can leverage the power of digital tools and algorithms? Under digital technocracy, debate is a waste of time, regulation is a brake on progress, and popular sovereignty is merely the consecration of incompetence…(More)”.
Paper by Michael Byczkowski: “At the core of the medical data economy lies a fundamental challenge: while data drive scientific progress, their collection and maintenance require significant financial and human resources. Hospitals and research institutions invest heavily in collecting, curating, annotating and analysing vast amounts of medical data, all while complying with strict regulatory and ethical requirements. These processes demand advanced technology, skilled personnel and secure digital infrastructures, yet the financial burden is often disproportionately shouldered by public institutions and healthcare providers.
Despite these substantial investments, the economic returns from medical data are often realised much later and largely benefit private sector entities which commercialise insights through pharmaceuticals, medical devices or AI-driven diagnostics. This creates an inherent imbalance: while data originators bear the initial effort, financial rewards accrue downstream, where companies leverage refined datasets for product development and monetisation.
This pattern reflects a broader dynamic in the biomedical innovation pipeline, where public research institutions frequently contribute foundational knowledge and infrastructure in early-stage, non-commercial discovery, while private-sector actors engage in later-stage development with commercial potential, regulatory approval and market delivery…(More)”
Paper by Aaron Chatterji et al: “Despite the rapid adoption of LLM chatbots, little is known about how they are used. We document the growth of ChatGPT’s consumer product from its launch in November 2022 through July 2025, when it had been adopted by around 10% of the world’s adult population. Early adopters were disproportionately male, but the gender gap has narrowed dramatically, and we find higher growth rates in lower-income countries. Using a privacy-preserving automated pipeline, we classify usage patterns within a representative sample of ChatGPT conversations. We find steady growth in work-related messages but even faster growth in non-work-related messages, which have grown from 53% to more than 70% of all usage. Work usage is more common for educated users in highly paid professional occupations. We classify messages by conversation topic and find that “Practical Guidance,” “Seeking Information,” and “Writing” are the three most common topics and collectively account for nearly 80% of all conversations. Writing dominates work-related tasks, highlighting chatbots’ unique ability to generate digital outputs compared to traditional search engines. Computer programming and self-expression both represent relatively small shares of use. Overall, we find that ChatGPT provides economic value through decision support, which is especially important in knowledge-intensive jobs…(More)”.
Article by Emanuel Maiberg: “The Department of Justice has removed a study showing that white supremacist and far-right violence “continues to outpace all other types of terrorism and domestic violent extremism” in the United States. The study, which was conducted by the National Institute of Justice and hosted on a DOJ website, was available there at least until September 12, 2025, according to an archive of the page saved by the Wayback Machine. Daniel Malmer, a PhD student studying online extremism at UNC-Chapel Hill, first noticed the paper was deleted. “The Department of Justice’s Office of Justice Programs is currently reviewing its websites and materials in accordance with recent Executive Orders and related guidance,” reads a message on the page where the study was formerly hosted. “During this review, some pages and publications will be unavailable. We apologize for any inconvenience this may cause.” Shortly after Donald Trump took office, he issued an executive order that forced government agencies to scrub their sites of any mention of “diversity,” “gender,” “DEI,” and other “forbidden words” and perceived notions of “wokeness.” The executive order impacted every government agency, including NASA, and was a huge waste of engineers’ time. We don’t know why the study about far-right extremist violence was removed recently, but it comes immediately after the assassination of conservative personality Charlie Kirk, accusations from the administration that the left is responsible for most of the political violence in the country, and a renewed commitment from the administration to crack down on the “radical left”…(More)”.