Stefaan Verhulst
Article by Peter Buneman, Dennis Dosso, Matteo Lissandrini, Gianmaria Silvello, and He Sun: “Databases publish data. This is undoubtedly the case for scientific and statistical databases, which have largely replaced traditional reference works. Database and Web technologies have led to an explosion in the number of databases that support scientific research, for obvious reasons: Databases provide faster communication of knowledge, hold larger volumes of data, are more easily searched, and are both human- and machine-readable. Moreover, they can be developed rapidly and collaboratively by a mixture of researchers and curators. For example, more than 1,500 curated databases are relevant to molecular biology alone. The value of these databases lies not only in the data they present but also in how they organize that data.
In the case of an author or journal, most bibliometric measures are obtained from citations to an associated set of publications. There are typically many ways of decomposing a database into publications, so we might use its organization to guide our choice of decompositions. We will show that when the database has a hierarchical structure, there is a natural extension of the h-index that works on this hierarchy…(More)”.
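To make the bibliometric idea concrete, here is a minimal sketch (in Python) of the standard h-index, plus a hypothetical roll-up over a hierarchy of database sections. The tree structure, section names, and citation counts are invented for illustration; the paper's actual hierarchical extension is not spelled out in this excerpt.

```python
# Minimal sketch (not the authors' algorithm): the standard h-index,
# plus a hypothetical roll-up over a hierarchy of database "publications".
# The hierarchy and citation counts below are illustrative only.

def h_index(citations):
    """Largest h such that at least h items have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical hierarchy: leaves are lists of citation counts for one
# "publication"; internal nodes group related publications.
database = {
    "pathways": {"glycolysis": [12, 7, 3], "tca_cycle": [9, 9]},
    "enzymes": {"kinases": [15, 4, 2, 1]},
}

def collect_citations(node):
    """Flatten all citation counts beneath a node."""
    if isinstance(node, list):
        return node
    cites = []
    for child in node.values():
        cites.extend(collect_citations(child))
    return cites

# h-index at each level of the hierarchy (illustrative roll-up).
for section, children in database.items():
    print(section, h_index(collect_citations(children)))
print("whole database:", h_index(collect_citations(database)))
```

This roll-up simply pools all citations beneath each node; the authors' hierarchical generalization presumably exploits the structure more carefully.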
Paper by Nicholas Biddle, Alexander Fischer, Simon D. Angus, Selen Ercan, Max Grömping, and Matthew Gray: “Global indices and media narratives indicate a decline in democratic institutions, values, and practices. Simultaneously, democratic innovators are experimenting with new ways to strengthen democracy at local and national levels. Both suggest that democracies are not static; they evolve as society, technology and the environment change.
This paper examines democracy as a resilient system, emphasizing the role of applied analysis in shaping effective policy and programs, particularly in Australia. Grounded in adaptive processes, democratic resilience is the capacity of a democracy to identify problems and collectively respond to changing conditions, balancing institutional stability with transformative change. It outlines the ambition of a national network of scholars, civil society leaders, and policymakers to equip democratic innovators with practical insights and foresight underpinning new ideas. These insights are essential for strengthening public institutions, public narratives, and community programs.
We review current literature on resilient democracies and highlight a critical gap: current measurement efforts focus heavily on composite indices—especially trust—while neglecting dynamic flows and causal drivers. These efforts describe features and identify weaknesses, but they do not provide the diagnostics or evidence for what strengthens democracies. This gap is reflected in the lack of cross-sector, networked, living evidence systems to track what works and why across the intersecting dynamics of democratic practices. To address this, we propose a practical agenda centred on three core flows that strengthen democratic resilience: trusted institutions, credible information, and social inclusion.
The paper reviews six key data sources and several analytic methods for continuously monitoring democratic institutions, diagnosing causal drivers, and building an adaptive evidence system to inform innovation and reform. By integrating resilience frameworks and policy analysis, we demonstrate how real-time monitoring and analysis can enable innovation, experimentation and cross-sector ingenuity.
This article presents a practical research agenda connecting a national network of scholars and civil society leaders. We suggest this agenda be problem-driven, facilitated by participatory approaches to asking and prioritising the questions that matter most. We propose a connected approach to collectively posing those questions, expanding data sources, and fostering applied ideation between communities, civil society, government, and academia—ensuring democracy remains resilient in an evolving global and national context…(More)”.
Paper by John Michael Maxel Okoche et al: “Despite significant technology advances, especially in artificial intelligence (AI), crowdsourcing platforms still struggle with issues such as data overload and data quality problems, which hinder their full potential. This study addresses a critical gap in the literature: how the integration of AI technologies into crowdsourcing could help overcome some of these challenges. Using a systematic literature review of 77 journal papers, we identify the key limitations of current crowdsourcing platforms, which include issues of quality control, scalability, bias, and privacy. Our research highlights how different forms of AI, including machine learning (ML), deep learning (DL), natural language processing (NLP), automatic speech recognition (ASR), and natural language generation (NLG) techniques, can address the challenges most crowdsourcing platforms face. This paper offers knowledge to support the integration of AI by identifying types of crowdsourcing applications, their challenges, and the solutions AI offers for improving crowdsourcing…(More)”.
Report by Arianna Salazar-Miranda & Emily Talen: “Cities are at the forefront of addressing global sustainability challenges, particularly those exacerbated by climate change. Traditional zoning codes, which often segregate land uses, have been linked to increased vehicular dependence, urban sprawl and social disconnection, undermining broader social and environmental sustainability objectives. This study investigates the adoption and impact of form-based codes (FBCs), which aim to promote sustainable, compact and mixed-use urban forms as a solution to these issues. Using natural language processing techniques, we analyzed zoning documents from over 2,000 United States census-designated places to identify linguistic patterns indicative of FBC principles. Our findings reveal widespread adoption of FBCs across the country, with notable variations within regions. FBCs are associated with higher floor-to-area ratios, narrower and more consistent street setbacks and smaller plots. We also find that places with FBCs have improved walkability, shorter commutes and a higher share of multifamily housing. Our findings highlight the utility of natural language processing for evaluating zoning codes and underscore the potential benefits of form-based zoning reforms for enhancing urban sustainability…(More)”.
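As a rough illustration of this kind of text analysis, the sketch below scans a zoning passage for phrases often associated with form-based codes. The phrase list, sample text, and threshold are assumptions made for illustration only; the report's actual NLP pipeline is not described in this excerpt.

```python
# Hedged sketch of a keyword-frequency scan for form-based-code language.
# The phrase list, sample text, and threshold are hypothetical.
import re
from collections import Counter

FBC_PHRASES = [
    "build-to line", "frontage type", "transect zone",
    "regulating plan", "street wall", "form-based",
]

def fbc_signal(text: str) -> Counter:
    """Count occurrences of FBC-indicative phrases in a zoning document."""
    lowered = text.lower()
    return Counter({p: len(re.findall(re.escape(p), lowered)) for p in FBC_PHRASES})

sample = """The regulating plan establishes a build-to line and
frontage type standards for each transect zone."""
scores = fbc_signal(sample)
print(scores)
print("FBC-like:", sum(scores.values()) >= 3)  # illustrative threshold
```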
Report by National Academies of Sciences, Engineering, and Medicine: “Advances in artificial intelligence (AI) promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers.
Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market.
Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments…(More)”.
Essay by Blaise Agüera y Arcas and James Manyika: “Dramatic advances in artificial intelligence today are compelling us to rethink our understanding of what intelligence truly is. Our new insights will enable us to build better AI and understand ourselves better.
In short, we are in paradigm-shifting territory.
Paradigm shifts are often fraught because it’s easier to adopt new ideas when they are compatible with one’s existing worldview but harder when they’re not. A classic example is the collapse of the geocentric paradigm, which dominated cosmological thought for roughly two millennia. In the geocentric model, the Earth stood still while the Sun, Moon, planets and stars revolved around us. The belief that we were at the center of the universe — bolstered by Ptolemy’s theory of epicycles, a major scientific achievement in its day — was both intuitive and compatible with religious traditions. Hence, Copernicus’s heliocentric paradigm wasn’t just a scientific advance but a hotly contested heresy and perhaps even, for some, as Benjamin Bratton notes, an existential trauma. So, today, artificial intelligence.
In this essay, we will describe five interrelated paradigm shifts informing our development of AI:
- Natural Computing — Computing existed in nature long before we built the first “artificial computers.” Understanding computing as a natural phenomenon will enable fundamental advances not only in computer science and AI but also in physics and biology.
- Neural Computing — Our brains are an exquisite instance of natural computing. Redesigning the computers that power AI so they work more like a brain will greatly increase AI’s energy efficiency — and its capabilities too.
- Predictive Intelligence — The success of large language models (LLMs) shows us something fundamental about the nature of intelligence: it involves statistical modeling of the future (including one’s own future actions) given evolving knowledge, observations and feedback from the past. This insight suggests that current distinctions between designing, training and running AI models are transitory; more sophisticated AI will evolve, grow and learn continuously and interactively, as we do. A toy sketch of this predictive framing appears after this list.
- General Intelligence — Intelligence does not necessarily require biologically based computation. Although AI models will continue to improve, they are already broadly capable, tackling an increasing range of cognitive tasks with a skill level approaching and, in some cases, exceeding individual human capability. In this sense, “Artificial General Intelligence” (AGI) may already be here — we just keep shifting the goalposts.
- Collective Intelligence — Brains, AI agents and societies can all become more capable through increased scale. However, size alone is not enough. Intelligence is fundamentally social, powered by cooperation and the division of labor among many agents. In addition to causing us to rethink the nature of human (or “more than human”) intelligence, this insight suggests social aggregations of intelligences and multi-agent approaches to AI development that could reduce computational costs, increase AI heterogeneity and reframe AI safety debates.
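The “Predictive Intelligence” point above can be made concrete with a toy example: a bigram model that predicts the next word from the previous one is a deliberately crude instance of statistically modeling the future given the past. The corpus and code are illustrative only and bear no relation to the systems the authors discuss.

```python
# Toy illustration of "statistical modeling of the future given the past":
# a bigram model that predicts the next word from the previous one.
# This is a didactic sketch, far simpler than the LLMs discussed above.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its conditional probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # e.g. ('cat', 0.5)
print(predict_next("cat"))  # 'sat' or 'ate', each with probability 0.5
```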
But to understand our own “intelligence geocentrism,” we must begin by reassessing our assumptions about the nature of computing, since it is the foundation of both AI and, we will argue, intelligence in any form…(More)”.
Article by Alice Miranda Ollstein: “The federal teams that count public health problems are disappearing — putting efforts to solve those problems in jeopardy.
Health Secretary Robert F. Kennedy Jr.’s purge of tens of thousands of federal workers has halted efforts to collect data on everything from cancer rates in firefighters to mother-to-baby transmission of HIV and syphilis to outbreaks of drug-resistant gonorrhea to cases of carbon monoxide poisoning.
The cuts threaten to obscure the severity of pressing health threats and whether they’re getting better or worse, leaving officials clueless on how to respond. They could also make it difficult, if not impossible, to assess the impact of the administration’s spending and policies. Both outside experts and impacted employees argue the layoffs will cost the government more money in the long run by eliminating information on whether programs are effective or wasteful, and by allowing preventable problems to fester.
“Surveillance capabilities are crucial for identifying emerging health issues, directing resources efficiently, and evaluating the effectiveness of existing policies,” said Jerome Adams, who served as surgeon general in the first Trump administration. “Without robust data and surveillance systems, we cannot accurately assess whether we are truly making America healthier.”..(More)”.
Book by Rogayeh Tabrizi: “…delivers an intuitive roadmap to help organizations disentangle the complexity of their data to create tangible and lasting value. The book explains how to balance the multiple disciplines that power AI and behavioral economics using a combination of the right questions and insightful problem solving.
You’ll learn why intellectual diversity and combining subject matter experts in psychology, behavior, economics, physics, computer science, and engineering are essential to creating advanced AI solutions. You’ll also discover:
- How behavioral economics principles influence data models and governance architectures and make digital transformation processes more efficient and effective
- Discussions of the most important barriers to value in typical big data and AI projects and how to bring them down
- The most effective methodology to help shorten the long, wasteful process of “boiling the ocean of data”
An exciting and essential resource for managers, executives, board members, and other business leaders engaged or interested in harnessing the power of artificial intelligence and big data, Behavioral AI will also benefit data and machine learning professionals…(More)”.
Book by Glenn Adamson: “For millennia, predicting the future was the province of priests and prophets, the realm of astrologers and seers. Then, in the twentieth century, futurologists emerged, claiming that data and design could make planning into a rational certainty. Over time, many of these technologists and trend forecasters amassed power as public intellectuals, even as their predictions proved less than reliable. Now, amid political and ecological crises of our own making, we drown in a cacophony of potential futures, including, possibly, no future at all.
A Century of Tomorrows offers an illuminating account of how the world was transformed by the science (or is it?) of futurecasting. Beneath the chaos of competing tomorrows, Adamson reveals a hidden order: six key themes that have structured visions of what’s next. Helping him to tell this story are remarkable characters, including self-proclaimed futurologists such as Buckminster Fuller and Stewart Brand, as well as an eclectic array of other visionaries who have influenced our thinking about the world ahead: Octavia Butler and Ursula LeGuin, Shulamith Firestone and Sun Ra, Marcus Garvey and Timothy Leary, and more.
Arriving at a moment of collective anxiety and fragile hope, Adamson’s extraordinary book shows how our projections for the future are, always and ultimately, debates about the present. For tomorrow is contained within the only thing we can ever truly know: today…(More)”.
Chapter by Andrew Heiss: “This essay provides an overview of statistical methods in public policy, focused primarily on the United States. I trace the historical development of quantitative approaches in policy research, from early ad hoc applications through the 19th and early 20th centuries, to the full institutionalization of statistical analysis in federal, state, local, and nonprofit agencies by the late 20th century.
I then outline three core methodological approaches to policy-centered statistical research across social science disciplines: description, explanation, and prediction, framing each in terms of the focus of the analysis. In descriptive work, researchers explore what exists, examining variables of interest to understand their distributions and relationships. In explanatory work, researchers ask why something exists and how it can be influenced. The focus of the analysis is on explanatory variables (X), either to (1) accurately estimate their relationship with an outcome variable (Y), or to (2) causally attribute the effect of specific explanatory variables on outcomes. In predictive work, researchers ask what will happen next, focusing on the outcome variable (Y) and on generating accurate forecasts, classifications, and predictions from new data. For each approach, I examine key techniques, their applications in policy contexts, and important methodological considerations.
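A schematic sketch of the three approaches on simulated data may help fix ideas. The variable names, data, and models below are illustrative assumptions, not taken from the chapter.

```python
# Schematic sketch of description, explanation, and prediction on
# simulated policy data. All names and numbers are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
funding = rng.normal(100, 15, 500)               # explanatory variable (X)
outcome = 0.3 * funding + rng.normal(0, 5, 500)  # outcome variable (Y)

# Description: what exists? Summarize distributions of variables of interest.
print("mean funding:", funding.mean().round(1), "sd:", funding.std().round(1))

# Explanation: how is X related to Y? Estimate the association
# (causal attribution would require further design assumptions).
model = LinearRegression().fit(funding.reshape(-1, 1), outcome)
print("estimated relationship of funding with outcome:", model.coef_[0].round(2))

# Prediction: what will happen next? Forecast Y for new values of X.
new_funding = np.array([[90.0], [110.0], [130.0]])
print("predicted outcomes:", model.predict(new_funding).round(1))
```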
I then consider critical perspectives on quantitative policy analysis framed around issues related to a three-part “data imperative” where governments are driven to count, gather, and learn from data. Each of these imperatives entails substantial issues related to privacy, accountability, democratic participation, and epistemic inequalities—issues at odds with public sector values of transparency and openness. I conclude by identifying some emerging trends in public sector-focused data science, inclusive ethical guidelines, open research practices, and future directions for the field…(More)”.