Democratic Resilience: Moving from Theoretical Frameworks to a Practical Measurement Agenda


Paper by Nicholas Biddle, Alexander Fischer, Simon D. Angus, Selen Ercan, Max Grömping, and Matthew Gray: “Global indices and media narratives indicate a decline in democratic institutions, values, and practices. Simultaneously, democratic innovators are experimenting with new ways to strengthen democracy at local and national levels. Together, these suggest democracies are not static; they evolve as society, technology and the environment change.

This paper examines democracy as a resilient system, emphasizing the role of applied analysis in shaping effective policy and programs, particularly in Australia. Grounded in adaptive processes, democratic resilience is the capacity of a democracy to identify problems and collectively respond to changing conditions, balancing institutional stability with transformative change. It outlines the ambition of a national network of scholars, civil society leaders, and policymakers to equip democratic innovators with practical insights and foresight underpinning new ideas. These insights are essential for strengthening public institutions, public narratives and community programs.

We review the current literature on resilient democracies and highlight a critical gap: measurement efforts focus heavily on composite indices—especially trust—while neglecting dynamic flows and causal drivers. They describe features and identify weaknesses, but offer little diagnostic evidence about what strengthens democracies. This is reflected in the lack of cross-sector, networked, living evidence systems to track what works and why across the intersecting dynamics of democratic practice. To address this, we propose a practical agenda centred on three core strengthening flows of democratic resilience: trusted institutions, credible information, and social inclusion.

The paper reviews six key data sources and several analytic methods for continuously monitoring democratic institutions, diagnosing causal drivers, and building an adaptive evidence system to inform innovation and reform. By integrating resilience frameworks and policy analysis, we demonstrate how real-time monitoring and analysis can enable innovation, experimentation and cross-sector ingenuity.

This article presents a practical research agenda connecting a national network of scholars and civil society leaders. We suggest this agenda be problem-driven, facilitated by participatory approaches to asking and prioritising the questions that matter most. We propose a connected approach to collectively posing these questions, expanding data sources, and fostering applied ideation between communities, civil society, government, and academia—ensuring democracy remains resilient in an evolving global and national context…(More)”.

AI adoption in crowdsourcing


Paper by John Michael Maxel Okoche et al: “Despite significant technological advances, especially in artificial intelligence (AI), crowdsourcing platforms still struggle with issues such as data overload and data quality problems, which hinder their full potential. This study addresses a critical gap in the literature: how the integration of AI technologies in crowdsourcing could help overcome some of these challenges. Using a systematic literature review of 77 journal papers, we identify the key limitations of current crowdsourcing platforms, including issues of quality control, scalability, bias, and privacy. Our research highlights how different forms of AI, including machine learning (ML), deep learning (DL), natural language processing (NLP), automatic speech recognition (ASR), and natural language generation (NLG) techniques, can address the challenges most crowdsourcing platforms face. This paper supports the integration of AI by identifying types of crowdsourcing applications, their challenges, and the solutions AI offers to improve crowdsourcing…(More)”.

Code Shift: Using AI to Analyze Zoning Reform in American Cities


Report by Arianna Salazar-Miranda & Emily Talen: “Cities are at the forefront of addressing global sustainability challenges, particularly those exacerbated by climate change. Traditional zoning codes, which often segregate land uses, have been linked to increased vehicular dependence, urban sprawl and social disconnection, undermining broader social and environmental sustainability objectives. This study investigates the adoption and impact of form-based codes (FBCs), which aim to promote sustainable, compact and mixed-use urban forms as a solution to these issues. Using natural language processing techniques, we analyzed zoning documents from over 2,000 United States census-designated places to identify linguistic patterns indicative of FBC principles. Our findings reveal widespread adoption of FBCs across the country, with notable variations within regions. FBCs are associated with higher floor-to-area ratios, narrower and more consistent street setbacks and smaller plots. We also find that places with FBCs have improved walkability, shorter commutes and a higher share of multifamily housing. Our findings highlight the utility of natural language processing for evaluating zoning codes and underscore the potential benefits of form-based zoning reforms for enhancing urban sustainability…(More)”.
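The idea of scanning zoning documents for linguistic patterns indicative of FBC principles can be sketched as a simple lexicon match. This is a minimal illustration, not the authors' method: the indicator phrases and scoring rule below are invented for the sketch, and the study's actual NLP pipeline is more sophisticated.

```python
import re

# Hypothetical indicator phrases associated with form-based codes (FBCs);
# the study's actual lexicon is not given in this excerpt.
FBC_PATTERNS = [
    r"build-to line", r"frontage type", r"transect zone",
    r"form-based", r"street wall", r"mixed[- ]use",
]

def fbc_score(zoning_text: str) -> float:
    """Share of indicator phrases found in a zoning document (0.0 to 1.0)."""
    text = zoning_text.lower()
    hits = sum(1 for pattern in FBC_PATTERNS if re.search(pattern, text))
    return hits / len(FBC_PATTERNS)

sample = "Each transect zone specifies a build-to line and permitted frontage types."
score = fbc_score(sample)  # 3 of 6 phrases match
```

A real pipeline would also need to handle scanned-PDF noise, section structure, and phrase variants, but a score like this is enough to rank thousands of places by how FBC-like their codes read.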

Statistical methods in public policy research


Chapter by Andrew Heiss: “This essay provides an overview of statistical methods in public policy, focused primarily on the United States. I trace the historical development of quantitative approaches in policy research, from early ad hoc applications through the 19th and early 20th centuries, to the full institutionalization of statistical analysis in federal, state, local, and nonprofit agencies by the late 20th century.

I then outline three core methodological approaches to policy-centered statistical research across social science disciplines: description, explanation, and prediction, framing each in terms of the focus of the analysis. In descriptive work, researchers explore what exists, examining variables of interest to understand their distributions and relationships. In explanatory work, researchers ask why it exists and how it can be influenced. The focus of the analysis is on explanatory variables (X), either to (1) accurately estimate their relationship with an outcome variable (Y), or (2) causally attribute the effect of specific explanatory variables on outcomes. In predictive work, researchers ask what will happen next, focusing on the outcome variable (Y) and on generating accurate forecasts, classifications, and predictions from new data. For each approach, I examine key techniques, their applications in policy contexts, and important methodological considerations.
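The three foci can be made concrete with a toy calculation (all numbers here are invented for illustration; x might be program spending and y an outcome of interest):

```python
# Toy data: four observations of an explanatory variable x and outcome y.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]

# Description: what exists? Summarize each variable's distribution.
mean_x = sum(x) / len(x)
mean_y = sum(y) / len(y)

# Explanation: why does it exist? Estimate the X->Y relationship
# (ordinary least squares slope and intercept, closed form).
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x

# Prediction: what will happen next? Apply the fitted model to new data.
y_hat = intercept + slope * 5.0
```

The same fitted line serves all three purposes; what differs is whether the analyst reports the distributions, interprets the slope, or scores the forecast.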

I then consider critical perspectives on quantitative policy analysis framed around issues related to a three-part “data imperative” where governments are driven to count, gather, and learn from data. Each of these imperatives entails substantial issues related to privacy, accountability, democratic participation, and epistemic inequalities—issues at odds with public sector values of transparency and openness. I conclude by identifying some emerging trends in public sector-focused data science, inclusive ethical guidelines, open research practices, and future directions for the field…(More)”.

Data Sharing: A Case-Study of Luxury Surveillance by Tesla


Paper by Marc Schuilenburg and Yarin Eski: “Why do people voluntarily give away their personal data to private companies? In this paper, we show how data sharing is experienced at the level of Tesla car owners. We regard Tesla cars as luxury surveillance goods for which the drivers voluntarily choose to share their personal data with the US company. Based on an analysis of semi-structured interviews and observations of Tesla owners’ posts on Facebook groups, we discern three elements of luxury surveillance: socializing, enjoying and enduring. We conclude that luxury surveillance can be traced back to the social bonds created by a gift economy…(More)”.

Fostering Open Data


Paper by Uri Y. Hacohen: “Data is often heralded as “the world’s most valuable resource,” yet its potential to benefit society remains unrealized due to systemic barriers in both public and private sectors. While open data—defined as data that is available, accessible, and usable—holds immense promise to advance open science, innovation, economic growth, and democratic values, its utilization is hindered by legal, technical, and organizational challenges. Public sector initiatives, such as U.S. and European Union open data regulations, face uneven enforcement and regulatory complexity, disproportionately affecting under-resourced stakeholders such as researchers. In the private sector, companies prioritize commercial interests and user privacy, often obstructing data openness through restrictive policies and technological barriers. This article proposes an innovative, four-layered policy framework to overcome these obstacles and foster data openness. The framework includes (1) improving open data infrastructures, (2) ensuring legal frameworks for open data, (3) incentivizing voluntary data sharing, and (4) imposing mandatory data sharing obligations. Each policy cluster is tailored to address sector-specific challenges and balance competing values such as privacy, property, and national security. Drawing from academic research and international case studies, the framework provides actionable solutions to transition from a siloed, proprietary data ecosystem to one that maximizes societal value. This comprehensive approach aims to reimagine data governance and unlock the transformative potential of open data…(More)”.

Global data-driven prediction of fire activity


Paper by Francesca Di Giuseppe, Joe McNorton, Anna Lombardi & Fredrik Wetterhall: “Recent advancements in machine learning (ML) have expanded its potential use across scientific applications, including weather and hazard forecasting. The ability of these methods to extract information from diverse and novel data types enables the transition from forecasting fire weather to predicting actual fire activity. In this study we demonstrate that this shift is feasible within an operational context. Traditional fire forecast methods tend to overpredict high fire danger, particularly in fuel-limited biomes, often resulting in false alarms. By using data on fuel characteristics, ignitions and observed fire activity, data-driven predictions reduce the false-alarm rate of high-danger forecasts, enhancing their accuracy. This is made possible by high-quality global datasets of fuel evolution and fire detection. We find that the quality of input data is more important for improving forecasts than the complexity of the ML architecture. While the focus on ML advancements is often justified, our findings highlight the importance of investing in high-quality data and, where necessary, creating it through physical models. Neglecting this aspect would undermine the potential gains from ML-based approaches, emphasizing that data quality is essential to achieve meaningful progress in fire activity forecasting…(More)”.
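The core intuition, that adding fuel information suppresses false alarms that a weather-only index would raise in fuel-limited biomes, can be sketched as a two-feature decision rule. This is a hypothetical illustration only: the feature names, thresholds, and rule below are invented, and the paper's actual system is a trained ML model, not a hand-set threshold.

```python
# Hypothetical sketch: fire-weather danger alone over-predicts where there
# is little to burn; combining it with a fuel-availability feature
# suppresses those false alarms. All names and thresholds are invented.
def weather_only_alert(fire_weather_index: float) -> bool:
    """Traditional approach: alert on dangerous fire weather alone."""
    return fire_weather_index > 0.7

def data_driven_alert(fire_weather_index: float, fuel_load: float) -> bool:
    """Fuel-aware approach: high danger requires both dry weather and fuel."""
    return fire_weather_index > 0.7 and fuel_load > 0.3

# A fuel-limited grid cell: extreme fire weather, almost nothing to burn.
cell = {"fire_weather_index": 0.9, "fuel_load": 0.1}
weather_says = weather_only_alert(cell["fire_weather_index"])  # false alarm
combined_says = data_driven_alert(**cell)                      # suppressed
```

An ML model learns a soft, data-driven version of this interaction from observed fire activity rather than fixed cutoffs, which is why the quality of the fuel and fire-detection datasets matters more than model complexity.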

Exploring Human Mobility in Urban Nightlife: Insights from Foursquare Data


Article by Ehsan Dorostkar: “In today’s digital age, social media platforms like Foursquare provide a wealth of data that can reveal fascinating insights into human behavior, especially in urban environments. Our recent study, published in Cities, delves into how virtual mobility on Foursquare translates into actual human mobility in Tehran’s nightlife scenes. By analyzing user-generated data, we uncovered patterns that can help urban planners create more vibrant and functional nightlife spaces…

Our study aimed to answer two key questions:

  1. How does virtual mobility on Foursquare influence real-world human mobility in urban nightlife?
  2. What spatial patterns emerge from these movements, and how can they inform urban planning?

To explore these questions, we focused on two bustling nightlife spots in Tehran—Region 1 (Darband Square) and Region 6 (Valiasr crossroads)—where Foursquare data indicated high user activity.

Methodology

We combined data from two sources:

  1. Foursquare API: To track user check-ins and identify popular nightlife venues.
  2. Tehran Municipality API: To contextualize the data within the city’s urban framework.

Using triangulation and interpolation techniques, we mapped the “human mobility triangles” in these areas, calculating the density and spread of user activity…(More)”.
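The geometric core of a "human mobility triangle" can be sketched in a few lines: three popular venues define a triangle, and the check-ins falling inside it give a crude activity density. This is a simplified sketch with invented coordinates and counts; the study's actual triangulation and interpolation pipeline over Foursquare and municipal data is more involved.

```python
# Hypothetical sketch of a "human mobility triangle": three venue
# locations (projected coordinates, e.g. km) and a check-in count.
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle from (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

venues = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]  # three popular nightlife venues
checkins = 120                                  # check-ins observed inside

area = triangle_area(*venues)   # area of the mobility triangle
density = checkins / area       # check-ins per unit area
```

Repeating this over many venue triples and interpolating between them yields the density surfaces that reveal where nightlife activity concentrates.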

AI for collective intelligence


Introduction to special issue by Christoph Riedl and David De Cremer: “AI has emerged as a transformative force in society, reshaping economies, work, and everyday life. We argue that AI can not only improve short-term productivity but can also enhance a group’s collective intelligence. Specifically, AI can be employed to enhance three elements of collective intelligence: collective memory, collective attention, and collective reasoning. This editorial reviews key emerging work in the area to suggest ways in which AI can support the socio-cognitive architecture of collective intelligence. We will then briefly introduce the articles in the “AI for Collective Intelligence” special issue…(More)”.

LLM Social Simulations Are a Promising Research Method


Paper by Jacy Reese Anthis et al: “Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted these methods. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a literature survey of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions with prompting, fine-tuning, and complementary methods. We believe that LLM social simulations can already be used for exploratory research, such as pilot experiments for psychology, economics, sociology, and marketing. More widespread use may soon be possible with rapidly advancing LLM capabilities, and researchers should prioritize developing conceptual models and evaluations that can be iteratively deployed and refined at pace with ongoing AI advances…(More)”.