Paper by Christopher Walker and Sally Washington: “… presents a process model to guide the production of quality policy advice. The work draws on engagement with both public sector practitioners and academics to design a process model for the development of policy advice that works in practice (can be used by policy professionals in their day-to-day work) and aligns with theory (can be taught as part of explaining the dynamics of a wider policy advisory system). The 5D Model defines five key domains of inquiry: understanding Demand, being open to Discovery, undertaking Design, identifying critical Decision points, and shaping advice to enable Delivery. Our goal is a ‘repeatable, scalable’ model for supporting policy practitioners to provide quality advice to decision makers. The model was developed and tested through an extensive process of engagement with senior policy practitioners who noted the heuristic gave structure to practices that determine how policy advice is organized and formulated. Academic colleagues confirmed the utility of the model for explaining and teaching how policy is designed and delivered within the context of a wider policy advisory system (PAS). A unique aspect of this work was the collaboration and shared interest amongst academics and practitioners to define a model that is ‘useful for teaching’ and ‘useful for doing’…(More)”.
Open with care: transparency and data sharing in civically engaged research
Paper by Ankushi Mitra: “Research transparency and data access are considered increasingly important for advancing research credibility, cumulative learning, and discovery. However, debates persist about how to define and achieve these goals across diverse forms of inquiry. This article intervenes in these debates, arguing that the participants and communities with whom scholars work are active stakeholders in science who hold a range of rights and interests, and to whom researchers owe corresponding obligations in the practice of transparency and openness. Drawing on civically engaged research and related approaches that advocate for subjects of inquiry to more actively shape its process and share in its benefits, I outline a broader vision of research openness not only as a matter of peer scrutiny among scholars or a top-down exercise in compliance, but rather as a space for engaging and maximizing opportunities for all stakeholders in research. Accordingly, this article provides an ethical and practical framework for broadening transparency, accessibility, and data-sharing and benefit-sharing in research. It promotes movement beyond open science to a more inclusive and socially responsive science anchored in a larger ethical commitment: that the pursuit of knowledge be accountable and its benefits made accessible to the citizens and communities who make it possible…(More)”.
Decision Making under Deep Uncertainty and the Great Acceleration
Paper by Robert J. Lempert: “Seventy-five years into the Great Acceleration—a period marked by unprecedented growth in human activity and its effects on the planet—some type of societal transformation is inevitable. Successfully navigating these tumultuous times requires scientific, evidence-based information as an input into society’s value-laden decisions at all levels and scales. The methods and tools most commonly used to bring such expert knowledge to policy discussions employ predictions of the future, which under the existing conditions of complexity and deep uncertainty can often undermine trust and hinder good decisions. How, then, should experts best inform society’s attempts to navigate when both experts and decisionmakers are sure to be surprised? Decision Making under Deep Uncertainty (DMDU) offers an answer to this question. With its focus on model pluralism, learning, and robust solutions coproduced in a participatory process of deliberation with analysis, DMDU can repair the fractured conversations among policy experts, decisionmakers, and the public. In this paper, the author explores how DMDU can reshape policy analysis to better align with the demands of a rapidly evolving world and offers insights into the roles and opportunities for experts to inform societal debates and actions toward more-desirable futures…(More)”.
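DMDU's robustness logic can be illustrated with a toy calculation. Below is a minimal sketch of one common DMDU criterion, minimax regret, evaluated over a handful of plausible futures; the policies, scenarios, and payoff values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Rows: candidate policies; columns: plausible future scenarios.
# Payoff values are illustrative placeholders, not from the paper.
payoffs = np.array([
    [10.0, 4.0, 2.0],   # policy A: strong in scenario 0, weak elsewhere
    [ 6.0, 6.0, 5.0],   # policy B: moderate everywhere
    [ 8.0, 3.0, 6.0],   # policy C: mixed performance
])

# Regret: shortfall versus the best achievable payoff in each scenario.
regret = payoffs.max(axis=0) - payoffs

# A robust choice minimizes worst-case regret across scenarios,
# rather than maximizing expected payoff under one assumed future.
worst_case_regret = regret.max(axis=1)
robust_policy = int(worst_case_regret.argmin())

print(f"Worst-case regret per policy: {worst_case_regret}")
print(f"Most robust policy (minimax regret): {robust_policy}")
```

The design choice is the point: rather than optimizing against a single predicted future, the analysis asks which option performs acceptably across all of them.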
Democratic Resilience: Moving from Theoretical Frameworks to a Practical Measurement Agenda
Paper by Nicholas Biddle, Alexander Fischer, Simon D. Angus, Selen Ercan, Max Grömping, and Matthew Gray: “Global indices and media narratives indicate a decline in democratic institutions, values, and practices. Simultaneously, democratic innovators are experimenting with new ways to strengthen democracy at local and national levels. Both trends suggest democracies are not static; they evolve as society, technology and the environment change.
This paper examines democracy as a resilient system, emphasizing the role of applied analysis in shaping effective policy and programs, particularly in Australia. Grounded in adaptive processes, democratic resilience is the capacity of a democracy to identify problems and collectively respond to changing conditions, balancing institutional stability with transformative change. It outlines the ambition of a national network of scholars, civil society leaders, and policymakers to equip democratic innovators with the practical insights and foresight that underpin new ideas. These insights are essential for strengthening public institutions, public narratives, and community programs.
We review the current literature on resilient democracies and highlight a critical gap: measurement efforts focus heavily on composite indices—especially trust—while neglecting dynamic flows and causal drivers. These indices capture descriptive features and identify weaknesses, but they offer little diagnostic evidence about what strengthens democracies. This is reflected in the lack of cross-sector, networked, living evidence systems to track what works and why across the intersecting dynamics of democratic practice. To address this, we propose a practical agenda centred on three core strengthening flows of democratic resilience: trusted institutions, credible information, and social inclusion.
The paper reviews six key data sources and several analytic methods for continuously monitoring democratic institutions, diagnosing causal drivers, and building an adaptive evidence system to inform innovation and reform. By integrating resilience frameworks and policy analysis, we demonstrate how real-time monitoring and analysis can enable innovation, experimentation and cross-sector ingenuity.
This article presents a practical research agenda connecting a national network of scholars and civil society leaders. We suggest this agenda be problem-driven, facilitated by participatory approaches to asking and prioritising the questions that matter most. We propose a connected approach to collectively posing those questions, expanding data sources, and fostering applied ideation between communities, civil society, government, and academia—ensuring democracy remains resilient in an evolving global and national context…(More)”.
AI adoption in crowdsourcing
Paper by John Michael Maxel Okoche et al: “Despite significant technological advances, especially in artificial intelligence (AI), crowdsourcing platforms still struggle with issues such as data overload and data quality problems, which hinder their full potential. This study addresses a critical gap in the literature: how the integration of AI technologies in crowdsourcing could help overcome some of these challenges. Using a systematic literature review of 77 journal papers, we identify the key limitations of current crowdsourcing platforms, which include issues of quality control, scalability, bias, and privacy. Our research highlights how different forms of AI, including machine learning (ML), deep learning (DL), natural language processing (NLP), automatic speech recognition (ASR), and natural language generation (NLG) techniques, can address the challenges most crowdsourcing platforms face. This paper supports the integration of AI by identifying types of crowdsourcing applications, their challenges, and the solutions AI offers for improving crowdsourcing…(More)”.
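As a concrete illustration of one challenge-solution pairing from the review: quality control is often the first place platforms apply NLP, with a lightweight classifier screening submissions before human moderation. A minimal sketch using scikit-learn follows; the training examples, labels, and pipeline are illustrative assumptions, not systems described in the paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: crowdsourced submissions labeled
# 1 (useful) or 0 (low quality / spam). A real platform would train
# on thousands of moderated examples.
submissions = [
    "Pothole on Main St near the bus stop, about 30cm wide",
    "Streetlight out at 5th and Oak, dark at night",
    "asdf asdf asdf",
    "BUY CHEAP WATCHES http://spam.example",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: a common first-pass
# quality filter before items reach human moderators.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(submissions, labels)

new_item = "Broken bench in Riverside Park, wood is splintered"
prob_useful = model.predict_proba([new_item])[0][1]
print(f"Estimated probability the report is useful: {prob_useful:.2f}")
```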
Code Shift: Using AI to Analyze Zoning Reform in American Cities
Report by Arianna Salazar-Miranda & Emily Talen: “Cities are at the forefront of addressing global sustainability challenges, particularly those exacerbated by climate change. Traditional zoning codes, which often segregate land uses, have been linked to increased vehicular dependence, urban sprawl and social disconnection, undermining broader social and environmental sustainability objectives. This study investigates the adoption and impact of form-based codes (FBCs), which aim to promote sustainable, compact and mixed-use urban forms as a solution to these issues. Using natural language processing techniques, we analyzed zoning documents from over 2,000 United States census-designated places to identify linguistic patterns indicative of FBC principles. Our findings reveal widespread adoption of FBCs across the country, with notable variations within regions. FBCs are associated with higher floor to area ratios, narrower and more consistent street setbacks and smaller plots. We also find that places with FBCs have improved walkability, shorter commutes and a higher share of multifamily housing. Our findings highlight the utility of natural language processing for evaluating zoning codes and underscore the potential benefits of form-based zoning reforms for enhancing urban sustainability…(More)”.
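To make the method concrete: detecting form-based-code language in zoning text can be approximated with simple pattern matching before any heavier NLP. A minimal sketch follows; the marker phrases and the scoring rule are illustrative assumptions, not the authors' actual feature set:

```python
import re

# Illustrative phrases associated with form-based codes (FBCs) versus
# conventional use-based zoning; not the study's actual feature set.
FBC_MARKERS = [
    r"build-?to line", r"frontage type", r"street ?wall",
    r"form-?based", r"transect", r"mixed-?use",
]
USE_BASED_MARKERS = [
    r"permitted use", r"single-?family", r"minimum lot size",
    r"use district", r"parking minimum",
]

def fbc_score(zoning_text: str) -> float:
    """Share of marker hits that point to form-based language."""
    text = zoning_text.lower()
    fbc_hits = sum(len(re.findall(p, text)) for p in FBC_MARKERS)
    use_hits = sum(len(re.findall(p, text)) for p in USE_BASED_MARKERS)
    total = fbc_hits + use_hits
    return fbc_hits / total if total else 0.0

sample = ("All buildings shall meet the build-to line, and frontage type "
          "standards apply along the street wall in mixed-use districts.")
print(f"FBC score: {fbc_score(sample):.2f}")
```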
Statistical methods in public policy research
Chapter by Andrew Heiss: “This essay provides an overview of statistical methods in public policy, focused primarily on the United States. I trace the historical development of quantitative approaches in policy research, from early ad hoc applications through the 19th and early 20th centuries, to the full institutionalization of statistical analysis in federal, state, local, and nonprofit agencies by the late 20th century.
I then outline three core methodological approaches to policy-centered statistical research across social science disciplines: description, explanation, and prediction, framing each in terms of the focus of the analysis. In descriptive work, researchers explore what exists, examining variables of interest to understand their distributions and relationships. In explanatory work, researchers ask why something exists and how it can be influenced. The focus of the analysis is on explanatory variables (X), either to (1) accurately estimate their relationship with an outcome variable (Y) or (2) causally attribute the effect of specific explanatory variables on outcomes. In predictive work, researchers ask what will happen next, focusing on the outcome variable (Y) and on generating accurate forecasts, classifications, and predictions from new data. For each approach, I examine key techniques, their applications in policy contexts, and important methodological considerations.
I then consider critical perspectives on quantitative policy analysis framed around issues related to a three-part “data imperative” where governments are driven to count, gather, and learn from data. Each of these imperatives entails substantial issues related to privacy, accountability, democratic participation, and epistemic inequalities—issues at odds with public sector values of transparency and openness. I conclude by identifying some emerging trends in public sector-focused data science, inclusive ethical guidelines, open research practices, and future directions for the field…(More)”.
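The explanation-versus-prediction distinction in the chapter's second paragraph lends itself to a compact illustration: explanatory work reads off a coefficient on X, while predictive work scores accuracy on new Y. A hedged sketch on simulated data follows; the variable names, simulated effect size, and model choices are assumptions for exposition, not taken from the chapter:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)            # explanatory variable of policy interest
x2 = rng.normal(size=n)            # a control variable
y = 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)  # simulated outcome

# Explanation: estimate the relationship between X and Y; the object
# of interest is the coefficient on x1, not the fitted values.
X = sm.add_constant(np.column_stack([x1, x2]))
ols = sm.OLS(y, X).fit()
print(f"Estimated effect of x1: {ols.params[1]:.2f}")  # recovers ~2.0

# Prediction: the object of interest is accuracy on new data,
# not interpretable coefficients.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([x1, x2]), y)
print(f"Predicted y for a new case: {model.predict([[1.0, 0.0]])[0]:.2f}")
```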
Data Sharing: A Case-Study of Luxury Surveillance by Tesla
Paper by Marc Schuilenburg and Yarin Eski: “Why do people voluntarily give away their personal data to private companies? In this paper, we show how data sharing is experienced at the level of Tesla car owners. We regard Tesla cars as luxury surveillance goods whose drivers voluntarily choose to share their personal data with the US company. Based on an analysis of semi-structured interviews and observations of Tesla owners’ posts on Facebook groups, we discern three elements of luxury surveillance: socializing, enjoying and enduring. We conclude that luxury surveillance can be traced back to the social bonds created by a gift economy…(More)”.
Fostering Open Data
Paper by Uri Y. Hacohen: “Data is often heralded as “the world’s most valuable resource,” yet its potential to benefit society remains unrealized due to systemic barriers in both public and private sectors. While open data (defined as data that is available, accessible, and usable) holds immense promise to advance open science, innovation, economic growth, and democratic values, its utilization is hindered by legal, technical, and organizational challenges. Public sector initiatives, such as U.S. and European Union open data regulations, face uneven enforcement and regulatory complexity, disproportionately affecting under-resourced stakeholders such as researchers. In the private sector, companies prioritize commercial interests and user privacy, often obstructing data openness through restrictive policies and technological barriers. This article proposes an innovative, four-layered policy framework to overcome these obstacles and foster data openness. The framework includes (1) improving open data infrastructures, (2) ensuring legal frameworks for open data, (3) incentivizing voluntary data sharing, and (4) imposing mandatory data sharing obligations. Each policy cluster is tailored to address sector-specific challenges and balance competing values such as privacy, property, and national security. Drawing from academic research and international case studies, the framework provides actionable solutions to transition from a siloed, proprietary data ecosystem to one that maximizes societal value. This comprehensive approach aims to reimagine data governance and unlock the transformative potential of open data…(More)”.
Global data-driven prediction of fire activity
Paper by Francesca Di Giuseppe, Joe McNorton, Anna Lombardi & Fredrik Wetterhall: “Recent advancements in machine learning (ML) have expanded its potential use across scientific applications, including weather and hazard forecasting. The ability of these methods to extract information from diverse and novel data types enables the transition from forecasting fire weather to predicting actual fire activity. In this study we demonstrate that this shift is also feasible within an operational context. Traditional fire forecasting methods tend to over-predict high fire danger, particularly in fuel-limited biomes, often resulting in false alarms. By using data on fuel characteristics, ignitions, and observed fire activity, data-driven predictions reduce the false-alarm rate of high-danger forecasts, enhancing their accuracy. This is made possible by high-quality global datasets of fuel evolution and fire detection. We find that the quality of input data is more important for improving forecasts than the complexity of the ML architecture. While the focus on ML advancements is often justified, our findings highlight the importance of investing in high-quality data and, where necessary, creating it through physical models. Neglecting this aspect would undermine the potential gains from ML-based approaches, emphasizing that data quality is essential to achieve meaningful progress in fire activity forecasting…(More)”.
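The shift the paper describes, from fire-weather indices to predicting actual fire activity from fuel, ignition, and observed-fire data, can be sketched as a standard supervised-learning setup. A minimal illustration with synthetic data follows; the feature names, data-generating rule, and model choice are assumptions for exposition, not the authors' operational system:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(42)
n = 2000

# Synthetic grid cells: fire-weather danger, fuel load, and ignition
# density. Fire occurs mainly when danger AND fuel AND ignitions align,
# which is why weather-only forecasts over-predict in fuel-limited biomes.
danger = rng.uniform(0, 1, n)
fuel = rng.uniform(0, 1, n)
ignitions = rng.poisson(2, n)
fire = ((danger > 0.6) & (fuel > 0.5) & (ignitions > 1)).astype(int)

X = np.column_stack([danger, fuel, ignitions])
X_tr, X_te, y_tr, y_te = train_test_split(X, fire, random_state=0)

# Weather-only baseline: flag every high-danger cell as a fire risk.
baseline = (X_te[:, 0] > 0.6).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
preds = model.predict(X_te)

# Higher precision means fewer false alarms among flagged cells.
print(f"Weather-only precision: {precision_score(y_te, baseline):.2f}")
print(f"Fuel-aware model precision: {precision_score(y_te, preds):.2f}")
```

On this toy data the fuel-aware model flags far fewer false alarms than the danger-only rule, mirroring the paper's claim that richer input data, more than architectural complexity, drives the accuracy gain.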