Objectivity vs affect: how competing forms of legitimacy can polarize public debate in data-driven public consultation


Paper by Alison Powell: “How do data and objectivity become politicized? How do processes intended to include citizen voices instead push them into social media that intensify negative expression? This paper examines the possibility and limits of ‘agonistic data practices’ (Crooks & Currie, 2021), examining how data-driven consultation practices create competing forms of legitimacy for quantifiable knowledge and affective lived experience. Drawing on a two-year study of a private Facebook group self-presenting as a supportive space for working-class people critical of the development of ‘low-traffic neighbourhoods’ (LTNs), the paper reveals how the dynamics of ‘affective polarization’ associated the use of data with elite and exclusionary politics. Participants addressed this by framing their online contributions as ‘vernacular data’ and also by associating numerical data with exclusion and inequality. Over time, the strong statements of feeling began to support content of a conspiratorial nature, reflected at the social level of discourse in the broader media environment, where stories of strong feeling gain legitimacy in right-wing sources. The paper concludes that ideologies of dataism and practices of datafication may create conditions for political extremism to develop when the potential conditions of ‘agonistic data practices’ are not met, and that consultation processes must avoid overly valorizing data and calculable knowledge if they wish to retain democratic accountability…(More)”.

EBP+: Integrating science into policy evaluation using Evidential Pluralism


Article by Joe Jones, Alexandra Trofimov, Michael Wilde & Jon Williamson: “…While the need to integrate scientific evidence in policymaking is clear, there isn’t a universally accepted framework for doing so in practice. Orthodox evidence-based approaches take Randomised Controlled Trials (RCTs) as the gold standard of evidence. Others argue that social policy issues require theory-based methods to understand the complexities of policy interventions. These divisions may only further decrease trust in science at this critical time.

EBP+ offers a broader framework within which both orthodox and theory-based methods can sit. EBP+ also provides a systematic account of how to integrate and evaluate these different types of evidence. EBP+ can offer consistency and objectivity in policy evaluation, and could yield a unified approach that increases public trust in scientifically-informed policy…

EBP+ is motivated by Evidential Pluralism, a philosophical theory of causal enquiry that has been developed over the last 15 years. Evidential Pluralism encompasses two key claims. The first, object pluralism, says that establishing that A is a cause of B (e.g., that a policy intervention causes a specific outcome) requires establishing both that A and B are appropriately correlated and that there is some mechanism which links the two and which can account for the extent of the correlation. The second claim, study pluralism, maintains that assessing whether A is a cause of B requires assessing both association studies (studies that repeatedly measure A and B, together with potential confounders, to measure their association) and mechanistic studies (studies of features of the mechanisms linking A to B), where available…(More)”.

[Figure: A diagrammatic representation of Evidential Pluralism (© Jon Williamson)]
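
To make the two claims a little more concrete, here is a minimal, purely illustrative sketch of how an assessment combining association and mechanistic studies might be represented in code. It is not part of EBP+ or the authors' methodology; the class names, fields and decision rule are assumptions for illustration only.

```python
# Toy illustration of Evidential Pluralism's two claims; not EBP+ methodology.
# Field names, status flags and the decision rule are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class AssociationStudy:
    # repeatedly measures A and B (plus potential confounders) to estimate their association
    description: str
    correlation_established: bool

@dataclass
class MechanisticStudy:
    # examines features of a mechanism linking A to B
    description: str
    mechanism_supported: bool

def assess_causal_claim(association: List[AssociationStudy],
                        mechanistic: List[MechanisticStudy]) -> str:
    """Object pluralism: a causal claim needs both correlation and mechanism.
    Study pluralism: both kinds of study are assessed, where available."""
    correlated = any(s.correlation_established for s in association)
    mechanism = any(s.mechanism_supported for s in mechanistic)
    if correlated and mechanism:
        return "established (correlation and mechanism both supported)"
    if correlated or mechanism:
        return "provisional (only one evidential route supported)"
    return "not established"

claim = assess_causal_claim(
    [AssociationStudy("RCT of the policy intervention", True)],
    [MechanisticStudy("process evaluation tracing the intervention's mechanism", True)],
)
print(claim)  # established (correlation and mechanism both supported)
```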

Citizen Jury on New Genomic Techniques


Paper by Kai P. Purnhagen and Alexandra Molitorisova: “Between 26 and 28 January 2024, a citizen jury was convened at Schloss Thurnau in Upper Franconia, Germany, to deliberate on new genomic techniques (NGTs) used in agriculture and food/feed production, ahead of the vote of the European Parliament and the Council of the European Union on the European Commission’s proposal for a regulation on plants obtained by certain NGTs and their food and feed. This report serves as a policy brief containing all observations, assessments, and recommendations agreed by the jury with a minimum of 75 percent of the jurors’ votes. It aims to provide policymakers, stakeholders, and the public with the perspectives and considerations surrounding the use of NGTs in agriculture and food/feed production, as articulated by the members of the jury. The jury produced 18 final recommendations. Through thoughtful analysis and dialogue, the jury sought to contribute to informed decision-making processes…(More)”.

Central banks use AI to assess climate-related risks


Article by Huw Jones: “Central bankers said on Tuesday they have broken new ground by using artificial intelligence to collect data for assessing climate-related financial risks, just as the volume of disclosures from banks and other companies is set to rise.

The Bank for International Settlements, a forum for central banks, the Bank of Spain, Germany’s Bundesbank and the European Central Bank said their experimental Gaia AI project was used to analyse company disclosures on carbon emissions, green bond issuance and voluntary net-zero commitments.

Regulators of banks, insurers and asset managers need high-quality data to assess the impact of climate change on financial institutions. However, the absence of a single reporting standard confronts them with a patchwork of public information spread across text, tables and footnotes in annual reports.

Gaia was able to overcome differences in definitions and disclosure frameworks across jurisdictions to offer much-needed transparency, and make it easier to compare indicators on climate-related financial risks, the central banks said in a joint statement.

Despite variations in how companies report the same data, Gaia focuses on the definition of each indicator rather than on how the data is labelled.

Furthermore, with the traditional approach, each additional key performance indicator, or KPI, and each new institution requires the analyst to either search for the information in public corporate reports or contact the institution for information…(More)”.
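
The article does not describe how Gaia is implemented, but the general idea of anchoring extraction to indicator definitions rather than to company-specific labels can be sketched with a toy example. The indicator names, synonym lists and sample disclosures below are invented for illustration; this is not the Gaia system.

```python
# Toy illustration of definition-driven indicator matching; not the Gaia project.
# Canonical indicators, synonyms and the sample disclosures are assumptions.
INDICATOR_DEFINITIONS = {
    "scope_1_emissions_tco2e": ["scope 1 emissions", "direct ghg emissions"],
    "green_bonds_issued_eur_m": ["green bond issuance", "green bonds issued"],
    "net_zero_target_year": ["net-zero commitment", "net zero target"],
}

def extract_indicators(disclosure: dict) -> dict:
    """Map a company's own labels onto canonical indicators by definition."""
    extracted = {}
    for label, value in disclosure.items():
        for indicator, synonyms in INDICATOR_DEFINITIONS.items():
            if any(s in label.lower() for s in synonyms):
                extracted[indicator] = value
    return extracted

# Two companies reporting the same quantities under different labels
bank_a = {"Direct GHG emissions (tCO2e)": 120_000, "Green bond issuance (EUR m)": 500}
bank_b = {"Scope 1 emissions": 87_500, "Net zero target": 2045}

print(extract_indicators(bank_a))
print(extract_indicators(bank_b))
```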

Ukrainians Are Using an App to Return Home


Article by Yuliya Panfil and Allison Price: “Two years into Russia’s invasion of Ukraine, the human toll continues to mount. At least 11 million people have been displaced by heavy bombing, drone strikes, and combat, and well over a million homes have been damaged or destroyed. But just miles from the front lines of what is a conventional land invasion, something decidedly unconventional has been deployed to help restore Ukrainian communities.

Thousands of families whose homes have been hit by Russian shelling are using their smartphones to file compensation claims, access government funds, and begin to rebuild their homes. This innovation is part of eRecovery, the world’s first-ever example of a government compensation program for damaged or destroyed homes rolled out digitally, at scale, in the midst of a war. It’s one of the ways in which Ukraine’s tech-savvy government and populace have leaned into digital solutions to help counter Russian aggression with resilience and a speedier approach to reconstruction and recovery.

According to Ukraine’s Housing, Land and Property Technical Working Group, since its launch last summer, eRecovery has processed more than 83,000 compensation claims for damaged or destroyed property and paid out more than 45,000. In addition, more than half a million Ukrainians have taken the first step in the compensation process by filing a property damage report through Ukraine’s e-government platform, Diia. eRecovery’s potential to transform the way governments get people back into their homes following a war, natural disaster, or other calamity is hard to overstate…(More)”.

Unconventional data, unprecedented insights: leveraging non-traditional data during a pandemic


Paper by Kaylin Bolt et al: “The COVID-19 pandemic prompted new interest in non-traditional data sources to inform response efforts and mitigate knowledge gaps. While non-traditional data offers some advantages over traditional data, it also raises concerns related to biases, representativity, informed consent and security vulnerabilities. This study focuses on three specific types of non-traditional data: mobility, social media, and participatory surveillance platform data. Qualitative results are presented on the successes, challenges, and recommendations of key informants who used these non-traditional data sources during the COVID-19 pandemic in Spain and Italy….

Non-traditional data proved valuable in providing rapid results and filling data gaps, especially when traditional data faced delays. Increased data access and innovative collaborative efforts across sectors facilitated its use. Challenges included unreliable access and data quality concerns, particularly the lack of comprehensive demographic and geographic information. To further leverage non-traditional data, participants recommended prioritizing data governance, establishing data brokers, and sustaining multi-institutional collaborations. The value of non-traditional data was perceived as underutilized in public health surveillance, program evaluation and policymaking. Participants saw opportunities to integrate them into public health systems with the necessary investments in data pipelines, infrastructure, and technical capacity…(More)”.

Public sector capacity matters, but what is it?


Blog by Rainer Kattel, Mariana Mazzucato, Rosie Collington, Fernando Fernandez-Monge, Iacopo Gronchi, Ruth Puttick: “As governments turn increasingly to public sector innovations, challenges, missions and transformative policy initiatives, the need to understand and develop public sector capacities is ever more important. In IIPP’s project with Bloomberg Philanthropies to develop a Public Sector Capabilities Index, we propose to define public sector capacities through three inter-connected layers: state capacities, organisational capabilities, and the dynamic capabilities of public organisations.

The idea that governments should be able to design and deliver effective policies has existed ever since we have had governments. A quick search in Google’s Ngram viewer shows that the use of ‘state capacity’ in published books has experienced exponential growth since the late 1980s. It is, however, not a coincidence that the focus on state and, more broadly, public sector capacities emerges in the shadow of new public management and neoliberal governance and policy reforms. Rather than understanding governance as a collaborative effort between all sectors, these reforms gave normative preference to business practices. Increasing focus on public sector capacity as a concept should thus be understood as an attempt to rebalance our understanding of how change happens in societies — through cross-sectoral co-creation — and as an effort to build the muscles in public organisations to work together to tackle socio-economic challenges.

We propose to define public sector capacities through three inter-connected layers: state capacities, organisational routines, and the dynamic capabilities of public organisations…(More)”.

How will AI shape our future cities?


Article by Ying Zhang: “For city planners, a bird’s-eye view of a map showing buildings and streets is no longer enough. They need to simulate changes to bus routes or traffic light timings before implementation to know how they might affect the population. Now, they can do so with digital twins – often referred to as a “mirror world” – which allow them to simulate scenarios more safely and cost-effectively through a three-dimensional virtual replica.

Cities such as New York, Shanghai and Helsinki are already using digital twins. In 2022, the city of Zurich launched its own version. Anyone can use it to measure the height of buildings, determine the shadows they cast and take a look into the future to see how Switzerland’s largest city might develop. Traffic congestion, a housing shortage and higher energy demands are becoming pressing issues in Switzerland, where 74% of the population already lives in urban areas.

But updating and managing digital twins will become more complex as population densities and the levels of detail increase, according to architect and urban designer Aurel von Richthofen of the consultancy Arup.

The world’s current urban planning models are like “individual silos” where “data cannot be shared, which makes urban planning not as efficient as we expect it to be”, said von Richthofen at a recent event hosted by the Swiss innovation network Swissnex. …

The underlying data is key to whether a digital twin city is effective. But getting access to quality data from different organisations is extremely difficult. Sensors, drones and mobile devices may collect data in real time. But they tend to be organised around different knowledge domains – such as land use, building control, transport or ecology – each with its own data collection culture and physical models…(More)”.
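
As a purely illustrative aside, the kind of "what if" scenario testing the article describes can be sketched with a toy model. The junction model and every number below are assumptions made up for illustration; they are not drawn from Zurich's (or any city's) digital twin.

```python
# Toy "what if" comparison of two traffic-light timings; all values are assumptions.

def simulate_queue(arrival_rate: float, green_share: float,
                   discharge_rate: float = 30.0, minutes: int = 60) -> float:
    """Return the average queue length at a signalised junction (toy model)."""
    queue = 0.0
    total = 0.0
    for _ in range(minutes):
        queue += arrival_rate                  # cars joining the queue this minute
        served = discharge_rate * green_share  # cars leaving during the green phase
        queue = max(queue - served, 0.0)
        total += queue
    return total / minutes

baseline = simulate_queue(arrival_rate=16, green_share=0.5)
longer_green = simulate_queue(arrival_rate=16, green_share=0.6)
print(f"average queue, baseline timing: {baseline:.1f} cars")
print(f"average queue, longer green:    {longer_green:.1f} cars")
```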

The Radical How


Report by Public Digital: “…We believe in the old adage about making the most of a crisis. We think the constraints facing the next government provide an unmissable opportunity to change how government works for the better.

Any mission-focused government should be well equipped to define, from day one, what outcomes it wants to bring about.

But radically changing what the government does is only part of the challenge. We also need to change how government does things. The usual methods, we argue in this paper, are too prone to failure and delay.

There’s a different approach to public service organisation, one based on multidisciplinary teams, starting with citizen needs, and scaling iteratively by testing assumptions. We’ve been arguing in favour of it for years now, and the more it gets used, the more we see success and timely delivery.

We think taking a new approach makes it possible to shift government from an organisation of programmes and projects, to one of missions and services. It offers even constrained administrations an opportunity to improve their chances of delivering outcomes, reducing risk, saving money, and rebuilding public trust…(More)”.

i.AI Consultation Analyser


New Tool by AI.Gov.UK: “Public consultations are a critical part of the process of making laws, but analysing consultation responses is complex and very time consuming. Working with the No10 data science team (10DS), the Incubator for Artificial Intelligence (i.AI) is developing a tool to make the process of analysing public responses to government consultations faster and fairer.

The Analyser uses AI and data science techniques to automatically extract patterns and themes from the responses, and turns them into dashboards for policy makers.

The goal is for computers to do what they are best at: finding patterns and analysing large amounts of data. That means humans are free to do the work of understanding those patterns.

[Screenshot: donut chart of respondents who agree or disagree, and a bar chart showing the popularity of prevalent themes]
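
The announcement does not detail the Analyser's methods. As a hedged illustration of the general technique, one common way to surface themes from free-text responses is to cluster them and count cluster sizes, as in this minimal scikit-learn sketch; the sample responses and the number of themes are invented, and this is not the i.AI implementation.

```python
# Minimal, generic theme-extraction sketch over consultation responses.
# Illustrative only; not the i.AI Consultation Analyser's actual pipeline.
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "I support the proposal because it will improve local air quality.",
    "The change would hurt small businesses that rely on passing trade.",
    "Cleaner air matters most for children near the main road.",
    "Please consider the cost to shops and delivery drivers.",
    "I disagree; enforcement cameras feel like surveillance.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

k = 3  # number of themes; in practice chosen by inspection or model selection
km = KMeans(n_clusters=k, n_init=10, random_state=0)
labels = km.fit_predict(X)

theme_counts = Counter(labels)               # how prevalent each theme is
terms = vectorizer.get_feature_names_out()
for theme in range(k):
    top = km.cluster_centers_[theme].argsort()[::-1][:3]
    print(f"theme {theme} ({theme_counts[theme]} responses):",
          ", ".join(terms[i] for i in top))
```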

Government runs 700-800 consultations a year on matters of importance to the public. Some are very small, but a large consultation might attract hundreds of thousands of written responses.

A consultation attracting 30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report. And it’s not unheard of to get double that number.

If we can apply automation in a way that is fair, effective and accountable, we could save most of that £80m…(More)”.