Constructing Valid Geospatial Tools for Environmental Justice


Report from the National Academies of Sciences, Engineering, and Medicine: “Decades of research have shown that the most disadvantaged communities exist at the intersection of high levels of hazard exposure, racial and ethnic marginalization, and poverty.

Mapping and geographic information systems have been crucial for analyzing the environmental burdens of marginalized communities, and several federal and state geospatial tools have emerged to help address environmental justice concerns — such as the Climate and Economic Justice Screening Tool, developed in 2022 in response to the Biden administration's Justice40 Initiative.

Constructing Valid Geospatial Tools for Environmental Justice, a new report from the National Academies of Sciences, Engineering, and Medicine, offers recommendations for developing environmental justice tools that reflect the experiences of the communities they measure.

The report recommends data strategies focused on community engagement, validation, and documentation. It emphasizes using a structured development process and offers guidance for selecting and assessing indicators, integrating indicators, and incorporating cumulative impact scoring. Tool developers should choose measures of economic burden beyond the federal poverty level that account for additional dimensions of wealth and geographic variations in cost of living. They should also use indicators that measure the impacts of racism in policies and practices that have led to current disparities…(More)”.

Using AI to Map Urban Change


Brief by Tianyuan Huang, Zejia Wu, Jiajun Wu, Jackelyn Hwang, Ram Rajagopal: “Cities are constantly evolving, and better understanding those changes facilitates better urban planning and infrastructure assessments and leads to more sustainable social and environmental interventions. Researchers currently use data such as satellite imagery to study changing urban environments and what those changes mean for public policy and urban design. But flaws in the current approaches, such as inadequately granular data, limit their scalability and their potential to inform public policy across social, political, economic, and environmental issues.

Street-level images offer an alternative source of insights. These images are frequently updated and high-resolution. They also directly capture what’s happening on a street level in a neighborhood or across a city. Analyzing street-level images has already proven useful to researchers studying socioeconomic attributes and neighborhood gentrification, both of which are essential pieces of information in urban design, sustainability efforts, and public policy decision-making for cities. Yet, much like other data sources, street-level images present challenges: accessibility limits, shadow and lighting issues, and difficulties scaling up analysis.

To address these challenges, our paper “CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series” introduces a multicity dataset of labeled street-view images and proposes a novel artificial intelligence (AI) model to detect urban changes such as gentrification. We demonstrate the change-detection model’s effectiveness by testing it on images from Seattle, Washington, and show that it can provide important insights into urban changes over time and at scale. Our data-driven approach has the potential to allow researchers and public policy analysts to automate and scale up their analysis of neighborhood and citywide socioeconomic change…(More)”.
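The CityPulse model itself is not reproduced here, but the core idea of detecting change from a street-view time series can be sketched in miniature: embed each capture of a location with a feature extractor, then flag consecutive pairs whose embedding distance exceeds a threshold. Everything below is an illustrative assumption, not the paper's method: the `embed` stub stands in for a learned vision model, and the threshold is arbitrary.

```python
import math

def embed(image_pixels):
    """Stand-in feature extractor. A real pipeline would use a learned
    vision model; here we reduce an image to simple pixel statistics."""
    n = len(image_pixels)
    mean = sum(image_pixels) / n
    var = sum((p - mean) ** 2 for p in image_pixels) / n
    return [mean, math.sqrt(var)]

def change_score(img_a, img_b):
    """Euclidean distance between embeddings of two captures of one location."""
    return math.dist(embed(img_a), embed(img_b))

def detect_changes(time_series, threshold=10.0):
    """Flag consecutive capture pairs whose embedding distance exceeds threshold.

    time_series: list of (year, image_pixels) for one location, in time order.
    Returns the (year_before, year_after) pairs where a change was detected.
    """
    flags = []
    for (t0, img0), (t1, img1) in zip(time_series, time_series[1:]):
        if change_score(img0, img1) > threshold:
            flags.append((t0, t1))
    return flags

# Toy series: a large visual change (e.g. a facade renovation) between
# the 2019 and 2021 captures shifts the pixel values substantially.
series = [
    (2017, [100] * 16),
    (2019, [102] * 16),
    (2021, [160] * 16),
    (2023, [161] * 16),
]
print(detect_changes(series))  # → [(2019, 2021)]
```

In practice the threshold would be learned or validated against labeled pairs (as the paper's labeled multicity dataset allows), rather than hand-set as here.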

Visualization for Public Involvement


Report by the National Academies of Sciences, Engineering, and Medicine: “Visualization methods have long been integral to the public involvement process for transportation planning and project development. From well-established methods such as conceptual sketches or photo simulations to the latest immersive technologies, state departments of transportation (DOTs) recognize that visualizations can significantly increase public understanding of a project’s appearance and physical impacts. Emerging methods such as interactive three-dimensional environments, virtual reality, and augmented reality can dramatically enhance public understanding of transportation options and design concepts…(More)”.

Policy fit for the future: the Australian Government Futures primer


Primer by Will Hartigan and Arthur Horobin: “Futures is a systematic exploration of probable, possible and preferable future developments to inform present-day policy, strategy and decision-making. It uses multiple plausible scenarios of the future to anticipate and make sense of disruptive change. It is also known as strategic foresight...

This primer provides an overview of Futures methodologies and their practical application to policy development and advice. It is a first step for policy teams and officers interested in Futures: providing you with a range of flexible tools, ideas and advice you can adapt to your own policy challenges and environments.

This primer was developed by the Policy Projects and Taskforce Office in the Department of the Prime Minister and Cabinet. We have drawn on expertise from inside and outside of government, including through our project partners, the Futures Hub at the National Security College at the Australian National University.

This primer has been written by policy officers, for policy officers, with a focus on practical and tested approaches that can support you to create policy fit for the future…(More)”.

Automating public services


Report by Anna Dent: “…Public bodies, under financial stress and looking for effective solutions, are at risk of jumping on the automation bandwagon without critically assessing whether it’s actually appropriate for their needs, and whether the potential benefits outweigh the risks. To realise the benefits of automation and minimise problems for communities and public bodies themselves, a clear-eyed approach which really gets to grips with the risks is needed. 

The temptation to introduce automation to tackle complex social challenges is strong; they are often deep-rooted and expensive to deal with, and can have life-long implications for individuals and communities. But precisely because of their complex nature they are not the best fit for rules-based automated processes, which may fail to deliver what they set out to achieve. 

Bias is increasingly recognised as a critical challenge with automation in the public sector. Bias can be introduced through training data, and can occur when automated tools are disproportionately used on a particular community. In either case, the effectiveness of the tool or process is undermined, and citizens are at risk of discrimination, unfair targeting and exclusion from services. 

Automated tools and processes rely on huge amounts of data; in public services this will often mean personal information and data about our lives that we may or may not feel comfortable having used. Balancing everyone’s right to privacy with the desire for efficiency and better outcomes is rarely straightforward, and if done badly can lead to a breakdown in trust…(More)”.

The double-edged sword of AI in education


Article by Rose Luckin: “Artificial intelligence (AI) could revolutionize education as profoundly as the internet has already revolutionized our lives. However, our experience with commercial internet platforms gives us pause. Consider how social media algorithms, designed to maximize engagement and ad revenue, have inadvertently promoted divisive content and misinformation, a development at odds with educational goals.

Like the commercialization of the internet, the AI consumerization trend, driven by massive investments across sectors, prioritizes profit over societal and educational benefits. This focus on monetization risks overshadowing crucial considerations about AI’s integration into educational contexts.

The consumerization of AI in education is a double-edged sword. While increasing accessibility, it could also undermine fundamental educational principles and reshape students’ attitudes toward learning. We must advocate for a thoughtful, education-centric approach to AI development that enhances, rather than replaces, human intelligence and recognises the value of effort in learning.

As generative AI systems for education emerge, technical experts and policymakers have a unique opportunity to ensure their design supports the interests of learners and educators.

Risk 1: Overestimating AI’s intelligence

In essence, learning is not merely an individual cognitive process but a deeply social endeavor, intricately linked to cultural context, language development, and the dynamic relationship between practical experience and theoretical knowledge…(More)”.

The impact of data portability on user empowerment, innovation, and competition


OECD Note: “Data portability enhances access to and sharing of data across digital services and platforms. It can empower users to play a more active role in the re-use of their data and can help stimulate competition and innovation by fostering interoperability while reducing switching costs and lock-in effects. However, the effectiveness of data portability in enhancing competition depends on the terms and conditions of data transfer and the extent to which competitors can make use of the data effectively. Additionally, there are potential downsides: data portability measures may unintentionally stifle competition in fast-evolving markets where interoperability requirements may disproportionately burden SMEs and start-ups. Data portability can also increase digital security and privacy risks by enabling data transfers to multiple destinations. This note presents the following five dimensions essential for designing and implementing data portability frameworks: sectoral scope; beneficiaries; type of data; legal obligations; and operational modality…(More)”.

Reliability of U.S. Economic Data Is in Jeopardy, Study Finds


Article by Ben Casselman: “A report says new approaches and increased spending are needed to ensure that government statistics remain dependable and free of political influence.

Federal Reserve officials use government data to help determine when to raise or lower interest rates. Congress and the White House use it to decide when to extend jobless benefits or send out stimulus payments. Investors place billions of dollars worth of bets that are tied to monthly reports on job growth, inflation and retail sales.

But a new study says the integrity of that data is in increasing jeopardy.

The report, issued on Tuesday by the American Statistical Association, concludes that government statistics are reliable right now. But that could soon change, the study warns, citing factors including shrinking budgets, falling survey response rates and the potential for political interference.

The authors — statisticians from George Mason University, the Urban Institute and other institutions — likened the statistical system to physical infrastructure like highways and bridges: vital, but often ignored until something goes wrong.

“We do identify this sort of downward spiral as a threat, and that’s what we’re trying to counter,” said Nancy Potok, who served as chief statistician of the United States from 2017 to 2019 and was one of the report’s authors. “We’re not there yet, but if we don’t do something, that threat could become a reality, and in the not-too-distant future.”

The report, “The Nation’s Data at Risk,” highlights the threats facing statistics produced across the federal government, including data on education, health, crime and demographic trends.

But the risks to economic data are particularly notable because of the attention it receives from policymakers and investors. Most of that data is based on surveys of households or businesses. And response rates to government surveys have plummeted in recent years, as they have for private polls. The response rate to the Current Population Survey — the monthly survey of about 60,000 households that is the basis for the unemployment rate and other labor force statistics — has fallen to about 70 percent in recent months, from nearly 90 percent a decade ago…(More)”.

Drivers of Trust in Public Institutions


Press Release: “In an increasingly challenging environment – marked by successive economic shocks, rising protectionism, the war in Europe and ongoing conflicts in the Middle East, as well as structural challenges and disruptions caused by rapid technological developments, climate change and population aging – 44% of respondents now have low or no trust in their national government, surpassing the 39% of respondents who express high or moderately high trust in national government, according to a new OECD report.  

OECD Survey on Drivers of Trust in Public Institutions – 2024 Results presents findings from the second OECD Trust Survey, conducted in October and November 2023 across 30 member countries. The biennial report offers a comprehensive analysis of current trust levels and their drivers across countries and public institutions. 

This edition of the Trust Survey confirms the previous finding that socio-economic and demographic factors, as well as a sense of having a say in decision making, affect trust. For example, 36% of women reported high or moderately high trust in government, compared to 43% of men. The most significant drop in trust since 2021 is seen among women and those with lower levels of education. The trust gap is largest between those who feel they have a say and those who feel they do not have a say in what the government does. Among those who report they have a say, 69% report high or moderately high trust in their national government, whereas among those who feel they do not, only 22% do…(More)”.

Big Tech-driven deliberative projects


Report by Canning Malkin and Nardine Alnemr: “Google, Meta, OpenAI and Anthropic have commissioned projects based on deliberative democracy. What was the purpose of each project? How was deliberation designed and implemented, and what were the outcomes? In this Technical Paper, Malkin and Alnemr describe the commissioning context, the purpose and remit, and the outcomes of these deliberative projects. Finally, they offer insights on contextualising projects within the broader aspirations of deliberative democracy…(More)”.