How Differential Privacy Will Affect Estimates of Air Pollution Exposure and Disparities in the United States


Article by Madalsa Singh: “Census data is crucial to understand energy and environmental justice outcomes such as poor air quality, which disproportionately impacts people of color in the U.S. With the advent of sophisticated personal datasets and analysis, the Census Bureau is considering adding top-down noise (differential privacy) and post-processing 2020 census data to reduce the risk of identification of individual respondents. Using the 2010 demonstration census and pollution data, I find that compared to the original census, the differentially private (DP) census significantly changes ambient pollution exposure in areas with sparse populations. White Americans have the lowest variability, followed by Latino, Asian, and Black Americans. DP underestimates pollution disparities for SO2 and PM2.5 while overestimating the pollution disparities for PM10…(More)”.
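The mechanism the article refers to can be illustrated with a toy sketch. The snippet below applies Laplace noise to population counts (the Census Bureau's actual TopDown algorithm is far more elaborate; the counts, epsilon value, and post-processing here are invented for illustration) and shows why sparse areas see the largest relative distortion:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count, epsilon):
    """Add Laplace noise calibrated to a count query (sensitivity 1),
    then post-process to a non-negative integer, loosely mimicking a
    top-down DP release."""
    noisy = true_count + rng.laplace(scale=1.0 / epsilon)
    return max(0, round(noisy))

# Hypothetical sparse vs. dense block groups at the same privacy budget:
epsilon = 0.5
sparse_true, dense_true = 12, 4800
sparse_noisy = [dp_count(sparse_true, epsilon) for _ in range(1000)]
dense_noisy = [dp_count(dense_true, epsilon) for _ in range(1000)]

# The noise scale is the same in absolute terms, so relative error is far
# larger where population is sparse -- consistent with the article's finding
# that DP shifts estimated pollution exposure most in low-density areas.
rel_err_sparse = np.mean([abs(n - sparse_true) / sparse_true for n in sparse_noisy])
rel_err_dense = np.mean([abs(n - dense_true) / dense_true for n in dense_noisy])
print(rel_err_sparse, rel_err_dense)
```

Because exposure estimates weight pollution surfaces by these counts, noise that is negligible for a dense tract can swamp the signal in a sparse one.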

How civic capacity gets urban social innovations started


Article by Christof Brandtner: “After President Trump withdrew from the Paris Climate Accords, several hundred mayors signed national and global treaties announcing their commitments to “step up and do more,” as a senior official of the City of New York told me in a poorly lit room in 2017. Cities were rushing to the forefront of adopting practices and policies to address contemporary social and environmental problems, such as climate change.

What the general enthusiasm masked is significant variation in the extent and speed at which cities adopt these innovations…My study of the geographic dispersion of green buildings certified with the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) rating system, published in the American Journal of Sociology, suggests that the organizational communities within cities play a significant role in adopting urban innovations. Cities with robust civic capacity, where values-oriented organizations actively address social problems, are more likely to adopt new practices quickly and extensively. Civic capacity matters not only through structural channels, as a sign of ample resources and community social capital, but also through organizational channels. Values-oriented organizations are often early adopters of new practices, such as green construction, solar panels, electric vehicles, or equitable hiring practices. By creating proofs of concept, these early adopters can serve as catalysts of municipal policies and widespread adoption…(More)”.

How Would You Defend the Planet From Asteroids? 


Article by Mahmud Farooque, Jason L. Kessler: “On September 26, 2022, NASA successfully smashed a spacecraft into a tiny asteroid named Dimorphos, altering its orbit. Although it was 6.8 million miles from Earth, the Double Asteroid Redirection Test (DART) was broadcast in real time, turning the impact into a rare pan-planetary moment accessible from smartphones around the world.

For most people, the DART mission was the first glimmer—outside of the movies—that NASA was seriously exploring how to protect Earth from asteroids. Rightly famous for its technological prowess, NASA is less recognized for its social innovations. But nearly a decade before DART, the agency had launched the Asteroid Grand Challenge. In a pioneering approach to public engagement, the challenge brought citizens together to weigh in on how the taxpayer-funded agency might approach some technical decisions involving asteroids. 

The following account of how citizens came to engage with strategies for planetary defense—and the unexpected conclusions they reached—is based on the experiences of NASA employees, members of the Expert and Citizen Assessment of Science and Technology (ECAST) network, and forum participants…(More)”.

The Metaverse and Homeland Security


Report by Timothy Marler, Zara Fatima Abdurahaman, Benjamin Boudreaux, and Timothy R. Gulden: “The metaverse is an emerging concept and capability supported by multiple underlying emerging technologies, but its meaning and key characteristics can be unclear and will likely change over time. Thus, its relevance to some organizations, such as the U.S. Department of Homeland Security (DHS), can be unclear. This lack of clarity can lead to unmitigated threats and missed opportunities. It can also inhibit healthy public discourse and effective technology management generally. To help address these issues, this Perspective provides an initial review of the metaverse concept and how it might be relevant to DHS. As a critical first step with the analysis of any emerging technology, the authors review current definitions and identify key practical characteristics. Often, regardless of a precise definition, it is the fundamental capabilities that are central to discussion and management. Then, given a foundational understanding of what a metaverse entails, the authors summarize primary goals and relevant needs for DHS. Ultimately, in order to be relevant, technologies must align with actual needs for various organizations or users. By cross-walking exemplary DHS needs that stem from a variety of mission sets with pervasive characteristics of metaverses, the authors demonstrate that metaverses are, in fact, relevant to DHS. Finally, the authors identify specific threats and opportunities that DHS could proactively manage. Although this work focuses the discussion of threats and opportunities on DHS, it has broad implications. This work provides a foundation on which further discussions and research can build, minimizing disparities and discoordination in development and policy…(More)”.

Yes, No, Maybe? Legal & Ethical Considerations for Informed Consent in Data Sharing and Integration


Report by Deja Kemp, Amy Hawn Nelson, & Della Jenkins: “Data sharing and integration are increasingly commonplace at every level of government, as cross-program and cross-sector data provide valuable insights to inform resource allocation, guide program implementation, and evaluate policies. Data sharing, while routine, is not without risks, and clear legal frameworks for data sharing are essential to mitigate those risks, protect privacy, and guide responsible data use. In some cases, federal privacy laws offer clear consent requirements and outline explicit exceptions where consent is not required to share data. In other cases, the law is unclear or silent regarding whether consent is needed for data sharing. Importantly, consent can present both ethical and logistical challenges, particularly when integrating cross-sector data. This brief will frame out key concepts related to consent; explore major federal laws governing the sharing of administrative data, including individually identifiable information; and examine important ethical implications of consent, particularly in cases when the law is silent or unclear. Finally, this brief will outline the foundational role of strong governance and consent frameworks in ensuring ethical data use and offer technical alternatives to consent that may be appropriate for certain data uses…(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—by learning patterns from pre-existing data.
It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.

A Hiring Law Blazes a Path for A.I. Regulation


Article by Steve Lohr: “European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”…(More)” – See also AI Localism: Governing AI at the Local Level

Boston Isn’t Afraid of Generative AI


Article by Beth Simone Noveck: “After ChatGPT burst on the scene last November, some government officials raced to prohibit its use. Italy banned the chatbot. New York City, Los Angeles Unified, Seattle, and Baltimore School Districts either banned or blocked access to generative AI tools, fearing that ChatGPT, Bard, and other content generation sites could tempt students to cheat on assignments, induce rampant plagiarism, and impede critical thinking. This week, US Congress heard testimony from Sam Altman, CEO of OpenAI, and AI researcher Gary Marcus as it weighed whether and how to regulate the technology.

In a rapid about-face, however, a few governments are now embracing a less fearful and more hands-on approach to AI. New York City Schools chancellor David Banks announced yesterday that NYC is reversing its ban because “the knee jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” And yesterday, City of Boston chief information officer Santiago Garces sent guidelines to every city official encouraging them to start using generative AI “to understand their potential.” The city also enabled Google Bard as part of its enterprise-wide deployment of Google Workspace, so that all public servants have access.

The “responsible experimentation approach” adopted in Boston—the first policy of its kind in the US—could, if used as a blueprint, revolutionize the public sector’s use of AI across the country and cause a sea change in how governments at every level approach AI. By promoting greater exploration of how AI can be used to improve government effectiveness and efficiency, and by focusing on how to use AI for governance instead of only how to govern AI, the Boston approach might help to reduce alarmism and focus attention on how to use AI for social good…(More)”.

Can Mobility of Care Be Identified From Transit Fare Card Data? A Case Study In Washington D.C.


Paper by Daniela Shuman, et al: “Studies in the literature have found significant differences in travel behavior by gender on public transit that are largely attributable to household and care responsibilities falling disproportionately on women. While the majority of studies have relied on survey and qualitative data to assess “mobility of care”, we propose a novel data-driven workflow utilizing transit fare card transactions, name-based gender inference, and geospatial analysis to identify mobility of care trip making. We find that the share of women travelers trip-chaining in the direct vicinity of mobility of care places of interest is 10–15% higher than men…(More)”.
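The workflow the authors describe can be caricatured in a few lines. The sketch below flags fare cards whose consecutive taps imply a mid-journey stopover near a care-related place of interest; all of the data, the dwell-time thresholds, and the precomputed `near_care_poi` flag are invented for illustration (the actual study performs the geospatial buffering and name-based gender inference on real fare card data):

```python
from datetime import datetime, timedelta

# Hypothetical fare-card taps: (card_id, inferred_gender, tap_time, near_care_poi).
# "near_care_poi" marks taps at stops within a buffer of schools, clinics,
# grocery stores, etc. -- the geospatial step, precomputed here for illustration.
taps = [
    ("A1", "F", datetime(2023, 5, 1, 8, 0), False),
    ("A1", "F", datetime(2023, 5, 1, 8, 40), True),   # tap near a school...
    ("A1", "F", datetime(2023, 5, 1, 9, 25), False),  # ...next tap 45 min later
    ("B2", "M", datetime(2023, 5, 1, 8, 5), False),
    ("B2", "M", datetime(2023, 5, 1, 8, 20), False),  # 15-min gap: ordinary transfer
]

def care_trip_chains(taps, min_dwell=timedelta(minutes=25), max_dwell=timedelta(minutes=90)):
    """Flag card IDs whose consecutive taps imply a stopover near a
    care-related place of interest (a candidate mobility-of-care trip chain)."""
    flagged = set()
    last_tap = {}  # card_id -> (time, near_care_poi) of the previous tap
    for card, gender, t, near in sorted(taps, key=lambda x: x[2]):
        prev = last_tap.get(card)
        if prev is not None:
            prev_t, prev_near = prev
            # A dwell long enough for an errand, but short enough to still
            # be one chained journey rather than two separate trips.
            if prev_near and min_dwell <= t - prev_t <= max_dwell:
                flagged.add(card)
        last_tap[card] = (t, near)
    return flagged

print(care_trip_chains(taps))  # prints {'A1'}
```

Aggregating such flags by inferred gender is what lets the paper compare trip-chaining rates near care destinations between women and men.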

How a small news site built an innovative data project to visualise the impact of climate change on Uruguay’s capital


Interview by Marina Adami: “La ciudad sumergida (The submerged city), an investigation produced by Uruguayan science and technology news site Amenaza Roboto, is one of the winners of this year’s Sigma Awards for data journalism. The project uses maps of the country’s capital, Montevideo, to create impressive visualisations of the impact sea-level rise is predicted to have on the city and its infrastructure. The project is a first of its kind for Uruguay, a small South American country in which data journalism is still a novelty. It is also a good example of a way news outlets can investigate and communicate the disastrous effects of climate change in local communities.

I spoke to Miguel Dobrich, a journalist, educator and digital entrepreneur who worked on the project together with colleagues Gabriel Farías, Natalie Aubet and Nahuel Lamas, to find out what lessons other outlets can take from this project and from Amenaza Roboto’s experiments with analysing public data, collaborating with scientists, and keeping the focus on their communities…(More)”.