People Have a Right to Climate Data


Article by Justin S. Mankin: “As a climate scientist documenting the multi-trillion-dollar price tag of the climate disasters shocking economies and destroying lives, I sometimes field requests from strategic consultants, financial investment analysts and reinsurers looking for climate data, analysis and computer code.

Often, they want to chat about my findings or have me draw out the implications for their businesses, like the time a risk analyst from BlackRock, the world’s largest asset manager, asked me to help with research on what the current El Niño, a cyclical climate pattern, means for financial markets.

These requests make sense: People and companies want to adapt to the climate risks they face from global warming. But these inquiries are also part of the wider commodification of climate science. Venture capitalists are injecting hundreds of millions of dollars into climate intelligence as they build out a rapidly growing business of climate analytics — the data, risk models, tailored analyses and insights people and institutions need to understand and respond to climate risks.

I point companies to our freely available data and code at the Dartmouth Climate Modeling and Impacts Group, which I run, but turn down additional requests for customized assessments. I regard climate information as a public good and fear contributing to a world in which information about the unfolding risks of droughts, floods, wildfires, extreme heat and rising seas is hidden behind paywalls. People and companies who can afford private risk assessments will rent, buy and establish homes and businesses in safer places than the billions of others who can’t, compounding disadvantage and leaving the most vulnerable among us exposed.

Despite this, global consultants, climate and agricultural technology start-ups, insurance companies and major financial firms are all racing to meet the ballooning demand for information about climate dangers and how to prepare for them. While a lot of this information is public, it is often voluminous, technical and not particularly useful for people trying to evaluate their personal exposure. Private risk assessments fill that gap — but at a premium. The climate risk analytics market is expected to grow to more than $4 billion globally by 2027.

I don’t mean to suggest that the private sector should not be involved in furnishing climate information. That’s not realistic. But I worry that an overreliance on the private sector to provide climate adaptation information will hollow out publicly provided climate risk science, and that means we all will pay: the well-off with money, the poor with lives…(More)”.

A tale of two cities: one real, one virtual


Joy Lo Dico in the Financial Times: “In recent years, digital city-building has become a legitimate part of urban planning. Barcelona, Cambridge and Helsinki are among a number of European cities exploring how copies of themselves could prove useful in making their built environments sharper, faster, cleaner and greener.

What exists in real life is being rendered a second time in the digital space: creating a library of the past, an eagle’s-eye view of the present and, potentially, a vision of the future.

One of the most striking projects has been happening in Ukraine, where technology company Skeiron has, since 2022, been mapping the country’s monuments, under threat from bombing.

The project #SaveUkrainianHeritage has recorded 60 buildings, from the St Sofia Cathedral in Kyiv and the Chernivtsi National University — both Unesco world heritage sites — to wooden churches across the country, something Skeiron’s co-founder Yurii Prepodobnyi mentions with pride. There are thousands of them. “Some are only 20 or 30 square metres,” he says. “But Ukrainian churches keep Ukrainian identity.”

With laser measurements, drone photography and photogrammetry — the art of stitching photographs together — Prepodobnyi and his team can produce highly detailed 3D models.

They have even managed to recreate the exterior of the Mariupol drama theatre, destroyed in the early days of the Ukraine war, after calling for photographs and drone footage.

Another project, in Pompeii, has been using similar digital techniques to capture the evolution of excavations into a 3D model. The Pompeii I. 14 Project, led by Tulane University and Indiana State University, takes the process of excavating buildings within one block of Pompeii, Insula 14, and turns it into a digital representation. Using laser measurements, iPad Pros, a consumer drone and handheld cameras, a space can be measured to within a couple of millimetres. What emerges is a visual record of how a room changes over thousands of years, as the debris, volcanic deposits and layers of life that went before are revealed…(More)”.

Privacy-Enhancing and Privacy-Preserving Technologies: Understanding the Role of PETs and PPTs in the Digital Age


Paper by the Centre for Information Policy Leadership: “The paper explores how organizations are approaching privacy-enhancing technologies (“PETs”) and how PETs can advance data protection principles, and provides examples of how specific types of PETs work. It also explores potential challenges to the use of PETs and possible solutions to those challenges.

CIPL emphasizes the enormous potential inherent in these technologies to mitigate privacy risks and support innovation, and recommends a number of steps to foster further development and adoption of PETs. In particular, CIPL calls for policymakers and regulators to incentivize the use of PETs through clearer guidance on key legal concepts that impact the use of PETs, and by adopting a pragmatic approach to the application of these concepts.

CIPL’s recommendations towards wider adoption are as follows:

  • Issue regulatory guidance and incentives regarding PETs: Official regulatory guidance addressing PETs in the context of specific legal obligations or concepts (such as anonymization) will incentivize greater investment in PETs.
  • Increase education and awareness about PETs: PET developers and providers need to show tangible evidence of the value of PETs and help policymakers, regulators and organizations understand how such technologies can facilitate responsible data use.
  • Develop industry standards for PETs: Industry standards would help facilitate interoperability for the use of PETs across jurisdictions and help codify best practices to support technical reliability to foster trust in these technologies.
  • Recognize PETs as a demonstrable element of accountability: PETs complement robust data privacy management programs and should be recognized as an element of organizational accountability…(More)”.

Testing the Assumptions of the Data Revolution


Report by TRENDS: “Ten years have passed since the release of A World that Counts and the formal adoption of the Sustainable Development Goals (SDGs). This seems an appropriate time for national governments and the global data community to reflect on where progress has been made so far. 

This report supports this objective in three ways: it evaluates the assumptions that underpin A World that Counts’ core hypothesis that the data revolution would lead to better outcomes across the 17 SDGs, it summarizes where and how we have made progress, and it identifies knowledge gaps related to each assumption. These knowledge gaps will serve as the foundation for the next phase of the SDSN TReNDS research program, guiding our exploration of emerging data-driven paradigms and their implications for the SDGs. By analyzing these assumptions, we can consider how SDSN TReNDs and other development actors might adapt their activities to a new set of circumstances in the final six years of the SDG commitments.

Given that the 2030 Agenda established a 15-year timeframe for SDG attainment, it is to be expected that some of A World that Counts’ key assumptions would fall short or require recalibration along the way. Unforeseen events such as the COVID-19 pandemic would inevitably shift global attention and priorities away from the targets set out in the SDG framework, at least temporarily…(More)”.

Tackling Today’s Data Dichotomy: Unveiling the Paradox of Abundant Supply and Restricted Access in the Quest for Social Equity


Article by Stefaan Verhulst: “…One of the ironies of this moment, however, is that an era of unprecedented supply is simultaneously an era of constricted access to data. Much of the data we generate is privately “owned,” hidden away in private or public sector silos, or otherwise inaccessible to those who are most likely to benefit from it or generate valuable insights. These restrictions on access are grafted onto existing socioeconomic inequalities, driven by broader patterns of exclusion and marginalization, and also exacerbating them. Critically, restricted or unequal access to data does not only harm individuals: it causes untold public harm by limiting the potential of data to address social ills. It also limits attempts to improve the output of AI both in terms of bias and trustworthiness.

In this paper, we outline two potential approaches that could help address—or at least mitigate—the harms: social licensing and a greater role for data stewards. While not comprehensive solutions, we believe that these represent two of the most promising avenues to introduce greater efficiencies into how data is used (and reused), and thus lead to more targeted, responsive, and responsible policymaking…(page 22-25)”.

Ground Truths Are Human Constructions


Article by Florian Jaton: “Artificial intelligence algorithms are human-made, cultural constructs, something I saw first-hand as a scholar and technician embedded with AI teams for 30 months. Among the many concrete practices and materials these algorithms need in order to come into existence are sets of numerical values that enable machine learning. These referential repositories are often called “ground truths,” and when computer scientists construct or use these datasets to design new algorithms and attest to their efficiency, the process is called “ground-truthing.”

Understanding how ground-truthing works can reveal inherent limitations of algorithms—how they enable the spread of false information, pass biased judgments, or otherwise erode society’s agency—and this could also catalyze more thoughtful regulation. As long as ground-truthing remains clouded and abstract, society will struggle to prevent algorithms from causing harm and to optimize algorithms for the greater good.

Ground-truth datasets define AI algorithms’ fundamental goal of reliably predicting and generating a specific output—say, an image with requested specifications that resembles other input, such as web-crawled images. In other words, ground-truth datasets are deliberately constructed. As such, they, along with their resultant algorithms, are limited and arbitrary and bear the sociocultural fingerprints of the teams that made them…(More)”.

Generative AI for economic research: Use cases and implications for economists  


Paper by Anton Korinek: “…This article describes use cases of modern generative AI to interested economic researchers based on the author’s exploration of the space. The main emphasis is on LLMs, which are the type of generative AI that is currently most useful for research. I have categorized their use cases into six areas: ideation and feedback, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions for how to take advantage of each of these capabilities and demonstrate them using specific examples. Moreover, I classify the capabilities of the most commonly used LLMs from experimental to highly useful to provide an overview. My hope is that this paper will be a useful guide both for researchers starting to use generative AI and for expert users who are interested in new use cases beyond what they already have experience with to take advantage of the rapidly growing capabilities of LLMs. The online resources associated with this paper are available at the journal website and will provide semi-annual updates on the capabilities and use cases of the most advanced generative AI tools for economic research. In addition, they offer a guide on “How do I start?” as well as a page with “Useful Resources on Generative AI for Economists.”…(More)”

The Branding Dilemma of AI: Steering Towards Efficient Regulation


Blog by Zeynep Engin: “…Undoubtedly, the term ‘Artificial Intelligence’ has captured the public imagination, proving to be an excellent choice from a marketing standpoint (particularly serving the marketing goals of big AI tech companies). However, this has not been without its drawbacks. The field has experienced several ‘AI winters’ when lofty promises failed to translate into real-world outcomes. More critically, this term has anthropomorphized what are, at their core, high-dimensional statistical optimization processes. Such representation has obscured their true nature and the extent of their potential. Moreover, as computing capacities have expanded exponentially, the ability of these systems to process large datasets quickly and precisely, identifying patterns autonomously, has often been misinterpreted as evidence of human-like or even superhuman intelligence. Consequently, AI systems have been elevated to almost mystical status, perceived as incomprehensible to humans and, thus, uncontrollable by humans…

A profound shift in the discourse surrounding AI is urgently necessary. The quest to replicate or surpass human intelligence, while technologically fascinating, does not fully encapsulate the field’s true essence and progress. Indeed, AI has seen significant advances, uncovering a vast array of functionalities. However, its core strength still lies in computational speed and precision — a mechanical prowess. The ‘magic’ of AI truly unfolds when this computational capacity intersects with the wealth of real-world data generated by human activities and the environment, transforming human directives into computational actions. Essentially, we are now outsourcing complex processing tasks to machines, moving beyond crafting bespoke solutions for each problem in favour of leveraging the vast computational resources we have. This transition does not yield an ‘artificial intelligence’, but poses a new challenge to human intelligence in the knowledge creation cycle: the responsibility to formulate the ‘right’ questions and vigilantly monitor the outcomes of such intricate processing, ensuring the mitigation of any potential adverse impacts…(More)”.

The Data Revolution and the Study of Social Inequality: Promise and Perils


Paper by Mario L. Small: “The social sciences are in the midst of a revolution in access to data, as governments and private companies have accumulated vast digital records of rapidly multiplying aspects of our lives and made those records available to researchers. The accessibility and comprehensiveness of the data are unprecedented. How will the data revolution affect the study of social inequality? I argue that the speed, breadth, and low cost with which large-scale data can be acquired promise a dramatic transformation in the questions we can answer, but this promise can be undercut by size-induced blindness, the tendency to ignore important limitations amidst a source with billions of data points. The likely consequences for what we know about the social world remain unclear…(More)”.

In shaping AI policy, stories about social impacts are just as important as expert information


Blog by Daniel S. Schiff and Kaylyn Jackson Schiff: “Will artificial intelligence (AI) save the world or destroy it? Will it lead to the end of manual labor and an era of leisure and luxury, or to more surveillance and job insecurity? Is it the start of a revolution in innovation that will transform the economy for the better? Or does it represent a novel threat to human rights?

Irrespective of what turns out to be the truth, what our key policymakers believe about these questions matters. It will shape how they think about the underlying problems that AI policy is aiming to address, and which solutions are appropriate to do so. …In late 2021, we ran a study to better understand the impact of policy narratives on the behavior of policymakers. We focused on US state legislators,…

In our analysis, we found something surprising. We measured whether legislators were more likely to engage with a message featuring a narrative or featuring expert information, which we assessed by seeing if they clicked on a given fact sheet/story or clicked to register for or attended the webinar.

Despite the importance attached to technical expertise in AI circles, we found that narratives were at least as persuasive as expert information. Receiving a narrative emphasizing, say, growing competition between the US and China, or the faulty arrest of Robert Williams due to facial recognition, led to a 30 percent increase in legislator engagement compared to legislators who only received basic information about the civil society organization. These narratives were just as effective as more neutral, fact-based information about AI with accompanying fact sheets…(More)”