The ethical imperative to identify and address data and intelligence asymmetries


Article by Stefaan Verhulst in AI & Society: “The insight that knowledge, resulting from having access to (privileged) information or data, is power is more relevant today than ever before. The data age has redefined the very notion of knowledge and information (as well as power), leading to a greater reliance on dispersed and decentralized datasets as well as to new forms of innovation and learning, such as artificial intelligence (AI) and machine learning (ML). As Thomas Piketty (among others) has shown, we live in an increasingly stratified world, and our society’s socio-economic asymmetries are often grafted onto data and information asymmetries. As we have documented elsewhere, data access is fundamentally linked to economic opportunity, improved governance, better science and citizen empowerment. The need to address data and information asymmetries—and their resulting inequalities of political and economic power—is therefore emerging as among the most urgent ethical challenges of our era, yet often not recognized as such.

Even as awareness grows of this imperative, society and policymakers lag in their understanding of the underlying issue. Just what are data asymmetries? How do they emerge, and what form do they take? And how do data asymmetries accelerate information and other asymmetries? What forces and power structures perpetuate or deepen these asymmetries, and vice versa? I argue that it is a mistake to treat this problem as homogeneous. In what follows, I suggest the beginning of a taxonomy of asymmetries. Although closely related, each one emerges from a different set of contingencies, and each is likely to require different policy remedies. The focus of this short essay is to start outlining these different types of asymmetries. Further research could deepen and expand the proposed taxonomy as well as help define solutions that are contextually appropriate and fit for purpose….(More)”.

When Launching a Collaboration, Keep It Agile


Essay by the Stakeholder Alignment Collaborative: “Conventional wisdom holds that large-scale societal challenges require large-scale responses. By contrast, we argue that progress on major societal challenges can and often should begin with small, agile initiatives—minimum viable consortia (MVC)—that learn and adapt as they build the scaffolding for large-scale change. MVCs can address societal challenges by overcoming institutional inertia, opposition, capability gaps, and other barriers because they require less energy for activation, reveal dead ends early on, and can more easily adjust and adapt over time.

Large-scale societal challenges abound, and organizations and institutions are increasingly looking for ways to deal with them. For example, the National Academy of Engineering (NAE) has identified 14 Grand Societal Challenges for “sustaining civilization’s continuing advancement while still improving the quality of life” in the 21st century. They include making solar energy economical, developing carbon sequestration methods, advancing health informatics, and securing cyberspace. The United Nations has set 17 Sustainable Development Goals (SDGs) to achieve by 2030 for a better future for humanity. They include everything from eliminating hunger to reducing inequality.

Tackling such universal goals requires large-scale cooperation, because existing organizations and institutions simply do not have the ability to resolve these challenges independently. Further note that the NAE’s announcement of the challenges stated that “governmental and institutional, political and economic, and personal and social barriers will repeatedly arise to impede the pursuit of solutions to problems.” The United Nations included two enabling SDGs: “peace, justice, and strong institutions” and “partnership for the goals.” The question is how to bring such large-scale partnerships and institutional change into existence.

We are members of the Stakeholder Alignment Collaborative, a research consortium of scholars at different career stages, spanning multiple fields and disciplines. We study collaboration collaboratively and maintain a very flat structure. We have published on multistakeholder consortia associated with science and provided leadership and facilitation for the launch and sustainment of many of these consortia. Based on our research into the problem of developing large-scale, multistakeholder partnerships, we believe that MVCs provide an answer.

MVCs are less vulnerable to the many barriers to large-scale solutions, better able to forge partnerships, and offer a more agile framework for making needed adjustments. To demonstrate these points, we focus on examples of MVCs in the domain of scientific research data and computing infrastructure. Research data are essential for virtually all societal challenges, and an upsurge of multistakeholder consortia has occurred in this domain. But the MVC concept is not limited to these challenges, nor to digitally oriented settings. We have chosen this sphere because it offers a diversity of MVC examples for illustration….(More)”. (See also “The Potential and Practice of Data Collaboratives for Migration“).

Mapping of exposed water tanks and swimming pools based on aerial images can help control dengue


Press Release by Fundação de Amparo à Pesquisa do Estado de São Paulo: “Brazilian researchers have developed a computer program that locates swimming pools and rooftop water tanks in aerial photographs with the aid of artificial intelligence to help identify areas vulnerable to infestation by Aedes aegypti, the mosquito that transmits dengue, zika, chikungunya and yellow fever. 

The innovation, which can also be used as a public policy tool for dynamic socio-economic mapping of urban areas, resulted from research and development work by professionals at the University of São Paulo (USP), the Federal University of Minas Gerais (UFMG) and the São Paulo State Department of Health’s Endemic Control Superintendence (SUCEN), as part of a project supported by FAPESP. An article about it is published in the journal PLOS ONE

“Our work initially consisted of creating a model based on aerial images and computer science to detect water tanks and pools, and to use them as a socio-economic indicator,” said Francisco Chiaravalloti Neto, last author of the article. He is a professor in the Epidemiology Department at USP’s School of Public Health (FSP), with a first degree in engineering. 

As the article notes, previous research had already shown that dengue tends to be most prevalent in deprived urban areas, so that prevention of dengue, zika and other diseases transmitted by the mosquito can be made considerably more effective by use of a relatively dynamic socio-economic mapping model, especially given the long interval between population censuses in Brazil (ten years or more). 

“This is one of the first steps in a broader project,” Chiaravalloti Neto said. Among other aims, he and his team plan to detect other elements of the images and quantify real infestation rates in specific areas so as to be able to refine and validate the model. 

“We want to create a flow chart that can be used in different cities to pinpoint at-risk areas without the need for inspectors to call on houses, buildings and other breeding sites, as this is time-consuming and a waste of the taxpayer’s money,” he added…(More)”.
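The press release does not include code, but the detection step it describes can be illustrated with a brief, hedged sketch: an off-the-shelf torchvision detector whose prediction head is resized for two classes. The class names, the checkpoint path, and the confidence threshold below are illustrative assumptions, not details of the USP/UFMG/SUCEN model described in the PLOS ONE article.

```python
# Illustrative sketch only: the study's actual architecture and training data
# are not described in the excerpt. Assumes a torchvision Faster R-CNN whose
# box predictor has been resized for two hypothetical classes.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.transforms.functional import convert_image_dtype

CLASSES = ["__background__", "water_tank", "pool"]  # assumed label set

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))
# model.load_state_dict(torch.load("aerial_finetuned.pt"))  # hypothetical fine-tuned weights
model.eval()

def detect(image_path: str, threshold: float = 0.5):
    """Return (label, score, box) triples for one aerial image tile."""
    img = convert_image_dtype(read_image(image_path), torch.float)
    with torch.no_grad():
        out = model([img])[0]
    return [
        (CLASSES[int(label)], score.item(), box.tolist())
        for label, score, box in zip(out["labels"], out["scores"], out["boxes"])
        if score >= threshold
    ]
```

Counts of detections per image tile could then feed the kind of dynamic socio-economic indicator the researchers describe, with field inspection data used to validate predicted infestation risk.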

The Use of Artificial Intelligence as a Strategy to Analyse Urban Informality


Article by Agustina Iñiguez: “Within the Latin American and Caribbean region, at least 25% of the population is recorded as living in informal settlements. Given that their expansion is one of the major problems afflicting the region’s cities, a project supported by the IDB shows how new technologies can contribute to identifying and detecting these areas in order to intervene in them and help reduce urban informality.

Informal settlements, also known as slums, shantytowns, camps or favelas, depending on the country in question, are uncontrolled settlements on land where, in many cases, the conditions for a dignified life are not in place. Through self-built dwellings, these sites are generally the result of the continuous growth of the housing deficit.

For decades, satellite imagery of the Earth’s surface has contributed to the analysis and production of increasingly accurate and useful maps for urban planning. These maps reveal not only the growth of cities but also the speed at which they are growing and the characteristics of their buildings.

Advances in artificial intelligence facilitate the processing of large amounts of information. When a satellite or aerial image is taken of a neighbourhood where a municipal team has previously demarcated informal areas, the image is processed by an algorithm that identifies the characteristic visual patterns of the area observed from space. The algorithm then identifies other areas with similar characteristics in other images, automatically recognising the districts where informality predominates. It is worth noting that while satellites are able to report both where and how informal settlements are growing, specialised equipment and processing infrastructure are also required…(More)”.
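As a rough illustration of this workflow, one could learn the visual signature of the municipally demarcated areas from labelled image tiles and then score unlabelled tiles elsewhere. The feature extractor and classifier below are assumptions made for the sketch, not the method actually used in the IDB-supported project.

```python
# Hedged sketch: learn the visual patterns of tiles inside municipally
# demarcated informal areas, then flag similar-looking tiles in new imagery.
import torch
from torchvision.models import resnet18, ResNet18_Weights
from sklearn.linear_model import LogisticRegression

weights = ResNet18_Weights.DEFAULT
preprocess = weights.transforms()   # resizing/normalisation the backbone expects
backbone = resnet18(weights=weights)
backbone.fc = torch.nn.Identity()   # keep 512-d features, drop the classifier head
backbone.eval()

def embed(tiles):
    """Map a list of (C, H, W) uint8 image tensors to 512-d feature vectors."""
    with torch.no_grad():
        batch = torch.stack([preprocess(t) for t in tiles])
        return backbone(batch).numpy()

def fit_and_score(labelled_tiles, labels, new_tiles):
    """labels: 1 = inside a demarcated informal area, 0 = formal urban fabric."""
    clf = LogisticRegression(max_iter=1000).fit(embed(labelled_tiles), labels)
    return clf.predict_proba(embed(new_tiles))[:, 1]  # informality score per tile
```

In practice, the polygons drawn by municipal teams would supply the labels, and the per-tile scores would be aggregated into maps of the districts where informality likely predominates.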

Eight reasons responsible data for and about children matters


Article by Stefaan Verhulst and Andrew Young: “…The relationship between the datafication of everyday life and child welfare has generally been under-explored, both by researchers in data ethics and those who work to advance the rights of children. This neglect is a lost opportunity, and also poses a risk to children, who are in many ways at the forefront of the steady incursions of data into our lives. In what follows, over a series of two articles, we outline eight reasons why child welfare advocates should pay more attention to data, and why we need a framework for responsible data collection and use for children….(Part 1) and (Part 2)”. (See also Responsible Data for Children).

A new data deal: the case of Barcelona


Paper by Fernando Monge, Sarah Barns, Rainer Kattel and Francesca Bria: “Cities today are key sites for the operation of global digital marketplaces. It is at the curbsides and the intersections of cities where global digital platforms gain access to valuable urban data to be used in the delivery of data-driven urban services. Signalling an emerging role for city governments in contributing to regulatory responses to global digital platforms, a number of cities have in recent years tested their capacity to reclaim the urban data that is ‘harvested’ and monetised by digital platforms for improved local governance and participation. Focusing on the City of Barcelona, this paper investigates the conditions that enabled Barcelona to pivot from its strong focus on attracting commercial platforms under the rubric of smart city programs, to becoming one of the leading advocates of a citizen-first data rights and data sovereignty agenda. Through a series of interviews with key participants involved in the design and implementation of Barcelona’s data sovereignty program under Mayor Ada Colau, the paper examines the policy and governance instruments deployed by the city to regain access and control over data and discusses the challenges and tensions it faced during the implementation phases of the program. Finally, the paper presents the main lessons of the Barcelona experience for other cities, including a reflection on the role that cities can play in shaping a global agenda around improved data governance….(More)”.

Advancing Digital Agency: The Power of Data Intermediaries


World Economic Forum Report: “With the integration of screenless technology into everyday life, the data ecosystem is growing increasingly complicated. New ambient data collection methods bring many benefits, but they also have the potential to amplify mistrust between people and technology.

In this Insight Report, the World Economic Forum’s Taskforce on Data Intermediaries explores the potential to outsource human decision points to an agent acting on an individual’s behalf, in the form of a data intermediary.

The opportunities and risks of such a new approach are explored, representing one of many new policy anchors through and around which individuals may navigate new data ecosystem models. Levers of action for both the public and private sectors are suggested to ensure a future-proof digital policy environment that allows for the seamless and trusted movement of data between people and the technology that serves them…(More)”.

Shared Measures: Collective Performance Data Use in Collaborations


Paper by Alexander Kroll: “Traditionally, performance metrics and data have been used to hold organizations accountable. But public service provision is not merely hierarchical anymore. Increasingly, we see partnerships among government agencies, private or nonprofit organizations, and civil society groups. Such collaborations may also use goals, measures, and data to manage group efforts; however, the application of performance practices here will likely follow a different logic. This Element introduces the concepts of “shared measures” and “collective data use” to add collaborative, relational elements to existing performance management theory. It draws on a case study of collaboratives in North Carolina that were established to develop community responses to the opioid epidemic. To explain the use of shared performance measures and data within these collaboratives, this Element studies the role of factors such as group composition, participatory structures, social relationships, distributed leadership, group culture, and value congruence…(More)”.

Public bodies’ access to private sector data: The perspectives of twelve European local administrations


Article by Marina Micheli: “Public bodies’ access to private sector data of public interest (also referred to as business-to-government (B2G) data sharing) is still an emerging and sporadic practice. The article discusses the findings of a qualitative study that examined B2G data sharing in European local administrations. Drawing from semi-structured interviews with managers and project leaders of twelve municipalities, the study contextualizes access to private sector data in the perspectives of those working in the field. The findings examine the four operational models for accessing data that featured most prominently in the interviews: data donorship, public procurement of data, data partnerships and pools, and data sharing obligations. The analysis highlights the power imbalances embedded in B2G data sharing as perceived by representatives of local administrations. In particular, the findings address the gap between municipalities in the opportunities to access private sector data of public interest, the lack of negotiating power of local administrations vis-à-vis private sector data holders and the strategies envisioned to foster more inclusive forms of data governance…(More)”.

Data Federalism


Article by Bridget A. Fahey: “Private markets for individual data have received significant and sustained attention in recent years. But data markets are not for the private sector alone. In the public sector, the federal government, states, and cities gather data no less intimate and on a scale no less profound. And our governments have realized what corporations have: It is often easier to obtain data about their constituents from one another than to collect it directly. As in the private sector, these exchanges have multiplied the data available to every level of government for a wide range of purposes, complicated data governance, and created a new source of power, leverage, and currency between governments.

This Article provides an account of this vast and rapidly expanding intergovernmental marketplace in individual data. In areas ranging from policing and national security to immigration and public benefits to election management and public health, our governments exchange data both by engaging in individual transactions and by establishing “data pools” to aggregate the information they each have and diffuse access across governments. Understanding the breadth of this distinctly modern practice of data federalism has descriptive, doctrinal, and normative implications.

In contrast to conventional cooperative federalism programs, Congress has largely declined to structure and regulate intergovernmental data exchange. And in Congress’s absence, our governments have developed unorthodox cross-governmental administrative institutions to manage data flows and oversee data pools, and these sprawling, unwieldy institutions are as important as the usual cooperative initiatives to which federalism scholarship typically attends.

Data exchanges can also go wrong, and courts are not prepared to navigate the ways that data is both at risk of being commandeered and ripe for use as coercive leverage. I argue that the constitutional doctrines governing commandeering and coercion can and should be adapted to police the exchange of data. I finally place data federalism in a normative frame and argue that data is a form of governmental power so unlike the paradigmatic ones our federalism is believed to distribute that it has the potential to unsettle federalism in both function and theory…(More)”.