Decoding human behavior with big data? Critical, constructive input from the decision sciences


Paper by Konstantinos V. Katsikopoulos and Marc C. Canellas: “Big data analytics employs algorithms to uncover people’s preferences and values, and support their decision making. A central assumption of big data analytics is that it can explain and predict human behavior. We investigate this assumption, aiming to enhance the knowledge base for developing algorithmic standards in big data analytics. First, we argue that big data analytics is by design atheoretical and does not provide process-based explanations of human behavior; thus, it is unfit to support deliberation that is transparent and explainable. Second, we review evidence from interdisciplinary decision science showing that the accuracy of complex algorithms used in big data analytics for predicting human behavior is not consistently higher than that of simple rules of thumb; rather, it is lower in situations such as predicting election outcomes, criminal profiling, and granting bail. Big data algorithms can be considered candidate models for explaining, predicting, and supporting human decision making when they match, in transparency and accuracy, simple, process-based, domain-grounded theories of human behavior. Big data analytics can be inspired by behavioral and cognitive theory….(More)”.
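The “simple rules of thumb” the authors benchmark against include fast-and-frugal heuristics such as tallying, which gives every cue equal weight. A minimal sketch of a tallying rule (illustrative only; the cues and data below are hypothetical, not drawn from the paper):

```python
# A tallying heuristic: every binary cue counts equally; predict the positive
# class when more than half of the cues point that way.
import numpy as np

def tally(cues: np.ndarray) -> np.ndarray:
    """cues: (n_cases, n_cues) binary array; returns 0/1 predictions."""
    return (cues.sum(axis=1) > cues.shape[1] / 2).astype(int)

# Five hypothetical cases scored on three hypothetical cues.
X = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 1, 1],
              [0, 1, 0],
              [1, 0, 1]])
print(tally(X))  # [1 0 1 0 1]
```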

Police surveillance and facial recognition: Why data privacy is an imperative for communities of color


Paper by Nicol Turner Lee and Caitlin Chin: “Governments and private companies have a long history of collecting data from civilians, often justifying the resulting loss of privacy in the name of national security, economic stability, or other societal benefits. But it is important to note that these trade-offs do not affect all individuals equally. In fact, surveillance and data collection have disproportionately affected communities of color under both past and current circumstances and political regimes.

From the historical surveillance of civil rights leaders by the Federal Bureau of Investigation (FBI) to the current misuse of facial recognition technologies, surveillance patterns often reflect existing societal biases and build upon harmful and vicious cycles. Facial recognition and other surveillance technologies also enable more precise discrimination, especially as law enforcement agencies continue to make misinformed predictive decisions around arrest and detainment that disproportionately impact marginalized populations.

In this paper, we present the case for stronger federal privacy protections with proscriptive guardrails for the public and private sectors to mitigate the high risks associated with the development and procurement of surveillance technologies. We also discuss the role of federal agencies in addressing the purposes and uses of facial recognition and other monitoring tools under their jurisdiction, as well as increased training for state and local law enforcement agencies to prevent the unfair or inaccurate profiling of people of color. We conclude the paper with a series of proposals that lean either toward clear restrictions on the use of surveillance technologies in certain contexts, or greater accountability and oversight mechanisms, including audits, policy interventions, and more inclusive technical designs….(More)”

Researcher Helps Create Big Data ‘Early Alarm’ for Ukraine Abuses


Article by Chris Carroll: “From searing images of civilians targeted by shelling to detailed accounts of sick children and their families fleeing nearby fighting to seek medical care, journalists have created a kaleidoscopic view of the suffering that has engulfed Ukraine since Russia invaded—but the news media can’t be everywhere.

Social media practically can be, however, and a University of Maryland researcher is part of a U.S.-Ukrainian multi-institutional team that’s harvesting data from Twitter and analyzing it with machine-learning algorithms. The result is a real-time system that provides a running account of what people in Ukraine are facing, constructed from their own accounts.

The project, Data for Ukraine, has been running for about three weeks, and has shown itself able to surface important events a few hours ahead of Western or even Ukrainian media sources. It focuses on four areas: humanitarian needs, displaced people, civilian resistance and human rights violations. In addition to simply showing spikes of credible tweets about certain subjects the team is tracking, the system also geolocates tweets—essentially mapping where events take place.
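The article does not detail how spikes are detected; a minimal sketch of one common approach, a rolling-baseline rule over daily topic counts (all numbers hypothetical):

```python
# Flag days whose tweet count far exceeds the recent baseline for a topic.
import pandas as pd

counts = pd.Series(
    [120, 130, 125, 140, 600, 150, 135],  # hypothetical daily counts for one topic
    index=pd.date_range("2022-03-01", periods=7),
)
# Baseline = median of the three preceding days (shifted so today is excluded).
baseline = counts.shift(1).rolling(window=3, min_periods=1).median()
spikes = counts[counts > 3 * baseline]
print(spikes)  # flags 2022-03-05 (600 tweets vs. a baseline of 130)
```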

“It’s an early alarm system for human rights abuses,” said Ernesto Calvo, professor of government and politics and director of UMD’s Inter-Disciplinary Lab for Computational Social Science. “For it to work, we need to know two basic things: what is happening or being reported, and who is reporting those things.”

Calvo and his lab focus on the second of those two requirements, and constructed a “community detection” system to identify key nodes of Twitter users from which to draw data. Other team members with expertise in Ukrainian society and politics provided him with a list of about 400 verified users who actively tweet on relevant topics. Then Calvo, who honed his approach analyzing social media from political and environmental crises in Latin America, and his team expanded and deepened the collection, drawing on connections and followers of the initial list so that millions of tweets per day now feed the system.
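The lab’s pipeline itself is not public, but the seed-expansion step Calvo describes can be sketched over a follow graph; the accounts and edges below are hypothetical, with edges running follower → followed:

```python
# Expand a seed list by one hop: accounts the seeds follow plus accounts
# following the seeds.
import networkx as nx

follows = [("a", "seed1"), ("b", "seed1"), ("seed1", "seed2"),
           ("c", "seed2"), ("c", "d"), ("seed2", "e")]
G = nx.DiGraph(follows)

seeds = {"seed1", "seed2"}
expanded = set(seeds)
for s in seeds:
    expanded |= set(G.successors(s)) | set(G.predecessors(s))
print(sorted(expanded))  # ['a', 'b', 'c', 'e', 'seed1', 'seed2']
```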

Nearly half of the captured tweets are in Ukrainian, 30% are in English and 20% are in Russian. Knowing who to exclude—accounts started the day before the invasion, for instance, or with few long-term connections—is key, Calvo said…(More)”.

The ethical imperative to identify and address data and intelligence asymmetries


Article by Stefaan Verhulst in AI & Society: “The insight that knowledge, resulting from having access to (privileged) information or data, is power is more relevant today than ever before. The data age has redefined the very notion of knowledge and information (as well as power), leading to a greater reliance on dispersed and decentralized datasets as well as to new forms of innovation and learning, such as artificial intelligence (AI) and machine learning (ML). As Thomas Piketty (among others) has shown, we live in an increasingly stratified world, and our society’s socio-economic asymmetries are often grafted onto data and information asymmetries. As we have documented elsewhere, data access is fundamentally linked to economic opportunity, improved governance, better science and citizen empowerment. The need to address data and information asymmetries—and their resulting inequalities of political and economic power—is therefore emerging as among the most urgent ethical challenges of our era, yet often not recognized as such.

Even as awareness grows of this imperative, society and policymakers lag in their understanding of the underlying issue. Just what are data asymmetries? How do they emerge, and what form do they take? And how do data asymmetries accelerate information and other asymmetries? What forces and power structures perpetuate or deepen these asymmetries, and vice versa? I argue that it is a mistake to treat this problem as homogenous. In what follows, I suggest the beginning of a taxonomy of asymmetries. Although closely related, each one emerges from a different set of contingencies, and each is likely to require different policy remedies. The focus of this short essay is to start outlining these different types of asymmetries. Further research could deepen and expand the proposed taxonomy as well as help define solutions that are contextually appropriate and fit for purpose….(More)”.

Big data, computational social science, and other recent innovations in social network analysis


Paper by David Tindall, John McLevey, Yasmin Koop-Monteiro, Alexander Graham: “While sociologists have studied social networks for about one hundred years, recent developments in data, technology, and methods of analysis provide opportunities for social network analysis (SNA) to play a prominent role in the new research world of big data and computational social science (CSS). In our review, we focus on four broad topics: (1) Collecting Social Network Data from the Web, (2) Non-traditional and Bipartite/Multi-mode Networks, including Discourse and Semantic Networks, and Social-Ecological Networks, (3) Recent Developments in Statistical Inference for Networks, and (4) Ethics in Computational Network Research…(More)”
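To make the second topic concrete: a bipartite (two-mode) network ties one node set to another, for example people to documents, and can be projected onto either mode. A minimal sketch with hypothetical names:

```python
# Build a person-document bipartite network, then project it onto the people,
# so two people become tied when they share a document.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
B.add_nodes_from(["ana", "ben", "cam"], bipartite=0)  # people
B.add_nodes_from(["doc1", "doc2"], bipartite=1)       # documents
B.add_edges_from([("ana", "doc1"), ("ben", "doc1"),
                  ("ben", "doc2"), ("cam", "doc2")])

people = {n for n, d in B.nodes(data=True) if d["bipartite"] == 0}
P = bipartite.projected_graph(B, people)
print(sorted(map(sorted, P.edges())))  # [['ana', 'ben'], ['ben', 'cam']]
```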

The Use of Artificial Intelligence as a Strategy to Analyse Urban Informality


Article by Agustina Iñiguez: “Within the Latin American and Caribbean region, it has been recorded that at least 25% of the population lives in informal settlements. Given that their expansion is one of the major problems afflicting these cities, an IDB-supported project shows how new technologies can help identify and detect these areas in order to intervene in them and reduce urban informality.

Informal settlements, also known as slums, shantytowns, camps or favelas, depending on the country in question, are uncontrolled settlements on land where, in many cases, the conditions for a dignified life are not in place. Composed largely of self-built dwellings, these sites are generally the product of a continuously growing housing deficit.

For decades, the possibility of collecting information about the Earth’s surface through satellite imagery has been contributing to the analysis and production of increasingly accurate and useful maps for urban planning. In this way, one can see not only the growth of cities but also the speed at which they are growing and the characteristics of their buildings.

Advances in artificial intelligence facilitate the processing of a large amount of information. When a satellite or aerial image is taken of a neighbourhood where a municipal team has previously demarcated informal areas, the image is processed by an algorithm that will identify the characteristic visual patterns of the area observed from space. The algorithm will then identify other areas with similar characteristics in other images, automatically recognising the districts where informality predominates. It is worth noting that while satellites are able to report both where and how informal settlements are growing, specialised equipment and processing infrastructure are also required…(More)”
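The article does not name the model, but the workflow it describes is supervised tile classification. A minimal sketch under that assumption, using a random forest over crude per-band statistics and synthetic stand-in imagery:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def tile_features(tile):
    """Per-band mean/std as crude stand-ins for the visual patterns in the text."""
    return np.concatenate([tile.mean(axis=(0, 1)), tile.std(axis=(0, 1))])

# Synthetic 32x32 RGB tiles; label 1 = inside a municipally demarcated informal area.
tiles = rng.random((200, 32, 32, 3))
labels = rng.integers(0, 2, 200)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.array([tile_features(t) for t in tiles]), labels)

# Score tiles from a new image; high scores flag candidate informal areas
# for the municipal team to verify on the ground.
new_tiles = rng.random((10, 32, 32, 3))
probs = clf.predict_proba(np.array([tile_features(t) for t in new_tiles]))[:, 1]
print(probs.round(2))
```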

The Staggering Ecological Impacts of Computation and the Cloud


Essay by Steven Gonzalez Monserrate: “While in technical parlance the “Cloud” might refer to the pooling of computing resources over a network, in popular culture, “Cloud” has come to signify and encompass the full gamut of infrastructures that make online activity possible, everything from Instagram to Hulu to Google Drive. Like a puffy cumulus drifting across a clear blue sky, refusing to maintain a solid shape or form, the Cloud of the digital is elusive, its inner workings largely mysterious to the wider public, an example of what MIT cybernetician Norbert Wiener once called a “black box.” But just as the clouds above us, however formless or ethereal they may appear to be, are in fact made of matter, the Cloud of the digital is also relentlessly material.

To get at the matter of the Cloud we must unravel the coils of coaxial cables, fiber optic tubes, cellular towers, air conditioners, power distribution units, transformers, water pipes, computer servers, and more. We must attend to its material flows of electricity, water, air, heat, metals, minerals, and rare earth elements that undergird our digital lives. In this way, the Cloud is not only material, but is also an ecological force. As it continues to expand, its environmental impact increases, even as the engineers, technicians, and executives behind its infrastructures strive to balance profitability with sustainability. Nowhere is this dilemma more visible than in the walls of the infrastructures where the content of the Cloud lives: the factory libraries where data is stored and computational power is pooled to keep our cloud applications afloat….

To quell this thermodynamic threat, data centers overwhelmingly rely on air conditioning, a mechanical process that refrigerates the gaseous medium of air, so that it can displace or lift perilous heat away from computers. Today, power-hungry computer room air conditioners (CRACs) or computer room air handlers (CRAHs) are staples of even the most advanced data centers. In North America, most data centers draw power from “dirty” electricity grids, especially in Virginia’s “data center alley,” the site of 70 percent of the world’s internet traffic in 2019. To cool, the Cloud burns carbon, what Jeffrey Moro calls an “elemental irony.” In most data centers today, cooling accounts for greater than 40 percent of electricity usage….(More)”.
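That 40 percent figure maps onto the industry’s standard efficiency metric, power usage effectiveness (PUE = total facility power ÷ IT power); the split below is assumed purely for illustration:

```python
# Back-of-the-envelope: a hypothetical 1 MW facility spending 40% of its power
# on cooling, 50% on IT equipment, and 10% on other overhead.
total_mw = 1.0
it_mw = 0.5 * total_mw
pue = total_mw / it_mw
print(f"PUE = {pue:.1f}")  # 2.0: every watt of computing buys a watt of overhead
```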

Launch of UN Biodiversity Lab 2.0: Spatial data and the future of our planet


Press Release: “…The UNBL 2.0 is a free, open-source platform that enables governments and others to access state-of-the-art maps and data on nature, climate change, and human development in new ways to generate insight for nature and sustainable development. It is freely available online to governments and other stakeholders as a digital public good…

The UNBL 2.0 release responds to a known global gap in spatial data and tools, providing nations around the world with an invaluable resource for taking transformative action. Users can now access over 400 of the world’s best available global spatial data layers; create secure workspaces to incorporate national data alongside global data; use curated data collections to generate insight for action; and more. Without specialized tools or training, decision-makers can leverage the power of spatial data to support priority-setting and the implementation of nature-based solutions. Dynamic metrics and indicators on the state of our planet are also available….(More)”.

AI, big data, and the future of consent


Paper by Adam J. Andreotta, Nin Kirkham & Marco Rizzi: “In this paper, we discuss several problems with current Big data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices, more broadly) and compare this with current Big data practices. We do so by first discussing three types of problems that can impede informed consent with respect to Big data use. First, we discuss the transparency (or explanation) problem. Second, we discuss the repurposed data problem. Third, we discuss the meaningful alternatives problem. In the final section of the paper, we suggest some solutions to these problems. In particular, we propose that the use of personal data for commercial and administrative objectives could be subject to a ‘soft governance’ ethical regulation, akin to the way that all projects involving human participants (e.g., social science projects, human medical data and tissue use) are regulated in Australia through the Human Research Ethics Committees (HRECs). We also consider alternatives to the standard consent forms, and privacy policies, that could make use of some of the latest research focussed on the usability of pictorial legal contracts…(More)”

Satellite Earth observation for sustainable rural development


A blog post by Peter Hargreaves: “…We find ourselves in a “golden age for satellite exploration”. ‘Big Data’ from satellite Earth observation – hereafter denoted ‘EO’ – could be an important part of the solution to the shortage of socioeconomic data required to inform several of the goals and targets that compose the United Nations (UN) Sustainable Development Goals (SDGs), in particular the goals that pertain to socioeconomic and human wellbeing dimensions of development. EO data could play a significant role in producing the transparent data system necessary to achieve sustainable development….

Census and nationally representative household surveys are the medium through which most socioeconomic data are collected. It is impossible to understand socioeconomic conditions without them – I cannot stress this enough. But they have limitations, particularly in terms of cost and spatio-temporal coverage. In an ideal world, we would vastly upscale the spatial and temporal reporting of these surveys to cover more places and points in time. But this mass enumeration would be prohibitively expensive and *logistically impossible*. Imagine the quantity of data produced and the burden placed upon National Statistics Offices (NSOs) and governmental institutions. The 2030 end point for the SDGs would be upon us before much of the data was processed, leaving very little time to use the outputs for policy.

This is where unconventional data enters the debate, and in this sphere – that of measuring socioeconomic conditions for development – EO data is unconventional. EO data has considerable potential to augment survey and census data for measuring poverty and development in rural spaces, especially during intercensal periods, and where ground data are patchy or non-existent. While on the subject, there is an important point to make: you can’t use EO to understand everything about a particular context. It does not matter how elaborate the model or how much effort is put in. Quite simply, EO cannot give you the full picture.

What EO *does* have is a five-decade temporal legacy (most platforms and data products are near continuous), and it is broadly open access with low to negligible acquisition costs. EO data is also available across multiple spatial resolutions and is often easily comparable and complementary. When we say, ‘five-decade temporal legacy’, this means that there are roughly 50 years of EO data (if we use the Landsat program as an anchor). Not all EO platforms have operated across the whole timeline – Figure 1 below offers an idea of when different platforms were launched and for how long they were, or have been, operational. What’s more, data will be increasingly available and accessible, catalysed by technological innovation and investment in public and private ventures. A lot of this data is open access e.g. EO platforms operated by NASA or the ESA Copernicus programme, which include Landsat, MODIS, AVHRR, VIIRS, and the Sentinels amongst others. Meanwhile, the availability of EO data across multiple spatial resolutions enables disaggregation of data alongside survey and census data for subnational monitoring of socioeconomic conditions….(More)”.
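One concrete route to this open-access archive is a programmatic query; a minimal sketch assuming an authenticated Google Earth Engine session (the post does not prescribe any particular platform, and the site coordinates are hypothetical):

```python
# Count low-cloud Landsat 8 scenes over a rural study site in 2020.
import ee

ee.Initialize()  # assumes `earthengine authenticate` has already been run

point = ee.Geometry.Point(32.58, 0.32)  # hypothetical site (lon, lat)
scenes = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")  # Landsat 8, Collection 2, Level 2
    .filterBounds(point)
    .filterDate("2020-01-01", "2020-12-31")
    .filter(ee.Filter.lt("CLOUD_COVER", 20))
)
print(scenes.size().getInfo(), "scenes over the site in 2020")
```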