Launch of UN Biodiversity Lab 2.0: Spatial data and the future of our planet


Press Release: “…The UNBL 2.0 is a free, open-source platform that enables governments and others to access state-of-the-art maps and data on nature, climate change, and human development in new ways to generate insight for nature and sustainable development. It is freely available online to governments and other stakeholders as a digital public good…

The UNBL 2.0 release responds to a known global gap in the types of spatial data and tools, providing an invaluable resource to nations around the world to take transformative action. Users can now access over 400 of the world’s best available global spatial data layers; create secure workspaces to incorporate national data alongside global data; use curated data collections to generate insight for action; and more. Without specialized tools or training, decision-makers can leverage the power of spatial data to support priority-setting and the implementation of nature-based solutions. Dynamic metrics and indicators on the state of our planet are also available….(More)”.
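For readers who want a concrete sense of what working with such global layers involves, the sketch below shows one common pattern: clipping a downloaded global raster to a national boundary and summarising it. This is a generic illustration only, not UNBL's own tooling or API; the file names, the ISO3 code, and the 0.7 threshold are placeholders.

```python
# A minimal sketch, not UNBL's own tooling or API, of the kind of analysis such
# layers enable: clipping a downloaded global raster to one country's boundary
# and summarising it. All file names, the ISO3 code, and the threshold are
# illustrative placeholders.
import geopandas as gpd
import rasterio
from rasterio.mask import mask

countries = gpd.read_file("country_boundaries.gpkg")       # hypothetical boundaries file
layer_path = "global_biodiversity_importance.tif"          # hypothetical downloaded layer

with rasterio.open(layer_path) as src:
    # Select one country, reproject it to the raster's CRS, and clip the raster to it.
    geom = countries[countries["ISO3"] == "KEN"].to_crs(src.crs).geometry
    clipped, _ = mask(src, geom, crop=True)
    nodata = src.nodata if src.nodata is not None else 0
    values = clipped[clipped != nodata]

print(f"Mean layer value within the boundary: {values.mean():.3f}")
print(f"Share of cells above 0.7: {(values > 0.7).mean():.1%}")
```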

False Positivism


Essay by Peter Polack: “During the pandemic, the everyday significance of modeling — data-driven representations of reality designed to inform planning — became inescapable. We viewed our plans, fears, and desires through the lens of statistical aggregates: Infection-rate graphs became representations not only of the virus’s spread but also of shattered plans, anxieties about lockdowns, concern for the fate of our communities. 

But as epidemiological models became more influential, their implications were revealed as anything but absolute. One model, the Recidiviz Covid-19 Model for Incarceration, predicted high infection rates in prisons and consequently overburdened hospitals. While these predictions were used as the basis to release some prisoners early, the model has also been cited by those seeking to incorporate more data-driven surveillance technologies into prison management — a trend new AI startups like Blue Prism and Staqu are eager to get in on. Thus the same model supports both the call to downsize prisons and the demand to expand their operations, even as both can claim a focus on flattening the curve. …

The ethics and effects of interventions depend not only on facts in themselves, but also on how facts are construed — and on what patterns of organization, existing or speculative, they are mobilized to justify. Yet the idea persists that data collection and fact finding should override concerns about surveillance, and not only in the most technocratic circles and policy think tanks. It also has defenders in the world of design theory and political philosophy. Benjamin Bratton, known for his theory of global geopolitics as an arrangement of computational technologies he calls “The Stack,” sees in data-driven modeling the only political rationality capable of responding to difficult social and environmental problems like pandemics and climate change. In his latest book, The Revenge of the Real: Politics for a Post-Pandemic World, he argues that expansive models — enabled by what he theorizes as “planetary-scale computation” — can transcend individualistic perspectives and politics and thereby inaugurate a more inclusive and objective regime of governance. Against a politically fragmented world of polarized opinions and subjective beliefs, these models, Bratton claims, would unite politics and logistics under a common representation of the world. In his view, this makes longstanding social concerns about personal privacy and freedom comparatively irrelevant and those who continue to raise them irrational…(More)”.
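As an aside for readers unfamiliar with what such models actually compute: the projections Polack describes are typically built from compartmental models of transmission. The toy discrete-time SIR sketch below is not the Recidiviz model, and every parameter in it is an illustrative assumption; it simply shows how the same piece of machinery yields the numbers that both decarceration advocates and surveillance vendors can cite.

```python
# Toy discrete-time SIR projection for a closed facility. This is not the
# Recidiviz Covid-19 Model for Incarceration; beta, gamma, and the populations
# below are illustrative assumptions only.
def projected_peak_infected(population, beta=0.5, gamma=0.1, initial_infected=1, days=180):
    """Return the projected peak number of simultaneously infected people."""
    s, i, r = population - initial_infected, initial_infected, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population   # new cases this step
        new_recoveries = gamma * i                   # recoveries/removals this step
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# The same model output feeds opposite arguments: compare a full facility with
# one whose population has been reduced through early release.
for population in (2000, 1200):
    print(f"population {population}: projected peak of {projected_peak_infected(population):.0f} infected")
```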

The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations


Paper by Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi: “In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment….(More)”.
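The kind of footprint assessment the authors call for is often done with a simple back-of-the-envelope calculation: hardware power draw times training time, scaled by data-centre overhead (PUE) and the carbon intensity of the local grid. The sketch below follows that generic recipe; it is not a formula taken from the paper, and every number in it is an illustrative assumption.

```python
# Back-of-the-envelope estimate of training emissions, in the spirit of common
# ML carbon calculators. Not taken from Cowls et al.; all values are
# illustrative assumptions.
def training_emissions_kg(gpu_count, gpu_power_kw, hours, pue=1.6, grid_kgco2_per_kwh=0.4):
    """CO2-equivalent (kg) = hardware energy x data-centre overhead x grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours
    return energy_kwh * pue * grid_kgco2_per_kwh

# Example: 64 GPUs drawing 0.3 kW each, training for two weeks.
print(f"~{training_emissions_kg(64, 0.3, 24 * 14):,.0f} kg CO2e")
```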

For a heterodox computational social science


Paper by Petter Törnberg and Justus Uitermark: “The proliferation of digital data has been the impetus for the emergence of a new discipline for the study of social life: ‘computational social science’. Much research in this field is founded on the premise that society is a complex system with emergent structures that can be modeled or reconstructed through digital data. This paper suggests that computational social science serves practical and legitimizing functions for digital capitalism in much the same way that neoclassical economics does for neoliberalism. In recognition of this homology, this paper develops a critique of the complexity perspective of computational social science and argues for a heterodox computational social science founded on the meta-theory of critical realism that is critical, methodologically pluralist, interpretative and explanative. This implies diverting computational social science’s computational methods and digital data so that they are not aimed at identifying invariant laws of social life or at optimizing state and corporate practices, but are instead used as part of broader research strategies to identify contingent patterns, develop conjunctural explanations, and propose qualitatively different ways of organizing social life….(More)”.

Slowed canonical progress in large fields of science


Paper by Johan S. G. Chu and James A. Evans: “The size of scientific fields may impede the rise of new ideas. Examining 1.8 billion citations among 90 million papers across 241 subjects, we find a deluge of papers does not lead to turnover of central ideas in a field, but rather to ossification of canon. Scholars in fields where many papers are published annually face difficulty getting published, read, and cited unless their work references already widely cited articles. New papers containing potentially important contributions cannot garner field-wide attention through gradual processes of diffusion. These findings suggest fundamental progress may be stymied if quantitative growth of scientific endeavors—in number of scientists, institutes, and papers—is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas…(More)”.
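One way to make “ossification of canon” concrete is to ask how much the set of a field's most-cited papers changes from year to year. The sketch below computes that overlap from a hypothetical table of citation records; it is a toy diagnostic in the spirit of the paper, not the authors' actual pipeline.

```python
# Toy diagnostic, not Chu & Evans's pipeline: year-over-year overlap of a
# field's most-cited papers. Overlap near 1.0 means the canon barely turns over.
from collections import Counter

def top_cited(citations, year, k=100):
    """citations: iterable of (citing_year, cited_paper_id) pairs; returns the top-k cited ids."""
    counts = Counter(paper for y, paper in citations if y == year)
    return {paper for paper, _ in counts.most_common(k)}

def canon_overlap(citations, year, k=100):
    """Jaccard overlap of the top-k cited papers in year and year + 1."""
    a, b = top_cited(citations, year, k), top_cited(citations, year + 1, k)
    return len(a & b) / len(a | b) if a | b else 0.0

# Usage, given a hypothetical list of (citing_year, cited_paper_id) records:
# print(canon_overlap(records, 2015, k=50))
```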

Solutions to Plastic Pollution: A Conceptual Framework to Tackle a Wicked Problem


Chapter by Martin Wagner: “There is a broad willingness to act on global plastic pollution as well as a plethora of available technological, governance, and societal solutions. However, this solution space has not been organized in a larger conceptual framework yet. In this essay, I propose such a framework, place the available solutions in it, and use it to explore the value-laden issues that motivate the diverse problem formulations and the preferences for certain solutions by certain actors. To set the scene, I argue that plastic pollution shares the key features of wicked problems, namely, scientific, political, and societal complexity and uncertainty as well as a diversity in the views of actors. To explore the latter, plastic pollution can be framed as a waste, resource, economic, societal, or systemic problem.

Doing so results in different and sometimes conflicting sets of preferred solutions, including improving waste management; recycling and reuse; implementing levies, taxes, and bans as well as ethical consumerism; raising awareness; and a transition to a circular economy. Deciding which of these solutions is desirable is, again, not a purely rational choice. Accordingly, the social deliberations on these solution sets can be organized across four scales of change. At the geographic and time scales, we need to clarify where and when we want to solve the plastic problem. On the scale of responsibility, we need to clarify who is accountable, has the means to make change, and carries the costs. At the magnitude scale, we need to discuss which level of change we desire on a spectrum from status quo to revolution. All these issues are inherently linked to value judgments and worldviews that must, therefore, be part of an open and inclusive debate to facilitate solving the wicked problem of plastic pollution…(More)”.

Digital Technology, Politics, and Policy-Making


Open access book by Fabrizio Gilardi: “The rise of digital technology has been the best of times, and also the worst, a roller coaster of hopes and fears: “social media have gone—in the popular imagination at least—from being a way for pro-democratic forces to fight autocrats to being a tool of outside actors who want to attack democracies” (Tucker et al., 2017, 47). The 2016 US presidential election raised fundamental questions regarding the compatibility of the internet with democracy (Persily, 2017). The divergent assessments of the promises and risks of digital technology have to do, in part, with the fact that it has become such a pervasive phenomenon. Whether digital technology is, on balance, a net benefit or harm for democratic processes and institutions depends on which specific aspects we focus on. Moreover, the assessment is not value neutral, because digital technology has become inextricably linked with our politics. As Farrell (2012, 47) argued a few years ago, “[a]s the Internet becomes politically normalized, it will be ever less appropriate to study it in isolation but ever more important to think clearly, and carefully, about its relationship to politics.” Reflecting on this issue requires going beyond the headlines, which tend to focus on the most dramatic concerns and may have a negativity bias common in news reporting in general. The shortage of hard facts in this area, linked to the singular challenges of studying the connection between digital technology and politics, exacerbates the problem.
Since it affects virtually every aspect of politics and policy-making, the nature and effects of digital technology have been studied from many different angles in increasingly fragmented literatures. For example, studies of disinformation and social media usually do not acknowledge research on the usage of artificial intelligence in public administration—for good reasons, because such is the nature of specialized academic research. Similarly, media attention tends to concentrate on the most newsworthy aspects, such as the role of Facebook in elections, without connecting them to other related phenomena. The compartmentalization of academic and public attention in this area is understandable, but it obscures the relationships that exist among the different parts. Moreover, the fact that scholarly and media attention are sometimes out of sync might lead policy-makers to focus on solutions before there is a scientific consensus on the nature and scale of the problems. For example, policy-makers may emphasize curbing “fake news” while there is still no agreement in the research community about its effects on political outcomes…(More)”.

Addressing bias in big data and AI for health care: A call for open science


Paper by Natalia Norori et al: “Bias in the medical field can be dissected along three directions: data-driven, algorithmic, and human. Bias in AI algorithms for health care can have catastrophic consequences by propagating deeply rooted societal biases. This can result in misdiagnosing certain patient groups, like gender and ethnic minorities, that have a history of being underrepresented in existing datasets, further amplifying inequalities.

Open science practices can assist in moving toward fairness in AI for health care. These include (1) participant-centered development of AI algorithms and participatory science; (2) responsible data sharing and inclusive data standards to support interoperability; and (3) code sharing, including sharing of AI algorithms that can synthesize underrepresented data to address bias. Future research needs to focus on developing standards for AI in health care that enable transparency and data sharing, while at the same time preserving patients’ privacy….(More)”.
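The paper's third point mentions algorithms that can synthesize underrepresented data; a simpler and widely used step in the same direction is to reweight training examples so that small patient groups are not swamped by larger ones. The sketch below shows inverse-frequency sample weights with scikit-learn; the column names and model choice are assumptions for illustration, not drawn from the paper.

```python
# Minimal sketch of one common mitigation: weight each training example by the
# inverse of its demographic group's frequency so underrepresented groups are
# not swamped. Column names and the model are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(groups: pd.Series) -> np.ndarray:
    """Weight each sample by 1 / (share of its group), normalised to mean 1."""
    shares = groups.map(groups.value_counts(normalize=True))
    weights = 1.0 / shares
    return (weights / weights.mean()).to_numpy()

def fit_reweighted(df: pd.DataFrame, feature_cols, outcome_col="outcome", group_col="group"):
    """Fit a classifier with group-balanced sample weights."""
    weights = inverse_frequency_weights(df[group_col])
    model = LogisticRegression(max_iter=1000)
    model.fit(df[feature_cols], df[outcome_col], sample_weight=weights)
    return model
```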


The Survival Nexus: Science, Technology, and World Affairs


Book by Charles Weiss: “Technology and science can enable us to create a richer, healthier, sustainable, and equitable world, but they can also lead to global disaster. After all, human technical, political, economic, business, and ethical decisions determine the impact of scientific discoveries and technological innovations…

In this book, Charles Weiss explores the intertwining of science, technology, and world affairs that affects everything from climate change and global health to cybersecurity, biotechnology, and geoengineering. Compact and readable, the book ties together ideas and experiences arising from a broad range of diverse issues, ranging from the structure of the energy economy to the future of work and the freedom of the internet.

The Survival Nexus highlights opportunities to mobilize science and technology for a better world through technological innovations that address global health, poverty, and hunger. It alerts the reader to the Earth-in-the-balance risks stemming from the decline in the international cooperation that once kept the dangers of pandemics, climate change, and nuclear war in check. It warns of the challenge to democracies from the multi-faceted global information and cyber-wars being waged by authoritarian powers. Central to the global problems it explores are questions of basic ethics: how much people are willing to respect scientific facts, to act today to forestall long-run dangers, and to ensure equitable sharing of the benefits, costs, and risks arising from advances in science and technology.

Weiss clearly explains the technical principles underlying these issues, showcasing why scientists, policy makers, and citizens everywhere need to understand how the mix of science and technology with politics, economics, business, ethics, law, communications, psychology, and culture will shape our future. This important nexus underpins issues critical to human survival that are overlooked in the broader context of world affairs…(More)”.

Impact Evidence and Beyond: Using Evidence to Drive Adoption of Humanitarian Innovations


Learning paper by DevLearn: “…provides guidance to humanitarian innovators on how to use evidence to enable and drive adoption of innovation.

Innovation literature and practice show time and time again that it is difficult to scale innovations. Even when an innovation is demonstrably impactful, better than the existing solution and good value for money, it does not automatically get adopted or used in mainstream humanitarian programming.

Why do evidence-based innovations face difficulties in scaling and how can innovators best position their innovation to scale?

This learning paper is for innovators who want to effectively use evidence to support and enable their journey to scale. It explores the underlying social, organisational and behavioural factors that stifle uptake of innovations.

It also provides guidance on how to use, prioritise and communicate evidence to overcome these barriers. The paper aims to help innovators generate and present their evidence in more tailored and nuanced ways to improve adoption and scaling of their innovations….(More)”.