New and updated building footprints


Bing Blogs: “…The Microsoft Maps Team has been leveraging that investment to identify map features at scale and produce high-quality building footprint data sets, with the overall goal of adding to the OpenStreetMap and MissingMaps humanitarian efforts.

As of this post, the following locations are available, and Microsoft offers access to this data under the Open Data Commons Open Database License (ODbL).

Country/Region              Million buildings
United States of America    129.6
Nigeria and Kenya           50.5
South America               44.5
Uganda and Tanzania         17.9
Canada                      11.8
Australia                   11.3

As you might expect, the vintage of the footprints depends on the collection date of the underlying imagery. Bing Maps Imagery is a composite of multiple sources with different capture dates (ranging from 2012 to 2021). To set the right expectations, each footprint has an associated capture-date tag where we could deduce the vintage of the imagery used…(More)”
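
Since the footprints are released as GeoJSON features, filtering on a capture-date tag is a one-pass scan. Below is a minimal Python sketch under assumed names: the file `buildings.geojsonl` (newline-delimited GeoJSON) and the property key `captureDate` are illustrative, not the official release schema.

```python
import json

# Minimal sketch: filter building footprints by capture date.
# The file name and property key are assumptions for illustration,
# not the official schema of the Microsoft releases.

def buildings_captured_since(path, cutoff="2018-01-01"):
    recent = []
    with open(path) as fh:
        for line in fh:
            feature = json.loads(line)
            captured = (feature.get("properties") or {}).get("captureDate")
            # ISO-8601 date strings compare correctly as plain strings.
            if captured is not None and captured >= cutoff:
                recent.append(feature)
    return recent

if __name__ == "__main__":
    recent = buildings_captured_since("buildings.geojsonl")
    print(f"{len(recent)} footprints captured since 2018")
```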

Octagon Measurement: Public Attitudes toward AI Ethics


Paper by Yuko Ikkatai, Tilman Hartwig, Naohiro Takanashi & Hiromi M. Yokoyama: “Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: “privacy,” “accountability,” “safety and security,” “transparency and explainability,” “fairness and non-discrimination,” “human control of technology,” “professional responsibility,” and “promotion of human values.” We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public agreement or disagreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario where AI was used with weaponry. Age was significantly related to the themes across the scenarios, but gender and understanding of AI related differently depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be…(More)”.
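
The "Octagon" of the title lends itself to an eight-spoke radar chart of mean agreement per theme. A minimal Python sketch follows; the scores are invented placeholders, not data from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch of an "octagon" (eight-spoke radar) plot of mean
# agreement per AI-ethics theme. Scores are invented placeholders,
# e.g. means of 1-5 Likert responses.
themes = ["privacy", "accountability", "safety & security",
          "transparency & explainability", "fairness & non-discrimination",
          "human control of technology", "professional responsibility",
          "promotion of human values"]
scores = [3.8, 3.5, 4.1, 3.9, 3.6, 4.0, 3.4, 3.7]

angles = np.linspace(0, 2 * np.pi, len(themes), endpoint=False)
# Close the polygon by repeating the first point.
angles = np.concatenate([angles, angles[:1]])
values = scores + scores[:1]

ax = plt.subplot(projection="polar")
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(themes, fontsize=7)
plt.show()
```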

Data Re-Use and Collaboration for Development


Stefaan G. Verhulst at Data & Policy: “It is often pointed out that we live in an era of unprecedented data, and that data holds great promise for development. Yet equally often overlooked is the fact that, as in so many domains, there exist tremendous inequalities and asymmetries in where this data is generated, and how it is accessed. The gap that separates high-income from low-income countries is among the most important (or at least most persistent) of these asymmetries…

Data collaboratives are an emerging form of public-private partnership that, when designed responsibly, can offer a potentially innovative solution to this problem. Data collaboratives offer at least three key benefits for developing countries:

1. Cost Efficiencies: Data and data-analytic capacity are often hugely expensive and beyond the means of many low-income countries. Data reuse, facilitated by data collaboratives, can bring down the cost of data initiatives for development projects.

2. Fresh insights for better policy: Combining data from various sources by breaking down silos has the potential to lead to new and innovative insights that can help policy makers make better decisions. Digital data can also be triangulated with existing, more traditional sources of information (e.g., census data) to generate new insights and help verify the accuracy of information (a minimal illustration follows this list).

3. Overcoming inequalities and asymmetries: Social and economic inequalities, both within and among countries, are often mapped onto data inequalities. Data collaboratives can help ease some of these inequalities and asymmetries, for example by allowing costs and analytical tools and techniques to be pooled. Cloud computing, which allows information and technical tools to be easily shared and accessed, is an important example: it can play a vital role in enabling the transfer of skills and technologies between low-income and high-income countries…(More)”. See also: Reusing data responsibly to achieve development goals (OECD Report).
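
As a concrete illustration of the triangulation mentioned in point 2, here is a minimal sketch that cross-checks a digital data stream against census figures. The file names, column names, and the plausibility test are all invented for the example.

```python
import pandas as pd

# Illustrative sketch of "triangulating" a digital data stream against
# census figures. All names below are assumptions for the example.
mobile = pd.read_csv("mobile_subscribers.csv")   # columns: district, subscribers
census = pd.read_csv("census_population.csv")    # columns: district, population

merged = mobile.merge(census, on="district")
merged["penetration"] = merged["subscribers"] / merged["population"]

# Flag districts where apparent penetration exceeds 100% -- a quick
# consistency check between the two sources.
suspect = merged[merged["penetration"] > 1.0]
print(suspect[["district", "penetration"]])
```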

A tale of two labs: Rethinking urban living labs for advancing citizen engagement in food system transformations


Paper by Anke Brons et al: “Citizen engagement is heralded as essential for food democracy and equality, yet the implementation of inclusive citizen engagement mechanisms in urban food systems governance has lagged behind. This paper aims to further the agenda of citizen engagement in the transformation towards healthy and sustainable urban food systems by offering a conceptual reflection on urban living labs (ULLs) as a methodological platform. Over the past decades, ULLs have become increasingly popular as a way to actively engage citizens in methodological testbeds for innovations within real-world settings. The paper proposes that ULLs as a tool for inclusive citizen engagement can be utilized in two ways: (i) the ULL as daily life itself, of which citizens are the experts, aimed at uncovering the unreflexive agency of a highly diverse population in co-shaping the food system, and (ii) the ULL as a break with daily life, aimed at facilitating reflexive agency in (re)shaping food futures. We argue that both ULL approaches have the potential to facilitate inclusive citizen engagement in different ways, by strengthening the breadth and the depth of citizen engagement respectively. The paper concludes by proposing a sequential implementation of the two types of ULL, paying attention to spatial configurations and the short-term nature of ULLs….(More)”.

Why people believe misinformation and resist correction


TechPolicyPress: “…In Nature, a team of nine researchers from the fields of psychology and mass media & communication has published a review of available research on the factors that lead people to “form or endorse misinformed views, and the psychological barriers” to changing their minds….

The authors summarize what is known about a variety of drivers of false beliefs, noting that they “generally arise through the same mechanisms that establish accurate beliefs” and the human weakness for trusting the “gut.” For a variety of reasons, people develop shortcuts when processing information, often jumping to conclusions rather than evaluating new information critically. A complex set of variables related to information sources, emotional factors, and a variety of other cues can lead to the formation of false beliefs. And people often share information with little focus on its veracity, but rather to accomplish other goals: from self-promotion to signaling group membership to simply sating a desire to ‘watch the world burn’.

Source: Nature Reviews Psychology, Volume 1, January 2022

Barriers to belief revision are also complex, since “the original information is not simply erased or replaced” once corrective information is introduced. There is evidence that misinformation can be “reactivated and retrieved” even after an individual receives accurate information that contradicts it. A variety of factors affect whether correct information can win out. One theory looks at how information is integrated in a person’s “memory network”. Another complementary theory looks at “selective retrieval” and is backed up by neuro-imaging evidence…(More)”.

Privacy Is Power: How Tech Policy Can Bolster Democracy


Essay by Andrew Imbrie, Daniel Baer, Andrew Trask, Anna Puglisi, Erik Brattberg, and Helen Toner: “…History is rarely forgiving, but as we adopt the next phase of digital tools, policymakers can avoid the errors of the past. Privacy-enhancing technologies, or PETs, are a collection of technologies with applications ranging from improved medical diagnostics to secure voting systems and messaging platforms. PETs allow researchers to harness big data to solve problems affecting billions of people while also protecting privacy. …

PETs are ripe for coordination among democratic allies and partners, offering a way for them to jointly develop standards and practical applications that benefit the public good. At an AI summit last July, U.S. Secretary of State Antony Blinken noted the United States’ interest in “increasing access to shared public data sets for AI training and testing, while still preserving privacy,” and National Security Adviser Jake Sullivan pointed to PETs as a promising area “to overcome data privacy challenges while still delivering the value of big data.” Given China’s advantages in scale, the United States and like-minded partners should foster emerging technologies that play to their strengths in medical research and discovery, energy innovation, trade facilitation, and reform around money laundering. Driving innovation and collaboration within and across democracies is important not only because it will help ensure those societies’ success but also because there will be a first-mover advantage in the adoption of PETs for governing the world’s private data-sharing networks.

Accelerating the development of PETs for the public good will require an international approach. Democratic governments will not be the trendsetters on PETs; instead, policymakers for these governments should focus on nurturing the ecosystems these technologies need to flourish. The role for policymakers is not to decide the fate of specific protocols or techniques but rather to foster a conducive environment for researchers to experiment widely and innovate responsibly.    

Democracies should identify shared priorities and promote basic research to mature the technological foundations of PETs. The underlying technologies require greater investment in algorithmic development and hardware to optimize the chips and mitigate the costs of network overhead. To support the computational requirements for PETs, for example, the National Science Foundation could create an interface through CloudBank and provide cloud compute credits to researchers without access to these resources. The United States could also help incubate an international network of research universities collaborating on these technologies.

Second, science-funding agencies in democracies should host competitions to incentivize new PETs protocols and standards—the collaboration between the United States and the United Kingdom announced in early December is a good example. The goal should be to create free, open-source protocols and avoid the fragmentation of the market and the proliferation of proprietary standards. The National Institute of Standards and Technology and other similar bodies should develop standards and measurement tools for PETs; governments and companies should form public-private partnerships to fund open-source protocols over the long term. Open-source protocols are especially important in the early days of PET development, because closed-source PET implementations by profit-seeking actors can be leveraged to build data monopolies. For example, imagine a scenario where all U.S. cancer data could be controlled by a single company because all the hospitals are running its proprietary software, and you have to become a customer to join the network…(More)”.
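
To make the "PET" category concrete: one widely cited example is the Laplace mechanism from differential privacy, in which a data holder answers an aggregate query with calibrated noise so that the result stays useful while any individual record remains protected. A minimal sketch follows; the epsilon value, data, and query are all illustrative.

```python
import numpy as np

# Minimal sketch of one PET: the Laplace mechanism from differential
# privacy. A data holder answers a counting query (sensitivity 1) with
# calibrated noise, protecting individual records while keeping the
# aggregate useful. Epsilon and the data below are illustrative.
rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon=0.5):
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # sensitivity / epsilon
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 30]
print(private_count(ages, lambda a: a >= 40))  # noisy count of records >= 40
```

Smaller epsilon means more noise and stronger privacy; the design question for policymakers is where on that trade-off a given application should sit.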

The Attack of Zombie Science


Article by Natalia Pasternak, Carlos Orsi, Aaron F. Mertz, & Stuart Firestein: “When we think about how science is distorted, we usually think about concepts that have ample currency in public discourse, such as pseudoscience and junk science. Practices like astrology and homeopathy come wrapped in scientific concepts and jargon that can’t meet the methodological requirements of actual sciences. During the COVID-19 pandemic, pseudoscience has had a field day. Bleach, anyone? Bear bile? Yet the pandemic has brought a newer, more subtle form of distortion to light. To the philosophy of science, we humbly submit a new concept: “zombie science.”

We think of zombie science as mindless science: it goes through the motions of scientific research without a real research question to answer, and it follows all the correct methodology but does not aspire to advance knowledge in the field. Practically all the information about hydroxychloroquine during the pandemic falls into that category, including not just the living dead found in preprint repositories but also papers published in journals that ought to have been caught by a more discerning eye. Journals, after all, invest their reputation in every piece they choose to publish. And every investment in useless science is a net loss.

From a social and historical stance, it seems almost inevitable that the penchant for productivism in the academic and scientific world would end up encouraging zombie science. If those who do not publish perish, then publishing—even nonsense or irrelevancies—is a matter of life or death. The peer-review process and the criteria for editorial importance are filters, for sure, but they are limited. Not only do they get clogged and overwhelmed by excess submissions, they also have to contend with the weaknesses of the human condition, including feelings of personal loyalty, prejudice, and vanity. Additionally, these filters fail, as the proliferation of predatory journals shows us all too well…(More)”.

Making data for good better


Article by Caroline Buckee, Satchit Balsari, and Andrew Schroeder: “…Despite the long-standing excitement about the potential for digital tools, Big Data, and AI to transform our lives, these innovations (with some exceptions) have so far had little impact on the greatest public health emergency of our time.

Attempts to use digital data streams to rapidly produce public health insights that were not only relevant for local contexts in cities and countries around the world but also available to the decision makers who needed them exposed enormous gaps across the translational pipeline. The insights from novel data streams that could help drive precise, impactful health programs and bring effective aid to communities found limited use among public health and emergency response systems. We share here our experience from the COVID-19 Mobility Data Network (CMDN), now Crisis Ready (crisisready.io), a global collaboration of researchers, mostly infectious disease epidemiologists and data scientists, who served as trusted intermediaries between technology companies willing to share vast amounts of digital data and policy makers struggling to incorporate insights from these novel data streams into their decision making. Through our experience with the Network, and using human mobility data as an illustrative example, we recognize three sets of barriers to the successful application of large digital datasets for the public good.

First, in the absence of pre-established working relationships with technology companies and data brokers, the data remain primarily confined within private circuits of ownership and control. During the pandemic, data-sharing agreements between large technology companies and researchers were hastily cobbled together, often without the right kind of domain expertise in the mix. Second, the lack of standardization, interoperability, and information on the uncertainty and biases associated with these data necessitated complex analytical processing by highly specialized domain experts. And finally, local public health departments, understandably unfamiliar with these novel data streams, had neither the bandwidth nor the expertise to sift noise from signal. Ultimately, most efforts did not yield consistently useful information for decision making, particularly in low-resource settings, where capacity limitations in the public sector are most acute…(More)”.
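
For context, mobility analyses of the kind CMDN produced typically reduce raw feeds to change-from-baseline indicators before anyone attempts interpretation. A minimal sketch of that reduction follows, with invented file and column names (real provider feeds differ in schema and granularity).

```python
import pandas as pd

# Minimal sketch of a typical mobility-data reduction: percent change
# from a pre-crisis baseline, per region. Column names and the baseline
# window are assumptions for illustration.
df = pd.read_csv("mobility.csv", parse_dates=["date"])  # date, region, trips

# Pre-pandemic average trips per region as the baseline.
baseline = (df[df["date"] < "2020-03-01"]
            .groupby("region")["trips"].mean()
            .rename("baseline_trips"))

df = df.join(baseline, on="region")
df["pct_change"] = 100 * (df["trips"] - df["baseline_trips"]) / df["baseline_trips"]

# Smooth daily noise with a 7-day rolling mean per region.
df["pct_change_7d"] = (df.sort_values("date")
                         .groupby("region")["pct_change"]
                         .transform(lambda s: s.rolling(7, min_periods=1).mean()))
```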

From Poisons to Antidotes: Algorithms as Democracy Boosters


Paper by Paolo Cavaliere and Graziella Romeo: “Under what conditions can artificial intelligence contribute to political processes without undermining their legitimacy? Thanks to the ever-growing availability of data and the increasing power of decision-making algorithms, the future of political institutions is unlikely to be anything like what we have known throughout the last century, possibly with Parliaments deprived of their traditional authority and public decision-making processes left largely unaccountable. This paper discusses and challenges these concerns by suggesting a theoretical framework under which algorithmic decision-making is compatible with democracy and, most relevantly, can offer a viable solution to counter the rise of populist rhetoric in the governance arena. Such a framework is based on three pillars: a. understanding the civic issues that are subjected to automated decision-making; b. controlling the issues that are assigned to AI; and c. evaluating and challenging the outputs of algorithmic decision-making….(More)”.

What Works? Developing a global evidence base for public engagement


Report by Reema Patel and Stephen Yeo: “…the Wellcome Trust commissioned OTT Consulting to recommend the best approach for enabling public engagement communities to share and gather evidence on public engagement practice globally, and in particular to assess the suitability of an approach adapted from the UK ‘What Works Centres’. This report is the output from that commission. It draws from a desk-based literature review, workshops in India, Peru and the UK, and a series of stakeholder interviews with international organisations.

The key themes that emerged from the stakeholder interviews and workshops were that, for evidence about public engagement to inform and shape public engagement practice, and for public engagement to be used and deployed effectively, there has to be an approach that can: understand the audiences; broaden out how ‘evidence’ is understood and generated; think strategically about how evidence affects and informs practice; and understand the complexity of the system dynamics within which public engagement (and evidence about public engagement) operates….(More)”.