The Future of Nudging Will Be Personal


Essay by Stuart Mills: “Nudging, now more than a decade old as an intervention tool, has become something of a poster child for the behavioral sciences. We know that people don’t always act in their own best interest—sometimes spectacularly so—and nudges have emerged as a noncoercive way to live better in a world shaped by our behavioral foibles.

But with nudging’s maturity, we’ve also begun to understand some of the ways that it falls short. Take, for instance, research by Linda Thunström and her colleagues. They found that “successful” nudges can actually harm subgroups of a population. In their research, spendthrifts (those who spend freely) spent less when nudged, bringing them closer to optimal spending. But when given the same nudge, tightwads also spent less, taking them further from the optimal.

While a nudge might appear effective because a population benefited on average, at the individual level the story could be different. Should nudging penalize people who differ from the average just because, on the whole, a policy would benefit the population? Though individual versus population trade-offs are part and parcel of policymaking, as our ability to personalize advances through technology and data, these trade-offs seem less and less appealing….(More)”.

The New Tech Tools in Data Sharing


Essay by Massimo Russo and Tian Feng: “…Cloud providers are integrating data-sharing capabilities into their product suites and investing in R&D that addresses new features such as data directories, trusted execution environments, and homomorphic encryption. They are also partnering with industry-specific ecosystem orchestrators to provide joint solutions.

Cloud providers are moving beyond infrastructure to enable broader data sharing. In 2018, for example, Microsoft teamed up with Oracle and SAP to kick off its Open Data Initiative, which focuses on interoperability among the three large platforms. Microsoft has also begun an Open Data Campaign to close the data divide and help smaller organizations get access to data needed for innovation in artificial intelligence (AI). Amazon Web Services (AWS) has begun a number of projects designed to promote open data, including the AWS Data Exchange and the Open Data Sponsorship Program. In addition to these large providers, specialty technology companies and startups are likewise investing in solutions that further data sharing.

Technology solutions today generally fall into three categories: mitigating risks, enhancing value, and reducing friction. The following is a noncomprehensive list of solutions in each category.

1. Mitigating the Risks of Data Sharing

Potential financial, competitive, and brand risks associated with data disclosure inhibit data sharing. To address these risks, data platforms are embedding solutions to control use, limit data access, encrypt data, and create substitute or synthetic data.

Data Breaches. Here are some of the technological solutions designed to prevent data breaches and unauthorized access to sensitive or private data:

  • Data modification techniques alter individual data elements or full data sets while maintaining data integrity. They provide increasing levels of protection but at a cost: loss of granularity in the underlying data. De-identification and masking strip out personally identifiable information and use encryption, allowing most of the data value to be preserved. More complex encryption can increase security, but it also removes resolution of information from the data set.
  • Secure data storage and transfer can help ensure that data stays safe both at rest and in transit. Cloud solutions such as Microsoft Azure and AWS have invested in significant platform security and interoperability.
  • Distributed ledger technologies, such as blockchain, permit data to be stored and shared in a decentralized manner that makes it very difficult to tamper with. IOTA, for example, is a distributed ledger platform for IoT applications supported by industry players such as Bosch and Software AG.
  • Secure computation enables analysis without revealing details of the underlying data. This can be done at a software level, with techniques such as secure multiparty computation (MPC) that allow potentially untrusting parties to jointly compute a function without revealing their private inputs. For example, with MPC, two parties can calculate the intersection of their respective encrypted data sets while revealing only information about the intersection (a toy sketch of this idea follows after this list). Google, for one, is embedding MPC in its open-source Private Join and Compute tools.
  • Trusted execution environments (TEEs) are hardware modules separate from the operating system that allow for secure data processing within an encrypted private area on the chip. Startup Decentriq is partnering with Intel and Microsoft to explore confidential computing by means of TEEs. There is a significant opportunity for IoT equipment providers to integrate TEEs into their products….(More)”
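
A minimal, hypothetical sketch of the private set intersection idea described above, written in Python: two parties blind their hashed items with private exponents, exchange the blinded values, and compare the doubly blinded results, so each learns only what overlaps. The group parameters, item values, and helper names are illustrative assumptions, and this toy is not the implementation behind Google's Private Join and Compute or any other production library.

```python
# Toy Diffie-Hellman-style private set intersection (PSI) sketch.
# Illustrative only: real systems use vetted groups, vetted libraries,
# and protections against malicious participants.
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; a toy modulus chosen for readability


def hash_to_group(item: str) -> int:
    """Map an item into the multiplicative group mod P."""
    digest = hashlib.sha256(item.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % P or 1


def blind(items, secret: int) -> set:
    """Raise each hashed item to a private exponent: H(x)^secret mod P."""
    return {pow(hash_to_group(x), secret, P) for x in items}


# Each party holds a private data set and a private exponent.
alice_items = {"alice@example.com", "bob@example.com", "carol@example.com"}
bob_items = {"bob@example.com", "carol@example.com", "dave@example.com"}
a = secrets.randbelow(P - 2) + 1  # Alice's secret
b = secrets.randbelow(P - 2) + 1  # Bob's secret

# Round 1: each party blinds its own set and sends it to the other.
alice_blinded = blind(alice_items, a)  # {H(x)^a}, sent to Bob
bob_blinded = blind(bob_items, b)      # {H(y)^b}, sent to Alice

# Round 2: each party re-blinds what it received with its own exponent.
# Exponentiation commutes, so shared items end up with identical values.
alice_double = {pow(v, b, P) for v in alice_blinded}  # computed by Bob
bob_double = {pow(v, a, P) for v in bob_blinded}      # computed by Alice

# Either party can now learn the size of the overlap (and, with some
# bookkeeping, which of its own items are in it) without ever seeing
# the other side's raw values.
print("intersection size:", len(alice_double & bob_double))  # -> 2
```

The essential property is that neither party ever sends raw or merely hashed values, only exponentiated ones, and the overlap becomes visible only after both secrets have been applied; production tools such as Google's Private Join and Compute add considerably more machinery, typically computing aggregate statistics over the join rather than exposing the matches themselves.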

The Techlash and Tech Crisis Communication


Book by Nirit Weiss-Blatt: “This book provides an in-depth analysis of the evolution of tech journalism. The emerging tech-backlash is a story of pendulum swings: We are currently in tech-dystopianism after a long period spent in tech-utopianism. Tech companies were used to ‘cheerleading’ coverage of product launches. This long tech-press honeymoon ended, and was replaced by a new era of mounting criticism focused on tech’s negative impact on society. When and why did tech coverage shift? How did tech companies respond to the rise of tech criticism?

The book depicts three main eras: Pre-Techlash, Techlash, and Post-Techlash. The reader is taken on a journey from computer magazines, through tech blogs, to the upsurge of tech investigative reporting. It illuminates the profound changes in the power dynamics between the media and the tech giants it covers.

The interplay between tech journalism and tech PR has been underexplored. Through analyses of both tech media and the companies’ crisis responses, this book examines the roots and characteristics of the Techlash and provides answers to the question ‘How did we get here?’. Insightful observations by tech journalists and tech public relations professionals are added to the research data, and together they tell the story of the TECHLASH. It includes theoretical and practical implications for both tech enthusiasts and critics….(More)”.

Building Digital Worlds: Where does GIS data come from?


Julie Stoner at Library of Congress: “Whether you’ve used an online map to check traffic conditions, a fitness app to track your jogging route, or found photos tagged by location on social media, many of us rely on geospatial data more and more each day. So what are the most common ways geospatial data is created and stored, and how does it differ from how we have stored geographic information in the past?

A primary method for creating geospatial data is to digitize directly from scanned analog maps. After maps are georeferenced, GIS software allows a data creator to manually digitize boundaries, place points, or define areas using the georeferenced map image as a reference layer. The goal of digitization is to capture information carefully stored in the original map and translate it into a digital format. As an example, let’s explore and then digitize a section of this 1914 Sanborn Fire Insurance Map from Eatonville, Washington.

Sanborn Fire Insurance Map from Eatonville, Pierce County, Washington. Sanborn Map Company, October 1914. Geography & Map Division, Library of Congress.

Sanborn Fire Insurance Maps were created to detail the built environment of American towns and cities through the late 19th and early 20th centuries. These information-dense maps allowed fire insurance underwriters to assess properties and write policies without needing to inspect each building in person. Sanborn maps have become incredibly valuable sources of historic information because of the rich geographic detail they store on each page.

When extracting information from analog maps, the digitizer must decide which features will be digitized and how information about those features will be stored. Behind the geometric features created through the digitization process, a table is utilized to store information about each feature on the map. Using the table, we can store information gleaned from the analog map, such as the name of a road or the purpose of a building. We can also quickly calculate new data, such as the length of a road segment. The data in the table can then be put to work in the visual display of the new digital information that has been created. This is often done through symbolization and map labels….(More)”.
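
As a rough, hypothetical illustration of the attribute table described above, the Python sketch below stores one digitized road segment together with the information read off the map and a length derived from the geometry. The coordinates, field names, and attribute values are assumptions made up for the example; the shapely library is just one common way to represent such geometries, and the sketch assumes the data has already been projected into a coordinate system measured in meters.

```python
# Minimal sketch: a digitized feature, its attribute-table record, and
# a derived value (segment length). All values are hypothetical.
from shapely.geometry import LineString

# Geometry traced from the georeferenced map image (projected coordinates
# in meters, e.g. a UTM zone).
road_geometry = LineString([
    (523100.0, 5205200.0),
    (523180.0, 5205260.0),
    (523260.0, 5205300.0),
])

# The attribute-table row for this feature: information gleaned from the
# analog map, stored alongside the geometry.
road_record = {
    "name": "Mashell Avenue",  # hypothetical value read off the map
    "type": "road",
    "source": "Sanborn map, Eatonville, WA, October 1914",
    "geometry": road_geometry,
}

# New data calculated from the geometry rather than read off the map.
road_record["length_m"] = round(road_geometry.length, 1)

print(road_record["name"], "-", road_record["length_m"], "m")  # ~189.4 m
```

In a full GIS workflow this record would typically be one row in a layer's attribute table (for instance in a geopandas GeoDataFrame or a geodatabase), and fields such as the name would then drive the labels and symbolization mentioned above.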

A new approach to problem-solving across the Sustainable Development Goals


Alexandra Bracken, John McArthur, and Jacob Taylor at Brookings: “The economic, social, and environmental challenges embedded throughout the world’s 17 Sustainable Development Goals (SDGs) will require many breakthroughs from business as usual. COVID-19 has only underscored the SDGs’ central message that the underlying problems are both interconnected and urgent, so new mindsets are required to generate faster progress on many fronts at once. Our recent report, 17 Rooms: A new approach to spurring action for the Sustainable Development Goals, describes an effort to innovate around the process of SDG problem-solving itself.

17 Rooms aims to advance problem-solving within and across all the SDGs. As a partnership between Brookings and The Rockefeller Foundation, the first version of the undertaking was convened in September 2018, as a single meeting on the eve of the U.N. General Assembly in New York. The initiative has since evolved into a two-pronged effort: an annual flagship process focused on global-scale policy issues and a community-level process in which local actors are taking 17 Rooms methods into their own hands.

In practical terms, 17 Rooms consists of participants from disparate specialist communities each meeting in their own “Rooms,” or working groups, one for each SDG. Each Room is tasked with a common assignment of identifying cooperative actions they can take over the subsequent 12-18 months. Emerging ideas are then shared across Rooms to spot opportunities for collaboration.

The initiative continues to evolve through ongoing experimentation, so methods are not overly fixed, but three design principles help define key elements of the 17 Rooms mindset:

  1. All SDGs get a seat at the table. Insights, participants, and priorities are valued equally across all the specialist communities focused on individual dimensions of the SDGs
  2. Take a next step, not the perfect step. The process encourages participants to identify—and collaborate on—actions that are “big enough to matter, but small enough to get done”
  3. Conversations, not presentations. Discussions are structured around collaboration and peer-learning, aiming to focus on what’s best for an issue, not any individual organization

These principles appear to contribute to three distinct forms of value: the advancement of action, the generation of insights, and a strengthened sense of community among participants….(More)”.

Legislative Performance Futures


Article by Ben Podgursky on “Incentivize Good Laws by Monetizing the Verdict of History”: “…There are net-positive policies that legislators won’t enact because they only help people in the medium to far future. For example:

  • Climate change policy
  • Infrastructure investments and mass-transit projects
  • Debt control and social security reform
  • Child tax credits

The infrequent times reforms on these issues are legislated — far more rarely than their future value would warrant — they are passed not because of the value provided to future generations, but because of the immediate benefit to voters today:

  • Infrastructure investment goes to “shovel ready” projects, with an emphasis on short-term job creation, even when the prime benefit is to future GDP. For example, dams constructed in the 1930s (the Hoover Dam, the TVA) provide immense value today, but those projects happened only because they created tens of thousands of jobs at the time.
  • Climate change legislation is usually weakly directed. Instead of policies that incur short-term costs for significant long-term benefits (i.e., carbon taxes), “green legislation” aims to create green jobs and incentivize rooftop solar (reducing power bills today).
  • (Small) child tax credits are passed to help parents today, even though the vastly larger benefit accrues to children who exist because the marginal extra cash helped their parents afford an extra child.

On the other hand, reforms which provide no benefit to today’s voter do not happen; this is why the upcoming Social Security Trust Fund shortfall will likely not be fixed until benefits are reduced and voters are directly impacted.

The issue is that while the future reaps the benefits or failures of today’s laws, people of the future cannot vote in today’s elections.  In fact, in almost no circumstances does the future have any ability to meaningfully reward or punish past lawmakers; there are debates today about whether to remove statues and rename buildings dedicated to those on the wrong side of history, actions which even proponents acknowledge as entirely symbolic….(More)”.

The Nature of Truth


Book edited by Michael P. Lynch, Jeremy Wyatt, Junyeol Kim and Nathan Kellen: “The question “What is truth?” is so philosophical that it can seem rhetorical. Yet truth matters, especially in a “post-truth” society in which lies are tolerated and facts are ignored. If we want to understand why truth matters, we first need to understand what it is. The Nature of Truth offers the definitive collection of classic and contemporary essays on analytic theories of truth. This second edition has been extensively revised and updated, incorporating both historically central readings on truth’s nature and up-to-the-moment contemporary essays. Seventeen new chapters reflect the current trajectory of research on truth.

Highlights include new essays by Ruth Millikan and Gila Sher on correspondence theories; a new essay on Peirce’s theory by Cheryl Misak; seven new essays on deflationism, laying out both theories and critiques; a new essay by Jamin Asay on primitivist theories; and a new defense by Kevin Scharp of his replacement theory, coupled with a probing critique of replacement theories by Alexis Burgess. Classic essays include selections by J. L. Austin, Donald Davidson, William James, W. V. O. Quine, and Alfred Tarski….(More)”.

Citizen social science in practice: the case of the Empty Houses Project


Paper by Alexandra Albert: “The growth of citizen science and participatory science, where non-professional scientists voluntarily participate in scientific activities, raises questions around the ownership and interpretation of data, issues of data quality and reliability, and new kinds of data literacy. Citizen social science (CSS), as an approach that bridges these fields, calls into question the way in which research is undertaken, as well as who can collect data, what data can be collected, and what such data can be used for. This article outlines a case study—the Empty Houses Project—to explore how CSS plays out in practice, and to reflect on the opportunities and challenges it presents. The Empty Houses Project was set up to investigate how citizens could be mobilised to collect data about empty houses in their local area, so as to potentially contribute towards tackling a pressing policy issue. The study shows how the possibilities of CSS exceed the dominant view of it as a new means of creating data repositories. Rather, it considers how the data produced in CSS is an epistemology, and a politics, not just a realist tool for analysis….(More)”.

Establishment of Sustainable Data Ecosystems


Report and Recommendations for the evolution of spatial data infrastructures by S. Martin, P. Gautier, S. Turki, and A. Kotsev: “The purpose of this study is to identify and analyse a set of successful data ecosystems and to put forward recommendations that can act as catalysts of data-driven innovation in line with the recently published European data strategy. The work presented here tries, to the largest extent possible, to identify actionable items.

Specifically, the study contributes insights into the approaches that would help existing spatial data infrastructures (SDI), which are usually governed by the public sector and driven by data providers, evolve into self-sustainable data ecosystems where different actors (including providers, users, and intermediaries) contribute and gain social and economic value in accordance with their specific objectives and incentives.

The overall approach described in this document is based on the identification and documentation of a set of case studies of existing data ecosystems, together with use cases for developing applications that draw on data from two or more data ecosystems, grounded in existing operational or experimental applications. Following a literature review on data ecosystem thinking and modelling, a framework consisting of three parts (Annex I) was designed. First, an ecosystem summary gives an overall representation of the ecosystem’s key aspects. Two further parts are then detailed: one is dedicated to the ecosystem value dynamic, illustrating how the ecosystem is structured through the resources exchanged between stakeholders and the value associated with them.

The other, the ecosystem data flows, represents the ecosystem from a complementary and more technical perspective, capturing the flows and the data cycles associated with a given scenario. These two parts provide good proxies for evaluating the health and maturity of a data ecosystem…(More)”.

The Ethics and Laws of Medical Big Data


Chapter by Hrefna Gunnarsdottir et al: “The COVID-19 pandemic has highlighted that leveraging medical big data can help to better predict and control outbreaks from the outset. However, there are still challenges to overcome in the 21st century to efficiently use medical big data, promote innovation and public health activities and, at the same time, adequately protect individuals’ privacy. The metaphor that property is a “bundle of sticks”, each representing a different right, applies equally to medical big data. Understanding medical big data in this way raises a number of questions, including: Who has the right to make money off its buying and selling, or is it inalienable? When does medical big data become sufficiently stripped of identifiers that the rights of an individual concerning the data disappear? How have different regimes such as the General Data Protection Regulation in Europe and the Health Insurance Portability and Accountability Act in the US answered these questions differently? In this chapter, we will discuss three topics: (1) privacy and data sharing, (2) informed consent, and (3) ownership. We will identify and examine ethical and legal challenges and make suggestions on how to address them. In our discussion of each of the topics, we will also give examples related to the use of medical big data during the COVID-19 pandemic, though the issues we raise extend far beyond it….(More)”.