Are Repeat Nudges Effective? For Tardy Tax Filers, It Seems So


Paper by Nicole Robitaille, Nina Mažar, and Julian House: “While behavioral scientists sometimes aim to nudge one-time actions, such as registering as an organ donor or signing up for a 401K, there are many other behaviors—making healthy food choices, paying bills, filing taxes, getting a flu shot—that are repeated on a daily, monthly, or annual basis. If you want to target these recurrent behaviors, can introducing a nudge once lead to consistent changes in behavior? What if you presented the same nudge several times—would seeing it over and over make its effects stronger, or just the opposite?

Decades of research from behavioral science have taught us a lot about nudges, but the field as a whole still doesn’t have a great understanding of the temporal dimensions of most interventions, including how long nudge effects last and whether or not they remain effective when repeated.

If you want an intervention to lead to lasting behavior change, prior research argues that it should target people’s beliefs, habits or the future costs of engaging in the behavior. Many nudges, however, focus instead on manipulating relatively small factors in the immediate choice environment to influence behavior, such as changing the order in which options are presented. In addition, relatively few field experiments have been able to administer and measure an intervention’s effects more than once, making it hard to know how long the effects of nudges are likely to persist.

While there is some research on what to expect when repeating nudges, the results are mixed. On the one hand, there is an extensive body of research in psychology on habituation, finding that, over time, people show decreased responses to the same stimuli. It wouldn’t be a giant leap to presume that seeing the same nudge again might decrease how much attention we pay to it, and thus hinder its ability to change our behavior. On the other hand, being exposed to the same nudge multiple times might help strengthen desired associations. Research on the mere exposure effect, for example, illustrates how the more times we see something, the more easily it is processed and the more we like it. It is also possible that being nudged multiple times could help foster enduring change, such as through new habit formation. Behavioral nudges aren’t going away, and their use will likely grow among policymakers and practitioners. It is critical to understand the temporal dimensions of these interventions, including how long one-off effects will last and if they will continue to be effective when seen multiple times….(More)”

Data-driven environmental decision-making and action in armed conflict


Essay by Wim Zwijnenburg: “Our understanding of how severely armed conflicts have impacted natural resources, ecosystems and biodiversity, and of the long-term implications for the climate, has massively improved over the last decade. Without a doubt, cataclysmic events such as the 1991 Gulf War oil fires contributed to raising awareness of the conflict-environment nexus, and the images of burning wells are engraved in our collective mind. But another, more recent and under-examined yet major contributor to this growing cognizance is the digital revolution, which has provided us with a wealth of data and information from conflict-affected countries, quickly made available through the internet. With just a few clicks, anyone with a computer or smartphone and a Wi-Fi connection can follow, often in near-real time, events shared through social media in warzones or satellite imagery showing what is unfolding on the ground.

These developments have significantly deepened our understanding of how military activities, both historically and in current conflicts, contribute to environmental damage and can impact the lives and livelihoods of civilians. Geospatial analysis through earth observation (EO) is now widely used to document international humanitarian law (IHL) violations, improve humanitarian response and inform post-conflict assessments.

These new insights on conflict-environment dynamics have driven humanitarian, military and political responses. The latter are essential for the protection of the environment in armed conflict: with knowledge and understanding also comes a responsibility to prevent, mitigate and minimize environmental damage, in line with existing international obligations. Of particular relevance, under international humanitarian law, militaries must take into account incidental environmental damage that is reasonably foreseeable based on an assessment of information from all sources available to them at the relevant time (ICRC Guidelines on the Protection of the Environment, Rule 7; Customary IHL Rule 43). Excessive harm is prohibited, and all feasible precautions must be taken to reduce incidental damage (Guidelines Rule 8, Customary IHL Rule 44).

How do we ensure that the data-driven strides forward in understanding conflict-driven environmental damage translate into proper military training and decision-making, humanitarian response and reconstruction efforts? How can this influence behaviour change and improve accountability for military actions and targeting decisions?…(More)”.

Next-generation nowcasting to improve decision making in a crisis


Frank Gerhard, Marie-Paule Laurent, Kyriakos Spyrounakos, and Eckart Windhagen at McKinsey: “In light of the limitations of the traditional models, we recommend a modified approach to nowcasting that uses country- and industry-specific expertise to boil down the number of variables to a selected few for each geography or sector, depending on the individual economic setting. Given the specific selection of each core variable, the relationships between the variables will be relatively stable over time, even during a major crisis. Admittedly, the more variables used, the easier it is to explain an economic shift; however, using more variables also means a greater chance of a break in some of the statistical relationships, particularly in response to an exogenous shock.

This revised nowcasting model will be more flexible and robust in periods of economic stress. It will provide economically intuitive outcomes, include the consideration of complementary, high-frequency data, and offer access to economic insights that are at once timely and unique.

Nowcast for Q1 2021 shows differing recovery speeds by sector and geography.

For example, consumer spending can be estimated in different US cities by combining data such as wages from business applications and footfall from mobility trend reports. As a more complex example: eurozone capitalization rates are, at the time of the writing of this article, available only through January 2021. However, a revamped nowcasting model can estimate current capitalization rates in various European countries by employing a handful of real-time and high-frequency variables for each, such as retail confidence indicators, stock-exchange indices, price expectations, construction estimates, base-metals prices and output, and even deposits into financial institutions. The choice of variable should, of course, be guided by industry and sector experts.

Similarly, published figures for gross value added (GVA) at the sector level in Europe are available only up to the second quarter of 2020. However, by utilizing selected variables, the new approach to nowcasting can provide an estimate of GVA through the first quarter of 2021. It can also highlight the different experiences of each region and industry sector in the recent recovery. Note that the sectors reliant on in-person interactions and of a nonessential nature have been slow to recover, as have the countries more reliant on international markets (exhibit)….(More)”.
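To make the mechanics concrete, the sketch below shows a minimal nowcasting regression in the spirit of the approach described above: fit a simple model on the months for which official figures are already published, using a handful of high-frequency indicators, then predict the not-yet-published period. The indicators, data, and model choice are illustrative assumptions, not the authors' actual model.

```python
# A minimal, illustrative nowcasting regression (not McKinsey's actual model).
# Indicators and data are simulated; in practice they would be chosen by
# sector and country experts and pulled from real high-frequency sources.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_months = 36

# Hypothetical high-frequency indicators (e.g., retail confidence, a stock
# index, construction estimates), observed every month including the latest.
retail_confidence = rng.normal(100, 5, n_months)
stock_index = rng.normal(3500, 200, n_months)
construction = rng.normal(90, 8, n_months)
X = np.column_stack([retail_confidence, stock_index, construction])

# Target series published with a lag (e.g., a capitalization rate or sector
# GVA), simulated here as a noisy combination of the indicators.
y = (0.02 * retail_confidence + 0.001 * stock_index
     - 0.01 * construction + rng.normal(0, 0.5, n_months))

# Fit on the months whose official figures are already published ...
model = LinearRegression().fit(X[:-3], y[:-3])

# ... and "nowcast" the latest three months, for which only the
# high-frequency indicators are available.
nowcast = model.predict(X[-3:])
print("Nowcast for the three most recent months:", nowcast.round(2))
```

Keeping the variable set small for each sector or geography, as the article recommends, makes relationships like these more likely to hold up during an exogenous shock.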

Establishing a Data Trust: From Concept to Reality


Blog by Stefaan Verhulst, Aditi Ramesh, Andrew Young, Peter Rabley & Christopher Keefe: “As ever more areas of our public and private lives succumb to a process of datafication, it is becoming increasingly urgent to find new ways of managing the data lifecycle: how data is collected, stored, used, and reused. In particular, legacy notions of control and data access need to be reimagined for the twenty-first century, in ways that give more prominence to the public good and common interests – in a manner that is responsible and sustainable. That is particularly true for mapping data, which is why The GovLab and FutureState, with the support of The Rockefeller Foundation, are partnering with PLACE to assist them in designing a new operational and governance approach for creating, storing and accessing mapping data: a Data Trust.

PLACE is a non-profit formed out of a belief that mapping data is an integral part of the modern digital ecosystem and critical to unlocking economic, social and environmental opportunities for sustainable and equitable growth, development and climate resiliency; however, this data is not available or affordable in too many places around the world. PLACE’s goal is to bridge this part of the digital divide.


Five key considerations inform the design of such a new framework:

  • Governing Data as a Commons: The work of Elinor Ostrom (among others) has highlighted models that go beyond private ownership and management. As a non-excludable and non-rivalrous asset, data fits this model well: one entity’s control or “ownership” of data doesn’t limit another entity’s access to it (non-excludable); and one entity’s consumption or use of data doesn’t prevent another entity from similarly doing so (non-rivalrous). A new framework for governance would emphasize the central role of “data as a commons.”
  • Avoiding a “Tragedy of the Commons”: Any commons is susceptible to a “tragedy of the commons”: a phenomenon in which entities or individuals free-ride on shared resources, depleting their value or usability for all, resulting in a failure to invest in maintenance, improvement and innovation, and in the process contributing negatively to the public interest. Any reimagined model for data governance needs to acknowledge this risk, and build in methods and processes to avoid a tragedy of the commons and ensure “data sustainability.” As further described below, we believe that sustainability can best be achieved through a membership model.
  • Tackling Data Asymmetries and Re-Distribution of Responsibilities: Everyone is a participant in today’s “data commons,” but not all stakeholders benefit equally. One way to ensure the sustainability of a data commons is to require that larger players—e.g., the most profitable platforms, and other entities that disproportionately benefit from network effects—assume greater responsibilities to maintain the commons. These responsibilities can take many forms—financial, technical know-how, regulatory or legal prowess—and will vary by entity and each entity’s specialization. The general idea is that all stakeholders should have equal rights and access—but some will have greater responsibilities and may be required to contribute more.
  • Independent Trustees and Strong Engagement: Who should govern the data as a commons? Another way to avoid a tragedy of the commons is to ensure that a clear set of rules, principles and guidelines determines what is acceptable (and not), and what constitutes fair play and reasonable data access and use. These guidelines should be designed and administered by independent trustees, whose responsibilities, powers, terms and selection mechanisms are clearly defined and bounded. The trustees should be drawn from across geographies and sectors, representing as wide a range of interests and expertise as possible. In addition, trustees should steer responsible data access in a manner that is informed by input from experts, stakeholders, data subjects, and intended beneficiaries, using innovative ways of engagement and deliberations.
  • Inclusion and Protection: A data trust designed for the commons must “work” for all and especially the most vulnerable and marginalized among us. The identity of some people and communities is inextricably linked to location and, therefore, requires us to be especially mindful of the risks of abuse for such communities. How can we prevent surveillance or bias against indigenous groups, for example? Equally important, how can we empower communities with more understanding of and voice in how data is collected and used about their place? Such communities are front-and-center in the design of the Trust and its governance….(More)”.

Three ways to supercharge your city’s open-data portal


Bloomberg Cities: “…Three open data approaches cities are finding success with:

Map it

Much of the data that people seem to be most interested in is location-based, local data leaders say. That includes everything from neighborhood crime stats and police data used by journalists and activists to property data regularly mined by real estate companies. Rather than simply making spatial data available, many cities have begun mapping it themselves, allowing users to browse information that’s useful to them.

At atlas.phila.gov, for example, Philadelphians can type in their own addresses to find property deeds, historic photos, nearby 311 complaints and service requests, and their polling place and date of the next local election, among other information. Los Angeles city’s GeoHub collects maps showing the locations of marijuana dispensaries, reports of hate crimes, and five years of severe and fatal crashes between drivers and bikers or pedestrians, and dozens more.
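As a rough illustration of the kind of spatial lookup behind such address-based portals, the sketch below shows how a point (a geocoded address) can be matched to the district polygon that contains it, which is how a portal can return a polling place or council district for an address. The district names and coordinates are hypothetical, and this is not any city's actual implementation.

```python
# Minimal sketch of an address-to-district lookup using a ray-casting
# point-in-polygon test. Districts and coordinates are made up for illustration.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon (list of vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical district boundaries (in practice, polygons from the city's GIS data)
districts = {
    "Ward 1": [(0, 0), (0, 5), (5, 5), (5, 0)],
    "Ward 2": [(5, 0), (5, 5), (10, 5), (10, 0)],
}

address_point = (6.2, 3.1)  # a hypothetical geocoded address
match = next((name for name, poly in districts.items()
              if point_in_polygon(*address_point, poly)), None)
print("Address falls in:", match)
```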

A CincyInsights map highlighting cleaned up green spaces across the city.

….

Train residents on how to use it

Cities with open-data policies learn from best practices in other city halls. In the last few years, many have begun offering trainings to equip residents with rudimentary data analysis skills. Baton Rouge, for example, offered a free, three-part Citizen Data Academy instructing residents on “how to find [open data], what it includes, and how to use it to understand trends and improve quality of life in our community.” …

In some communities, open-data officials work with city workers and neighborhood leaders, training them to help their communities access the benefits of public data even if only a small fraction of residents access the data themselves.

In Philadelphia, city teams work with the Citizens Planning Institute, an educational initiative of the city planning commission, to train neighborhood organizers in how to use city data around things like zoning and construction permits to keep up with development in their neighborhoods, says Kistine Carolan, open data program manager in the Office of Innovation and Technology. The Los Angeles Department of Neighborhood Empowerment runs a Data Literacy Program to help neighborhood groups make better use of the city’s data. So far, officials say, representatives of 50 of the city’s 99 neighborhood councils have signed up as part of the Data Liaisons program to learn new GIS and data-analysis skills to benefit their neighborhoods. 

Leverage the COVID moment

The COVID-19 pandemic has disrupted cities’ open-data plans, just like it has complicated every other aspect of society. Cities had to cancel scheduled in-person trainings and programs that help them reach some of their less-connected residents. But the pandemic has also demonstrated the fundamental role that data can play in helping to manage public emergencies. Cities large and small have hosted online tools that allow residents to track where cases are spiking—tools that have gotten many new people to interact with public data, officials say….(More)”.

Side-Stepping Safeguards, Data Journalists Are Doing Science Now


Article by Irineo Cabreros: “News stories are increasingly told through data. Witness the Covid-19 time series that decorate the homepages of every major news outlet; the red and blue heat maps of polling predictions that dominate the runup to elections; the splashy, interactive plots that dance across the screen.

As a statistician who handles data for a living, I welcome this change. News now speaks my favorite language, and the general public is developing a healthy appetite for data, too.

But many major news outlets are no longer just visualizing data, they are analyzing it in ever more sophisticated ways. For example, at the height of the second wave of Covid-19 cases in the United States, The New York Times ran a piece declaring that surging case numbers were not attributable to increased testing rates, despite President Trump’s claims to the contrary. The thrust of The Times’ argument was summarized by a series of plots that showed the actual rise in Covid-19 cases far outpacing what would be expected from increased testing alone. These weren’t simple visualizations; they involved assumptions and mathematical computations, and they provided the cornerstone for the article’s conclusion. The plots themselves weren’t sourced from an academic study (although the author on the byline of the piece is a computer science Ph.D. student); they were produced through “an analysis by The New York Times.”
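The underlying computation is conceptually simple. The sketch below illustrates the general idea with invented numbers (this is not The Times' actual method): hold the test-positivity rate fixed, project how many cases increased testing alone would produce, and compare that with the observed count.

```python
# Back-of-the-envelope illustration with invented numbers (not The Times'
# actual analysis): if rising case counts merely reflected expanded testing,
# cases would scale with tests at a roughly constant positivity rate.

tests_june, cases_june = 500_000, 25_000    # hypothetical daily averages
tests_july, cases_july = 750_000, 60_000

positivity_june = cases_june / tests_june            # 5.0%
expected_cases_july = positivity_june * tests_july   # cases if positivity were unchanged
excess = cases_july - expected_cases_july

print(f"Expected from increased testing alone: {expected_cases_july:,.0f}")
print(f"Actual: {cases_july:,}; excess not explained by testing: {excess:,.0f}")
```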

The Times article was by no means an anomaly. News outlets have asserted, on the basis of in-house data analyses, that Covid-19 has killed nearly half a million more people than official records report; that Black and minority populations are overrepresented in the Covid-19 death toll; and that social distancing will usually outperform attempted quarantine. That last item, produced by The Washington Post and buoyed by in-house computer simulations, was the most read article in the history of the publication’s website, according to Washington Post media reporter Paul Farhi.

In my mind, a fine line has been crossed. Gone are the days when science journalism was like sports journalism, where the action was watched from the press box and simply conveyed. News outlets have stepped onto the field. They are doing the science themselves….(More)”.

Harnessing collective intelligence to find missing children


Cordis: “It is estimated that over 250 000 children go missing every year in the EU. Statistics on their recovery are scant, but based on data from the EU-wide 116 000 hotline, 14 % of runaways and 57 % of migrant minors reported missing in 2019 had not been found by the end of the year. The EU-supported ChildRescue project has developed a collective intelligence and stakeholder communication approach for missing children investigations. It consists of a collaborative platform and two mobile apps available for organisations, verified volunteers and the general public. “ChildRescue is being used by our piloting organisations and is already becoming instrumental in missing children investigations. The public response has exceeded our expectations, with over 22 000 app downloads,” says project coordinator Christos Ntanos from the Decision Support Systems Laboratory at the National Technical University of Athens. ChildRescue has also published a white paper on the need for a comprehensive legal framework on missing unaccompanied migrant minors in the EU….

To assist in missing children investigations, ChildRescue trained machine learning algorithms to find underlying patterns useful for investigations. As input, they used structured information about individual cases combined with open data from multiple sources, alongside data from similar past cases. The ChildRescue community mobile app issues real-time alerts near places of interest, such as where a child was last seen. Citizens can respond with information, including photos, exclusively accessible by the organisation involved in the case. The quality, relevance and credibility of this feedback are assessed by an algorithm. The organisation can then pass information to the police and engage its own volunteers. Team members can share real-time information through a dedicated private collaboration space….(More)”.
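As an illustration of what a "real-time alert near places of interest" can involve, here is a minimal sketch of a proximity check: compute the distance between each app user's last known location and a place of interest, and flag users within a chosen radius. The radius, coordinates, and data layout are assumptions for demonstration, not ChildRescue's actual implementation.

```python
# Illustrative geofenced-alert check: which users are near a place of interest
# (e.g., where a child was last seen)? All values here are hypothetical.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def users_to_alert(users, place, radius_km=2.0):
    """Return ids of users whose last location is within radius_km of the place."""
    return [u["id"] for u in users
            if haversine_km(u["lat"], u["lon"], place["lat"], place["lon"]) <= radius_km]

# Hypothetical data: last sighting and app users' last known locations
place = {"lat": 37.9838, "lon": 23.7275}
users = [{"id": 1, "lat": 37.99, "lon": 23.73},
         {"id": 2, "lat": 38.30, "lon": 23.90}]
print(users_to_alert(users, place))  # -> [1]
```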

Citizen Science Is Helping Tackle Stinky Cities


Article by Lucrezia Lozza: “Marta has lived with a bad smell lingering in her hometown in central Spain, Villanueva del Pardillo, for a long time. Fed up, in 2017 she and her neighbors decided to pursue the issue. “The smell is disgusting,” Marta says, pointing a finger at a local yeast factory.

Originally, she thought of recording the “bad smell days” on a spreadsheet. When this didn’t work out, after some research she found Odour Collect, a crowdsourced map that allows users to enter a geolocalized timestamp of bad smells in their neighborhood.

After noise, odor nuisances are the second most common cause of environmental complaints. Odor regulations vary among countries, and there’s little legislation about how to manage smells. For instance, in Spain some municipalities regulate odors, but others do not. In the United States, the Environmental Protection Agency does not regulate odor as a pollutant, so states and local jurisdictions are in charge of the issue.

Only after Marta started using Odour Collect to record the unpleasant smells in her town did she discover that the map was part of ‘D-NOSES’, a European project aimed at bringing citizens, industries and local authorities together to monitor and minimize odor nuisances. D-NOSES relies heavily on citizen science: Affected communities gather odor observations through two maps — Odour Collect and Community Maps — with the goal of implementing new policies in their area. D-NOSES launched several pilots in Europe — in Spain, Greece, Bulgaria, and Portugal — and two outside the continent in Uganda and in Chile.

“Citizen science promotes transparency between all the actors,” said Nora Salas Seoane, Social Sciences Researcher at Fundación Ibercivis, one of the partners of D-NOSES…(More)”.

Assessing the social and emotional costs of mass shootings with Twitter data


Article by Mary Blankenship and Carol Graham: “Mass shootings that result in mass casualties are almost a weekly occurrence in the United States, which—not coincidentally—also has the most guns per capita in the world. Viewed from outside the U.S., it seems that Americans are not bothered by the constant deadly gun violence and have simply adapted to it. Yet, our analysis of the well-being costs of gun violence—using Twitter data to track real-time responses throughout the course of these appalling events—suggests that is not necessarily the case. We focus on the two March 2021 shootings in Atlanta and Boulder, and compare them to similar data for the “1 October” (Las Vegas) and El Paso shootings a few years prior. (Details on our methodology can be found at the end of this blog.)

A reason for the one-sided debate on guns is that beyond the gruesome body counts, we do not have many tools for assessing the large—but unobservable—effects of this violence on family members, friends, and neighbors of the victims, as well as on society in general. By assessing how emotions evolve over time, we can see real changes in Twitter messages. Our analysis shows that society is increasingly angered by gun violence, rather than simply adapting to it.

A striking characteristic of the response to the 1 October shooting is the immediate influx of users sending their thoughts and prayers to the victims and the Las Vegas community. Figure 1 shows the top emoji usage, with “praying hands” the most frequently used emoji. Although that is still the most used emoji in response to the other shootings, the margin between “praying hands” and other emojis has substantially decreased in recent responses to Atlanta and Boulder. Our focus is on the “yellow face” emojis, which can correlate to six primary emotion categories: surprise, sadness, disgust, fear, anger, and neutral. While the majority of face emojis reflect sadness in the 1 October and El Paso shootings, newer emojis like the “red angry face” show greater feelings of anger in the Atlanta and Boulder shootings, as shown in Figure 3….(More)”.

Figure 1. Top 10 emojis used in response to the 1 October shooting (Source: Authors)
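As a rough sketch of how such an emoji analysis can be assembled, the example below tallies emoji frequencies in a batch of tweets and groups a few "face" emojis into the six emotion categories mentioned above. The emoji-to-emotion mapping and sample tweets are illustrative assumptions, not the authors' coding scheme.

```python
# Minimal sketch: count emoji usage and map selected "face" emojis to broad
# emotion categories. The mapping below is an assumption for illustration.
from collections import Counter

EMOTION_MAP = {
    "😢": "sadness", "😭": "sadness",
    "😡": "anger",   "😠": "anger",
    "😱": "fear",    "😨": "fear",
    "😮": "surprise",
    "🤢": "disgust",
    "😐": "neutral",
}

def emoji_emotions(tweets):
    """Count raw emoji use and the emotion categories of mapped face emojis."""
    emoji_counts, emotion_counts = Counter(), Counter()
    for tweet in tweets:
        for char in tweet:
            if char in EMOTION_MAP:
                emoji_counts[char] += 1
                emotion_counts[EMOTION_MAP[char]] += 1
            elif char == "🙏":  # "praying hands", tallied but not an emotion face
                emoji_counts[char] += 1
    return emoji_counts, emotion_counts

# Hypothetical usage on a handful of sample tweets
sample = ["Sending love 🙏🙏", "This has to stop 😡", "Heartbroken 😭😭", "😱 again?"]
print(emoji_emotions(sample))
```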

Building on a year of open data: progress and promise


Jennifer Yokoyama at Microsoft: “…The biggest takeaway from our work this past year – and the one thing I hope any reader of this post will take away – is that data collaboration is a spectrum. From the presence (or absence) of data to how open that data is to the trust level of the collaboration participants, these factors may necessarily lead to different configurations and different goals, but they can all lead to more open data and innovative insights and discoveries.

Here are a few other lessons we have learned over the last year:

  1. Principles set the foundation for stakeholder collaboration: When we launched the Open Data Campaign, we adopted five principles that guide our contributions and commitments to trusted data collaborations: Open, Usable, Empowering, Secure and Private. These principles underpin our participation, but importantly, organizations can build on them to establish responsible ways to share and collaborate around their data. The London Data Commission, for example, established a set of data sharing principles for public- and private-sector organizations to ensure alignment and to guide the participating groups in how they share data.
  2. There is value in pilot projects: Traditionally, data collaborations with several stakeholders require time – often including a long runway for building the collaboration, plus the time needed to execute on the project and learn from it. However, our learnings show short-term projects that experiment and test data collaborations can provide valuable insights. The London Data Commission did exactly that with the launch of four short-term pilot projects. Due to the success of the pilots, the partners are exploring how they can be expanded upon.
  3. Open data doesn’t require new data: Identifying data to share does not always mean it must be newly shared data; sometimes the data was narrowly shared, but can be shared more broadly, made more accessible or analyzed for a different purpose. Microsoft’s environmental indicator data is an example of data that was already disclosed in certain venues, but was then made available to the Linux Foundation’s OS-Climate Initiative to be consumed through analytics, thereby extending its reach and impact…

To get started, we suggest that emerging data collaborations make use of the wealth of existing resources. When embarking on data collaborations, we leveraged many of the definitions, toolkits and guides from leading organizations in this space. As examples, resources such as the Open Data Institute’s Data Ethics Canvas are extremely useful as a framework to develop ethical guidance. Additionally, The GovLab’s Open Data Policy Lab and Executive Course on Data Stewardship, both supported by Microsoft, highlight important case studies, governance considerations and frameworks when sharing data. If you want to learn more about the exciting work our partners are doing, check out the latest posts from the Open Data Institute and GovLab…(More)”. See also Open Data Policy Lab.