Nagging misconceptions about nudge theory


Cass Sunstein at The Hill: “Nudges are private or public initiatives that steer people in particular directions but that also allow them to go their own way.

A reminder is a nudge; so is a warning. A GPS device nudges; a default rule, automatically enrolling people in some program, is a nudge.

To qualify as a nudge, an initiative must not impose significant economic incentives. A subsidy is not a nudge; a tax is not a nudge; a fine or a jail sentence is not a nudge. To count as such, a nudge must fully preserve freedom of choice.

In 2008, University of Chicago economist Richard Thaler and I co-wrote a book that drew on research in psychology and behavioral economics to help people and institutions, both public and private, improve their decision-making.

In the 10 years since “Nudge” was published, there has been an extraordinary outpouring of new thought and action, with particular reference to public policy.

Behavioral insight teams, or “nudge units” of various sorts, can be found in many nations, including Australia, Canada, Denmark, the United Kingdom, the United States, the Netherlands, Germany, Singapore, Japan and Qatar.

Those teams are delivering. By making government more efficient, and by improving safety and health, they are helping to save a lot of money and a lot of lives. And in many countries, including the U.S., they don’t raise partisan hackles; both Democrats and Republicans have enthusiastically embraced them.   

Still, there are a lot of mistakes and misconceptions out there, and they are diverting attention and hence stalling progress. Here are the three big ones:

1. Nudges do not respect freedom…

2. Nudges are based on excessive trust in government…

3. Nudges cannot achieve a whole lot…(More)”.

Nowcasting the Local Economy: Using Yelp Data to Measure Economic Activity


Paper by Edward L. Glaeser, Hyunjin Kim and Michael Luca: “Can new data sources from online platforms help to measure local economic activity? Government datasets from agencies such as the U.S. Census Bureau provide the standard measures of economic activity at the local level. However, these statistics typically appear only after multi-year lags, and the public-facing versions are aggregated to the county or ZIP code level. In contrast, crowdsourced data from online platforms such as Yelp are often contemporaneous and geographically finer than official government statistics. Glaeser, Kim, and Luca present evidence that Yelp data can complement government surveys by measuring economic activity in close to real time, at a granular level, and at almost any geographic scale. Changes in the number of businesses and restaurants reviewed on Yelp can predict changes in the number of overall establishments and restaurants in County Business Patterns. An algorithm using contemporaneous and lagged Yelp data can explain 29.2 percent of the residual variance after accounting for lagged CBP data, in a testing sample not used to generate the algorithm. The algorithm is more accurate for denser, wealthier, and more educated ZIP codes….(More)”.
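The paper’s headline result (explaining residual variance left over after a lagged-CBP baseline) can be sketched in toy form. The two-stage setup below illustrates the general idea only; it is not the authors’ actual algorithm, and all data here are synthetic:

```python
# Illustrative nowcasting sketch (not the authors' actual model): predict
# changes in establishment counts from lagged official data, then measure
# how much of the remaining (residual) variance a Yelp-based signal
# explains on a held-out sample. All numbers are synthetic.
import random

random.seed(0)

def fit_ols_1d(xs, ys):
    """Slope and intercept of a simple least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    return b, my - b * mx

# Synthetic ZIP-level data: true change = lagged CBP signal + Yelp signal + noise
lagged_cbp = [random.gauss(0, 1) for _ in range(200)]
yelp = [random.gauss(0, 1) for _ in range(200)]
true_change = [0.5 * c + 0.8 * y + random.gauss(0, 0.3)
               for c, y in zip(lagged_cbp, yelp)]

train, test = slice(0, 150), slice(150, 200)

# Step 1: baseline model on lagged CBP only; keep its held-out residuals
b1, a1 = fit_ols_1d(lagged_cbp[train], true_change[train])
resid = [t - (a1 + b1 * c) for t, c in zip(true_change[test], lagged_cbp[test])]

# Step 2: regress the training residuals on the Yelp signal, then see how
# much held-out residual variance the Yelp model removes
resid_train = [t - (a1 + b1 * c)
               for t, c in zip(true_change[train], lagged_cbp[train])]
b2, a2 = fit_ols_1d(yelp[train], resid_train)
resid2 = [r - (a2 + b2 * y) for r, y in zip(resid, yelp[test])]

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

share = 1 - variance(resid2) / variance(resid)
print(f"share of residual variance explained by Yelp: {share:.2f}")
```

The 29.2 percent figure in the paper is the analogous share, computed with the authors’ richer feature set and real CBP data.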

See all papers presented at the NBER Conference on Big Data for 21st Century Economic Statistics here.

Data Pools: Wi-Fi Geolocation Spoofing


AH Projects: “DataPools is a Wi-Fi geolocation spoofing project that virtually relocates your phone to the latitudes and longitudes of Silicon Valley success. It includes a catalog and a SkyLift device with 12 pre-programmed locations. DataPools was produced for the Tropez summer art event in Berlin, in collaboration with Anastasia Kubrak.

DataPools catalog pool index


Weren’t invited to Jeff Bezos’s summer pool party? No problem. DataPools uses the SkyLift device to mimic the Wi-Fi network infrastructure at the homes of 12 top Silicon Valley CEOs, causing your phone to show up, approximately, at their pools. Because Wi-Fi spoofing affects the core geolocation services of iOS and Android smartphones, all apps on the phone, and the metadata they generate, will be located in the spoofed location…

Data Pools is a metaphor for a store of wealth that is private. The luxurious pools and mansions of Silicon Valley are financed by the mechanisms of economic surveillance and ownership of our personal information. Yet, the geographic locations of these premises are often concealed, hidden, and removed from open source databases. What if we could reverse this logic and plunge into the pools of ludicrous wealth, both virtually and physically? Could we apply the same methods of data extraction to highlight the ridiculous inequalities between CEOs and platform users?

Comparison of wealth distribution among top Silicon Valley CEOs


Data

Technically, DataPools uses a Wi-Fi microcontroller programmed with the BSSIDs and SSIDs of the target locations, all obtained from openly published information via web searches and wigle.net. This data is then flashed onto the firmware of the SkyLift device; one SkyLift device contains all 12 pool locations. Over the course of the installation, however, improvements were made, and the updated firmware now uses one main location with multiple sub-locations to cover a larger area. This method proved more effective at spoofing many phones in a large area and is ideal for installations….(More)”.
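For readers curious what “programmed with the BSSIDs and SSIDs of the target locations” might look like in practice, here is a minimal hypothetical sketch of such a location table. The SSIDs and MAC addresses below are invented placeholders, not the project’s real data:

```python
# Hypothetical sketch of the kind of location table a SkyLift-style device
# might carry: each spoofed location is a set of (SSID, BSSID) pairs that
# geolocation databases associate with a real place. All names and MAC
# addresses below are invented for illustration.

LOCATIONS = {
    "pool_01": [  # one spoofed venue, several access points for confidence
        {"ssid": "HomeNet-5G", "bssid": "a4:2b:b0:00:00:01"},
        {"ssid": "PoolHouse", "bssid": "a4:2b:b0:00:00:02"},
    ],
    "pool_02": [
        {"ssid": "GuestWiFi", "bssid": "de:ad:be:ef:00:03"},
    ],
}

def beacon_frames(location):
    """Yield the (ssid, bssid) pairs the device would beacon for a location.

    A phone that hears several known BSSIDs at once will generally trust the
    spoofed position more than if it heard a single one.
    """
    for ap in LOCATIONS[location]:
        yield ap["ssid"], ap["bssid"]

for ssid, bssid in beacon_frames("pool_01"):
    print(f"beaconing SSID={ssid!r} BSSID={bssid}")
```

On the actual device, a table like this would drive the microcontroller’s transmission of 802.11 beacon frames; the sketch stops at the data layer.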

A weather tech startup wants to do forecasts based on cell phone signals


Douglas Heaven at MIT Technology Review: “On 14 April more snow fell on Chicago than had fallen in nearly 40 years. Weather services didn’t see it coming: they had forecast one or two inches at worst. But when the late winter snowstorm came it caused widespread disruption, dumping enough snow that airlines had to cancel more than 700 flights across all of the city’s airports.

One airline did better than most, however. Instead of relying on the usual weather forecasts, it listened to ClimaCell – a Boston-based “weather tech” start-up that claims it can predict the weather more accurately than anyone else. According to the company, its correct forecast of the severity of the coming snowstorm allowed the airline to better manage its schedules and minimize losses due to delays and diversions. 

Founded in 2015, ClimaCell has spent the last few years developing the technology and business relationships that allow it to tap into millions of signals from cell phones and other wireless devices around the world. It uses the quality of these signals as a proxy for local weather conditions, such as precipitation and air quality. It also analyzes images from street cameras. It is offering a weather forecasting service to subscribers that it claims is 60 percent more accurate than that of existing providers, such as NOAA.

The internet of weather

The approach makes sense, in principle. Other forecasters use proxies, such as radar signals. But by using information from millions of everyday wireless devices, ClimaCell claims it has a far more fine-grained view of most of the globe than other forecasters get from the existing network of weather sensors, which range from ground-based devices to satellites. (ClimaCell taps into these, too.)…(More)”.
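ClimaCell has not published its method, but the general idea of treating wireless-signal quality as a weather proxy has a well-studied precedent: rain attenuates microwave links, and ITU-R P.838 tabulates a power-law relation A = k·R^α between specific attenuation and rain rate. Here is a minimal sketch of inverting that relation, with illustrative k and α values rather than ones tuned to any real link:

```python
# A hedged sketch of one published approach to "wireless signals as weather
# sensors" (ClimaCell's actual method is proprietary): rain attenuates
# microwave links roughly as A = k * R**alpha dB/km, where R is rain rate
# in mm/h and k, alpha depend on frequency and polarization (ITU-R P.838
# gives tables). Inverting the relation turns observed excess attenuation
# into a rain-rate estimate.

def rain_rate_mm_per_h(attenuation_db, link_km, k=0.124, alpha=1.061):
    """Invert A = k * R**alpha.

    k and alpha here are illustrative values in the range tabulated for
    links around 23 GHz; real systems look them up per frequency and
    polarization.
    """
    a_specific = attenuation_db / link_km  # dB/km along the link
    if a_specific <= 0:
        return 0.0  # no excess attenuation: dry conditions
    return (a_specific / k) ** (1.0 / alpha)

# Example: 3 dB of excess attenuation over a 2 km link
print(f"{rain_rate_mm_per_h(3.0, 2.0):.1f} mm/h")
```

A network operator with thousands of such links, or a phone fleet reporting signal quality, effectively becomes a dense grid of rain gauges; that density is the claimed advantage over sparse official sensors.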

How Technology Could Revolutionize Refugee Resettlement


Krishnadev Calamur in The Atlantic: “… For nearly 70 years, the process of interviewing, allocating, and accepting refugees has gone largely unchanged. In 1951, 145 countries came together in Geneva, Switzerland, to sign the Refugee Convention, the pact that defines who is a refugee, what refugees’ rights are, and what legal obligations states have to protect them.

This process was born of the idealism of the postwar years—an attempt to make certain that those fleeing war or persecution could find safety so that horrific moments in history, such as the Holocaust, didn’t recur. The pact may have been far from perfect, but in successive years, it was a lifeline to Afghans, Bosnians, Kurds, and others displaced by conflict.

The world is a much different place now, though. The rise of populism has brought with it a concomitant hostility toward immigrants in general and refugees in particular. Last October, a gunman who had previously posted anti-Semitic messages online against HIAS killed 11 worshippers in a Pittsburgh synagogue. Many of the policy arguments over resettlement have shifted focus from humanitarian relief to security threats and cost. The Trump administration has drastically cut the number of refugees the United States accepts, and large parts of Europe are following suit.

If it works, Annie could change that dynamic. Developed at Worcester Polytechnic Institute in Massachusetts, Lund University in Sweden, and the University of Oxford in Britain, the software uses what’s known as a matching algorithm to allocate refugees with no ties to the United States to their new homes. (Refugees with ties to the United States are resettled in places where they have family or community support; software isn’t involved in the process.)

Annie’s algorithm is based on a machine learning model in which a computer is fed huge piles of data from past placements, so that the program can refine its future recommendations. The system examines a series of variables—physical ailments, age, levels of education and languages spoken, for example—related to each refugee case. In other words, the software uses previous outcomes and current constraints to recommend where a refugee is most likely to succeed. Every city where HIAS has an office or an affiliate is given a score for each refugee. The higher the score, the better the match.

This is a drastic departure from how refugees are typically resettled. Each week, HIAS and the eight other agencies that allocate refugees in the United States make their decisions based largely on local capacity, with limited emphasis on individual characteristics or needs….(More)”.
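The scoring-and-matching idea described above can be sketched with a toy example. This is not Annie’s actual model; the scores and cities below are invented, and a real deployment would add per-city capacity constraints:

```python
# Illustrative sketch of the matching idea behind a tool like Annie (not
# its actual model): each refugee case gets a predicted "success score"
# per city, and an assignment is chosen to maximize the total score.
# Scores and city names are invented.
from itertools import permutations

cities = ["City A", "City B", "City C"]
# rows = cases, columns = cities; higher = better predicted outcome
scores = [
    [0.9, 0.4, 0.3],
    [0.5, 0.8, 0.2],
    [0.6, 0.7, 0.9],
]

def best_assignment(scores):
    """Brute-force the one-to-one assignment maximizing total score.

    Real systems would use the Hungarian algorithm (or similar) and
    respect capacity limits per city; brute force is fine for a toy.
    """
    n = len(scores)
    return max(permutations(range(n)),
               key=lambda perm: sum(scores[i][perm[i]] for i in range(n)))

perm = best_assignment(scores)
for case, city in enumerate(perm):
    print(f"case {case} -> {cities[city]} (score {scores[case][city]:.1f})")
```

Note how the greedy-looking answer and the optimal one coincide here; with tighter capacities or correlated scores they often would not, which is exactly why an explicit optimization beats the week-by-week capacity-driven allocation described above.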

How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate


Paper by Anne L. Washington: “The United States optimizes the efficiency of its growing criminal justice system with algorithms; however, legal scholars have overlooked how to frame courtroom debates about algorithmic predictions. In State v Loomis, the defense argued that the court’s consideration of risk assessments during sentencing was a violation of due process because the accuracy of the algorithmic prediction could not be verified. The Wisconsin Supreme Court upheld the consideration of predictive risk at sentencing because the assessment was disclosed and the defendant could challenge the prediction by verifying the accuracy of the data fed into the algorithm.

Was the court correct about how to argue with an algorithm?

The Loomis court ignored the computational procedures that processed the data within the algorithm. How algorithms calculate data is equally as important as the quality of the data calculated. The arguments in Loomis revealed a need for new forms of reasoning to justify the logic of evidence-based tools. A “data science reasoning” could provide ways to dispute the integrity of predictive algorithms with arguments grounded in how the technology works.

This article’s contribution is a series of arguments that could support due process claims concerning predictive algorithms, specifically the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) risk assessment. As a comprehensive treatment, this article outlines the due process arguments in Loomis, analyzes arguments in an ongoing academic debate about COMPAS, and proposes alternative arguments based on the algorithm’s organizational context….(More)”
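The article’s core claim, that verifying the input data is not enough because the computation itself matters, can be made concrete with a toy example: two models fed identical, accurate data that nonetheless produce different risk scores. The features and weights below are invented for illustration:

```python
# A toy illustration of the paper's point that auditing input data is not
# enough: two models fed identical, accurate data can produce different
# risk scores, so due-process review also needs to reach the computation
# itself. All features and weights are invented.

defendant = {"age": 24, "priors": 2}  # the (verified) input data

def model_a(d):
    # weights priors lightly, youth heavily
    return 0.03 * d["priors"] + 0.02 * max(0, 30 - d["age"])

def model_b(d):
    # same inputs, different (equally opaque) weighting
    return 0.10 * d["priors"] + 0.005 * max(0, 30 - d["age"])

print(f"model A risk: {model_a(defendant):.3f}")
print(f"model B risk: {model_b(defendant):.3f}")
```

Both models “used accurate data,” which is all the Loomis court let the defendant verify; the divergent scores come entirely from the computational procedure that the court declined to examine.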

Trust, Control, and the Economics of Governance


Book by Philipp Herold: “In today’s world, we cooperate across legal and cultural systems in order to create value. However, this increases volatility, uncertainty, complexity, and ambiguity as challenges for societies, politics, and business. This has made governance a scarce resource. It is thus imperative that we understand the means of governance available to us and are able to economize on them. Trends like the increasing role of product labels and a certification industry, as well as political movements towards nationalism and conservatism, may be seen as reactions to disappointments from excessive cooperation. To avoid failures of cooperation, governance is important; yet control through, e.g., contracts is limited, and in governance economics trust is widely advertised without much guidance on its preconditions or limits.

This book draws on the rich insights from research on trust and control, and accommodates the key results for governance considerations in an institutional economics framework. It provides a view on the limits of cooperation from the required degree of governance, which can be achieved through extrinsic motivation or by building on intrinsic motivation. Trust Control Economics thus informs a more realistic expectation about the net value added from cooperation by providing a balanced view that includes the cost of governance. It then becomes clear how complex cooperation involves ‘governance accretion’, where limited trustworthiness is substituted by control and these control instances need to be governed in turn.

Trust, Control, and the Economics of Governance is a highly necessary development of institutional economics to reflect progress made in trust research and is a relevant addition for practitioners to better understand the role of trust in the governance of contemporary cooperation-structures. It will be of interest to researchers, academics, and students in the fields of economics and business management, institutional economics, and business ethics….(More)”.

Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System


Press release: “The Partnership on AI (PAI) has today published a report gathering the views of the multidisciplinary artificial intelligence and machine learning research and ethics community, which documents the serious shortcomings of algorithmic risk assessment tools in the U.S. criminal justice system. These kinds of AI tools for deciding whether to detain or release defendants are in widespread use around the United States, and some legislatures have begun to mandate their use. Lessons drawn from the U.S. context have widespread applicability in other jurisdictions, too, as the international policymaking community considers the deployment of similar tools.

While criminal justice risk assessment tools are often simpler than the deep neural networks used in many modern artificial intelligence systems, they are basic forms of AI. As such, they present a paradigmatic example of the high-stakes social and ethical consequences of automated AI decision-making….

Across the report, challenges to using these tools fell broadly into three primary categories:

  1. Concerns about the accuracy, bias, and validity in the tools themselves
    • Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, this report suggests that it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data.
  2. Issues with the interface between the tools and the humans who interact with them
    • In addition to technical concerns, these tools must be held to high standards of interpretability and explainability to ensure that users (including judges, lawyers, and clerks, among others) can understand how the tools’ predictions are reached and make reasonable decisions based on these predictions.
  3. Questions of governance, transparency, and accountability
    • To the extent that such systems are adapted to make life-changing decisions, tools and decision-makers who specify, mandate, and deploy them must meet high standards of transparency and accountability.

This report highlights some of the key challenges with the use of risk assessment tools for criminal justice applications. It also raises some deep philosophical and procedural issues which may not be easy to resolve. Surfacing and addressing those concerns will require ongoing research and collaboration between policymakers, the AI research community, civil society groups, and affected communities, as well as new types of data collection and transparency. It is PAI’s mission to spur and facilitate these conversations and to produce research to bridge such gaps….(More)”

LAPD moving away from data-driven crime programs over potential racial bias


Mark Puente in The Los Angeles Times: “The Los Angeles Police Department pioneered the controversial use of data to pinpoint crime hot spots and track violent offenders.

Complex algorithms and vast databases were supposed to revolutionize crime fighting, making policing more efficient as number-crunching computers helped to position scarce resources.

But critics long complained about inherent bias in the data — gathered by officers — that underpinned the tools.

They claimed a partial victory when LAPD Chief Michel Moore announced he would end one highly touted program intended to identify and monitor violent criminals. On Tuesday, the department’s civilian oversight panel raised questions about whether another program, aimed at reducing property crime, also disproportionately targets black and Latino communities.

Members of the Police Commission demanded more information about how the agency plans to overhaul a data program that helps predict where and when crimes will likely occur. One questioned why the program couldn’t be suspended.

“There is very limited information” on the program’s impact, Commissioner Shane Murphy Goldsmith said.

The action came as so-called predictive policing — using search tools, point scores and other methods — is under increasing scrutiny by privacy and civil liberties groups that say the tactics result in heavier policing of black and Latino communities. The argument was underscored at Tuesday’s commission meeting when several UCLA academics cast doubt on the research behind crime modeling and predictive policing….(More)”.

Introducing the Contractual Wheel of Data Collaboration


Blog by Andrew Young and Stefaan Verhulst: “Earlier this year we launched the Contracts for Data Collaboration (C4DC) initiative — an open collaborative with charter members from The GovLab, UN SDSN Thematic Research Network on Data and Statistics (TReNDS), University of Washington and the World Economic Forum. C4DC seeks to address the inefficiencies of developing contractual agreements for public-private data collaboration by informing and guiding those seeking to establish a data collaborative by developing and making available a shared repository of relevant contractual clauses taken from existing legal agreements. Today TReNDS published “Partnerships Founded on Trust,” a brief capturing some initial findings from the C4DC initiative.

The Contractual Wheel of Data Collaboration [beta] — Stefaan G. Verhulst and Andrew Young, The GovLab

As part of the C4DC effort, and to support Data Stewards in the private sector and decision-makers in the public and civil sectors seeking to establish Data Collaboratives, The GovLab developed the Contractual Wheel of Data Collaboration [beta]. The Wheel seeks to capture key elements involved in data collaboration while demystifying contracts and moving beyond the type of legalese that can create confusion and barriers to experimentation.

The Wheel was developed based on an assessment of existing legal agreements, engagement with The GovLab-facilitated Data Stewards Network, and analysis of the key elements of our Data Collaboratives Methodology. It features 22 legal considerations organized across 6 operational categories that can act as a checklist for the development of a legal agreement between parties participating in a Data Collaborative:…(More)”.
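One way to picture the Wheel’s checklist role is as a simple coverage check over a draft agreement. The category and consideration names below are invented placeholders, not the Wheel’s actual 6 categories or 22 considerations:

```python
# A minimal sketch of how the Wheel's checklist idea could be
# operationalized: categories mapping to legal considerations, with a
# completeness check for a draft agreement. All category and consideration
# names here are invented placeholders, not the Wheel's real contents.

WHEEL = {
    "data": ["scope of data shared", "permitted uses"],
    "governance": ["decision rights", "dispute resolution"],
    "risk": ["liability", "termination"],
}

def missing_clauses(agreement_clauses):
    """Return the checklist items a draft agreement does not yet cover."""
    covered = set(agreement_clauses)
    return [item for items in WHEEL.values() for item in items
            if item not in covered]

draft = ["scope of data shared", "liability"]
for item in missing_clauses(draft):
    print("missing:", item)
```

In practice the checklist is applied by the lawyers drafting the agreement rather than by code, but the structure is the same: every consideration in every category either maps to a clause or is flagged as a gap.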