The Voluntariness of Voluntary Consent: Consent Searches and the Psychology of Compliance


Paper by Roseanna Sommers and Vanessa K. Bohns: “Consent-based searches are by far the most ubiquitous form of search undertaken by police. A key legal inquiry in these cases is whether consent was granted voluntarily. This Essay suggests that fact finders’ assessments of voluntariness are likely to be impaired by a systematic bias in social perception. Fact finders are likely to underappreciate the degree to which suspects feel pressure to comply with police officers’ requests to perform searches.

In two preregistered laboratory studies, we approached a total of 209 participants (“Experiencers”) with a highly intrusive request: to unlock their password-protected smartphones and hand them over to an experimenter to search through while they waited in another room. A separate 194 participants (“Forecasters”) were brought into the lab and asked whether a reasonable person would agree to the same request if hypothetically approached by the same researcher. Both groups then reported how free they felt, or would feel, to refuse the request.

Study 1 found that whereas most Forecasters believed a reasonable person would refuse the experimenter’s request, most Experiencers—100 out of 103 people—promptly unlocked their phones and handed them over. Moreover, Experiencers reported feeling significantly less free to refuse than did Forecasters contemplating the same situation hypothetically.

Study 2 tested an intervention modeled after a commonly proposed reform of consent searches, in which the experimenter explicitly advises participants that they have the right to withhold consent. We found that this advisory did not significantly reduce compliance rates or make Experiencers feel more free to say no. At the same time, the gap between Experiencers and Forecasters remained significant.

These findings suggest that decision makers judging the voluntariness of consent consistently underestimate the pressure to comply with intrusive requests. This is problematic because it indicates that a key justification for suspicionless consent searches—that they are voluntary—relies on an assessment that is subject to bias. The results thus provide support to critics who would like to see consent searches banned or curtailed, as they have been in several states.

The results also suggest that a popular reform proposal—requiring police to advise citizens of their right to refuse consent—may have little effect. This corroborates previous observational studies, which find negligible effects of Miranda warnings on confession rates among interrogees, and little change in rates of consent once police start notifying motorists of their right to refuse vehicle searches. We suggest that these warnings are ineffective because they fail to address the psychology of compliance. The reason people comply with police, we contend, is social, not informational. The social demands of police-citizen interactions persist even when people are informed of their rights. It is time to abandon the myth that notifying people of their rights makes them feel empowered to exercise those rights…(More)”.

Habeas Data: Privacy vs. The Rise of Surveillance Tech


Book by Cyrus Farivar: “Habeas Data shows how the explosive growth of surveillance technology has outpaced our understanding of the ethics, mores, and laws of privacy.

Award-winning tech reporter Cyrus Farivar makes the case by taking ten historic court decisions that defined our privacy rights and matching them against the capabilities of modern technology. It’s an approach that combines the charge of a legal thriller with the shock of the daily headlines.

Chapters include: the 1960s prosecution of a bookie that established the “reasonable expectation of privacy” in nonpublic places beyond your home (but how does that ruling apply now, when police can chart your every move and hear your every conversation within your own home — without even having to enter it?); the 1970s case where the police monitored a lewd caller — a decision that is now the linchpin of the NSA’s controversial metadata tracking program revealed by Edward Snowden; and a 2010 low-level burglary trial that revealed police had tracked a defendant’s past 12,898 locations before arrest — an invasion of privacy grossly out of proportion to the alleged crime, which showed how authorities are all too willing to take advantage of the ludicrous gap between the slow pace of legal reform and the rapid transformation of technology.

A dazzling exposé that journeys from Oakland, California to the halls of the Supreme Court to the back of a squad car, Habeas Data combines deft reportage, deep research, and original interviews to offer an X-ray diagnostic of our current surveillance state….(More)”.

How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate


Paper by Anne L. Washington: “The United States optimizes the efficiency of its growing criminal justice system with algorithms; however, legal scholars have overlooked how to frame courtroom debates about algorithmic predictions. In State v Loomis, the defense argued that the court’s consideration of risk assessments during sentencing was a violation of due process because the accuracy of the algorithmic prediction could not be verified. The Wisconsin Supreme Court upheld the consideration of predictive risk at sentencing because the assessment was disclosed and the defendant could challenge the prediction by verifying the accuracy of data fed into the algorithm.

Was the court correct about how to argue with an algorithm?

The Loomis court ignored the computational procedures that processed the data within the algorithm. How algorithms calculate data is as important as the quality of the data calculated. The arguments in Loomis revealed a need for new forms of reasoning to justify the logic of evidence-based tools. A “data science reasoning” could provide ways to dispute the integrity of predictive algorithms with arguments grounded in how the technology works.

This article’s contribution is a series of arguments that could support due process claims concerning predictive algorithms, specifically the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) risk assessment. As a comprehensive treatment, this article outlines the due process arguments in Loomis, analyzes arguments in an ongoing academic debate about COMPAS, and proposes alternative arguments based on the algorithm’s organizational context….(More)”
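As a concrete illustration of what such “data science reasoning” can look like in practice, the sketch below mirrors one line of argument from the academic debate over COMPAS: comparing a risk tool’s error rates across groups. The column names, risk threshold, and toy data are hypothetical placeholders rather than the actual COMPAS data or any published analysis code.

```python
# Illustrative sketch: auditing a risk score's error rates by group.
# Column names, the threshold, and the toy data are assumptions made
# for illustration; they are not the COMPAS data or model.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, threshold: int = 5) -> pd.DataFrame:
    """Expects columns 'group', 'risk_score' (1-10), and 'reoffended' (0/1)."""
    df = df.assign(predicted_high_risk=df["risk_score"] >= threshold)
    rows = []
    for group, g in df.groupby("group"):
        negatives = (g["reoffended"] == 0).sum()
        positives = (g["reoffended"] == 1).sum()
        false_pos = (g["predicted_high_risk"] & (g["reoffended"] == 0)).sum()
        false_neg = (~g["predicted_high_risk"] & (g["reoffended"] == 1)).sum()
        rows.append({
            "group": group,
            # Share of non-reoffenders who were nonetheless labeled high risk.
            "false_positive_rate": false_pos / negatives if negatives else float("nan"),
            # Share of reoffenders who were labeled low risk.
            "false_negative_rate": false_neg / positives if positives else float("nan"),
        })
    return pd.DataFrame(rows)

# Toy example: a gap in false positive rates between groups is the kind of
# behavior the debate turns on, even when overall accuracy looks acceptable.
toy = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "risk_score": [7,   6,   3,   8,   2,   3,   6,   1],
    "reoffended": [0,   0,   0,   1,   0,   0,   1,   0],
})
print(error_rates_by_group(toy))
```

No single metric settles the dispute (the COMPAS vendor and its critics emphasized different fairness measures), but framing objections in terms of how the tool actually behaves is precisely the kind of argument the article calls for.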

Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System


Press release: “The Partnership on AI (PAI) has today published a report gathering the views of the multidisciplinary artificial intelligence and machine learning research and ethics community, documenting the serious shortcomings of algorithmic risk assessment tools in the U.S. criminal justice system. These kinds of AI tools for deciding whether to detain or release defendants are in widespread use around the United States, and some legislatures have begun to mandate their use. Lessons drawn from the U.S. context have widespread applicability in other jurisdictions, too, as the international policymaking community considers the deployment of similar tools.

While criminal justice risk assessment tools are often simpler than the deep neural networks used in many modern artificial intelligence systems, they are basic forms of AI. As such, they present a paradigmatic example of the high-stakes social and ethical consequences of automated AI decision-making….

Across the report, challenges to using these tools fell broadly into three primary categories:

  1. Concerns about the accuracy, bias, and validity in the tools themselves
    • Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, this report suggests that it is a serious misunderstanding to view tools as objective or neutral simply because they are based on data.
  2. Issues with the interface between the tools and the humans who interact with them
    • In addition to technical concerns, these tools must be held to high standards of interpretability and explainability to ensure that users (including judges, lawyers, and clerks, among others) can understand how the tools’ predictions are reached and make reasonable decisions based on these predictions.
  3. Questions of governance, transparency, and accountability
    • To the extent that such systems are adapted to make life-changing decisions, tools and decision-makers who specify, mandate, and deploy them must meet high standards of transparency and accountability.

This report highlights some of the key challenges with the use of risk assessment tools for criminal justice applications. It also raises some deep philosophical and procedural issues which may not be easy to resolve. Surfacing and addressing those concerns will require ongoing research and collaboration between policymakers, the AI research community, civil society groups, and affected communities, as well as new types of data collection and transparency. It is PAI’s mission to spur and facilitate these conversations and to produce research to bridge such gaps….(More)”

LAPD moving away from data-driven crime programs over potential racial bias


Mark Puente in The Los Angeles Times: “The Los Angeles Police Department pioneered the controversial use of data to pinpoint crime hot spots and track violent offenders.

Complex algorithms and vast databases were supposed to revolutionize crime fighting, making policing more efficient as number-crunching computers helped to position scarce resources.

But critics long complained about inherent bias in the data — gathered by officers — that underpinned the tools.

They claimed a partial victory when LAPD Chief Michel Moore announced he would end one highly touted program intended to identify and monitor violent criminals. On Tuesday, the department’s civilian oversight panel raised questions about whether another program, aimed at reducing property crime, also disproportionately targets black and Latino communities.

Members of the Police Commission demanded more information about how the agency plans to overhaul a data program that helps predict where and when crimes will likely occur. One questioned why the program couldn’t be suspended.

“There is very limited information” on the program’s impact, Commissioner Shane Murphy Goldsmith said.

The action came as so-called predictive policing — using search tools, point scores and other methods — is under increasing scrutiny by privacy and civil liberties groups that say the tactics result in heavier policing of black and Latino communities. The argument was underscored at Tuesday’s commission meeting when several UCLA academics cast doubt on the research behind crime modeling and predictive policing….(More)”.

Tracking Phones, Google Is a Dragnet for the Police


Jennifer Valentino-DeVries at the New York Times: “….The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement. In an era of ubiquitous data gathering by tech companies, it is just the latest example of how personal information — where you go, who your friends are, what you read, eat and watch, and when you do it — is being used for purposes many people never expected. As privacy concerns have mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.

The Arizona case demonstrates the promise and perils of the new investigative technique, whose use has risen sharply in the past six months, according to Google employees familiar with the requests. It can help solve crimes. But it can also snare innocent people.

Technology companies have for years responded to court orders for specific users’ information. The new warrants go further, suggesting possible suspects and witnesses in the absence of other clues. Often, Google employees said, the company responds to a single warrant with location information on dozens or hundreds of devices.

Law enforcement officials described the method as exciting, but cautioned that it was just one tool….

The technique illustrates a phenomenon privacy advocates have long referred to as the “if you build it, they will come” principle — anytime a technology company creates a system that could be used in surveillance, law enforcement inevitably comes knocking. Sensorvault, according to Google employees, includes detailed location records involving at least hundreds of millions of devices worldwide and dating back nearly a decade….(More)”.
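Nothing public describes Sensorvault’s internals, so the sketch below is only a generic illustration of how a geofence-style query works: given a bounding box around a scene and a time window, return every device with at least one recorded location inside both. The record format and function are invented for this illustration; the point is simply why one warrant can sweep in dozens or hundreds of devices, including bystanders.

```python
# Hypothetical sketch of a geofence-style location query. This is not
# Google's Sensorvault schema; the fields and logic are invented to show
# why a single warrant can return many devices, including bystanders.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Set

@dataclass
class LocationRecord:
    device_id: str        # pseudonymous identifier, later "unmasked" if deemed relevant
    timestamp: datetime
    lat: float
    lon: float

def devices_in_geofence(records: Iterable[LocationRecord],
                        lat_min: float, lat_max: float,
                        lon_min: float, lon_max: float,
                        start: datetime, end: datetime) -> Set[str]:
    """Return every device with at least one point inside the box and time window."""
    hits = set()
    for r in records:
        in_window = start <= r.timestamp <= end
        in_box = lat_min <= r.lat <= lat_max and lon_min <= r.lon <= lon_max
        if in_window and in_box:
            hits.add(r.device_id)
    return hits
```

Anyone who merely walked or drove through the box during the window is returned alongside any actual suspect, which is the mechanism behind the article’s warning that the technique can also snare innocent people.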

Open Justice: Public Entrepreneurs Learn to Use New Technology to Increase the Efficiency, Legitimacy, and Effectiveness of the Judiciary


The GovLab: “Open justice is a growing movement to leverage new technologies – including big data, digital platforms, blockchain and more – to improve legal systems by making the workings of courts easier to understand, scrutinize and improve. Through the use of new technology, open justice innovators are enabling greater efficiency, fairness, accountability and a reduction in corruption in the third branch of government. For example, the open data portal ‘Atviras Teismas’ Lithuania (translated ‘open court’ Lithuania) is a platform for monitoring courts and judges through performance metrics. This portal serves to make the courts of Lithuania transparent and benefits both courts and citizens by presenting comparative data on the Lithuanian Judiciary.

To promote more Open Justice projects, the GovLab, in partnership with the Electoral Tribunal of the Federal Judiciary (TEPJF) of Mexico, launched a historic, first-of-its-kind online course on Open Justice. Designed primarily for lawyers, judges, and public officials – but also intended to appeal to technologists and members of the public – the Spanish-language course consists of 10 modules.

Each of the ten modules comprises:

  1. A short video-based lecture
  2. An original Open Justice reader
  3. Associated additional readings
  4. A self-assessment quiz
  5. A demonstration of a platform or tool
  6. An interview with a global practitioner

Among those featured in the interviews are Felipe Moreno of Jusbrasil, Justin Erlich of OpenJustice California, Liam Hayes of Aurecon, UK, Steve Ghiassi of Legaler, Australia, and Sara Castillo of Poder Judicial, Chile….(More)”.

The Automated Administrative State


Paper by Danielle Citron and Ryan Calo: “The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid to cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process”— understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems like the “no-fly” list were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice” in the form of efficiency with a “human face” feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusive promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts…(More)”.

Social capital predicts corruption risk in towns


Paper by Johannes Wachs, Taha Yasseri, Balázs Lengyel and János Kertész: “Corruption is a social plague: gains accrue to small groups, while its costs are borne by everyone. Significant variation in its level between and within countries suggests a relationship between social structure and the prevalence of corruption, yet large-scale empirical studies thereof have been missing due to a lack of data. In this paper, we relate the structural characteristics of social capital of settlements with corruption in their local governments. Using datasets from Hungary, we quantify corruption risk by suppressed competition and lack of transparency in the settlement’s awarded public contracts. We characterize social capital using social network data from a popular online platform. Controlling for social, economic and political factors, we find that settlements with fragmented social networks, indicating an excess of bonding social capital, have higher corruption risk, and that settlements with more diverse external connectivity, suggesting a surplus of bridging social capital, are less exposed to corruption. We interpret fragmentation as fostering in-group favouritism and conformity, which increase corruption, while diversity facilitates impartiality in public life and stifles corruption….(More)”.
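The sketch below is not the authors’ methodology; it only illustrates, with standard network measures, what “fragmentation” of a settlement’s internal network and “diversity” of its external ties can mean in practice. The function names and the use of modularity and entropy as proxies are assumptions made for illustration.

```python
# Illustrative proxies for the abstract's two structural ideas, using
# standard network measures. These are stand-ins, not the paper's metrics.
import math
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def fragmentation(internal: nx.Graph) -> float:
    """Modularity of a greedy community partition of a settlement's internal
    network: high values indicate tight cliques with few ties between them,
    a rough proxy for an excess of bonding social capital."""
    communities = greedy_modularity_communities(internal)
    return modularity(internal, communities)

def external_diversity(ties_to_other_settlements: dict) -> float:
    """Shannon entropy over where a settlement's outside ties point: high
    values mean external connections are spread across many other places,
    a rough proxy for a surplus of bridging social capital."""
    total = sum(ties_to_other_settlements.values())
    if total == 0:
        return 0.0
    return -sum((n / total) * math.log(n / total)
                for n in ties_to_other_settlements.values() if n > 0)
```

Under the paper’s finding, a settlement scoring high on the first measure and low on the second would be expected to show more corruption risk in its awarded public contracts.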

How the NYPD is using machine learning to spot crime patterns


Colin Wood at StateScoop: “Civilian analysts and officers within the New York City Police Department are using a unique computational tool that spots patterns in crime data and is helping close cases.

A collection of machine-learning models, which the department calls Patternizr, was first deployed in December 2016, but the department only revealed the system last month when its developers published a research paper in the INFORMS Journal on Applied Analytics. Drawing on 10 years of historical data about burglary, robbery and grand larceny, the tool is the first of its kind to be used by law enforcement, the developers wrote.

The NYPD hired 100 civilian analysts in 2017 to use Patternizr. It’s also available to all officers through the department’s Domain Awareness System, a citywide network of sensors, databases, devices, software and other technical infrastructure. Researchers told StateScoop the tool has generated leads on several cases that traditionally would have stretched officers’ memories and traditional evidence-gathering abilities.

Connecting similar crimes into patterns is a crucial part of gathering evidence and eventually closing in on an arrest, said Evan Levine, the NYPD’s assistant commissioner of data analytics and one of Patternizr’s developers. Taken independently, each crime in a string of crimes may not yield enough evidence to identify a perpetrator, but the work of finding patterns is slow and each officer only has a limited amount of working knowledge surrounding an incident, he said.

“The goal here is to alleviate all that kind of busywork you might have to do to find hits on a pattern,” said Alex Chohlas-Wood, a Patternizr researcher and deputy director of the Computational Policy Lab at Stanford University.

The knowledge of individual officers is limited in scope by dint of the NYPD’s organizational structure. The department divides New York into 77 precincts, and a person who commits crimes across precincts, which often have arbitrary boundaries, is often more difficult to catch because individual beat officers are typically focused on a single neighborhood.

There’s also a lot of data to sift through. In 2016 alone, about 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies were reported across the five boroughs.

Levine said that last month, police used Patternizr to spot a pattern of three knife-point robberies around a Bronx subway station. It would have taken police much longer to connect those crimes manually, Levine said.

The software works by an analyst feeding it a “seed” case, which is then compared against a database of hundreds of thousands of crime records that Patternizr has already processed. The tool generates a “similarity score” and returns a rank-ordered list and a map. Analysts can read a few details of each complaint before examining the seed complaint and similar complaints in a detailed side-by-side view or filtering results….(More)”.
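The published Patternizr models are trained classifiers built on far richer features, so the following is only a simplified stand-in that mirrors the workflow described above: take a seed complaint, score it against historical complaints, and return a rank-ordered list for an analyst to review. The features, weights, and distance handling are invented for illustration.

```python
# Simplified, hypothetical stand-in for a Patternizr-style workflow: score a
# "seed" complaint against historical complaints and return a ranked list.
# The features and weights are illustrative, not the NYPD's actual model.
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Complaint:
    complaint_id: str
    crime_type: str     # e.g. "robbery"
    weapon: str         # e.g. "knife"
    premise: str        # e.g. "subway station"
    lat: float
    lon: float

def similarity(seed: Complaint, other: Complaint) -> float:
    """Blend matches on categorical attributes with a distance term."""
    score = 0.0
    score += 1.0 if seed.crime_type == other.crime_type else 0.0
    score += 0.5 if seed.weapon == other.weapon else 0.0
    score += 0.5 if seed.premise == other.premise else 0.0
    # Roughly convert degrees to kilometers (~111 km per degree of latitude)
    # and let the distance contribution decay to zero beyond about 5 km.
    km = math.dist((seed.lat, seed.lon), (other.lat, other.lon)) * 111.0
    score += max(0.0, 1.0 - km / 5.0)
    return score

def rank_candidates(seed: Complaint, history: List[Complaint],
                    top_n: int = 10) -> List[Tuple[float, Complaint]]:
    """Return the top_n most similar past complaints, highest score first."""
    scored = [(similarity(seed, c), c)
              for c in history if c.complaint_id != seed.complaint_id]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_n]
```

In the real system the similarity model is learned from historical pattern data rather than hand-weighted, and, as the article notes, analysts still review the ranked complaints side by side before any cases are formally connected.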