Letters and cards telling people about local police reduce crime


Article by Elicia John & Shawn D. Bushway: “Community policing is often held up as an instrumental part of reforms to make policing less harmful, particularly in low-income communities that have high rates of violence. But building collaborative relationships between communities and police is hard. Writing in Nature, Shah and LaForest describe a large field experiment revealing that giving residents cards and letters with basic information about local police officers can prevent crime. Combining these results with those from Internet-based experiments, the authors attribute the observed reduction in crime to perceived ‘information symmetry’.

Known strangers are individuals whom we’ve never met but still know something about, such as celebrities. We tend to assume, erroneously, that known strangers know as much about us as we do about them. This tendency to see information symmetry when there is none is referred to as a social heuristic — a shortcut in our mental processing…

Collaborating with the New York Police Department, the authors sent letters and cards to residents of 39 public-housing developments, providing information about the developments’ local community police officers, called neighbourhood coordination officers. These flyers included personal details, such as the officers’ favourite food, sports team or superhero. Thirty control developments had neighbourhood coordination officers, but did not receive flyers….

This field experiment provided convincing evidence that a simple intervention can reduce crime. Indeed, in the three months after the intervention, the researchers observed a 5–7% drop in crime in the developments that received the information compared with neighbourhoods that did not. This level of reduction is similar to that of more-aggressive policing policies. The drop in crime lessened after three months, which the authors suggest is due to the light touch and limited duration of the intervention. Interventions designed to keep officers’ information at the top of residents’ minds (such as flyers sent over a longer period at a greater frequency) might therefore result in longer-term effects.

The authors attribute the reduction in crime to a heightened perception among residents receiving flyers that the officer would find out if they committed a crime. Such findings are potentially exciting, because the work implies that a police officer who is perceived as a real person can prevent crime without tactics such as the New York City Police Department’s ‘stop, question and frisk’ policy, which tended to create animosity between community members and the police….(More)”

The Immaculate Conception of Data: Agribusiness, Activists, and Their Shared Politics of the Future


Book by Kelly Bronson: “Every new tractor now contains built-in sensors that collect data and stream it to cloud-based infrastructure. Seed and chemical companies are using these data, and these agribusinesses are a form of big tech alongside firms like Google and Facebook.

The Immaculate Conception of Data peeks behind the secretive legal agreements surrounding agricultural big data to trace how it is used and with what consequences. Agribusinesses are among the oldest oligopoly corporations in the world, and their concentration gives them an advantage over other food system actors. Kelly Bronson explores what happens when big data get caught up in pre-existing arrangements of power. Her richly ethnographic account details the work of corporate scientists, farmers using the data, and activist “hackers” building open-source data platforms. Actors working in private and public contexts have divergent views on whom new technology is for, how it should be developed, and what kinds of agriculture it should support. Surprisingly, despite their differences, these groups share a way of speaking about data and its value for the future. Bronson calls this the immaculate conception of data, arguing that this phenomenon is a dangerous framework for imagining big data and what it might do for society.

Drawing our attention to agriculture as an important new site for big tech criticism, The Immaculate Conception of Data uniquely bridges science and technology studies, critical data studies, and food studies, bringing to light salient issues related to data justice and a sustainable food system…(More)”.

Where Do My Tax Dollars Go? Tax Morale Effects of Perceived Government Spending


Paper by Matias Giaccobasso, Brad C. Nathan, Ricardo Perez-Truglia & Alejandro Zentner: “Do perceptions about how the government spends tax dollars affect the willingness to pay taxes? We designed a field experiment to test this hypothesis in a natural, high-stakes context and via revealed preferences. We measure perceptions about the share of property tax revenues that fund public schools and the share of property taxes that are redistributed to disadvantaged districts. We find that even though information on where tax dollars go is publicly available and easily accessible, taxpayers still have significant misperceptions. We use an information-provision experiment to induce exogenous shocks to these perceptions. Using administrative data on tax appeals, we measure the causal effect of perceived government spending on the willingness to pay taxes. We find that some perceptions about government spending have a significant effect on the probability of filing a tax appeal, in a manner that is consistent with the classical theory of benefit-based taxation. We discuss implications for researchers and policy makers…(More)”.

NIH issues a seismic mandate: share data publicly


Max Kozlov at Nature: “In January 2023, the US National Institutes of Health (NIH) will begin requiring most of the 300,000 researchers and 2,500 institutions it funds annually to include a data-management plan in their grant applications — and to eventually make their data publicly available.

Researchers who spoke to Nature largely applaud the open-science principles underlying the policy — and the global example it sets. But some have concerns about the logistical challenges that researchers and their institutions will face in complying with it. Namely, they worry that the policy might exacerbate existing inequities in the science-funding landscape and could be a burden for early-career scientists, who do the lion’s share of data collection and are already stretched thin.

Because the vast majority of laboratories and institutions don’t have data managers who organize and curate data, the policy — although well-intentioned — will probably put a heavy burden on trainees and early-career principal investigators, says Lynda Coughlan, a vaccinologist at the University of Maryland School of Medicine in Baltimore, who has been leading a research team for fewer than two years and is worried about what the policy will mean for her.

Jorgenson says that, although the policy might require researchers to spend extra time organizing their data, it’s an essential part of conducting research, and the potential long-term boost in public trust for science will justify the extra effort…(More)”.

Turning the Principle of Participation into Practice: Empowering Parents to Engage on Data and Tech


Guest Blog by Elizabeth Laird at Responsible Data for Children: “Two years into the pandemic, questions about parental rights in school have taken center stage in public debates, particularly in school board meetings and state houses across the United States. Not surprisingly, this extends to the use of data and technology in schools.

CDT recently released research that found that parental concerns around student privacy and security protection have risen since the spring, growing from 60% in February 2021 to 69% in July 2021. We also found that, far from being ambivalent, parents and students are eager to play a role in decisions about technology and data, but these desires are going unmet. Most parents and students want to be consulted, yet few have been asked for input: 93% of surveyed parents feel that schools should engage them regarding how student data is collected and used, but only 44% say their school has asked for their input on these issues.

While much of this debate has focused on the United States and similar countries, these issues have global resonance as all families have a stake in how their children are educated. Engaging students and families has always been an important component of primary and secondary education, from involving parents in their children’s individual experiences to systemic decision-making; however, there is significant room for improvement, especially as it relates to the use of education data and technology. Done well, community engagement (aligned with the Participatory principle in the Responsible Data for Children (RD4C) initiative) is a two-way, mutually beneficial partnership between public agencies and community members in which questions and concerns are identified, discussed, and decided jointly. It benefits public agencies by building trust, helping them achieve their mission, and minimizing risks, including community pushback. It helps communities by assisting agencies to better meet community needs and increasing transparency and accountability.

To assist education practitioners in improving their community engagement efforts, CDT recently released guidance that focuses on four important steps…(More)”.

End the State Monopoly on Facts


Essay by Adam J. White: “…This Covid-era dynamic has accelerated broader trends toward the consolidation of informational power among a few centralized authorities. And it has further deformed the loose set of institutions and norms that Jonathan Rauch, in a 2018 National Affairs article, identified as Western civilization’s “constitution of knowledge.” This is an arrangement in science, journalism, and the courts in which “any hypothesis can be floated” but “can join reality only insofar as it persuades people after withstanding vigorous questioning and criticism.” The more that Americans delegate the hard work of developing and distributing information to a small number of regulatory institutions, the less capable we all will be of correcting the system’s mistakes — and the more likely the system will be to make mistakes in the first place.

In a 1999 law review article, Timur Kuran and Cass Sunstein warned of availability cascades, a process in which activists promote factual assertions and narratives that, in a self-reinforcing dynamic, become more plausible the more widely available they are, and can eventually overwhelm the public’s perception. The Covid-19 era has been marked by the opposite problem: unavailability cascades, in which media institutions and social media platforms swiftly erase disfavored narratives and dissenting contentions from the marketplace of ideas, making them seem implausible by their near unavailability. Such cascades occur because legacy media and social media platforms have come to rely overwhelmingly, even exclusively, on federal regulatory agencies’ factual assertions and the pronouncements of a small handful of other favored institutions, such as the World Health Organization, as the gold standard of facts. But availability and unavailability cascades, even when intended in good faith to prevent the spread of disinformation among the public, risk misinforming the very people they purport to inform. A more diverse and vibrant ecosystem of informational institutions would disincentivize the platforms’ and media’s reflexive, cascading reactions to dissenting views.

This second problem — the concentration of informational power — exacerbates the first one: how to counterbalance the executive branch’s power after an emergency. In order for Congress, the courts, and other governing institutions to reassert their own constitutional roles after the initial weeks and months of crisis, they (and the public) need credible sources of information outside the administration itself. An informational ecosystem not overweighted so heavily toward administrative agencies, one that benefits more from the independent contributions of experts in universities, think tanks, journalism, and other public and private institutions, would improve the quality of information that it produces. It would also be less susceptible to the reflexively partisan skepticism that has become endemic in the polarization of modern president-centric government…(More)”.

Algorithm vs. Algorithm


Paper by Cary Coglianese and Alicia Lai: “Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias. A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making. The human brain operates algorithmically through complex neural networks. And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes. Yet these human algorithms undeniably fail and are far from transparent.

On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free-riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts. Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis. They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo. The question about the future of public administration is not whether digital algorithms are perfect. Rather, it is a question about what will work better: human algorithms or digital ones….(More)”.

This Is the Difference Between a Family Surviving and a Family Sinking


Article by Bryce Covert: “…The excitement around policymaking is almost always in the moments after ink dries on a bill creating something new. But if a benefit fails to reach the people it’s designed for, it may as well not exist at all. Making government benefits more accessible and efficient doesn’t usually get the spotlight. But it’s often the difference between a family getting what it needs to survive and one falling into hardship and destitution. It’s the glue of our democracy.

President Biden appears to have taken note of this. Late last year, he issued an executive order meant to improve the “customer experience and service delivery” of the entire federal government. He put forward some ideas, including moving Social Security benefit claims and passport renewals online, reducing paperwork for student loan forgiveness and certifying low-income people for all the assistance they qualify for at once, rather than making them seek out benefits program by program. More important, he shifted the focus of government toward whether or not the customers — that’s us — are having a good experience getting what we deserve.

It’s a direction all lawmakers, from the federal level down to counties and cities, should follow.

One of the biggest barriers to government benefits is all of the red tape to untangle, particularly for programs that serve low-income people. They were the ones wrangling with the I.R.S.’s nonfiler portal while others got their payments automatically. Benefits delivered through the tax code, which flow so easily that many people don’t think of them as government benefits at all, mostly help the already well-off. Programs for the poor, on the other hand, tend to be bloated with barriers like income tests, work requirements and in-person interviews. It’s not just about applying once, either; many require people to continually recertify, going through the process over and over again.

The hassle doesn’t just cost time and effort. It comes with a psychological cost. “You get mad at the D.M.V. because it takes hours to do something that should only take minutes,” Pamela Herd, a sociologist at Georgetown, said. “These kind of stresses can be really large when you’re talking about people who are on a knife’s edge in terms of their ability to pay their rent or feed their children.”…(More)”.

The Behavioral Code


Book by Benjamin van Rooij and Adam Fine: “Why do most Americans wear seatbelts but continue to speed even though speeding fines are higher? Why could park rangers reduce theft by removing “no stealing” signs? Why was a man who stole three golf clubs sentenced to 25 years in prison?

Some laws radically change behavior whereas others are consistently ignored and routinely broken. And yet we keep relying on harsh punishment against crime despite its continued failure.

Professors Benjamin van Rooij and Adam Fine draw on decades of research to uncover the behavioral code: the root causes and hidden forces that drive human behavior and our responses to society’s laws. In doing so, they present the first accessible analysis of behavioral jurisprudence, which will fundamentally alter how we understand the connection between law and human behavior.

The Behavioral Code offers a necessary and different approach to battling crime and injustice that is based in understanding the science of human misconduct—rather than relying on our instinctual drive to punish as a way to shape behavior. The book reveals the behavioral code’s hidden role through illustrative examples like:

   • The illusion of the US’s beloved tax refund
   • German walls that “pee back” at public urinators
   • The $1,000 monthly “good behavior” reward that reduced gun violence
   • Uber’s backdoor “Greyball” app that helped the company evade Seattle’s taxi regulators
   • A $2.3 billion legal settlement against Pfizer that revealed how whistleblower protections fail to reduce corporate malfeasance
   • A toxic organizational culture playing a core role in Volkswagen’s emissions cheating scandal
   • How Peter Thiel helped Hulk Hogan sue Gawker into oblivion…(More)”.

Shared Measures: Collective Performance Data Use in Collaborations


Paper by Alexander Kroll: “Traditionally, performance metrics and data have been used to hold organizations accountable. But public service provision is not merely hierarchical anymore. Increasingly, we see partnerships among government agencies, private or nonprofit organizations, and civil society groups. Such collaborations may also use goals, measures, and data to manage group efforts; however, the application of performance practices here will likely follow a different logic. This Element introduces the concepts of “shared measures” and “collective data use” to add collaborative, relational elements to existing performance management theory. It draws on a case study of collaboratives in North Carolina that were established to develop community responses to the opioid epidemic. To explain the use of shared performance measures and data within these collaboratives, this Element studies the role of factors such as group composition, participatory structures, social relationships, distributed leadership, group culture, and value congruence…(More)”.