Commentary in Governance: In the United States, the presidential race is heating up, and one result is an increasing number of assaults on century-old ideas about the merit-based civil service. “The merit principle is under fierce attack,” says Donald Kettl in a new commentary for Governance. Kettl outlines five “tough questions” raised by attacks on the civil service system, and says that the US research community “has been largely asleep at the switch” on all of them. Within major public policy schools, courses on the public service have been “pushed to the side.” A century ago, American academics helped to build the American state. Kettl warns that “scholarly neglect in the 2000s could undermine it.” Read the commentary.
The Fundamentals of Policy Crowdsourcing
Article by John Prpić, Araz Taeihagh and James Melton at Policy and Internet: “What is the state of the research on crowdsourcing for policymaking? This article begins to answer this question by collecting, categorizing, and situating an extensive body of the extant research investigating policy crowdsourcing, within a new framework built on fundamental typologies from each field. We first define seven universal characteristics of the three general crowdsourcing techniques (virtual labor markets, tournament crowdsourcing, open collaboration), to examine the relative trade-offs of each modality. We then compare these three types of crowdsourcing to the different stages of the policy cycle, in order to situate the literature spanning both domains. We finally discuss research trends in crowdsourcing for public policy and highlight the research gaps and overlaps in the literature….(More)”
Ethics in Public Policy and Management: A global research companion
New book edited by Alan Lawton, Zeger van der Wal, and Leo Huberts: “Ethics in Public Policy and Management: A global research companion showcases the latest research from established and newly emerging scholars in the fields of public management and ethics. This collection examines the profound changes of the last 25 years, including the rise of New Public Management, New Public Governance and Public Value; how these have altered practitioners’ delivery of public services; and how academics think about those services.
Drawing on research from a broad range of disciplines, Ethics in Public Policy and Management looks to reflect on this changing landscape. With contributions from Asia, Australasia, Europe and the USA, the collection is grouped into five main themes:
- theorising the practice of ethics;
- understanding and combating corruption;
- managing integrity;
- ethics across boundaries;
- expanding ethical policy domains.
This volume will prove thought-provoking for educators, administrators, policy makers and researchers across the fields of public management, public administration and ethics….(More)”
Big data algorithms can discriminate, and it’s not clear what to do about it
“This program had absolutely nothing to do with race…but multi-variable equations.”
That’s what Brett Goldstein, a former policeman for the Chicago Police Department (CPD) and current Urban Science Fellow at the University of Chicago’s School for Public Policy, said about a predictive policing algorithm he deployed at the CPD in 2010. His algorithm tells police where to look for criminals based on where people have been arrested previously. It’s a “heat map” of Chicago, and the CPD claims it helps them allocate resources more effectively.
Chicago police also recently collaborated with Miles Wernick, a professor of electrical engineering at Illinois Institute of Technology, to algorithmically generate a “heat list” of 400 individuals it claims have the highest chance of committing a violent crime. In response to criticism, Wernick said the algorithm does not use “any racial, neighborhood, or other such information” and that the approach is “unbiased” and “quantitative.” By deferring decisions to poorly understood algorithms, industry professionals effectively shed accountability for any negative effects of their code.
But do these algorithms discriminate, treating low-income and black neighborhoods and their inhabitants unfairly? It’s the kind of question many researchers are starting to ask as more and more industries use algorithms to make decisions. It’s true that an algorithm itself is quantitative – it boils down to a sequence of arithmetic steps for solving a problem. The danger is that these algorithms, which are trained on data produced by people, may reflect the biases in that data, perpetuating structural racism and negative biases about minority groups.
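The feedback loop described here can be made concrete with a toy model (entirely hypothetical numbers, not the CPD's actual system): a heat map that allocates patrols in proportion to past arrests keeps sending officers to the districts that were most heavily policed to begin with, even when underlying crime rates are identical.

```python
# Toy illustration of enforcement-bias feedback in a count-based "heat map".
# Districts A and B have identical true crime rates; district A starts with
# more recorded arrests only because it was patrolled more heavily.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # identical underlying rates
arrests = {"A": 40, "B": 10}               # historical (biased) arrest data

def patrol_allocation(arrest_counts, total_patrols=100):
    """Allocate patrols proportionally to past arrests -- the naive heat map."""
    total = sum(arrest_counts.values())
    return {d: total_patrols * c / total for d, c in arrest_counts.items()}

def simulate_round(arrest_counts):
    """One round: new arrests scale with patrol presence, not with crime."""
    patrols = patrol_allocation(arrest_counts)
    return {d: arrest_counts[d] + patrols[d] * TRUE_CRIME_RATE[d]
            for d in arrest_counts}

for _ in range(5):
    arrests = simulate_round(arrests)

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"District A share of recorded arrests after 5 rounds: {share_a:.2f}")
```

Even though the two districts have exactly equal crime rates, district A's initial 80% share of recorded arrests is locked in forever: the data the model trains on reflects past enforcement, so the model can never learn that the disparity is an artifact of policing rather than crime.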
There are a lot of challenges to figuring out whether an algorithm embodies bias. First and foremost, many practitioners and “computer experts” still don’t publicly admit that algorithms can easily discriminate. More and more evidence supports that this is not only possible, but already happening. The law is unclear on the legality of biased algorithms, and even algorithms researchers don’t precisely understand what it means for an algorithm to discriminate….
While researchers clearly understand the theoretical dangers of algorithmic discrimination, it’s difficult to cleanly measure the scope of the issue in practice. No company or public institution is willing to publicize its data and algorithms for fear of being labeled racist or sexist, or maybe worse, having a great algorithm stolen by a competitor.
Even when the Chicago Police Department was hit with a Freedom of Information Act request, they did not release their algorithms or heat list, claiming a credible threat to police officers and the people on the list. This makes it difficult for researchers to identify problems and potentially provide solutions.
Legal hurdles
Existing discrimination law in the United States isn’t helping. At best, it’s unclear how it applies to algorithms; at worst, it’s a mess. Solon Barocas, a postdoc at Princeton, and Andrew Selbst, a law clerk for the Third Circuit US Court of Appeals, argue that US hiring law fails to address claims about discriminatory algorithms in hiring.
The crux of the argument is called the “business necessity” defense, in which the employer argues that a practice that has a discriminatory effect is justified by being directly related to job performance….(More)”
Modernizing Informed Consent: Expanding the Boundaries of Materiality
Paper by Nadia N. Sawicki: “Informed consent law’s emphasis on the disclosure of purely medical information – such as diagnosis, prognosis, and the risks and benefits of various treatment alternatives – does not accurately reflect modern understandings of how patients make medical decisions. Existing common law disclosure duties fail to capture a variety of non-medical factors relevant to patients, including information about the physician’s personal characteristics; the cost of treatment; the social implications of various health care interventions; and the legal consequences associated with diagnosis and treatment. Although there is a wealth of literature analyzing the merits of such disclosures in a few narrow contexts, there is little broader discussion and no consensus about whether the doctrine of informed consent should be expanded to include information that may be relevant to patients but falls outside the traditional scope of medical materiality. This article seeks to fill that gap.
I offer a normative argument for expanding the scope of informed consent disclosure to include non-medical information that is within the physician’s knowledge and expertise, where the information would be material to the reasonable patient and its disclosure does not violate public policy. This proposal would result in a set of disclosure requirements quite different from the ones set by modern common law and legislation. In many ways, the range of required disclosures may become broader, particularly with respect to physician-specific information about qualifications, health status, and financial conflicts of interests. However, some disclosures that are currently required by statute (or have been proposed by commentators) would fall outside the scope of informed consent – most notably, information about support resources available in the abortion context; about the social, ethical, and legal implications of treatment; and about health care costs….(More)”
Advancing Collaboration Theory: Models, Typologies, and Evidence
New book edited by John C. Morris and Katrina Miller-Stevens: “The term collaboration is widely used but not clearly understood or operationalized. However, collaboration is playing an increasingly important role between and across public, nonprofit, and for-profit sectors. Collaboration has become a hallmark in both intragovernmental and intergovernmental relationships. As collaboration scholarship rapidly emerges, it diverges into several directions, resulting in confusion about what collaboration is and what it can be used to accomplish. This book provides much needed insight into existing ideas and theories of collaboration, advancing a revised theoretical model and accompanying typologies that further our understanding of collaborative processes within the public sector.
Organized into three parts, each chapter presents a different theoretical approach to public problems, valuing the collective insights that result from honoring many individual perspectives. Case studies in collaboration, split across three levels of government, offer additional perspectives on unanswered questions in the literature. Contributions are made by authors from a variety of backgrounds, including an attorney, a career educator, a federal executive, a human resource administrator, a police officer, a self-employed entrepreneur, as well as scholars of public administration and public policy. Drawing upon the individual experiences offered by these perspectives, the book emphasizes the commonalities of collaboration. It is from this common ground, the shared experience forged among seemingly disparate interactions, that advances in collaboration theory arise.
Advancing Collaboration Theory offers a unique compilation of collaborative models and typologies that enhance the existing understanding of public sector collaboration….(More)”
Field experimenting in economics: Lessons learned for public policy
Robert Metcalfe at OUP Blog: “Do neighbourhoods matter to outcomes? Which classroom interventions improve educational attainment? How should we raise money to provide important and valued public goods? Do energy prices affect energy demand? How can we motivate people to become healthier, greener, and more cooperative? These are some of the most challenging questions policy-makers face. Academics have been trying to understand and uncover these important relationships for decades.
Many of the empirical tools available to economists to answer these questions do not allow causal relationships to be detected. Field experiments represent a relatively new methodological approach capable of measuring the causal links between variables. By overlaying carefully designed experimental treatments on real people performing tasks common to their daily lives, economists are able to answer interesting and policy-relevant questions that were previously intractable. Manipulation of market environments allows these economists to uncover the hidden motivations behind economic behaviour more generally. A central tenet of field experiments in the policy world is that governments should understand the actual behavioural responses of their citizens to changes in policies or interventions.
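The logic of causal measurement through random assignment can be sketched in a few lines (synthetic data and a hypothetical "reminder letter" treatment, not any specific government trial): because treatment and control groups differ only by chance, the difference in their mean outcomes estimates the causal effect.

```python
import random

random.seed(42)

# Simulate a simple field experiment: does a reminder letter raise the
# probability that someone pays a bill on time? (Entirely synthetic data.)
TRUE_EFFECT = 0.15           # treatment raises payment probability by 15 points
BASE_RATE = 0.50
N = 20000

population = list(range(N))
random.shuffle(population)
treatment = set(population[: N // 2])   # random assignment is the key step

def pays(person):
    """Simulated outcome: pay with base probability, plus the treatment lift."""
    p = BASE_RATE + (TRUE_EFFECT if person in treatment else 0.0)
    return random.random() < p

outcomes = {person: pays(person) for person in range(N)}

treat_mean = sum(outcomes[p] for p in range(N) if p in treatment) / (N // 2)
ctrl_mean = sum(outcomes[p] for p in range(N) if p not in treatment) / (N // 2)
estimate = treat_mean - ctrl_mean
print(f"Estimated effect: {estimate:.3f} (true effect: {TRUE_EFFECT})")
```

Because assignment is random, nothing except the letter systematically separates the two groups, so the simple difference in means recovers the true effect to within sampling noise. An observational comparison (say, of people who happened to receive letters) would have no such guarantee.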
Field experiments represent a departure from laboratory experiments. Traditionally, laboratory experiments create experimental settings with tight control over the decision environment of undergraduate students. While these studies also allow researchers to make causal statements, policy-makers are often concerned that subjects in these experiments may behave differently in settings where they know they are being observed, or when they are permitted to sort out of the market.
For example, you might expect a college student to contribute more to charity when she is scrutinized in a professor’s lab than when she can avoid the ask altogether. Field experiments allow researchers to make these causal statements in a setting that is more generalizable to the behaviour policy-makers are directly interested in.
To date, policy-makers have typically gathered relevant information and data using focus groups, qualitative evidence, or observational data, none of which can identify causal mechanisms. It is quite easy to elicit people’s intentions about how they would behave under a new policy or intervention, but there is increasing evidence that people’s intentions are a poor guide to predicting their behaviour.
However, we are starting to see a small change in how governments seek to answer pertinent questions. For instance, the UK tax office (Her Majesty’s Revenue and Customs) now uses field experiments across some of its services to improve the efficacy of scarce taxpayer money. In the US, there are movements toward gathering more evidence from field experiments.
In the corporate world, experimenting is not new. Many of the current large online companies—such as Amazon, Facebook, Google, and Microsoft—are constantly using field experiments matched with big data to improve their products and deliver better services to their customers. More and more companies will use field experiments over time to help them better set prices, tailor advertising, provide a better customer journey to increase welfare, and employ more productive workers…(More).
See also Field Experiments in the Developed World: An Introduction (Oxford Review of Economic Policy)
The Missing Statistics of Criminal Justice
Matt Ford at the Atlantic: “An abundance of data has fueled the reform movement, but from prisons to prosecutors, crucial questions remain unquantified.
After Ferguson, a noticeable gap in criminal-justice statistics emerged: the use of lethal force by the police. The federal government compiles a wealth of data on homicides, burglaries, and arson, but no official, reliable tabulation of civilian deaths by law enforcement exists. A partial database kept by the FBI is widely considered to be misleading and inaccurate. (The Washington Post has just released a more expansive total of nearly 400 police killings this year.) “It’s ridiculous that I can’t tell you how many people were shot by the police last week, last month, last year,” FBI Director James Comey told reporters in April.
This raises an obvious question: If the FBI can’t tell how many people were killed by law enforcement last year, what other kinds of criminal-justice data are missing? Statistics are more than just numbers: They focus the attention of politicians, drive the allocation of resources, and define the public debate. Public officials—from city councilors to police commanders to district attorneys—are often evaluated based on how these numbers change during their terms in office. But existing statistical measures capture only part of the overall picture, and the problems that go unmeasured often also go unaddressed. What changes could currently uncollected data produce if it were gathered?….
Without reliable official statistics, scholars often must gather and compile necessary data themselves. “A few years ago, I was struck at how many police killings of civilians we seemed to be having in Philadelphia,” Gottschalk said as an example. “They would be buried in the newspaper, and I was stunned by how difficult it was to compile that information and compare it to New York and do it on a per-capita basis. It wasn’t readily available.” As a result, criminal-justice researchers often spend more time gathering data than analyzing it.
This data’s absence shapes the public debate over mass incarceration in the same way that silence between notes of music gives rhythm to a song. Imagine debating the economy without knowing the unemployment rate, or climate change without knowing the sea level, or healthcare reform without knowing the number of uninsured Americans. Legislators and policymakers heavily rely on statistics when crafting public policy. Criminal-justice statistics can also influence judicial rulings, including those by the Supreme Court, with implications for the entire legal system.
Beyond their academic and policymaking value, there’s also a certain power to statistics. They have the irreplaceable ability to both clarify social issues and structure the public’s understanding of them. A wealth of data has allowed sociologists, criminologists, and political scientists to diagnose serious problems with the American criminal-justice system over the past twenty years. Now that a growing bipartisan consensus recognizes the problem exists, gathering the right facts and figures could help point the way towards solutions…(More)”
How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem’
Paper by Adam D. Thierer, Christopher Koopman, Anne Hobson, and Chris Kuiper: “This paper argues that the sharing economy — through the use of the Internet and real time reputational feedback mechanisms — is providing a solution to the “lemons problem” that many regulations, and regulators, have spent decades attempting to overcome. Section I provides an overview of the sharing economy and traces its rapid growth. Section II revisits “lemons problem” theory as well as the various regulatory solutions proposed to deal with the problem of asymmetric information, and provides some responses. Section III discusses the relationship between reputation and trust and analyzes how reputational incentives have been used historically in commercial interactions. Section IV discusses how information asymmetries were addressed in the pre-Internet era. Section V surveys how the evolution of the Internet and information systems (especially sharing economy reputational feedback mechanisms) addresses the “lemons problem” concern. Section VI explains how these new realities affect public policy and concludes that asymmetric information is not a legitimate rationale for policy intervention in light of technological changes. We also argue continued use of this rationale to regulate in the name of consumer protection might, in fact, make consumers worse off. This has ramifications for the current debate over regulation of the sharing economy….(More)”
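The paper's core mechanism can be illustrated with a stylized version of Akerlof's lemons model (a simplification for illustration, not the authors' formalization): when buyers cannot observe quality they offer only the pool's average price, above-average sellers exit, and the market unravels; a reliable reputation score breaks the cycle by letting prices track individual quality.

```python
# Stylized lemons model: each seller's quality (and reservation price) is a
# value in [0, 1]. A seller participates only if the offered price covers it.
SELLERS = [round(q * 0.1, 1) for q in range(11)]   # qualities 0.0 .. 1.0

def market_without_ratings(sellers):
    """Buyers can't observe quality, so they offer the pool's average price.
    High-quality sellers exit, the average falls, and the market unravels."""
    pool = list(sellers)
    while pool:
        price = sum(pool) / len(pool)
        staying = [q for q in pool if q <= price]
        if staying == pool:          # no one else exits: stable pool
            return pool
        pool = staying
    return pool

def market_with_ratings(sellers):
    """A reliable reputation score reveals each seller's quality, so buyers
    can offer quality-matched prices and every seller stays in the market."""
    return list(sellers)

surviving_blind = market_without_ratings(SELLERS)
surviving_rated = market_with_ratings(SELLERS)
print(f"Sellers remaining without ratings: {len(surviving_blind)}")
print(f"Sellers remaining with ratings:    {len(surviving_rated)}")
```

In the blind market the unraveling runs to completion: only the quality-0.0 seller survives, which is exactly the asymmetric-information failure the paper argues reputational feedback mechanisms now solve without regulatory intervention.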
Participatory Governance
Book chapter by Stephanie L. McNulty and Brian Wampler in “Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource”: “Efforts to engage new actors in political decision-making through innovative participatory programs have exploded around the world in the past 25 years. This trend, called participatory governance, involves state-sanctioned institutional processes that allow citizens to exercise voice and vote in public policy decisions that produce real changes in citizens’ lives. Billions of dollars are spent supporting these efforts around the world. The concept, which harks back to theorists such as Jean-Jacques Rousseau and John Stuart Mill, has only recently become prominent in theories about democracy. After presenting the foundational research on participatory governance, the essay notes that newer research on this issue falls into three areas: (i) the broader impact of these experiments; (ii) new forms of engagement, with a focus on representation, deliberation, and intermediation; and (iii) scaling up and diffusion. The essay concludes with a research agenda for future work on this topic….(More)”