Paper by Victoria L. Lemieux: “This paper discusses blockchain technology as a public record keeping system, linking record keeping to the power of authority, veneration (temples), and control (prisons) that configure and reconfigure social, economic, and political relations. It discusses blockchain technology as being constructed as a mechanism to counter institutions and social actors that currently hold power but who are nowadays often viewed with mistrust. It explores claims for blockchain as a record keeping force of resistance to those powers using an archival theoretic analytic lens. The paper evaluates claims that blockchain technology can support the creation and preservation of trustworthy records able to serve as alternative sources of evidence of rights, entitlements and actions, with the potential to unseat the institutional power of the nation-state….(More)”.
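The technical claim at the heart of these arguments, that a chain of cryptographically linked records is tamper-evident without a trusted custodian, can be illustrated with a minimal sketch. The record fields and helper names below are hypothetical; real blockchains add distributed consensus, replication, and digital signatures on top of the hash-linking shown here:

```python
import hashlib
import json
import time

def hash_record(body: dict) -> str:
    """Deterministically hash a record's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: str) -> None:
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"payload": payload, "prev_hash": prev_hash, "timestamp": time.time()}
    record["hash"] = hash_record(record)
    chain.append(record)

def verify_chain(chain: list) -> bool:
    """Recompute every link; editing any earlier record breaks all later ones."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if record["prev_hash"] != expected_prev or record["hash"] != hash_record(body):
            return False
    return True

chain: list = []
append_record(chain, "parcel 42 registered to owner A")
append_record(chain, "parcel 42 transferred to owner B")
assert verify_chain(chain)
chain[0]["payload"] = "parcel 42 registered to owner C"  # retroactive edit...
assert not verify_chain(chain)                           # ...is detectable
```

Any party holding a copy of the chain can rerun the verification independently, which is the sense in which such records are claimed to function as alternative sources of evidence.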
Secrecy, Privacy and Accountability: Challenges for Social Research
Book by Mike Sheaff: “Public mistrust of those in authority and failings of public organisations frame disputes over attribution of responsibility between individuals and systems. Drawing on examples including the Aberfan disaster, the death of Baby P, and Mid Staffs Hospital, this book explores parallel conflicts over access to information and privacy.
The Freedom of Information Act (FOIA) allows access to information about public organisations but can conflict with the Data Protection Act, which protects personal information. Exploring the use of the FOIA as a research tool, Sheaff offers a unique contribution to the development of sociological research methods, and debates connected to privacy and secrecy in the information age. This book will provide sociologists and social scientists with a fresh perspective on contemporary issues of power and control….(More)”.
Supreme Court rules against newspaper seeking access to food stamp data
Josh Gerstein at Politico: “The Supreme Court on Monday handed a victory to businesses seeking to block their information from being disclosed to the public after it winds up in the hands of the federal government.
The justices ruled in favor of retailers seeking to prevent a South Dakota newspaper from obtaining store-level data on the redemption of food stamp benefits, now officially known as the Supplemental Nutrition Assistance Program, or SNAP.
The high court ruling rejected a nearly half-century-old appeals court precedent that allowed the withholding of business records under the Freedom of Information Act only in cases where harm would result either to the business or to the government’s ability to acquire information in the future.
The latest case was set into motion when the U.S. Department of Agriculture refused to disclose the store-level SNAP data in response to a 2011 FOIA request from the Argus Leader, the daily newspaper in Sioux Falls, South Dakota. The newspaper sued, but a federal district court ruled in favor of the USDA.
The Argus Leader appealed, and the U.S. Appeals Court for the 8th Circuit ruled that the exemption the USDA was citing did not apply in this case, sending the issue back to a lower court. The district court was tasked with determining whether the data was covered by a separate FOIA exemption governing information that would cause competitive injury if released.
That court ruled in favor of the newspaper, at which point the Food Marketing Institute, a trade group that represents retailers such as grocery stores, stepped in to appeal in place of the USDA….(More)”.
Developing Artificially Intelligent Justice
Paper by Richard M. Re and Alicia Solow-Niederman: “Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of “robot judges” suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where “equitable justice,” or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward “codified justice,” an adjudicatory paradigm that favors standardization above discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, and the most auspicious reform proposals would borrow from several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly….(More)”.
France Bans Judge Analytics, 5 Years In Prison For Rule Breakers
Artificial Lawyer: “In a startling intervention that seeks to limit the emerging litigation analytics and prediction sector, the French Government has banned the publication of statistical information about judges’ decisions – with a five-year prison sentence set as the maximum punishment for anyone who breaks the new law.
Owners of legal tech companies focused on litigation analytics are the most likely to suffer from this new measure.
The new law, encoded in Article 33 of the Justice Reform Act, is aimed at preventing anyone – but especially legal tech companies focused on litigation prediction and analytics – from publicly revealing the pattern of judges’ behaviour in relation to court decisions.
A key passage of the new law states:
‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’
As far as Artificial Lawyer understands, this is the very first example of such a ban anywhere in the world.
Insiders in France told Artificial Lawyer that the new law is a direct result of an earlier effort to make all case law easily accessible to the general public, which was seen at the time as improving access to justice and a big step forward for transparency in the justice sector.
However, judges in France had not reckoned on NLP and machine learning companies taking the public data and using it to model how certain judges behave in relation to particular types of legal matter or argument, or how they compare to other judges.
In short, they didn’t like how the pattern of their decisions – now relatively easy to model – was potentially open for all to see.
Unlike in the US and the UK, where judges appear to have accepted the fait accompli of legal AI companies analysing their decisions in extreme detail and then creating models as to how they may behave in the future, French judges have decided to stamp it out….(More)”.
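For context, the kind of analysis the new law prohibits is technically trivial once decisions are machine-readable. Below is a minimal sketch of per-judge outcome statistics; the rulings and field names are invented for illustration, and production analytics systems would first extract outcomes from judgment text with NLP:

```python
from collections import defaultdict

# Invented rulings standing in for parsed public case law.
rulings = [
    {"judge": "Judge A", "matter": "unfair dismissal", "outcome": "granted"},
    {"judge": "Judge A", "matter": "unfair dismissal", "outcome": "denied"},
    {"judge": "Judge B", "matter": "unfair dismissal", "outcome": "granted"},
    {"judge": "Judge B", "matter": "unfair dismissal", "outcome": "granted"},
]

# Per-judge grant rates: the comparative statistic Article 33 now prohibits
# publishing about identified French magistrates.
stats = defaultdict(lambda: {"granted": 0, "total": 0})
for r in rulings:
    stats[r["judge"]]["total"] += 1
    if r["outcome"] == "granted":
        stats[r["judge"]]["granted"] += 1

for judge, s in stats.items():
    print(f"{judge}: {s['granted'] / s['total']:.0%} granted over {s['total']} rulings")
```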
Beyond Bias: Re-Imagining the Terms of ‘Ethical AI’ in Criminal Law
Paper by Chelsea Barabas: “Data-driven decision-making regimes, often branded as “artificial intelligence,” are rapidly proliferating across the US criminal justice system as a means of predicting and managing the risk of crime and addressing accusations of discriminatory practices. These data regimes have come under increased scrutiny, as critics point out the myriad ways that they can reproduce or even amplify pre-existing biases in the criminal justice system. This essay examines contemporary debates regarding the use of “artificial intelligence” as a vehicle for criminal justice reform by closely examining two general approaches to what has been widely branded as “algorithmic fairness” in criminal law: 1) the development of formal fairness criteria and accuracy measures that illustrate the trade-offs of different algorithmic interventions, and 2) the development of “best practices” and managerialist standards for maintaining a baseline of accuracy, transparency and validity in these systems.
The essay argues that attempts to render AI-branded tools more accurate by addressing narrow notions of “bias” miss the deeper methodological and epistemological issues regarding the fairness of these tools. The key question is whether predictive tools reflect and reinforce punitive practices that drive disparate outcomes, and how data regimes interact with the penal ideology to naturalize these practices. The article concludes by calling for an abolitionist understanding of the role and function of the carceral state, in order to fundamentally reformulate the questions we ask, the way we characterize existing data, and how we identify and fill gaps in existing data regimes of the carceral state….(More)”
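To make the first approach concrete: the formal fairness criteria at issue are measurable quantities, and the trade-offs follow from arithmetic. The sketch below uses synthetic confusion-matrix counts (not real COMPAS data) to show a tool that satisfies a calibration-style criterion, equal predictive value across groups, while failing an error-rate criterion, equal false positive rates:

```python
# Synthetic confusion-matrix counts per group (illustrative only, not COMPAS data).
groups = {
    "group_1": {"tp": 30, "fp": 20, "tn": 40, "fn": 10},  # base rate 0.40
    "group_2": {"tp": 15, "fp": 10, "tn": 70, "fn": 5},   # base rate 0.20
}

for name, c in groups.items():
    fpr = c["fp"] / (c["fp"] + c["tn"])   # error-rate criterion
    ppv = c["tp"] / (c["tp"] + c["fp"])   # calibration-style criterion
    base = (c["tp"] + c["fn"]) / sum(c.values())
    print(f"{name}: base rate={base:.2f}  PPV={ppv:.2f}  FPR={fpr:.2f}")

# Output: both groups have PPV 0.60, but FPRs of 0.33 vs 0.12. With unequal
# base rates, satisfying one criterion generally forces violating the other,
# the impossibility result driving the COMPAS/ProPublica debate.
```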
Social Media Monitoring: How the Department of Homeland Security Uses Digital Data in the Name of National Security
Report by the Brennan Center for Justice: “The Department of Homeland Security (DHS) is rapidly expanding its collection of social media information and using it to evaluate the security risks posed by foreign and American travelers. This year marks a major expansion: the visa applications vetted by DHS will include social media handles that the State Department is set to collect from some 15 million travelers per year. Social media can provide a vast trove of information about individuals, including their personal preferences, political and religious views, physical and mental health, and the identity of their friends and family. But it is susceptible to misinterpretation, and wholesale monitoring of social media creates serious risks to privacy and free speech. Moreover, despite the rush to implement these programs, there is scant evidence that they actually meet the goals for which they are deployed…(More)”
The Voluntariness of Voluntary Consent: Consent Searches and the Psychology of Compliance
Paper by Roseanna Sommers and Vanessa K. Bohns: “Consent-based searches are by far the most ubiquitous form of search undertaken by police. A key legal inquiry in these cases is whether consent was granted voluntarily. This Essay suggests that fact finders’ assessments of voluntariness are likely to be impaired by a systematic bias in social perception. Fact finders are likely to underappreciate the degree to which suspects feel pressure to comply with police officers’ requests to perform searches.
In two preregistered laboratory studies, we approached a total of 209 participants (“Experiencers”) with a highly intrusive request: to unlock their password-protected smartphones and hand them over to an experimenter to search through while they waited in another room. A separate 194 participants (“Forecasters”) were brought into the lab and asked whether a reasonable person would agree to the same request if hypothetically approached by the same researcher. Both groups then reported how free they felt, or would feel, to refuse the request.
Study 1 found that whereas most Forecasters believed a reasonable person would refuse the experimenter’s request, most Experiencers—100 out of 103 people—promptly unlocked their phones and handed them over. Moreover, Experiencers reported feeling significantly less free to refuse than did Forecasters contemplating the same situation hypothetically.
Study 2 tested an intervention modeled after a commonly proposed reform of consent searches, in which the experimenter explicitly advises participants that they have the right to withhold consent. We found that this advisory did not significantly reduce compliance rates or make Experiencers feel more free to say no. At the same time, the gap between Experiencers and Forecasters remained significant.
These findings suggest that decision makers judging the voluntariness of consent consistently underestimate the pressure to comply with intrusive requests. This is problematic because it indicates that a key justification for suspicionless consent searches—that they are voluntary—relies on an assessment that is subject to bias. The results thus provide support to critics who would like to see consent searches banned or curtailed, as they have been in several states.
The results also suggest that a popular reform proposal—requiring police to advise citizens of their right to refuse consent—may have little effect. This corroborates previous observational studies, which find negligible effects of Miranda warnings on confession rates among interrogees, and little change in rates of consent once police start notifying motorists of their right to refuse vehicle searches. We suggest that these warnings are ineffective because they fail to address the psychology of compliance. The reason people comply with police, we contend, is social, not informational. The social demands of police-citizen interactions persist even when people are informed of their rights. It is time to abandon the myth that notifying people of their rights makes them feel empowered to exercise those rights…(More)”.
Habeas Data: Privacy vs. The Rise of Surveillance Tech
Book by Cyrus Farivar: “Habeas Data shows how the explosive growth of surveillance technology has outpaced our understanding of the ethics, mores, and laws of privacy.
Award-winning tech reporter Cyrus Farivar makes the case by taking ten historic court decisions that defined our privacy rights and matching them against the capabilities of modern technology. It’s an approach that combines the charge of a legal thriller with the shock of the daily headlines.
Chapters include: the 1960s prosecution of a bookie that established the “reasonable expectation of privacy” in nonpublic places beyond your home (but how does that ruling apply now, when police can chart your every move and hear your every conversation within your own home — without even having to enter it?); the 1970s case where the police monitored a lewd caller — a decision that is now the linchpin of the NSA’s controversial metadata tracking program revealed by Edward Snowden; and a 2010 low-level burglary trial that revealed police had tracked a defendant’s past 12,898 locations before arrest — an invasion of privacy grossly out of proportion to the alleged crime, which showed how authorities are all too willing to take advantage of the ludicrous gap between the slow pace of legal reform and the rapid transformation of technology.
A dazzling exposé that journeys from Oakland, California to the halls of the Supreme Court to the back of a squad car, Habeas Data combines deft reportage, deep research, and original interviews to offer an X-ray diagnostic of our current surveillance state….(More)”.
How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate
Paper by Anne L. Washington: “The United States optimizes the efficiency of its growing criminal justice system with algorithms; however, legal scholars have overlooked how to frame courtroom debates about algorithmic predictions. In State v Loomis, the defense argued that the court’s consideration of risk assessments during sentencing was a violation of due process because the accuracy of the algorithmic prediction could not be verified. The Wisconsin Supreme Court upheld the consideration of predictive risk at sentencing because the assessment was disclosed and the defendant could challenge the prediction by verifying the accuracy of the data fed into the algorithm.
Was the court correct about how to argue with an algorithm?
The Loomis court ignored the computational procedures that processed the data within the algorithm. How algorithms calculate data is as important as the quality of the data calculated. The arguments in Loomis revealed a need for new forms of reasoning to justify the logic of evidence-based tools. A “data science reasoning” could provide ways to dispute the integrity of predictive algorithms with arguments grounded in how the technology works.
This article’s contribution is a series of arguments that could support due process claims concerning predictive algorithms, specifically the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) risk assessment. As a comprehensive treatment, this article outlines the due process arguments in Loomis, analyzes arguments in an ongoing academic debate about COMPAS, and proposes alternative arguments based on the algorithm’s organizational context….(More)”
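The article's core distinction, that verifying inputs is not the same as verifying the computation, can be shown with a toy example. In the sketch below (invented features and weights; COMPAS's actual scoring model is proprietary), two procedures consume identical, fully verified inputs yet return opposite risk classifications, so the remedy Loomis endorsed, checking the accuracy of the data fed in, cannot by itself validate the prediction:

```python
# A defendant's verified input data (invented fields for illustration).
defendant = {"age": 23, "prior_arrests": 2, "employed": 1}

def score_linear(d: dict) -> float:
    """Procedure A: weighted sum over the inputs (weights are hypothetical)."""
    return 0.5 * d["prior_arrests"] - 0.02 * d["age"] - 0.3 * d["employed"]

def score_rules(d: dict) -> float:
    """Procedure B: rule-based cutoffs over the very same inputs."""
    return 1.0 if d["prior_arrests"] >= 2 and d["age"] < 25 else 0.0

# Identical, accurate data in; divergent "risk" out.
for score in (score_linear, score_rules):
    label = "HIGH" if score(defendant) >= 0.5 else "LOW"
    print(f"{score.__name__}: {label} risk ({score(defendant):.2f})")
```

This is the gap a "data science reasoning" would probe: arguments about the scoring procedure itself, not only about the data it consumes.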