Trace Labs is a nonprofit organization whose mission is to accelerate
the family reunification of missing persons while training members in
the tradecraft of open source intelligence (OSINT)…. We crowdsource open source intelligence through both the Trace Labs OSINT Search Party CTFs and Ongoing Operations with our global community. Our highly skilled intelligence analysts then triage the data collected to produce actionable intelligence reports on each missing persons subject. These intelligence reports give the law enforcement agencies we work with the ability to quickly see any new details required to reopen a cold case and/or take immediate action on a missing subject. (More)”
Predict and Surveil: Data, Discretion, and the Future of Policing
Book by Sarah Brayne: “The scope of criminal justice surveillance has expanded rapidly in recent decades. At the same time, the use of big data has spread across a range of fields, including finance, politics, healthcare, and marketing. While law enforcement’s use of big data is hotly contested, very little is known about how the police actually use it in daily operations and with what consequences.
In Predict and Surveil, Sarah Brayne offers an unprecedented, inside look at how police use big data and new surveillance technologies, leveraging on-the-ground fieldwork with one of the most technologically advanced law enforcement agencies in the world: the Los Angeles Police Department. Drawing on original interviews and ethnographic observations, Brayne examines the causes and consequences of algorithmic control. She reveals how the police use predictive analytics to deploy resources, identify suspects, and conduct investigations; how the adoption of big data analytics transforms police organizational practices; and how the police themselves respond to these new data-intensive practices. Although big data analytics holds potential to reduce bias and increase efficiency, Brayne argues that it also reproduces and deepens existing patterns of social inequality, threatens privacy, and challenges civil liberties.
A groundbreaking examination of the growing role of the private sector in public policing, this book challenges the way we think about the data-heavy supervision law enforcement increasingly imposes upon civilians in the name of objectivity, efficiency, and public safety….(More)”.
High-Stakes AI Decisions Need to Be Automatically Audited
Oren Etzioni and Michael Li in Wired: “…To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. Those hypothetical cases can be either synthetic or real—allowing automated, instantaneous, fine-grained interrogation of the model. It’s a straightforward way to monitor AI systems for signs of bias or brittleness: What happens if we change the gender of a defendant? What happens if the loan applicants reside in a historically minority neighborhood?
Auditable AI has several advantages over explainable AI. First, having a neutral third party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. Second, auditing means the producers of the software do not have to expose trade secrets of their proprietary systems and data sets. Thus, AI audits will likely face less resistance.
Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations. Say Netflix recommends The Twilight Zone because I watched Stranger Things. Will it also recommend other science fiction horror shows? Does it recommend The Twilight Zone to everyone who’s watched Stranger Things?
Early examples of auditable AI are already having a positive impact. The ACLU recently revealed that Amazon’s auditable facial-recognition algorithms were nearly twice as likely to misidentify people of color. There is growing evidence that public audits can improve model accuracy for under-represented groups.
In the future, we can envision a robust ecosystem of auditing systems that provide insights into AI. We can even imagine “AI guardians” that build external models of AI systems based on audits. Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces.
Auditable AI is not a panacea. If an AI system is performing a cancer diagnostic, the patient will still want an accurate and understandable explanation, not just an audit. Such explanations are the subject of ongoing research and will hopefully be ready for commercial use in the near future. But in the meantime, auditable AI can increase transparency and combat bias….(More)”.
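To make the kind of external query Etzioni and Li describe concrete, here is a minimal sketch that probes a model with paired hypothetical cases differing only in one sensitive attribute and measures how often the prediction flips. The model interface, attribute names, and example values are illustrative assumptions, not part of their proposal:

```python
# Minimal sketch of an external audit query: pair up hypothetical cases
# that differ only in one sensitive attribute and measure how often the
# model's output changes. `model` is any object exposing predict(case);
# the attribute names and values are illustrative.

def audit_attribute_flip(model, cases, attribute, values):
    """Return the share of cases whose prediction changes when only
    `attribute` is swapped between the two given values."""
    flips = 0
    for case in cases:
        a = dict(case, **{attribute: values[0]})
        b = dict(case, **{attribute: values[1]})
        if model.predict(a) != model.predict(b):
            flips += 1
    return flips / len(cases)

# Hypothetical usage against a risk-scoring model:
# flip_rate = audit_attribute_flip(risk_model, sampled_cases,
#                                  attribute="gender",
#                                  values=("male", "female"))
# A high flip rate flags the model for closer human review.
```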
Behavioral nudges reduce failure to appear for court
Paper by Alissa Fishbane, Aurelie Ouss and Anuj K. Shah: “Each year, millions of Americans fail to appear in court for low-level offenses, and warrants are then issued for their arrest. In two field studies in New York City, we make critical information salient by redesigning the summons form and providing text message reminders. These interventions reduce failures to appear by 13-21% and lead to 30,000 fewer arrest warrants over a 3-year period. In lab experiments, we find that while criminal justice professionals see failures to appear as relatively unintentional, laypeople believe they are more intentional. These lay beliefs reduce support for policies that make court information salient and increase support for punishment. Our findings suggest that criminal justice policies can be made more effective and humane by anticipating human error in unintentional offenses….(More)”
Prediction paradigm: the human price of instrumentalism
Editorial by Karamjit S. Gill at AI&Society: “Reflecting on the rise of instrumentalism, we learn how it has travelled across the academic boundary to the high-tech culture of Silicon Valley. At its core lies the prediction paradigm. Under the cloak of the inevitability of technology, we are being offered the prediction paradigm as the technological dream of public safety, national security, fraud detection, and even disease control and diagnosis. For example, facial recognition systems are offered for predicting the behaviour of citizens, surveillance drones for ‘biometric readings’, and ‘predictive policing’ as an effective tool to predict and reduce crime rates. A recent critical review of the prediction technology (Coalition for Critical Technology 2020) brings to our notice the discriminatory consequences of predicting “criminality” using biometric and/or criminal legal data.
The review outlines the specific ways crime prediction technology reproduces, naturalizes and amplifies discriminatory outcomes, and why exclusively technical criteria are insufficient for evaluating its risks. We learn that neither prediction architectures nor machine learning programs are neutral; they often uncritically inherit, accept and incorporate dominant cultural and belief systems, which are then normalised. For example, “Predictions” based on finding correlations between facial features and criminality are accepted as valid, interpreted as the product of intelligent and “objective” technical assessments. Furthermore, the data from predictive outcomes and recommendations are fed back into the system, thereby reproducing and confirming biased correlations. The consequence of this feedback loop, especially in facial recognition architectures, combined with a belief in “evidence based” diagnosis, is that it leads to ‘widespread mischaracterizations of criminal justice data’ that ‘justifies the exclusion and repression of marginalized populations through the construction of “risky” or “deviant” profiles’…(More)”.
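The feedback loop Gill describes can be illustrated with a toy simulation: two areas with identical true incident rates, most patrols sent to whichever area the historical records label the hotspot, and incidents recorded only where patrols go. The numbers below are arbitrary assumptions, not drawn from the review:

```python
# Toy feedback-loop simulation: equal true rates, but patrol allocation
# follows the historical records, and records only grow where patrols go.
import random

random.seed(0)
TRUE_RATE = 0.3                 # same underlying incident rate in both areas
recorded = [60, 40]             # historical records start slightly skewed
PATROLS = 100

for day in range(200):
    hotspot = 0 if recorded[0] >= recorded[1] else 1
    patrols = [20, 20]
    patrols[hotspot] = PATROLS - 20   # "predictive" allocation favors the hotspot
    for area in (0, 1):
        # incidents are only observed where officers are present
        recorded[area] += sum(
            1 for _ in range(patrols[area]) if random.random() < TRUE_RATE
        )

print(f"Share of recorded incidents in area 0: {recorded[0] / sum(recorded):.2f}")
# Starts at 0.60 and climbs toward 0.80: the initial skew in the data is
# reproduced and amplified even though the two areas are identical.
```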
Why Hundreds of Mathematicians Are Boycotting Predictive Policing
Courtney Linder at Popular Mechanics: “Several prominent academic mathematicians want to sever ties with police departments across the U.S., according to a letter submitted to Notices of the American Mathematical Society on June 15. The letter arrived weeks after widespread protests against police brutality, and has inspired over 1,500 other researchers to join the boycott.
These mathematicians are urging fellow researchers to stop all work related to predictive policing software, which broadly includes any data analytics tools that use historical data to help forecast future crime, potential offenders, and victims. The technology is supposed to use probability to help police departments tailor their neighborhood coverage so it puts officers in the right place at the right time….
According to a 2013 research briefing from the RAND Corporation, a nonprofit think tank in Santa Monica, California, predictive policing is made up of a four-part cycle. In the first two steps, researchers collect and analyze data on crimes, incidents, and offenders to come up with predictions. From there, police intervene based on the predictions, usually by increasing resources at certain sites at certain times. The fourth step is, ideally, reducing crime.
“Law enforcement agencies should assess the immediate effects of the intervention to ensure that there are no immediately visible problems,” the authors note. “Agencies should also track longer-term changes by examining collected data, performing additional analysis, and modifying operations as needed.”
In many cases, predictive policing software was meant to be a tool to augment police departments that are facing budget crises with fewer officers to cover a region. If cops can target certain geographical areas at certain times, then they can get ahead of the 911 calls and maybe even reduce the rate of crime.
But in practice, the accuracy of the technology has been contested—and it’s even been called racist….(More)”.
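For a sense of what the first two steps of that cycle look like in the simplest place-based systems, the sketch below aggregates historical incidents into grid cells and ranks the cells where extra patrols would be sent. Commercial tools use far more elaborate models; the coordinates and cell size here are made up:

```python
# Minimal place-based "prediction": bin past incidents into grid cells and
# rank cells by frequency. Input coordinates and cell size are hypothetical.
from collections import Counter

def predict_hotspots(incidents, cell_size=0.01, top_k=5):
    """incidents: iterable of (latitude, longitude) tuples."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return counts.most_common(top_k)   # cells slated for extra patrols

past_incidents = [(33.742, -84.391), (33.743, -84.390), (33.751, -84.402)]
print(predict_hotspots(past_incidents, top_k=2))
```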
Wrongfully Accused by an Algorithm
Kashmir Hill at the New York Times: “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit….
The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police, according to their report.
Five months later, in March 2019, Jennifer Coulson, a digital image examiner for the Michigan State Police, uploaded a “probe image” — a still from the video, showing the man in the Cardinals cap — to the state’s facial recognition database. The system would have mapped the man’s face and searched for similar ones in a collection of 49 million photos.
The state’s technology is supplied for $5.5 million by a company called DataWorks Plus. Founded in South Carolina in 2000, the company first offered mug shot management software, said Todd Pastorini, a general manager. In 2005, the firm began to expand the product, adding face recognition tools developed by outside vendors.
When one of these subcontractors develops an algorithm for recognizing faces, DataWorks attempts to judge its effectiveness by running searches using low-quality images of individuals it knows are present in a system. “We’ve tested a lot of garbage out there,” Mr. Pastorini said. These checks, he added, are not “scientific” — DataWorks does not formally measure the systems’ accuracy or bias.
“We’ve become a pseudo-expert in the technology,” Mr. Pastorini said.
In Michigan, the DataWorks software used by the state police incorporates components developed by the Japanese tech giant NEC and by Rank One Computing, based in Colorado, according to Mr. Pastorini and a state police spokeswoman. In 2019, algorithms from both companies were included in a federal study of over 100 facial recognition systems that found they were biased, falsely identifying African-American and Asian faces 10 times to 100 times more than Caucasian faces….(More)”.
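For readers unfamiliar with how a probe-image search works in principle, the sketch below ranks a gallery of photos by similarity to the probe’s face embedding. It is a generic illustration, not the NEC or Rank One algorithms Michigan licenses, and the embeddings are random stand-ins:

```python
# Generic probe-image search: faces are mapped to numeric embeddings and
# the gallery is ranked by cosine similarity to the probe. Embeddings are
# randomly generated stand-ins for real face templates.
import numpy as np

def top_matches(probe_embedding, gallery, k=3):
    """gallery: dict mapping photo IDs to embedding vectors."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {pid: cosine(probe_embedding, emb) for pid, emb in gallery.items()}
    # return the k most similar photos as candidate matches, which a human
    # examiner is then supposed to review before any investigative lead
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

rng = np.random.default_rng(0)
gallery = {f"mugshot_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(top_matches(probe, gallery))
```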
What Nobel Laureate Elinor Ostrom’s early work tells us about defunding the police
Blog by Aaron Vansintjan: “…As she concluded in her autobiographical reflections published two years before she died in 2012, “For policing, increasing the size of governmental units consistently had a negative impact on the level of output generated as well as on efficiency of service provision… smaller police departments… consistently outperformed their better trained and better financed larger neighbors.”
But why did this happen? To explain this, Ostrom showed how, in small communities with small police forces, citizens are more active in monitoring their neighborhoods. Officers in smaller police forces also have more knowledge of the local area and better connections with the community.
She also found that larger, more centralized police forces have a negative effect on other public services. With a larger police bureaucracy, other local frontline professionals with less funding — social workers, mental health support centers, clinics, youth support services — have less of a say in how to respond to a community’s issues such as drug use or domestic violence. The bigger the police department, the less say citizens — especially those already marginalized, like migrants or Black communities — have in how policing should be conducted.
This finding became a crucial step in Ostrom’s groundbreaking work on how communities manage their resources sustainably without outside help — through deliberation, resolving conflict and setting clear community agreements. This is what she ended up becoming famous for, and what won her the Nobel Memorial Prize in Economic Sciences, placing her next to some of the foremost economists in the world.
But her research on policing shouldn’t be forgotten: It shows that, when it comes to safer communities, having more funding or larger services is not important. What’s important is the connections and trust between the community and the service provider….(More)”.
IRS Used Cellphone Location Data to Try to Find Suspects
Byron Tau at the Wall Street Journal: “The Internal Revenue Service attempted to identify and track potential criminal suspects by purchasing access to a commercial database that records the locations of millions of American cellphones.
The IRS Criminal Investigation unit, or IRS CI, had a subscription to access the data in 2017 and 2018, and the way it used the data was revealed last week in a briefing by IRS CI officials to Sen. Ron Wyden’s (D., Ore.) office. The briefing was described to The Wall Street Journal by an aide to the senator.
IRS CI officials told Mr. Wyden’s office that their lawyers had given verbal approval for the use of the database, which is sold by a Virginia-based government contractor called Venntel Inc. Venntel obtains anonymized location data from the marketing industry and resells it to governments. IRS CI added that it let its Venntel subscription lapse after it failed to locate any targets of interest during the year it paid for the service, according to Mr. Wyden’s aide.
Justin Cole, a spokesman for IRS CI, said it entered into a “limited contract with Venntel to test their services against the law enforcement requirements of our agency.” IRS CI pursues the most serious and flagrant violations of tax law, and it said it used the Venntel database in “significant money-laundering, cyber, drug and organized-crime cases.”
The episode demonstrates a growing law enforcement interest in reams of anonymized cellphone movement data collected by the marketing industry. Government entities can try to use the data to identify individuals—which in many cases isn’t difficult with such databases.
It also shows that data from the marketing industry can be used as an alternative to obtaining data from cellphone carriers, a process that requires a court order. Until 2018, prosecutors needed “reasonable grounds” to seek cell tower records from a carrier. In June 2018, the U.S. Supreme Court strengthened the requirement to a showing of probable cause that a crime has been committed before such data can be obtained from carriers….(More)”
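A toy example shows why such “anonymized” advertising data is easy to re-identify: a device’s most frequent overnight location is usually its owner’s home, which can then be matched against property or voter records. The ping records, grid resolution, and overnight hours below are assumptions for illustration only:

```python
# Toy re-identification sketch: infer a device's likely home cell from its
# overnight pings. Records, cell size, and hours are illustrative.
from collections import Counter
from datetime import datetime

def likely_home(pings, cell=0.001):
    """pings: iterable of (iso_timestamp, lat, lon) for one advertising ID."""
    night_cells = Counter()
    for ts, lat, lon in pings:
        hour = datetime.fromisoformat(ts).hour
        if hour >= 22 or hour < 6:                 # overnight hours only
            night_cells[(round(lat / cell), round(lon / cell))] += 1
    if not night_cells:
        return None
    (cell_lat, cell_lon), _ = night_cells.most_common(1)[0]
    return cell_lat * cell, cell_lon * cell        # approximate home location

pings = [
    ("2018-03-02T23:15:00", 38.88951, -77.03530),
    ("2018-03-03T01:40:00", 38.88957, -77.03524),
    ("2018-03-03T13:05:00", 38.90720, -77.03690),  # daytime ping, ignored
]
print(likely_home(pings))
```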
Scraping Court Records Data to Find Dirty Cops
Article by Lawsuit.org: “In the 2002 dystopian sci-fi film “Minority Report,” law enforcement can manage crime by “predicting” illegal behavior before it happens. Though fictional, the plot is intriguing and contributes to the conversation on advanced crime-fighting technology. However, today’s world may not be far off.
Data’s role in our lives and greater access to artificial intelligence are changing the way we approach topics such as research, real estate, and law enforcement. In fact, recent investigative reporting has shown that “dozens of [American] cities” are now experimenting with predictive policing technology.
Despite the current controversy surrounding predictive policing, it seems to be a growing trend that has been met with little real resistance. We may be closer to policing that mirrors the frightening depictions in “Minority Report” than we ever thought possible.
Fighting Fire With Fire
In its current state, predictive policing is defined as:
“The usage of mathematical, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators’ identities, and methods for predicting victims of crime.”
While it might not be possible to prevent predictive policing from being employed by the criminal justice system, perhaps there are ways we can create a more level playing field: one where the powers of big data analysis aren’t just used to predict crime, but are also used to police law enforcement themselves.
Below, we’ve provided a detailed breakdown of what this potential reality could look like when applied to one South Florida county’s public databases, along with information on how citizens and communities can use public data to better understand the behaviors of local law enforcement and even individual police officers….(More)”.
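The county-specific breakdown is in the linked article; as a general sketch of the idea, the snippet below tallies a hypothetical bulk export of public court records by arresting officer and flags unusually high dismissal rates. The file name, column names, and threshold are assumptions, not the schema Lawsuit.org actually used:

```python
# Sketch of auditing officers from bulk court records: count cases and
# dismissals per arresting officer. Column names and file are hypothetical.
import csv
from collections import defaultdict

def officer_dismissal_rates(path, min_cases=20):
    stats = defaultdict(lambda: {"cases": 0, "dismissed": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            officer = row["arresting_officer"]
            stats[officer]["cases"] += 1
            if row["disposition"].strip().lower() == "dismissed":
                stats[officer]["dismissed"] += 1
    return {
        officer: s["dismissed"] / s["cases"]
        for officer, s in stats.items()
        if s["cases"] >= min_cases          # ignore small samples
    }

# Hypothetical usage:
# rates = officer_dismissal_rates("court_records_export.csv")
# flagged = {name: rate for name, rate in rates.items() if rate > 0.5}
```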