Assessing and Suing an Algorithm


Report by Elina Treyger, Jirka Taylor, Daniel Kim, and Maynard A. Holliday: “Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?

Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to become successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report shares the results of a survey experiment designed to contribute to researchers’ understanding of how U.S. public perceptions are evolving in these respects in one high-stakes setting: decisions related to employment and unemployment…(More)”.

Can Large Language Models Capture Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias


Paper by S. Lee et al.: “Large language models (LLMs) have demonstrated their potential in social science research by emulating human perceptions and behaviors, a concept referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs by utilizing two nationally representative climate change surveys. The LLMs were conditioned on demographics and/or psychological covariates to simulate survey responses. The findings indicate that LLMs can effectively capture presidential voting behaviors but encounter challenges in accurately representing global warming perspectives when relevant covariates are not included. GPT-4 exhibits improved performance when conditioned on both demographics and covariates. However, disparities emerge in LLM estimations of the views of certain groups, with LLMs tending to underestimate worry about global warming among Black Americans. While highlighting the potential of LLMs to aid social science research, these results underscore the importance of meticulous conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation. Further investigation into prompt engineering and algorithm auditing is essential to harness the power of LLMs while addressing their inherent limitations…(More)”.
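
To make the conditioning step concrete, here is a minimal sketch of how a survey item might be posed to a model as a described respondent, using the OpenAI Python client. The persona fields, question wording, and answer options are illustrative assumptions, not the prompts used in the paper.

```python
# Minimal sketch of demographic conditioning for survey simulation.
# Persona fields and question wording are illustrative, not the paper's prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def simulate_response(persona: dict, question: str, options: list[str]) -> str:
    """Ask the model to answer a survey item as a described respondent."""
    profile = ", ".join(f"{k}: {v}" for k, v in persona.items())
    prompt = (
        f"You are a survey respondent with this profile: {profile}.\n"
        f"Question: {question}\n"
        f"Answer with exactly one of: {', '.join(options)}."
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sample answers rather than always taking the modal one
    )
    return reply.choices[0].message.content.strip()

answer = simulate_response(
    {"age": 45, "party": "Independent", "region": "Midwest"},
    "How worried are you about global warming?",
    ["Very worried", "Somewhat worried", "Not very worried", "Not at all worried"],
)
print(answer)
```

Repeating this over many sampled personas yields a synthetic “survey” whose response distributions can be compared against the real survey marginals, which is one way algorithmic fidelity can be assessed.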

Can Indigenous knowledge and Western science work together? New center bets yes


Article by Jeffrey Mervis: “For millennia, the Passamaquoddy people used their intimate understanding of the coastal waters along the Gulf of Maine to sustainably harvest the ocean’s bounty. Anthropologist Darren Ranco of the University of Maine hoped to blend their knowledge of tides, water temperatures, salinity, and more with a Western approach in a project to study the impact of coastal pollution on fish, shellfish, and beaches.

But the Passamaquoddy were never really given a seat at the table, says Ranco, a member of the Penobscot Nation, which along with the Passamaquoddy are part of the Wabanaki Confederacy of tribes in Maine and eastern Canada. The Passamaquoddy thought water quality and environmental protection should be top priority; the state emphasized forecasting models and monitoring. “There was a disconnect over who were the decision-makers, what knowledge would be used in making decisions, and what participation should look like,” Ranco says about the 3-year project, begun in 2015 and funded by the National Science Foundation (NSF).

Last month, NSF aimed to bridge such disconnects, with a 5-year, $30 million grant designed to weave together traditional ecological knowledge (TEK) and Western science. Based at the University of Massachusetts (UMass) Amherst, the Center for Braiding Indigenous Knowledges and Science (CBIKS) aims to fundamentally change the way scholars from both traditions select and carry out joint research projects and manage data…(More)”.

A Feasibility Study of Differentially Private Summary Statistics and Regression Analyses with Evaluations on Administrative and Survey Data


Report by Andrés F. Barrientos, Aaron R. Williams, Joshua Snoke, and Claire McKay Bowen: “Federal administrative data, such as tax data, are invaluable for research, but because of privacy concerns, access to these data is typically limited to select agencies and a few individuals. An alternative to sharing microlevel data is to allow individuals to query statistics without directly accessing the confidential data. This paper studies the feasibility of using differentially private (DP) methods to make certain queries while preserving privacy. We also include new methodological adaptations to existing DP regression methods for using new data types and returning standard error estimates. We define feasibility as the impact of DP methods on analyses for making public policy decisions and the queries’ accuracy according to several utility metrics. We evaluate the methods using Internal Revenue Service data and public-use Current Population Survey data and identify how specific data features might challenge some of these methods. Our findings show that DP methods are feasible for simple, univariate statistics but struggle to produce accurate regression estimates and confidence intervals. To the best of our knowledge, this is the first comprehensive statistical study of DP regression methodology on real, complex datasets, and the findings have significant implications for the direction of a growing research field and public policy…(More)”.
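
For intuition about what a differentially private query involves, the sketch below applies the Laplace mechanism, a standard DP building block, to a univariate mean of the kind the report finds feasible. The clipping bounds, epsilon, and data here are illustrative assumptions, not values or methods from the study.

```python
# Minimal sketch of an epsilon-DP mean via the Laplace mechanism.
# Bounds, epsilon, and data are illustrative, not taken from the study.
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Release a mean with epsilon-DP by adding calibrated Laplace noise."""
    clipped = np.clip(values, lower, upper)       # bound each record's influence
    sensitivity = (upper - lower) / len(clipped)  # max change from altering one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.5, sigma=0.8, size=10_000)  # synthetic "tax" data
print(dp_mean(incomes, lower=0.0, upper=500_000.0, epsilon=1.0))
```

Simple statistics like this tolerate the added noise well; the report’s central finding is that regression coefficients and their confidence intervals degrade much more under comparable privacy budgets.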

Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence


The White House: “Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI…(More)”.

Predictive Policing Software Terrible At Predicting Crimes


Article by Aaron Sankin and Surya Mattu: “A software company sold a New Jersey police department an algorithm that was right less than 1% of the time

Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.

Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.

We examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent: fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.

Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.
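
The arithmetic behind those headline rates is straightforward; the quick check below treats “fewer than 100” as an assumed 99 matches, a hypothetical upper bound rather than a figure The Markup reported.

```python
# Back-of-the-envelope check of the reported overall hit rate.
total_predictions = 23_631
matched = 99  # assumed upper bound for "fewer than 100" matched predictions
print(f"overall hit rate: {matched / total_predictions:.2%}")  # ~0.42%, under half a percent
```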

“Why did we get PredPol? I guess we wanted to be more effective when it came to reducing crime. And having a prediction where we should be would help us to do that. I don’t know that it did that,” said Captain David Guarino of the Plainfield PD. “I don’t believe we really used it that often, if at all. That’s why we ended up getting rid of it.”…(More)”.

Artificial Intelligence and the Labor Force


Report by Tobias Sytsma and Éder M. Sousa: “The rapid development of artificial intelligence (AI) has the potential to revolutionize the labor force with new generative AI tools that are projected to contribute trillions of dollars to the global economy by 2040. However, this opportunity comes with concerns about the impact of AI on workers and labor markets. As AI technology continues to evolve, there is a growing need for research to understand the technology’s implications for workers, firms, and markets. This report addresses this pressing need by exploring the relationship between occupational exposure and AI-related technologies, wages, and employment.

Using natural language processing (NLP) to identify semantic similarities between job task descriptions and U.S. technology patents awarded between 1976 and 2020, the authors evaluate occupational exposure to all technology patents in the United States, as well as to specific AI technologies, including machine learning, NLP, speech recognition, planning control, AI hardware, computer vision, and evolutionary computation.
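
As a rough illustration of this kind of pipeline, the sketch below scores occupations against patents by text similarity. TF-IDF cosine similarity is a stand-in for the authors’ actual NLP method, and the task and patent texts are invented for the example.

```python
# Illustrative exposure scoring: semantic similarity between job tasks
# and patent text. TF-IDF cosine similarity is a stand-in for the
# authors' NLP pipeline; all texts here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tasks = [
    "Transcribe speech from audio recordings into written reports.",  # clerical
    "Install and repair residential plumbing fixtures.",              # manual
]
patents = [
    "A system for automatic speech recognition on audio recordings.",
    "A machine learning method for classifying images.",
]

vectorizer = TfidfVectorizer().fit(tasks + patents)
exposure = cosine_similarity(vectorizer.transform(tasks),
                             vectorizer.transform(patents))
# Rows are occupations, columns are patents; higher values mean the
# occupation's tasks look more like the patented technology. Note that
# TF-IDF only captures word overlap; a semantic approach would use embeddings.
print(exposure.round(2))
```

Aggregating such scores across a patent category (say, speech recognition) and over time would yield something like the occupational exposure measures the report describes.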

The authors’ findings suggest that exposure to both general technology and AI technology patents is not uniform across occupational groups, over time, or across technology categories. They estimate that up to 15 percent of U.S. workers were highly exposed to AI technology patents by 2019 and find that the correlation between technology exposure and employment growth can depend on the routineness of the occupation. This report contributes to the growing literature on the labor market implications of AI and provides insights that can inform policy discussions around this emerging issue…(More)”

How Americans View Data Privacy


Pew Research: “…Americans – particularly Republicans – have grown more concerned about how the government uses their data. The share who say they are worried about government use of people’s data has increased from 64% in 2019 to 71% today. That reflects rising concern among Republicans (from 63% to 77%), while Democrats’ concern has held steady. (Each group includes those who lean toward the respective party.)

The public increasingly says they don’t understand what companies are doing with their data. Some 67% say they understand little to nothing about what companies are doing with their personal data, up from 59%.

Most believe they have little to no control over what companies or the government do with their data. While these shares have ticked down compared with 2019, vast majorities feel this way about data collected by companies (73%) and the government (79%).

We’ve studied Americans’ views on data privacy for years. The topic remains in the national spotlight today, and it’s particularly relevant given the policy debates ranging from regulating AI to protecting kids on social media. But these are far from abstract concepts. They play out in the day-to-day lives of Americans in the passwords they choose, the privacy policies they agree to and the tactics they take – or not – to secure their personal information. We surveyed 5,101 U.S. adults using Pew Research Center’s American Trends Panel to give voice to people’s views and experiences on these topics.

In addition to the key findings covered on this page, the three chapters of this report provide more detail on these and related topics…(More)”.

How to share data — not just equally, but equitably


Editorial in Nature: “Two decades ago, scientists asked more than 150,000 people living in Mexico City to provide medical data for research. Each participant gave time, blood and details of their medical history. For the researchers, who were based at the National Autonomous University of Mexico in Mexico City and the University of Oxford, UK, this was an opportunity to study a Latin American population for clues about factors contributing to disease and health. For the participants, it was a chance to contribute to science so that future generations might one day benefit from access to improved health care. Ultimately, the Mexico City Prospective Study was an exercise in trust — scientists were trusted with some of people’s most private information because they promised to use it responsibly.

Over the years, the researchers have repaid the communities through studies investigating the effects of tobacco and other risk factors on participants’ health. They have used the data to learn about the impact of diabetes on mortality rates, and they have found that rare forms of a gene called GPR75 lower the risk of obesity. And on 11 October, researchers added to the body of knowledge on the population’s ancestry.

But this project also has broader relevance — it can be seen as a model of trust and of how the power structures of science can be changed to benefit the communities closest to it.

Mexico’s population is genetically wealthy. With a complex history of migration and mixing of several populations, the country’s diverse genetic resources are valuable to the study of the genetic roots of diseases. Most genetic databases are stocked with data from people with European ancestry. If genomics is to genuinely benefit the global community — and especially under-represented groups — appropriately diverse data sets are needed. These will improve the accuracy of genetic tests, such as those for disease risk, and will make it easier to unearth potential drug targets by finding new genetic links to medical conditions…(More)”.

NYC Releases Plan to Embrace AI, and Regulate It


Article by Sarah Holder: “New York City Mayor Eric Adams unveiled a plan for adopting and regulating artificial intelligence on Monday, highlighting the technology’s potential to “improve services and processes across our government” while acknowledging the risks.

The city also announced it is piloting an AI chatbot to answer questions about opening or operating a business through its website MyCity Business.

NYC agencies have reported using more than 30 tools that fit the city’s definition of algorithmic technology, including to match students with public schools, to track foodborne illness outbreaks and to analyze crime patterns. As the technology gets more advanced, and the implications of algorithmic bias, misinformation and privacy concerns become more apparent, the city plans to set policy around new and existing applications…

New York’s strategy, developed by the Office of Technology and Innovation with the input of city agency representatives and outside technology policy experts, doesn’t itself establish any rules and regulations around AI, but lays out a timeline and blueprint for creating them. It emphasizes the need for education and buy-in both from New York constituents and city employees. Within the next year, the city plans to start to hold listening sessions with the public, and brief city agencies on how and why to use AI in their daily operations. The city has also given itself a year to start work on piloting new AI tools, and two to create standards for AI contracts….

Stefaan Verhulst, a research professor at New York University and the co-founder of The GovLab, says that especially during a budget crunch, leaning on AI offers cities opportunities to make evidence-based decisions quickly and with fewer resources. Among the potential use cases he cited are identifying areas most in need of affordable housing, and responding to public health emergencies with data…(More) (Full plan)”.