Report by Andrés F. Barrientos, Aaron R. Williams, Joshua Snoke, Claire McKay Bowen: “Federal administrative data, such as tax data, are invaluable for research, but because of privacy concerns, access to these data is typically limited to select agencies and a few individuals. An alternative to sharing microlevel data is to allow individuals to query statistics without directly accessing the confidential data. This paper studies the feasibility of using differentially private (DP) methods to make certain queries while preserving privacy. We also include new methodological adaptations to existing DP regression methods for using new data types and returning standard error estimates. We define feasibility as the impact of DP methods on analyses for making public policy decisions and the queries’ accuracy according to several utility metrics. We evaluate the methods using Internal Revenue Service data and public-use Current Population Survey data and identify how specific data features might challenge some of these methods. Our findings show that DP methods are feasible for simple, univariate statistics but struggle to produce accurate regression estimates and confidence intervals. To the best of our knowledge, this is the first comprehensive statistical study of DP regression methodology on real, complex datasets, and the findings have significant implications for the direction of a growing research field and public policy…(More)”.
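The report itself does not include code, but the class of “simple, univariate statistics” where the authors found DP methods feasible can be illustrated with the standard Laplace mechanism. The sketch below is a generic illustration, not the paper’s method; the function, bounds, and income data are hypothetical.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism (generic sketch).

    Each value is clipped to [lower, upper] so that one record can change
    the mean by at most (upper - lower) / n; Laplace noise scaled to
    sensitivity / epsilon is then added to the true mean.
    """
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(x)  # max influence of one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return x.mean() + noise

# Example: a private query over hypothetical incomes.
incomes = [32_000, 45_500, 78_000, 51_200, 64_900]
print(dp_mean(incomes, lower=0, upper=200_000, epsilon=1.0))
```

Smaller epsilon means stronger privacy but noisier answers, which is one intuition for why univariate statistics survive this trade-off better than multi-parameter regression estimates.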
Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
The White House: “Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.
As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI…(More)”.
Predictive Policing Software Terrible At Predicting Crimes
Article by Aaron Sankin and Surya Mattu: “A software company sold a New Jersey police department an algorithm that was right less than 1% of the time
Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.
Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.
We examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent. Fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.
Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.
“Why did we get PredPol? I guess we wanted to be more effective when it came to reducing crime. And having a prediction where we should be would help us to do that. I don’t know that it did that,” said Captain David Guarino of the Plainfield PD. “I don’t believe we really used it that often, if at all. That’s why we ended up getting rid of it.”…(More)”.
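The Markup’s published methodology is more involved, but the basic hit-rate arithmetic reduces to counting predictions matched by a later-reported crime of the same type in the same place. A minimal sketch, assuming a hypothetical schema of dated, gridded events:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    date: str        # e.g. "2018-03-14"
    cell_id: str     # grid cell for the predicted/reported location
    crime_type: str  # e.g. "burglary"

def hit_rate(predictions, reported_crimes):
    """Share of predictions matched by a reported crime of the same type,
    in the same grid cell, on the same day (hypothetical matching rule)."""
    reported = {(c.date, c.cell_id, c.crime_type) for c in reported_crimes}
    hits = sum((p.date, p.cell_id, p.crime_type) in reported for p in predictions)
    return hits / len(predictions)

# With 23,631 predictions and fewer than 100 matches, the rate lands
# below half a percent, consistent with The Markup's finding.
```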
Facilitating Data Flows through Data Collaboratives
A Practical Guide “to Designing Valuable, Accessible, and Responsible Data Collaboratives” by Uma Kalkar, Natalia González Alarcón, Arturo Muente Kunigami and Stefaan Verhulst: “Data is an indispensable asset in today’s society, but its production and sharing are subject to well-known market failures. Among these: neither economic nor academic markets efficiently reward costly data collection and quality assurance efforts; data providers cannot easily supervise the appropriate use of their data; and, correspondingly, users have weak incentives to pay for, acknowledge, and protect data that they receive from providers. Data collaboratives are a potential non-market solution to this problem, bringing together data providers and users to address these market failures. The governance frameworks for these collaboratives are varied and complex and their details are not widely known. This guide proposes a methodology and a set of common elements that facilitate experimentation and creation of collaborative environments. It offers guidance to governments on implementing effective data collaboratives as a means to promote data flows in Latin America and the Caribbean, harnessing their potential to design more effective services and improve public policies…(More)”.
Artificial Intelligence and the Labor Force
Report by Tobias Sytsma and Éder M. Sousa: “The rapid development of artificial intelligence (AI) has the potential to revolutionize the labor force with new generative AI tools that are projected to contribute trillions of dollars to the global economy by 2040. However, this opportunity comes with concerns about the impact of AI on workers and labor markets. As AI technology continues to evolve, there is a growing need for research to understand the technology’s implications for workers, firms, and markets. This report addresses this pressing need by exploring the relationship between occupational exposure and AI-related technologies, wages, and employment.
Using natural language processing (NLP) to identify semantic similarities between job task descriptions and U.S. technology patents awarded between 1976 and 2020, the authors evaluate occupational exposure to all technology patents in the United States, as well as to specific AI technologies, including machine learning, NLP, speech recognition, planning control, AI hardware, computer vision, and evolutionary computation.
The authors’ findings suggest that exposure to both general technology and AI technology patents is not uniform across occupational groups, over time, or across technology categories. They estimate that up to 15 percent of U.S. workers were highly exposed to AI technology patents by 2019 and find that the correlation between technology exposure and employment growth can depend on the routineness of the occupation. This report contributes to the growing literature on the labor market implications of AI and provides insights that can inform policy discussions around this emerging issue…(More)”.
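The report’s exact NLP pipeline is not reproduced here, but the core idea — scoring semantic similarity between job task descriptions and patent text — can be sketched with a simple TF-IDF baseline. This is an assumption-laden stand-in for the authors’ approach, and every string below is invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for O*NET-style task statements and patent abstracts.
tasks = [
    "Prepare financial statements and reconcile accounts",
    "Diagnose and repair engine and transmission faults",
]
patents = [
    "A machine learning system for automated reconciliation of ledger entries",
    "A speech recognition apparatus for hands-free vehicle control",
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(tasks + patents)

# Similarity of each task (rows) to each patent (columns); higher scores
# would flag an occupation as more "exposed" to that patent's technology.
scores = cosine_similarity(matrix[: len(tasks)], matrix[len(tasks):])
print(scores.round(2))
```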
How Americans View Data Privacy
Pew Research: “…Americans – particularly Republicans – have grown more concerned about how the government uses their data. The share who say they are worried about government use of people’s data has increased from 64% in 2019 to 71% today. That reflects rising concern among Republicans (from 63% to 77%), while Democrats’ concern has held steady. (Each group includes those who lean toward the respective party.)
The public increasingly says they don’t understand what companies are doing with their data. Some 67% say they understand little to nothing about what companies are doing with their personal data, up from 59%.
Most believe they have little to no control over what companies or the government do with their data. While these shares have ticked down compared with 2019, vast majorities feel this way about data collected by companies (73%) and the government (79%).
We’ve studied Americans’ views on data privacy for years. The topic remains in the national spotlight today, and it’s particularly relevant given the policy debates ranging from regulating AI to protecting kids on social media. But these are far from abstract concepts. They play out in the day-to-day lives of Americans in the passwords they choose, the privacy policies they agree to and the tactics they take – or not – to secure their personal information. We surveyed 5,101 U.S. adults using Pew Research Center’s American Trends Panel to give voice to people’s views and experiences on these topics.
In addition to the key findings covered on this page, the three chapters of this report provide more detail on:
- Views of data privacy risks, personal data and digital privacy laws (Chapter 1). Concerns, feelings and trust, plus children’s online privacy, social media companies and views of law enforcement.
- How Americans protect their online data (Chapter 2). Data breaches and hacks, passwords, cybersecurity and privacy policies.
- A deep dive into online privacy choices (Chapter 3). How knowledge, confidence and concern relate to online privacy choices…(More)”.
How to share data — not just equally, but equitably
Editorial in Nature: “Two decades ago, scientists asked more than 150,000 people living in Mexico City to provide medical data for research. Each participant gave time, blood and details of their medical history. For the researchers, who were based at the National Autonomous University of Mexico in Mexico City and the University of Oxford, UK, this was an opportunity to study a Latin American population for clues about factors contributing to disease and health. For the participants, it was a chance to contribute to science so that future generations might one day benefit from access to improved health care. Ultimately, the Mexico City Prospective Study was an exercise in trust — scientists were trusted with some of people’s most private information because they promised to use it responsibly.
Over the years, the researchers have repaid the communities through studies investigating the effects of tobacco and other risk factors on participants’ health. They have used the data to learn about the impact of diabetes on mortality rates, and they have found that rare forms of a gene called GPR75 lower the risk of obesity. And on 11 October, researchers added to the body of knowledge on the population’s ancestry.
But this project also has broader relevance — it can be seen as a model of trust and of how the power structures of science can be changed to benefit the communities closest to it.
Mexico’s population is genetically wealthy. With a complex history of migration and mixing of several populations, the country’s diverse genetic resources are valuable to the study of the genetic roots of diseases. Most genetic databases are stocked with data from people with European ancestry. If genomics is to genuinely benefit the global community — and especially under-represented groups — appropriately diverse data sets are needed. These will improve the accuracy of genetic tests, such as those for disease risk, and will make it easier to unearth potential drug targets by finding new genetic links to medical conditions…(More)”.
NYC Releases Plan to Embrace AI, and Regulate It
Article by Sarah Holder: “New York City Mayor Eric Adams unveiled a plan for adopting and regulating artificial intelligence on Monday, highlighting the technology’s potential to “improve services and processes across our government” while acknowledging the risks.
The city also announced it is piloting an AI chatbot to answer questions about opening or operating a business through its website MyCity Business.
NYC agencies have reported using more than 30 tools that fit the city’s definition of algorithmic technology, including to match students with public schools, to track foodborne illness outbreaks and to analyze crime patterns. As the technology gets more advanced, and the implications of algorithmic bias, misinformation and privacy concerns become more apparent, the city plans to set policy around new and existing applications…
New York’s strategy, developed by the Office of Technology and Innovation with the input of city agency representatives and outside technology policy experts, doesn’t itself establish any rules and regulations around AI, but lays out a timeline and blueprint for creating them. It emphasizes the need for education and buy-in both from New York constituents and city employees. Within the next year, the city plans to start to hold listening sessions with the public, and brief city agencies on how and why to use AI in their daily operations. The city has also given itself a year to start work on piloting new AI tools, and two to create standards for AI contracts…
Stefaan Verhulst, a research professor at New York University and the co-founder of The GovLab, says that especially during a budget crunch, leaning on AI offers cities opportunities to make evidence-based decisions quickly and with fewer resources. Among the potential use cases he cited are identifying areas most in need of affordable housing, and responding to public health emergencies with data…(More) (Full plan)”.
How a billionaire-backed network of AI advisers took over Washington
Article by Brendan Bordelon: “An organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.
The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.
Acting through the little-known Horizon Institute for Public Service, a nonprofit that Open Philanthropy effectively created in 2022, the group is funding the salaries of tech fellows in key Senate offices, according to documents and interviews…Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon website…
In the high-stakes Washington debate over AI rules, Open Philanthropy has long been focused on one slice of the problem — the long-term threats that future AI systems might pose to human survival. Many AI thinkers see those as science-fiction concerns far removed from the current AI harms that Washington should address. And they worry that Open Philanthropy, in concert with its web of affiliated organizations and experts, is shifting the policy conversation away from more pressing issues — including topics some leading AI firms might prefer to keep off the policy agenda…(More)”.
Google’s Expanded ‘Flood Hub’ Uses AI to Help Us Adapt to Extreme Weather
Article by Jeff Young: “Google announced Tuesday that a tool using artificial intelligence to better predict river floods will be expanded to the U.S. and Canada, covering more than 800 North American riverside communities that are home to more than 12 million people. Google calls it Flood Hub, and it’s the latest example of how AI is being used to help adapt to extreme weather events associated with climate change.
“We see tremendous opportunity for AI to solve some of the world’s biggest challenges, and climate change is very much one of those,” Google’s Chief Sustainability Officer, Kate Brandt, told Newsweek in an interview.
At an event in Brussels on Tuesday, Google announced a suite of new and expanded sustainability initiatives and products. Many of them involve the use of AI, such as tools to help city planners find the best places to plant trees and modify rooftops to buffer against city heat, and a partnership with the U.S. Forest Service to use AI to improve maps related to wildfires.
Brandt said Flood Hub’s engineers use advanced AI, publicly available data sources and satellite imagery, combined with hydrologic models of river flows. The results allow flooding predictions with a longer lead time than was previously available in many instances…(More)”.