Paper by Megan C. Roberts et al.: “Precision public health (PPH) considers the interplay between genetics, lifestyle and the environment to improve disease prevention, diagnosis and treatment on a population level—thereby delivering the right interventions to the right populations at the right time. In this Review, we explore the concept of PPH as the next generation of public health. We discuss the historical context of using individual-level data in public health interventions and examine recent advancements in how data from human and pathogen genomics and social, behavioral and environmental research, as well as artificial intelligence, have transformed public health. Real-world examples of PPH are discussed, emphasizing how these approaches are becoming a mainstay in public health, as well as outstanding challenges in their development, implementation and sustainability. Data sciences, ethical, legal and social implications research, capacity building, equity research and implementation science will have a crucial role in realizing the potential for ‘precision’ to enhance traditional public health approaches…(More)”.
An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.
Article by Adam Satariano and Roser Toll Pifarré: “Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins. At its best, the system has helped police protect vulnerable women and, overall, has reduced the number of repeat attacks in domestic violence cases. But the reliance on VioGén has also resulted in victims whose risk levels were miscalculated getting attacked again — sometimes leading to fatal consequences.
Spain now has 92,000 active cases of gender violence victims who were evaluated by VioGén, with most of them — 83 percent — classified as facing little risk of being hurt by their abuser again. Yet roughly 8 percent of women whom the algorithm found to be at negligible risk and 14 percent at low risk have reported being harmed again, according to Spain’s Interior Ministry, which oversees the system.
At least 247 women have also been killed by their current or former partner since 2007 after being assessed by VioGén, according to government figures. While that is a tiny fraction of gender violence cases, it points to the algorithm’s flaws. The New York Times found that in a judicial review of 98 of those homicides, 55 of the slain women were scored by VioGén as negligible or low risk for repeat abuse…(More)”.
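A quick sanity check of those figures (a minimal sketch only; the counts come straight from the article, and nothing here models how VioGén itself scores cases):

```python
# Reproducing the article's arithmetic; figures are as reported, not computed by us.
active_cases = 92_000
little_risk_share = 0.83            # share classified as facing little risk

homicides_reviewed = 98             # judicial review sample since 2007
scored_negligible_or_low = 55       # slain women whom VioGén rated negligible/low

print(f"cases rated little risk: {active_cases * little_risk_share:,.0f}")
print(f"reviewed homicide victims rated negligible/low: "
      f"{scored_negligible_or_low / homicides_reviewed:.0%}")  # roughly 56%
```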
10 profound answers about the math behind AI
Article by Ethan Siegel: “Why do machines learn? Even in the recent past, this would have been a ridiculous question, as machines — i.e., computers — were only capable of executing whatever instructions a human programmer had programmed into them. With the rise of generative artificial intelligence (AI), however, machines truly appear to be gifted with the ability to learn, refining their answers based on continued interactions with both human and non-human users. Large language model-based AI programs, such as ChatGPT, Claude, Gemini and more, are now so widespread that they’re replacing traditional tools, including Google searches, in applications all across the world.
How did this come to be? How did we so swiftly come to live in an era where many of us are happy to turn over aspects of our lives that traditionally needed a human expert to a computer program? From financial to medical decisions, from quantum systems to protein folding, and from sorting data to finding signals in a sea of noise, many programs that leverage artificial intelligence (AI) and machine learning (ML) are far superior at these tasks compared with even the greatest human experts.
In his new book, Why Machines Learn: The Elegant Math Behind Modern AI, science writer Anil Ananthaswamy explores all of these aspects and more. I was fortunate enough to do a question-and-answer interview with him, and here are the 10 most profound responses he was generous enough to give…(More)”.
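To give a flavor of what “learning” means mathematically (a generic textbook example, not anything drawn from Ananthaswamy’s book), here is a perceptron that learns the logical AND function by nudging its weights after every mistake:

```python
import numpy as np

# Textbook perceptron: "learning" as repeated small weight corrections.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1])                                   # AND labels

w = np.zeros(2)   # weights, initially ignorant
b = 0.0           # bias
lr = 0.1          # learning rate

for _ in range(20):                  # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)   # current guess
        error = yi - pred
        w += lr * error * xi         # nudge weights toward the right answer
        b += lr * error

print(w, b)  # a linear rule separating AND-true from AND-false inputs
```

Modern networks replace this update rule with gradient descent over billions of parameters, but the refine-on-error loop is the same in spirit.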
The Great Scrape: The Clash Between Scraping and Privacy
Paper by Daniel J. Solove and Woodrow Hartzog: “Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.
Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.
Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.
This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation…(More)”.
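For readers unfamiliar with the mechanics, a minimal sketch of what “scraping” looks like in practice (the URL is a placeholder; industrial scrapers run loops like this across millions of pages, which is exactly the scale the authors are concerned with):

```python
import urllib.request
from html.parser import HTMLParser

class LinkAndTextExtractor(HTMLParser):
    """Collects every hyperlink and visible text fragment on a page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

# example.com is a stand-in target; a crawler would repeat this per URL.
html = urllib.request.urlopen("https://example.com").read().decode("utf-8")
parser = LinkAndTextExtractor()
parser.feed(html)
print(parser.links)            # outbound links to crawl next
print(" ".join(parser.text))   # the page's extracted text
```

Nothing in such a loop asks whether the text it collects is personal data, which is precisely the tension the Article examines.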
Digital Ethology
Book edited by Tomáš Paus and Hye-Chung Kum: “Countless permutations of physical, built, and social environments surround us in space and time, influencing the air we breathe, how hot or cold we are, how many steps we take, and with whom we interact as we go about our daily lives. Assessing the dynamic processes that play out between humans and the environment is challenging. …[The book] explores how aggregate area-level data, produced at multiple locations and points in time, can reveal bidirectional—and iterative—relationships between human behavior and the environment through their digital footprints.
Experts from geospatial and data science, behavioral and brain science, epidemiology and public health, ethics, law, and urban planning consider how humans transform their environments and how environments shape human behavior…(More)”.
Mapping the Landscape of AI-Powered Nonprofits
Article by Kevin Barenblat: “Visualize the year 2050. How do you see AI having impacted the world? Whatever you’re picturing… the reality will probably be quite a bit different. Just think about the personal computer. In its early days circa the 1980s, tech companies marketed the devices for the best use cases they could imagine: reducing paperwork, doing math, and keeping track of forgettable things like birthdays and recipes. It was impossible to imagine that decades later, the devices, once larger than toasters, would be smaller than a Pop-Tart, connect with billions of other devices, and respond to voice and touch.
It can be hard for us to see how new technologies will ultimately be used. The same is true of artificial intelligence. With new use cases popping up every day, we are early in the age of AI. To make sense of all the action, many landscapes have been published to organize the tech stacks and private sector applications of AI. We could not, however, find an overview of how nonprofits are using AI for impact…
AI-powered nonprofits (APNs) are already advancing solutions to many social problems, and Google.org’s recent research brief AI in Action: Accelerating Progress Towards the Sustainable Development Goals shows that AI is driving progress towards all 17 SDGs. Three goals that stand out with especially strong potential to be transformed by AI are SDG 3 (Good Health and Well-Being), SDG 4 (Quality Education), and SDG 13 (Climate Action). As such, this series focuses on how AI-powered nonprofits are transforming the climate, health care, and education sectors…(More)”.
Everyone Has A Price — And Corporations Know Yours
Article by David Dayen: “Six years ago, I was at a conference at the University of Chicago, the intellectual heart of corporate-friendly capitalism, when my eyes found the cover of the Chicago Booth Review, the business school’s flagship publication. “Are You Ready for Personalized Pricing?” the headline asked. I wasn’t, so I started reading.
The story looked at how online shopping, persistent data collection, and machine-learning algorithms could combine to generate the stuff of economists’ dreams: individual prices for each customer. It even recounted a 2015 experiment in which the online employment website ZipRecruiter essentially outsourced its pricing strategy to two University of Chicago economists, Sanjog Misra and Jean-Pierre Dubé…(More)”.
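As a stylized sketch of what “individual prices for each customer” amounts to (the demand curve and numbers below are invented for illustration, not the ZipRecruiter experiment’s model): estimate each customer’s probability of buying at a given price, then charge the price that maximizes expected revenue:

```python
import numpy as np

def buy_prob(price, wtp, scale=40.0):
    """Purchase probability falls off as price exceeds willingness to pay."""
    return 1.0 / (1.0 + np.exp((price - wtp) / scale))

prices = np.linspace(50, 500, 200)  # candidate price grid

# Two hypothetical customers with different (assumed known) willingness to pay.
for customer, wtp in {"small firm": 120.0, "large firm": 340.0}.items():
    expected_revenue = prices * buy_prob(prices, wtp)
    best_price = prices[np.argmax(expected_revenue)]
    print(f"{customer}: charge ~${best_price:.0f}")
```

In practice the willingness-to-pay estimate is itself learned from the customer’s data trail, which is where the persistent data collection comes in.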
(Almost) 200 Years of News-Based Economic Sentiment
Paper by Jules H. van Binsbergen, Svetlana Bryzgalova, Mayukh Mukhopadhyay & Varun Sharma: “Using text from 200 million pages of 13,000 US local newspapers and machine learning methods, we construct a 170-year-long measure of economic sentiment at the country and state levels that expands existing measures in both the time series (by more than a century) and the cross-section. Our measure predicts GDP (both nationally and locally), consumption, and employment growth, even after controlling for commonly used predictors, as well as monetary policy decisions. Our measure is distinct from the information in expert forecasts and leads its consensus value. Interestingly, news coverage has become increasingly negative across all states in the past half-century…(More)”.
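The authors’ pipeline is a machine-learning model trained on millions of pages; as a toy illustration of the underlying object (a sentiment score aggregated by state and year), here is a simple lexicon-count version with made-up word lists and articles:

```python
from collections import defaultdict

# Minimal lexicon-based sentiment index by state and year.
# The paper itself uses ML over 200M pages; this shows only the aggregation idea.
POSITIVE = {"growth", "boom", "hiring", "prosperity", "recovery"}
NEGATIVE = {"recession", "layoffs", "panic", "crisis", "failure"}

articles = [  # (state, year, text) — toy stand-ins for newspaper pages
    ("OH", 1893, "bank panic deepens as failure follows failure"),
    ("OH", 1950, "factory hiring surges amid postwar boom and growth"),
]

scores = defaultdict(list)
for state, year, text in articles:
    words = text.split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    scores[(state, year)].append((pos - neg) / len(words))

for key, vals in sorted(scores.items()):
    print(key, sum(vals) / len(vals))   # average sentiment per state-year
```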
The Collaboverse: A Collaborative Data-Sharing and Speech Analysis Platform
Paper by Justin D. Dvorak and Frank R. Boutsen: “Collaboration in the field of speech-language pathology occurs across a variety of digital devices and can entail the use of multiple software tools, systems, file formats, and even programming languages. Unfortunately, gaps between the laboratory, clinic, and classroom can emerge, in part because of the siloing of data and workflows, as well as the digital divide between users. The purpose of this tutorial is to present the Collaboverse, a web-based collaborative system that unifies these domains, and to describe the application of this tool to common tasks in speech-language pathology. In addition, we demonstrate its utility in machine learning (ML) applications…
This tutorial outlines key concepts in the digital divide, data management, distributed computing, and ML. It introduces the Collaboverse workspace for researchers, clinicians, and educators in speech-language pathology who wish to improve their collaborative network and leverage advanced computational abilities. It also details an ML approach to prosodic analysis…
The Collaboverse shows promise in narrowing the digital divide and is capable of generating clinically relevant data, specifically in the area of prosody, whose computational complexity has limited widespread analysis in research and clinic alike. In addition, it includes an augmentative and alternative communication app allowing visual, nontextual communication…(More)”.
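As an illustration of why prosodic analysis is computationally demanding, here is a minimal pitch-contour extraction using the third-party librosa library (a generic sketch, not the Collaboverse’s ML pipeline; “speech.wav” is a placeholder recording):

```python
import numpy as np
import librosa  # third-party; one common choice for prosodic features

# Illustrative prosody extraction; "speech.wav" is a placeholder.
y, sr = librosa.load("speech.wav", sr=16000)

# Fundamental frequency (F0) contour via probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Simple prosodic summaries: pitch level, pitch variability, voicing rate.
print(f"mean F0: {np.nanmean(f0):.1f} Hz")   # NaNs mark unvoiced frames
print(f"F0 std:  {np.nanstd(f0):.1f} Hz")
print(f"voiced frames: {np.mean(voiced_flag):.0%}")
```

Even this simple F0 summary requires frame-level signal processing, the kind of computation the platform aims to put within reach of clinics and classrooms.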
The MAGA Plan to End Free Weather Reports
Article by Zoë Schlanger: “In the United States, as in most other countries, weather forecasts are a freely accessible government amenity. The National Weather Service issues alerts and predictions, warning of hurricanes and excessive heat and rainfall, all at the total cost to American taxpayers of roughly $4 per person per year. Anyone with a TV, smartphone, radio, or newspaper can know what tomorrow’s weather will look like, whether a hurricane is heading toward their town, or if a drought has been forecast for the next season. Even if they get that news from a privately owned app or TV station, much of the underlying weather data are courtesy of meteorologists working for the federal government.
Charging for popular services that were previously free isn’t generally a winning political strategy. But hard-right policy makers appear poised to try to do just that should Republicans gain power in the next term. Project 2025—a nearly 900-page book of policy proposals published by the conservative think tank the Heritage Foundation—states that an incoming administration should all but dissolve the National Oceanic and Atmospheric Administration, under which the National Weather Service operates….NOAA “should be dismantled and many of its functions eliminated, sent to other agencies, privatized, or placed under the control of states and territories,” Project 2025 reads. … “The preponderance of its climate-change research should be disbanded,” the document says. It further notes that scientific agencies such as NOAA are “vulnerable to obstructionism of an Administration’s aims,” so appointees should be screened to ensure that their views are “wholly in sync” with the president’s…(More)”.