Digital Health Data And Information Sharing: A New Frontier For Health Care Competition?


Paper by Lucia Savage, Martin Gaynor and Julie Adler-Milstein: “There are obvious benefits to having patients’ health information flow across health providers. Providers will have more complete information about patients’ health and treatment histories, allowing them to make better treatment recommendations and avoid unnecessary and duplicative testing or treatment. This should result in better and more efficient treatment, and better health outcomes. Moreover, the federal government has provided substantial incentives for the exchange of health information. Since 2009, the federal government has spent more than $40 billion to ensure that most physicians and hospitals use electronic health records and to incentivize the use of electronic health information and health information exchange (the enabling statute is the Health Information Technology for Economic and Clinical Health Act), and in 2016 authorized substantial fines for failing to share appropriate information.

Yet, in spite of these incentives and the clear benefits to patients, the exchange of health information remains limited. There is evidence that this limited exchange is due in part to providers and platforms attempting to retain, rather than share, information (“information blocking”). In this article we examine legal and business reasons why health information may not be flowing. In particular, we discuss incentives providers and platforms can have for information blocking as a means to maintain or enhance their market position and thwart competition. Finally, we recommend steps to better understand whether the absence of information exchange is due to information blocking that harms competition and consumers….(More)”

Credit denial in the age of AI


Paper by Aaron Klein: “Banks have been in the business of deciding who is eligible for credit for centuries. But in the age of artificial intelligence (AI), machine learning (ML), and big data, digital technologies have the potential to transform credit allocation in positive as well as negative directions. Given the mix of possible societal ramifications, policymakers must consider what practices are and are not permissible and what legal and regulatory structures are necessary to protect consumers against unfair or discriminatory lending practices.

In this paper, I review the history of credit and the risks of discriminatory practices. I discuss how AI alters the dynamics of credit denials and what policymakers and banking officials can do to safeguard consumer lending. AI has the potential to alter credit practices in transformative ways and it is important to ensure that this happens in a safe and prudent manner….(More)”.

Characterizing the cultural niches of North American birds


Justin G. Schuetz and Alison Johnston at PNAS: “Efforts to mitigate the current biodiversity crisis require a better understanding of how and why humans value other species. We use Internet query data and citizen science data to characterize public interest in 621 bird species across the United States. We estimate the relative popularity of different birds by quantifying how frequently people use Google to search for species, relative to the rates at which they are encountered in the environment.

In intraspecific analyses, we also quantify the degree to which Google searches are limited to, or extend beyond, the places in which people encounter each species. The resulting metrics of popularity and geographic specificity of interest allow us to define aspects of relationships between people and birds within a cultural niche space. We then estimate the influence of species traits and socially constructed labels on niche positions to assess the importance of observations and ideas in shaping public interest in birds.

Our analyses show clear effects of migratory strategy, color, degree of association with bird feeders, and, especially, body size on niche position. They also indicate that cultural labels, including “endangered,” “introduced,” and, especially, “team mascot,” are strongly associated with the magnitude and geographic specificity of public interest in birds. Our results provide a framework for exploring complex relationships between humans and other species and enable more informed decision-making across diverse bird conservation strategies and goals….(More)”.
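The study's core popularity metric — how often people search for a species relative to how often they encounter it — can be illustrated with a minimal sketch. This is not the authors' code, and the species counts below are made-up numbers chosen only to show the shape of the calculation:

```python
# Illustrative sketch (not the authors' code): relative popularity as the
# ratio of Google search volume to the rate at which a species is
# encountered, per the approach described above.
def relative_popularity(search_count: float, encounter_count: float) -> float:
    """Searches per encounter; higher values indicate outsized public interest."""
    if encounter_count == 0:
        raise ValueError("species was never encountered")
    return search_count / encounter_count

# (searches, reported encounters) -- hypothetical figures for illustration
species = {
    "Bald Eagle": (120_000, 15_000),
    "House Sparrow": (8_000, 90_000),
}
ranked = sorted(species, key=lambda s: relative_popularity(*species[s]), reverse=True)
```

A commonly encountered bird with few searches ranks low on this metric even if its raw search volume is high, which is why the authors normalize by encounter rate rather than ranking on searches alone.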

New Data-Driven Map Shows Spread of Participation in Democracy


Loren Peabody at the Participatory Budgeting Project: “As we celebrate the first 30 years of participatory budgeting (PB) in the world and the first 10 years of the Participatory Budgeting Project (PBP), we reflect on how far and wide PB has spread–and how it continues to grow! We’re thrilled to introduce a new tool to help us look back as we plan for the next 30+ years of PB. And so we’re introducing a map of PB across the U.S. and Canada. Each dot on the map represents a place where democracy has been deepened by bringing people together to decide how to invest public resources in their community….

This data sheds light on larger questions, such as: what is the relationship between the size of PB budgets and the number of people who participate? Looking at PBP data on processes in counties, cities, and urban districts, we find a positive correlation between the size of the PB budget per person and the number of people who take part in a PB vote (r=.22, n=245). In other words, where officials make a stronger commitment to funding PB, more people take part in the process–all the more reason to continue growing PB!….(More)”.
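The reported figure (r=.22, n=245) is a Pearson correlation between per-person PB budget and voter turnout. A minimal sketch of that check, using made-up data rather than PBP's actual dataset:

```python
# Illustrative Pearson correlation between per-capita PB budget and the
# number of PB voters. The data below are hypothetical; the article
# reports r = .22 over n = 245 real processes.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

budget_per_person = [1.0, 2.5, 4.0, 8.0, 10.0]   # hypothetical dollars
voters = [150, 210, 400, 520, 480]               # hypothetical turnout
r = pearson_r(budget_per_person, voters)
```

A positive r indicates that larger per-person budgets tend to coincide with higher turnout; with only n=245 observations and r=.22 the relationship is modest, which is consistent with the article's cautious "positive correlation" framing.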

The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand


NBER Paper by Daron Acemoglu and Pascual Restrepo: “Artificial Intelligence is set to influence every aspect of our lives, not least the way production is organized. AI, as a technology platform, can automate tasks previously performed by labor or create new tasks and activities in which humans can be productively employed. Recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality and lower productivity growth. The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the “right” kind of AI with better economic and social outcomes….(More)”.

The Automated Administrative State


Paper by Danielle Citron and Ryan Calo: “The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid to cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process”— understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems like the “no-fly” list were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice” in the form of efficiency with a “human face” feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusory promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts…(More)”.

This tech tells cities when floods are coming–and what they will destroy


Ben Paynter at FastCompany: “Several years ago, one of the eventual founders of One Concern nearly died in a tragic flood. Today, the company specializes in using artificial intelligence to predict how natural disasters are unfolding in real time on a city-block-level basis, in order to help disaster responders save as many lives as possible….

To fix that, One Concern debuted Flood Concern in late 2018. It creates map-based visualizations of where water surges may hit hardest, up to five days ahead of an impending storm. For cities, that includes not just time-lapse breakdowns of how the water will rise, how fast it could move, and what direction it will be flowing, but also what structures will get swamped or washed away, and how differing mitigation efforts–from levee building to dam releases–will impact each scenario. It’s the winner of Fast Company’s 2019 World Changing Ideas Awards in the AI and Data category.


So far, Flood Concern has been retroactively tested against events like Hurricane Harvey to show that it could have predicted what areas would be most impacted well ahead of the storm. The company, which was founded in Silicon Valley in 2015, started with one of that region’s pressing threats: earthquakes. It’s since earned contracts with cities like San Francisco, Los Angeles, and Cupertino, as well as private insurance companies….

One Concern’s first offering, dubbed Seismic Concern, takes existing information from satellite images and building permits to figure out what kind of ground structures are built on, and what might happen if they started shaking. If a big one hits, the program can extrapolate from the epicenter to suggest the likeliest places for destruction, and then adjust as more data from things like 911 calls and social media gets factored in….(More)”.


The Smart Enough City



Open Access Book by Ben Green: “Smart cities, where technology is used to solve every problem, are hailed as futuristic urban utopias. We are promised that apps, algorithms, and artificial intelligence will relieve congestion, restore democracy, prevent crime, and improve public services. In The Smart Enough City, Ben Green warns against seeing the city only through the lens of technology; taking an exclusively technical view of urban life will lead to cities that appear smart but under the surface are rife with injustice and inequality. He proposes instead that cities strive to be “smart enough”: to embrace technology as a powerful tool when used in conjunction with other forms of social change—but not to value technology as an end in itself….(More)”.

Artists as ‘Creative Problem-Solvers’ at City Agencies


Sophie Haigney at The New York Times: “Taja Lindley, a Brooklyn-based interdisciplinary artist and activist, will spend the next year doing an unconventional residency — she’ll be collaborating with the New York City Department of Health and Mental Hygiene, working on a project that deals with unequal birth outcomes and maternal mortality for pregnant and parenting black people in the Bronx.

Ms. Lindley is one of four artists who were selected this year for the City’s Public Artists in Residence program, or PAIR, which is managed by New York City’s Department of Cultural Affairs. The program, which began in 2015, matches artists and public agencies, and the artists are tasked with developing creative projects around social issues.

Ms. Lindley will be working with the Tremont Neighborhood Health Action Center, part of the department of health, in the Bronx. “People who are black are met with skepticism, minimized and dismissed when they seek health care,” Ms. Lindley said, “and the voices of black people can really shift medical practices and city practices, so I’ll really be centering those voices.” She said that performance, film and storytelling are likely to be incorporated in her project.

The other three artists selected this year are the artist Laura Nova, who will be in residence with the Department for the Aging; the artist Julia Weist, who will be in residence with the Department of Records and Information Services; and the artist Janet Zweig, who will be in residence with the Mayor’s Office of Sustainability. Each will receive $40,000. There is a three-month-long research phase and then the artists will spend a minimum of nine months creating and producing their work….(More)”.

Crowdsourced reports could save lives when the next earthquake hits


Charlotte Jee at MIT Technology Review: “When it comes to earthquakes, every minute counts. Knowing that one has hit—and where—can make the difference between staying inside a building and getting crushed, and running out and staying alive. This kind of timely information can also be vital to first responders.

However, the speed of early warning systems varies from country to country. In Japan and California, huge networks of sensors and seismic stations can alert citizens to an earthquake. But these networks are expensive to install and maintain. Earthquake-prone countries such as Mexico and Indonesia don’t have such an advanced or widespread system.

A cheap, effective way to help close this gap between countries might be to crowdsource earthquake reports and combine them with traditional detection data from seismic monitoring stations. The approach was described in a paper in Science Advances today.

The crowdsourced reports come from three sources: people submitting information using LastQuake, an app created by the Euro-Mediterranean Seismological Centre; tweets that refer to earthquake-related keywords; and the time and IP address data associated with visits to the EMSC website.

When this method was applied retrospectively to earthquakes that occurred in 2016 and 2017, the crowdsourced detections on their own were 85% accurate. Combining the technique with traditional seismic data raised accuracy to 97%. The crowdsourced system was faster, too. Around 50% of the earthquake locations were found in less than two minutes, a whole minute faster than with data provided only by a traditional seismic network.
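The speed advantage described above comes from accepting whichever channel confirms an event first: crowd reports often arrive before seismic confirmation. A hypothetical sketch of that fusion logic (the event times and thresholds below are invented, not from the paper):

```python
# Hypothetical illustration of fusing two detection channels: report an
# event at the earliest confirmation time from either crowdsourced reports
# or seismic stations. Times are seconds after the quake; None = no detection.
from typing import Optional

def fused_detection(crowd_t: Optional[float], seismic_t: Optional[float]) -> Optional[float]:
    """Return the earliest confirmation time from either channel, if any."""
    times = [t for t in (crowd_t, seismic_t) if t is not None]
    return min(times) if times else None

# (crowd report time, seismic confirmation time) for three invented events
events = [(75.0, 140.0), (None, 160.0), (90.0, None)]
detection_times = [fused_detection(c, s) for c, s in events]
```

The fused system is never slower than either channel alone, and it still catches events that only one channel detects — which is consistent with the reported gains in both speed and accuracy when crowdsourced reports are layered on top of seismic data.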

When EMSC has identified a suspected earthquake, it sends out alerts via its LastQuake app asking users nearby for more information: images, videos, descriptions of the level of tremors, and so on. This can help first responders assess the level of damage….(More)”.