How a largely untested AI algorithm crept into hundreds of hospitals


Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.

In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.

The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.

Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those who were at high risk of being transferred to an ICU, getting placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.
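For readers unfamiliar with the statistics behind such claims, "discrimination" here refers to how well a score separates patients who later deteriorate from those who do not, commonly summarized as the area under the ROC curve (AUROC). The sketch below is a minimal illustration with invented numbers; it is not the Michigan Medicine analysis, and all values and variable names are hypothetical.

```python
# Hypothetical illustration of what "discriminating between low-risk and
# high-risk patients" means: AUROC is the probability that a randomly chosen
# patient who deteriorated received a higher index score than a randomly
# chosen patient who did not. All numbers below are invented.

scores   = [12, 67, 45, 30, 88, 23, 40, 55]   # Deterioration Index scores (made up)
outcomes = [0,  1,  0,  1,  1,  0,  1,  0]    # 1 = ICU transfer, ventilation, or death

pos = [s for s, y in zip(scores, outcomes) if y == 1]
neg = [s for s, y in zip(scores, outcomes) if y == 0]

# Count concordant pairs (half credit for ties) across all positive/negative pairings.
concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
auroc = concordant / (len(pos) * len(neg))

print(f"AUROC: {auroc:.2f}")  # 0.5 = no better than chance, 1.0 = perfect separation
```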

Deepfake Maps Could Really Mess With Your Sense of the World


Will Knight at Wired: “Satellite images showing the expansion of large detention camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of a government crackdown on more than a million Muslims, triggering international condemnation and sanctions.

Other aerial images—of nuclear installations in Iran and missile sites in North Korea, for example—have had a similar impact on world events. Now, image-manipulation tools made possible by artificial intelligence may make it harder to accept such images at face value.

In a paper published online last month, University of Washington professor Bo Zhao employed AI techniques similar to those used to create so-called deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.

Zhao used an algorithm called CycleGAN to manipulate satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all sorts of image trickery. It trains an artificial neural network to recognize the key characteristics of certain images, such as a style of painting or the features on a particular type of map. Another algorithm then helps refine the performance of the first by trying to detect when an image has been manipulated….(More)”.
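For readers curious about the mechanics, the paragraph above describes an adversarial setup: one network produces altered imagery while a second learns to detect the manipulation, and each improves against the other. The sketch below is a deliberately simplified, hypothetical illustration of that loop in PyTorch; it is not the CycleGAN implementation itself (which trains two generator-discriminator pairs with an additional cycle-consistency loss), and the toy network architectures and 64x64 tile size are assumptions made for illustration.

```python
# Minimal adversarial-training sketch: a generator restyles map tiles while a
# discriminator learns to detect the fakes. Assumes PyTorch and 64x64 RGB tiles.
import torch
import torch.nn as nn

# Toy generator: maps a domain-A tile (e.g., Seattle) toward domain B (e.g., Beijing).
generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

# Toy discriminator: scores whether a tile looks like a real domain-B tile.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 32 * 32, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_a: torch.Tensor, real_b: torch.Tensor):
    """One adversarial update: the discriminator learns to flag generated tiles,
    then the generator learns to produce tiles the discriminator accepts as real."""
    fake_b = generator(real_a)

    # Discriminator: real domain-B tiles should score 1, generated tiles 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real_b), torch.ones(real_b.size(0), 1)) + \
             bce(discriminator(fake_b.detach()), torch.zeros(real_a.size(0), 1))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator score the fakes as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake_b), torch.ones(real_a.size(0), 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Dry run with random tensors standing in for map tiles:
# train_step(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```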

Quantitative Description of Digital Media


Introduction by Kevin Munger, Andrew M. Guess and Eszter Hargittai: “We introduce the rationale for a new peer-reviewed scholarly journal, the Journal of Quantitative Description: Digital Media. The journal is intended to create a new venue for research on digital media and address several deficiencies in the current social science publishing landscape. First, descriptive research is undersupplied and undervalued. Second, research questions too often only reflect dominant theories and received wisdom. Third, journals are constrained by unnecessary boundaries defined by discipline, geography, and length. Fourth, peer review is inefficient and unnecessarily burdensome for both referees and authors. We outline the journal’s scope and structure, which is open access, fee-free and relies on a Letter of Inquiry (LOI) model. Quantitative description can appeal to social scientists of all stripes and is a crucial methodology for understanding the continuing evolution of digital media and its relationship to important questions of interest to social scientists….(More)”.

Creating Public Value using the AI-Driven Internet of Things


Report by Gwanhoo Lee: “Government agencies seek to deliver quality services in increasingly dynamic and complex environments. However, outdated infrastructures—and a shortage of systems that collect and use massive real-time data—make it challenging for the agencies to fulfill their missions. Governments have a tremendous opportunity to transform public services using the “Internet of Things” (IoT) to provide situation-specific and real-time data, which can improve decision-making and optimize operational effectiveness.

In this report, Professor Lee describes IoT as a network of physical “things” equipped with sensors and devices that enable data transmission and operational control with no or little human intervention. Organizations have recently begun to embrace artificial intelligence (AI) and machine learning (ML) technologies to drive even greater value from IoT applications. AI/ML enhances the data analytics capabilities of IoT by enabling accurate predictions and optimal decisions in new ways. Professor Lee calls this AI/ML-powered IoT the “AI-Driven Internet of Things” (AIoT for short hereafter). AIoT is a natural evolution of IoT as computing, networking, and AI/ML technologies are increasingly converging, enabling organizations to develop as “cognitive enterprises” that capitalize on the synergy across these emerging technologies.

Strategic application of IoT in government is in an early phase. Few U.S. federal agencies have explicitly incorporated IoT in their strategic plan, or connected the potential of AI to their evolving IoT activities. The diversity and scale of public services combined with various needs and demands from citizens provide an opportunity to deliver value from implementing AI-driven IoT applications.

Still, IoT is already making the delivery of some public services smarter and more efficient, including public parking, water management, public facility management, safety alerts for the elderly, traffic control, and air quality monitoring. For example, the City of Chicago has deployed a citywide network of air quality sensors mounted on lampposts. These sensors track the presence of several air pollutants, helping the city develop environmental responses that improve the quality of life at a community level. As the cost of sensors decreases while computing power and machine learning capabilities grow, IoT will become more feasible and pervasive across the public sector—with some estimates of a market approaching $5 trillion in the next few years.

Professor Lee’s research aims to develop a framework of alternative models for creating public value with AIoT, validating the framework with five use cases in the public domain. Specifically, this research identifies three essential building blocks to AIoT: sensing through IoT devices, controlling through the systems that support these devices, and analytics capabilities that leverage AI to understand and act on the information accessed across these applications. By combining the building blocks in different ways, the report identifies four models for creating public value:

  • Model 1 utilizes only sensing capability.
  • Model 2 uses sensing capability and controlling capability.
  • Model 3 leverages sensing capability and analytics capability.
  • Model 4 combines all three capabilities.
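To make those combinations concrete, the following is a hypothetical sketch, not taken from the report, of how the three building blocks (sensing, controlling, and analytics) might compose into the four models; the class, field, and example names are illustrative assumptions.

```python
# Hypothetical sketch of the report's three AIoT building blocks and how their
# combinations map onto the four value-creation models listed above.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AIoTApplication:
    sense: Callable[[], dict]                          # IoT sensing: read device data
    control: Optional[Callable[[dict], None]] = None   # actuate devices based on the data
    analyze: Optional[Callable[[dict], dict]] = None   # AI/ML analytics over the data

    def run_once(self) -> dict:
        reading = self.sense()                                        # Model 1: sensing only
        insight = self.analyze(reading) if self.analyze else reading  # Models 3 and 4 add analytics
        if self.control:                                              # Models 2 and 4 add control
            self.control(insight)
        return insight

# Example: a made-up air-quality deployment combining all three capabilities (Model 4).
app = AIoTApplication(
    sense=lambda: {"pm2_5": 42.0},
    analyze=lambda r: {**r, "alert": r["pm2_5"] > 35.0},
    control=lambda r: print("issue advisory" if r.get("alert") else "no action"),
)
app.run_once()
```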

The analysis of five AIoT use cases in the public transport sector from Germany, Singapore, the U.K., and the United States identifies 10 critical success factors, such as creating public value, using public-private partnerships, engaging with the global technology ecosystem, implementing incrementally, quantifying the outcome, and using strong cybersecurity measures….(More)”.

The Switch: How the Telegraph, Telephone, and Radio Created the Computer


Book by Chris McDonald: “Digital technology has transformed our world almost beyond recognition over the past four decades. We spend our lives surrounded by laptops, phones, tablets, and video game consoles — not to mention the digital processors that are jam-packed into our appliances and automobiles. We use computers to work, to play, to learn, and to socialize. The Switch tells the story of the humble components that made all of this possible — the transistor and its antecedents, the relay, and the vacuum tube.

All three of these devices were originally developed without any thought for their application to computers or computing. Instead, they were created for communication, in order to amplify or control signals sent over a wire or over the air. By repurposing these amplifiers as simple switches, flipped on and off by the presence or absence of an electric signal, later scientists and engineers constructed our digital universe. Yet none of it would have been possible without the telegraph, telephone, and radio. In these pages you’ll find a story of the interplay between science and technology, and the surprising ways in which inventions created for one purpose can be adapted to another. The tale is enlivened by the colorful cast of scientists and innovators, from Luigi Galvani to William Shockley, who, whether through brilliant insight or sheer obstinate determination, contributed to the evolution of the digital switch….(More)”.

Diverse Sources Database


About: “The Diverse Sources Database is NPR’s resource for journalists who believe in the value of diversity and share our goal to make public radio look and sound like America.

Originally called Source of the Week, the database launched in 2013 as a way to help journalists at NPR and member stations expand the racial/ethnic diversity of the experts they tap for stories…(More)”.

‘Belonging Is Stronger Than Facts’: The Age of Misinformation


Max Fisher at the New York Times: “There’s a decent chance you’ve had at least one of these rumors, all false, relayed to you as fact recently: that President Biden plans to force Americans to eat less meat; that Virginia is eliminating advanced math in schools to advance racial equality; and that border officials are mass-purchasing copies of Vice President Kamala Harris’s book to hand out to refugee children.

All were amplified by partisan actors. But you’re just as likely, if not more so, to have heard them relayed from someone you know. And you may have noticed that these cycles of falsehood-fueled outrage keep recurring.

We are in an era of endemic misinformation — and outright disinformation. Plenty of bad actors are helping the trend along. But the real drivers, some experts believe, are social and psychological forces that make people prone to sharing and believing misinformation in the first place. And those forces are on the rise.

“Why are misperceptions about contentious issues in politics and science seemingly so persistent and difficult to correct?” Brendan Nyhan, a Dartmouth College political scientist, posed in a new paper in Proceedings of the National Academy of Sciences.

It’s not for want of good information, which is ubiquitous. Exposure to good information does not reliably instill accurate beliefs anyway. Rather, Dr. Nyhan writes, a growing body of evidence suggests that the ultimate culprits are “cognitive and memory limitations, directional motivations to defend or support some group identity or existing belief, and messages from other people and political elites.”

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems….(More)”.

Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis



Report by Dhanaraj Thakur and Emma Llansó: “The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.

This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and computer prediction models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content….(More)”.
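As a rough illustration of the difference between the two matching approaches, the sketch below pairs a cryptographic hash, which only matches bit-identical files, with a simple "average hash," one basic form of perceptual hashing that tolerates minor edits such as resizing or re-encoding. It is a hypothetical example, not CDT's tooling or any platform's production system; the filenames and distance threshold are assumptions.

```python
# Illustrative contrast between cryptographic and perceptual hashing for
# content matching. Assumes Pillow is installed for image handling.
import hashlib
from PIL import Image

def cryptographic_hash(path: str) -> str:
    """Exact matching: any change to the file yields a completely different digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str, size: int = 8) -> int:
    """Perceptual matching: downscale to an 8x8 grayscale image and record which
    pixels are brighter than the mean, producing a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing fingerprint bits; small distances suggest near-duplicates."""
    return bin(a ^ b).count("1")

# Usage (hypothetical filenames): flag a match if the fingerprints are close.
# known = average_hash("known_content.jpg")
# upload = average_hash("user_upload.jpg")
# is_match = hamming_distance(known, upload) <= 5
```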

Need Public Policy for Human Gene Editing, Heatwaves, or Asteroids? Try Thinking Like a Citizen


Article by Nicholas Weller, Michelle Sullivan Govani, and Mahmud Farooque: “In a ballroom at the Arizona Science Center one afternoon in 2017, more than 70 Phoenix residents—students, teachers, nurses, and retirees—gathered around tables to participate in a public forum about how cities can respond to extreme weather such as heat waves. Each table was covered in colorful printouts with a large laminated poster resembling a board game. Milling between the tables were decisionmakers from local government and the state. All were taking part in a deliberative process called participatory technology assessment, or pTA, designed to break down the walls between “experts” and citizens to gain insights into public policy dilemmas involving science, technology, and uncertainty.

Foreshadowing their varied viewpoints and experiences, participants prepared differently for the “extreme weather” of the heavily air conditioned ballroom, with some gripping cardigans around their shoulders while others were comfortable in tank tops. Extreme heat is something all the participants were familiar with—Phoenix is one of the hottest cities in the country—but not everyone understood the unequal way that heat and related deaths affect different parts of the Valley of the Sun. Though a handful of the participants might have called themselves environmentalists, most were not regular town-hall goers or political activists. Instead, they represented a diverse cross section of people in Phoenix. All had applied to attend—motivated by a small stipend, the opportunity to have their voice heard, or a bit of both.

Unlike typical town hall setups, where a few bold participants tend to dominate questioning and decisionmakers often respond by being defensive or vague, pTA gatherings are deliberately organized to encourage broad participation and conversation. To help people engage with the topic, the meeting was divided into subgroups to examine the story of Heattown, a fictionalized name for a real but anonymized community contending with the health, environmental, and economic impacts of heat waves. Then each group began a guided discussion of the different characters living in Heattown, vulnerabilities of the emergency-response and infrastructure systems, and strategies for dealing with those vulnerabilities….(More)”.

Three ways to supercharge your city’s open-data portal


Bloomberg Cities: “…Three open data approaches cities are finding success with:

Map it

Much of the data that people seem to be most interested in is location-based, local data leaders say. That includes everything from neighborhood crime stats and police data used by journalists and activists to property data regularly mined by real estate companies. Rather than simply making spatial data available, many cities have begun mapping it themselves, allowing users to browse information that’s useful to them.

At atlas.phila.gov, for example, Philadelphians can type in their own addresses to find property deeds, historic photos, nearby 311 complaints and service requests, and their polling place and date of the next local election, among other information. Los Angeles city’s GeoHub collects maps showing the locations of marijuana dispensaries, reports of hate crimes, and five years of severe and fatal crashes between drivers and bikers or pedestrians, and dozens more.

A CincyInsights map highlighting cleaned up green spaces across the city.

….

Train residents on how to use it

Cities with open-data policies learn from best practices in other city halls. In the last few years, many have begun offering trainings to equip residents with rudimentary data analysis skills. Baton Rouge, for example, offered a free, three-part Citizen Data Academy instructing residents on “how to find [open data], what it includes, and how to use it to understand trends and improve quality of life in our community.” …

In some communities, open-data officials work with city workers and neighborhood leaders, who learn to help their communities access the benefits of public data even if only a small fraction of residents are accessing the data itself.

In Philadelphia, city teams work with the Citizens Planning Institute, an educational initiative of the city planning commission, to train neighborhood organizers in how to use city data around things like zoning and construction permits to keep up with development in their neighborhoods, says Kistine Carolan, open data program manager in the Office of Innovation and Technology. The Los Angeles Department of Neighborhood Empowerment runs a Data Literacy Program to help neighborhood groups make better use of the city’s data. So far, officials say, representatives of 50 of the city’s 99 neighborhood councils have signed up as part of the Data Liaisons program to learn new GIS and data-analysis skills to benefit their neighborhoods. 

Leverage the COVID moment

The COVID-19 pandemic has disrupted cities’ open-data plans, just like it has complicated every other aspect of society. Cities had to cancel scheduled in-person trainings and programs that help them reach some of their less-connected residents. But the pandemic has also demonstrated the fundamental role that data can play in helping to manage public emergencies. Cities large and small have hosted online tools that allow residents to track where cases are spiking—tools that have gotten many new people to interact with public data, officials say….(More)”.