How a largely untested AI algorithm crept into hundreds of hospitals


Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.

In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.

The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.

Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those who were at high risk of being transferred to an ICU, getting placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.
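Epic’s actual model is proprietary, so its mechanics are opaque to the clinicians who rely on it. The general shape of such a score is nonetheless familiar: a handful of weighted physiological inputs mapped through a function to a risk probability. The sketch below is a purely hypothetical illustration of that pattern; the features, weights, and bias are invented for the example and bear no relation to Epic’s Deterioration Index.

```python
import math

# Purely illustrative: Epic's Deterioration Index is proprietary. These
# features, weights, and the bias are hypothetical stand-ins, not the real model.
HYPOTHETICAL_WEIGHTS = {
    "respiratory_rate": 0.15,  # breaths per minute
    "potassium": 0.40,         # blood potassium, mmol/L
    "heart_rate": 0.05,        # beats per minute
}
BIAS = -6.0

def deterioration_score(vitals: dict) -> float:
    """Map raw vitals to a 0-1 risk estimate via a logistic function (toy example)."""
    z = BIAS + sum(HYPOTHETICAL_WEIGHTS[k] * v for k, v in vitals.items())
    return 1.0 / (1.0 + math.exp(-z))

patient = {"respiratory_rate": 28, "potassium": 5.8, "heart_rate": 110}
print(f"Risk score: {deterioration_score(patient):.2f}")  # displayed on the chart; flagged above some cutoff
```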

Creating Public Value using the AI-Driven Internet of Things


Report by Gwanhoo Lee: “Government agencies seek to deliver quality services in increasingly dynamic and complex environments. However, outdated infrastructures—and a shortage of systems that collect and use massive real-time data—make it challenging for the agencies to fulfill their missions. Governments have a tremendous opportunity to transform public services using the “Internet of Things” (IoT) to provide situation-specific and real-time data, which can improve decision-making and optimize operational effectiveness.

In this report, Professor Lee describes IoT as a network of physical “things” equipped with sensors and devices that enable data transmission and operational control with no or little human intervention. Organizations have recently begun to embrace artificial intelligence (AI) and machine learning (ML) technologies to drive even greater value from IoT applications. AI/ML enhances the data analytics capabilities of IoT by enabling accurate predictions and optimal decisions in new ways. Professor Lee calls this AI/ML-powered IoT the “AI-Driven Internet of Things” (AIoT for short hereafter). AIoT is a natural evolution of IoT as computing, networking, and AI/ML technologies are increasingly converging, enabling organizations to develop as “cognitive enterprises” that capitalize on the synergy across these emerging technologies.

Strategic application of IoT in government is in an early phase. Few U.S. federal agencies have explicitly incorporated IoT in their strategic plan, or connected the potential of AI to their evolving IoT activities. The diversity and scale of public services combined with various needs and demands from citizens provide an opportunity to deliver value from implementing AI-driven IoT applications.

Still, IoT is already making the delivery of some public services smarter and more efficient, including public parking, water management, public facility management, safety alerts for the elderly, traffic control, and air quality monitoring. For example, the City of Chicago has deployed a citywide network of air quality sensors mounted on lampposts. These sensors track the presence of several air pollutants, helping the city develop environmental responses that improve the quality of life at a community level. As the cost of sensors decreases while computing power and machine learning capabilities grow, IoT will become more feasible and pervasive across the public sector—with some estimates of a market approaching $5 trillion in the next few years.

Professor Lee’s research aims to develop a framework of alternative models for creating public value with AIoT, validating the framework with five use cases in the public domain. Specifically, this research identifies three essential building blocks to AIoT: sensing through IoT devices, controlling through the systems that support these devices, and analytics capabilities that leverage AI to understand and act on the information accessed across these applications. By combining the building blocks in different ways, the report identifies four models for creating public value:

  • Model 1 utilizes only sensing capability.
  • Model 2 uses sensing capability and controlling capability.
  • Model 3 leverages sensing capability and analytics capability.
  • Model 4 combines all three capabilities.
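To make the combinations concrete, the report’s three building blocks and four models can be encoded in a few lines. This framing is our own illustration; the class and flag names below are hypothetical, not taken from the report.

```python
from dataclasses import dataclass

# Hypothetical encoding of the report's three building blocks; the names
# and structure are our own illustration, not the report's artifact.
@dataclass(frozen=True)
class AIoTModel:
    name: str
    sensing: bool      # IoT devices collect situation-specific, real-time data
    controlling: bool  # systems act on devices with little or no human intervention
    analytics: bool    # AI/ML enables accurate predictions and optimal decisions

MODELS = [
    AIoTModel("Model 1", sensing=True, controlling=False, analytics=False),
    AIoTModel("Model 2", sensing=True, controlling=True,  analytics=False),
    AIoTModel("Model 3", sensing=True, controlling=False, analytics=True),
    AIoTModel("Model 4", sensing=True, controlling=True,  analytics=True),
]

for m in MODELS:
    used = [c for c in ("sensing", "controlling", "analytics") if getattr(m, c)]
    print(f"{m.name}: {' + '.join(used)}")
```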

The analysis of five AIoT use cases in the public transport sector from Germany, Singapore, the U.K., and the United States identifies 10 critical success factors, such as creating public value, using public-private partnerships, engaging with the global technology ecosystem, implementing incrementally, quantifying the outcome, and using strong cybersecurity measures….(More)”.

Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda


Paper by Anneke Zuiderwijk, Yu-Che Chen and Fadi Salem: “To lay the foundation for the special issue that this research article introduces, we 1) present a systematic review of existing literature on the implications of the use of Artificial Intelligence (AI) in public governance and 2) develop a research agenda. First, an assessment based on 26 articles on this topic reveals much exploratory, conceptual, qualitative, and practice-driven research in studies reflecting the increasing complexities of using AI in government – and the resulting implications, opportunities, and risks thereof for public governance. Second, based on both the literature review and the analysis of articles included in this special issue, we propose a research agenda comprising eight process-related recommendations and seven content-related recommendations. Process-wise, future research on the implications of the use of AI for public governance should move towards more public sector-focused, empirical, multidisciplinary, and explanatory research while focusing more on specific forms of AI rather than AI in general. Content-wise, our research agenda calls for the development of solid, multidisciplinary, theoretical foundations for the use of AI for public governance, as well as investigations of effective implementation, engagement, and communication plans for government strategies on AI use in the public sector. Finally, the research agenda calls for research into managing the risks of AI use in the public sector, governance modes possible for AI use in the public sector, performance and impact measurement of AI use in government, and impact evaluation of scaling-up AI usage in the public sector….(More)”.

The Ancient Imagination Behind China’s AI Ambition


Essay by Jennifer Bourne: “Artificial intelligence is a modern technology, but in both the West and the East the aspiration for inventing autonomous tools and robots that can think for themselves can be traced back to ancient times. Adrienne Mayor, a historian of science at Stanford, has noted that in ancient Greece, there were myths about tools that helped men become godlike, such as the legendary inventor Daedalus who fabricated wings for himself and his son to escape from prison. 

Similar myths and stories are to be found in China too, where aspirations for advanced robots also appeared thousands of years ago. In a tale that appears in the Taoist text “Liezi,” which is attributed to the 5th-century BCE philosopher Lie Yukou, a technician named Yan Shi made a humanlike robot that could dance and sing and even dared to flirt with the king’s concubines. The king, angry and fearful, ordered the robot to be dismantled. 

In the Three Kingdoms era (220-280), a politician named Zhuge Liang invented a “fully automated” wheelbarrow (the translation from the Chinese is roughly “wooden ox”) that could reportedly carry over 200 pounds of food supplies and walk 20 miles a day without needing any fuel or manpower. Later, Zhang Zhuo, a scholar who died around 730, wrote a story about a robot that was obedient, polite and could pour wine for guests at parties. In the same collection of stories, Zhang also mentioned a robot monk who wandered around town, asking for alms and bowing to those who gave him something. And in “Extensive Records of the Taiping Era,” published in 978, a technician called Ma Daifeng is said to have invented a robot maid who did household chores for her master.

Imaginative narratives of intelligent robots or autonomous tools can be found throughout agriculture-dominated ancient China, where wealth flowed from a higher capacity for labor. These stories reflect ancient people’s desire to get more artificial hands on deck, and to free themselves from intensive farm work….(More)”.

Algorithmic thinking in the public interest: navigating technical, legal, and ethical hurdles to web scraping in the social sciences


Paper by Alex Luscombe, Kevin Dick & Kevin Walby: “Web scraping, defined as the automated extraction of information online, is an increasingly important means of producing data in the social sciences. We contribute to emerging social science literature on computational methods by elaborating on web scraping as a means of automated access to information. We begin by situating the practice of web scraping in context, providing an overview of how it works and how it compares to other methods in the social sciences. Next, we assess the benefits and challenges of scraping as a technique of information production. In terms of benefits, we highlight how scraping can help researchers answer new questions, supersede limits in official data, overcome access hurdles, and reinvigorate the values of sharing, openness, and trust in the social sciences. In terms of challenges, we discuss three: technical, legal, and ethical. By adopting “algorithmic thinking in the public interest” as a way of navigating these hurdles, researchers can improve the state of access to information on the Internet while also contributing to scholarly discussions about the legality and ethics of web scraping. Example software accompanying this article is available within the supplementary materials…(More)”.
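For readers new to the technique, a minimal scraping workflow looks roughly like the sketch below. The target URL and page structure are placeholders, and the robots.txt check only gestures at the ethical hurdles the authors discuss; a real project would also need rate limiting, terms-of-service review, and error handling.

```python
import urllib.robotparser
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# Hypothetical target page; swap in a real, permitted source before use.
URL = "https://example.com/public-records"

def allowed_by_robots(url: str, agent: str = "research-bot") -> bool:
    """Check robots.txt before fetching, one of the ethical hurdles the paper raises."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(url, "/robots.txt"))
    rp.read()
    return rp.can_fetch(agent, url)

if allowed_by_robots(URL):
    html = requests.get(URL, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Extract each table row as a list of cell texts (page structure is assumed).
    rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
            for tr in soup.find_all("tr")]
    print(rows)
```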

Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis



Report by Dhanaraj Thakur and Emma Llansó: “The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools.

This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and predictive models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content….(More)”.
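The distinction between the two matching approaches is easiest to see in code. The sketch below is illustrative only: the hash values, distance threshold, and helper names are ours, not drawn from the report or any real content-moderation system.

```python
import hashlib

# Illustrative sketch of the two matching approaches the report describes.
# Hash values and the threshold are invented, not from any real blocklist.

def cryptographic_match(content: bytes, known_hashes: set[str]) -> bool:
    """Exact matching: a single changed byte produces an entirely different digest."""
    return hashlib.sha256(content).hexdigest() in known_hashes

def perceptual_distance(hash_a: int, hash_b: int) -> int:
    """Perceptual hashes (e.g., 64-bit image fingerprints) are compared by
    Hamming distance, so near-duplicates still match within a threshold."""
    return bin(hash_a ^ hash_b).count("1")

known = {hashlib.sha256(b"known banned clip").hexdigest()}
print(cryptographic_match(b"known banned clip", known))    # True: exact copy
print(cryptographic_match(b"known banned clip!", known))   # False: one byte differs

# Two hypothetical 64-bit perceptual hashes of an image and a re-encoded copy:
print(perceptual_distance(0xF0F0F0F0F0F0F0F0, 0xF0F0F0F0F0F0F0F1) <= 10)  # True
```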

Practical Lessons for Government AI Projects


Paper by Godofredo Jr Ramizo: “Governments around the world are launching projects that embed artificial intelligence (AI) in the delivery of public services. How can government officials navigate the complexities of AI projects and deliver successful outcomes? Using a review of the existing literature and interviews with senior government officials from Hong Kong, Malaysia, and Singapore who have worked on Smart City and similar AI-driven projects, this paper demonstrates the diversity of government AI projects and identifies practical lessons that help safeguard public interest. I make two contributions. First, I show that we can classify government AI projects based on their level of importance to government functions and the level of organisational resources available to them. These two dimensions result in four types of AI projects, each with its own risks and appropriate strategies. Second, I propose five general lessons for government AI projects in any field, and outline specific measures appropriate to each of the aforementioned types of AI projects….(More)”.
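As a rough illustration of the paper’s two-by-two framing, the classification could be expressed as a simple lookup over the two dimensions. The quadrant labels and risk notes below are our own shorthand, not Ramizo’s terminology.

```python
# Hypothetical encoding of the paper's two dimensions: importance to
# government functions and available organisational resources.
def classify_project(importance: str, resources: str) -> str:
    """Map (importance, resources) onto one of four project types,
    each carrying its own risks and appropriate strategies."""
    quadrants = {
        ("high", "high"): "mission-critical and well-resourced",
        ("high", "low"):  "mission-critical but under-resourced (highest risk)",
        ("low",  "high"): "peripheral but well-resourced (room to experiment)",
        ("low",  "low"):  "peripheral and under-resourced (keep scope minimal)",
    }
    return quadrants[(importance, resources)]

print(classify_project("high", "low"))
```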

Cooperative AI: machines must learn to find common ground


Paper by Allan Dafoe et al in Nature: “Artificial-intelligence assistants and recommendation algorithms interact with billions of people every day, influencing lives in myriad ways, yet they still have little understanding of humans. Self-driving vehicles controlled by artificial intelligence (AI) are gaining mastery of their interactions with the natural world, but they are still novices when it comes to coordinating with other cars and pedestrians or collaborating with their human operators.

The state of AI applications reflects that of the research field. It has long been steeped in a kind of methodological individualism. As is evident from introductory textbooks, the canonical AI problem is that of a solitary machine confronting a non-social environment. Historically, this was a sensible starting point. An AI agent — much like an infant — must first master a basic understanding of its environment and how to interact with it.

Even in work involving multiple AI agents, the field has not yet tackled the hard problems of cooperation. Most headline results have come from two-player zero-sum games, such as backgammon, chess, Go and poker. Gains in these competitive examples can be made only at the expense of others. Although such settings of pure conflict are vanishingly rare in the real world, they make appealing research projects. They are culturally cherished, relatively easy to benchmark (by asking whether the AI can beat the opponent), have natural curricula (because students train against peers of their own skill level) and have simpler solutions than semi-cooperative games do.
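The structural difference the authors point to can be stated formally. In the toy payoff matrices below (numbers invented for illustration), the zero-sum game’s payoffs cancel by construction, while the coordination game leaves joint value on the table that only cooperation can capture.

```python
# Toy payoff matrices contrasting the settings described above; each entry
# maps a pair of actions to (row player's payoff, column player's payoff).
ZERO_SUM = {  # matching pennies: one player's gain is the other's loss
    ("heads", "heads"): (1, -1), ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1), ("tails", "tails"): (1, -1),
}
COORDINATION = {  # both players gain by choosing the same action
    ("left", "left"): (2, 2), ("left", "right"): (0, 0),
    ("right", "left"): (0, 0), ("right", "right"): (1, 1),
}

for name, game in (("zero-sum", ZERO_SUM), ("coordination", COORDINATION)):
    total = sum(a + b for a, b in game.values())
    print(f"{name}: joint payoff summed over outcomes = {total}")
    # The zero-sum total is 0 by construction; cooperation can create joint value.
```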

AI needs social understanding and cooperative intelligence to integrate well into society. The coming years might give rise to diverse ecologies of AI systems that interact in rapid and complex ways with each other and with humans: on pavements and roads, in consumer and financial markets, in e-mail communication and social media, in cybersecurity and physical security. Autonomous vehicles or smart cities that do not engage well with humans will fail to deliver their benefits, and might even disrupt stable human relationships…(More)”

Experimental Regulations for AI: Sandboxes for Morals and Mores


Paper by Sofia Ranchordas: “Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes….(More)”.

Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century


Lynne Parker at the AI.gov website: “Artificial intelligence (AI) has become one of the most impactful technologies of the twenty-first century.  Nearly every sector of the economy and society has been affected by the capabilities and potential of AI.  AI is enabling farmers to grow food more efficiently, medical researchers to better understand and treat COVID-19, scientists to develop new materials, transportation professionals to deliver more goods faster and with less energy, weather forecasters to more accurately predict the tracks of hurricanes, and national security protectors to better defend our Nation.

At the same time, AI has raised important societal concerns.  What is the impact of AI on the changing nature of work?  How can we ensure that AI is used appropriately, and does not result in unfair discrimination or bias?  How can we guard against uses of AI that infringe upon human rights and democratic principles?

These dual perspectives on AI have led to the concept of “trustworthy AI”.  Trustworthy AI is AI that is designed, developed, and used in a manner that is lawful, fair, unbiased, accurate, reliable, effective, safe, secure, resilient, understandable, and with processes in place to regularly monitor and evaluate the AI system’s performance and outcomes.

Achieving trustworthy AI requires an all-of-government and all-of-Nation approach, combining the efforts of industry, academia, government, and civil society.  The Federal government is doing its part through a national strategy, called the National AI Initiative Act of 2020 (NAIIA).  The National AI Initiative (NAII) builds upon several years of impactful AI policy actions, many of which were outcomes from EO 13859 on Maintaining American Leadership in AI.

Six key pillars define the Nation’s AI strategy:

  • prioritizing AI research and development;
  • strengthening AI research infrastructure;
  • advancing trustworthy AI through technical standards and governance;
  • training an AI-ready workforce;
  • promoting international AI engagement; and
  • leveraging trustworthy AI for government and national security.

Coordinating all of these efforts is the National AI Initiative Office, which is legislated by the NAIIA to coordinate and support the NAII.  This Office serves as the central point of contact for exchanging technical and programmatic information on AI activities at Federal departments and agencies, as well as related Initiative activities in industry, academia, nonprofit organizations, professional societies, State and tribal governments, and others.

The AI.gov website provides a portal for exploring in more depth the many AI actions, initiatives, strategies, programs, reports, and related efforts across the Federal government.  It serves as a resource for those who want to learn more about how to take full advantage of the opportunities of AI, and to learn how the Federal government is advancing the design, development, and use of trustworthy AI….(More)”