The Constitution of Algorithms


Open Access Book by Florian Jaton: “A laboratory study that investigates how algorithms come into existence. Algorithms—often associated with the terms big data, machine learning, or artificial intelligence—underlie the technologies we use every day, and disputes over the consequences, actual or potential, of new algorithms arise regularly. In this book, Florian Jaton offers a new way to study computerized methods, providing an account of where algorithms come from and how they are constituted, investigating the practical activities by which algorithms are progressively assembled rather than what they may suggest or require once they are assembled.

Drawing on a four-year ethnographic study of a computer science laboratory that specialized in digital image processing, Jaton illuminates the invisible processes that are behind the development of algorithms. Tracing what he terms a set of intertwining courses of action sharing common finalities, he describes the practical activity of creating algorithms through the lenses of ground-truthing, programming, and formulating. He first presents the building of ground truths, referential repositories that form the material basis for algorithms. Then, after considering programming’s resistance to ethnographic scrutiny, he describes programming courses of action he attended at the laboratory. Finally, he offers an account of courses of action that successfully formulated some of the relationships among the data of a ground-truth database, revealing the links between ground-truthing, programming, and formulating activities—entangled processes that lead to the shaping of algorithms. In practice, ground-truthing, programming, and formulating form a whirlwind process, an emergent and intertwined agency….(More)”.

AI and Shared Prosperity


Paper by Katya Klinova and Anton Korinek: “Future advances in AI that automate away human labor may have stark implications for labor markets and inequality. This paper proposes a framework to analyze the effects of specific types of AI systems on the labor market, based on how much labor demand they will create versus displace, while taking into account that productivity gains also make society wealthier and thereby contribute to additional labor demand. This analysis enables ethically-minded companies creating or deploying AI systems as well as researchers and policymakers to take into account the effects of their actions on labor markets and inequality, and therefore to steer progress in AI in a direction that advances shared prosperity and an inclusive economic future for all of humanity…(More)”.

AI helps scour video archives for evidence of human-rights abuses


The Economist: “Thanks especially to ubiquitous camera-phones, today’s wars have been filmed more than any in history. Consider the growing archives of Mnemonic, a Berlin charity that preserves video that purports to document war crimes and other violations of human rights. If played nonstop, Mnemonic’s collection of video from Syria’s decade-long war would run until 2061. Mnemonic also holds seemingly bottomless archives of video from conflicts in Sudan and Yemen. Even greater amounts of potentially relevant additional footage await review online.

Outfits that, like Mnemonic, scan video for evidence of rights abuses note that the task is a slog. Some trim costs by recruiting volunteer reviewers. Not everyone, however, is cut out for the tedium and, especially, periodic dreadfulness involved. That is true even for paid staff. Karim Khan, who leads a United Nations team in Baghdad investigating Islamic State (IS) atrocities, says viewing the graphic cruelty causes enough “secondary trauma” for turnover to be high. The UN project, called UNITAD, is sifting through documentation that includes more than a year’s worth of video, most of it found online or on the phones and computers of captured or killed IS members.

Now, however, reviewing such video is becoming much easier. Technologists are developing a type of artificial-intelligence (AI) software that uses “machine vision” to rapidly scour video for imagery that suggests an abuse of human rights has been recorded. It’s early days, but the software is promising. A number of organisations, including Mnemonic and UNITAD, have begun to operate such programs.

This year UNITAD began to run one dubbed Zeteo. It performs well, says David Hasman, one of its operators. Zeteo can be instructed to find—and, if the image resolution is decent, typically does find—bits of video showing things like explosions, beheadings, firing into a crowd and grave-digging. Zeteo can also spot footage of a known person’s face, as well as scenes as precise as a woman walking in uniform, a boy holding a gun in twilight, and people sitting on a rug with an IS flag in view. Searches can encompass metadata that reveals when, where and on what devices clips were filmed….(More)”.
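Zeteo's internals are not public, but the general pattern the article describes — score each frame with a classifier, then merge consecutive hits into clip ranges a reviewer can jump to — can be sketched as follows. This is an illustrative stand-in, not UNITAD's implementation; the function name and inputs are our own, and the per-frame scores here stand in for the output of a real video classifier.

```python
# Hypothetical sketch only: groups frames where a target label's classifier
# confidence exceeds a threshold into (start, end) time ranges, so a human
# reviewer can jump straight to candidate evidence instead of watching
# hours of footage.

def find_segments(frame_scores, label, threshold=0.8, fps=25):
    """Return (start_sec, end_sec) ranges where `label` scores at or above `threshold`.

    frame_scores: one dict per frame mapping label -> confidence,
    standing in for a real video classifier's output.
    """
    segments = []
    start = None
    for i, scores in enumerate(frame_scores):
        hit = scores.get(label, 0.0) >= threshold
        if hit and start is None:
            start = i                                    # open a new segment
        elif not hit and start is not None:
            segments.append((start / fps, i / fps))      # close the segment
            start = None
    if start is not None:                                # segment runs to end of video
        segments.append((start / fps, len(frame_scores) / fps))
    return segments
```

The payoff of this pattern is in the ratio: a reviewer inspects only the returned ranges, not the full archive, which is what makes decades-long collections like Mnemonic's tractable at all.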

Confronting Bias: BSA’s Framework to Build Trust in AI


BSA Software Alliance: “The Framework is a playbook organizations can use to enhance trust in their AI systems through risk management processes that promote fairness, transparency, and accountability. It can be leveraged by organizations that develop AI systems and companies that acquire and deploy such systems as the basis for:
– Internal Process Guidance. The Framework can be used as a tool for organizing and establishing roles, responsibilities, and expectations for internal risk management processes.
– Training, Awareness, and Education. The Framework can be used to build internal training and education programs for employees involved in developing and using AI systems, and for educating executives about the organization’s approach to managing AI bias risks.
– Supply Chain Assurance and Accountability. AI developers and organizations that deploy AI systems can use the Framework as a basis for communicating and coordinating about their respective roles and responsibilities for managing AI risks throughout a system’s lifecycle.
– Trust and Confidence. The Framework can help organizations communicate information about a product’s features and its approach to mitigating AI bias risks to a public audience. In that sense, the Framework can help organizations communicate to the public about their commitment to building ethical AI systems.
– Incident Response. Following an unexpected incident, the processes and documentation set forth in the Framework can serve as an audit trail that can help organizations quickly diagnose and remediate potential problems…(More)”.

Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda


Paper by Anneke Zuiderwijk, Yu-Che Chen and Fadi Salem: “To lay the foundation for the special issue that this research article introduces, we 1) present a systematic review of existing literature on the implications of the use of Artificial Intelligence (AI) in public governance and 2) develop a research agenda. First, an assessment based on 26 articles on this topic reveals much exploratory, conceptual, qualitative, and practice-driven research in studies reflecting the increasing complexities of using AI in government – and the resulting implications, opportunities, and risks thereof for public governance. Second, based on both the literature review and the analysis of articles included in this special issue, we propose a research agenda comprising eight process-related recommendations and seven content-related recommendations. Process-wise, future research on the implications of the use of AI for public governance should move towards more public sector-focused, empirical, multidisciplinary, and explanatory research while focusing more on specific forms of AI rather than AI in general. Content-wise, our research agenda calls for the development of solid, multidisciplinary, theoretical foundations for the use of AI for public governance, as well as investigations of effective implementation, engagement, and communication plans for government strategies on AI use in the public sector. Finally, the research agenda calls for research into managing the risks of AI use in the public sector, governance modes possible for AI use in the public sector, performance and impact measurement of AI use in government, and impact evaluation of scaling-up AI usage in the public sector….(More)”.

What Robots Can — And Can’t — Do For the Old and Lonely


Katie Engelhart at The New Yorker: “…In 2017, the Surgeon General, Vivek Murthy, declared loneliness an “epidemic” among Americans of all ages. This warning was partly inspired by new medical research that has revealed the damage that social isolation and loneliness can inflict on a body. The two conditions are often linked, but they are not the same: isolation is an objective state (not having much contact with the world); loneliness is a subjective one (feeling that the contact you have is not enough). Both are thought to prompt a heightened inflammatory response, which can increase a person’s risk for a vast range of pathologies, including dementia, depression, high blood pressure, and stroke. Older people are more susceptible to loneliness; forty-three per cent of Americans over sixty identify as lonely. Their individual suffering is often described by medical researchers as especially perilous, and their collective suffering is seen as an especially awful societal failing….

So what’s a well-meaning social worker to do? In 2018, New York State’s Office for the Aging launched a pilot project, distributing Joy for All robots to sixty state residents and then tracking them over time. Researchers used a six-point loneliness scale, which asks respondents to agree or disagree with statements like “I experience a general sense of emptiness.” They concluded that seventy per cent of participants felt less lonely after one year. The pets were not as sophisticated as other social robots being designed for the so-called silver market or loneliness economy, but they were cheaper, at about a hundred dollars apiece.

In April, 2020, a few weeks after New York aging departments shut down their adult day programs and communal dining sites, the state placed a bulk order for more than a thousand robot cats and dogs. The pets went quickly, and caseworkers started asking for more: “Can I get five cats?” A few clients with cognitive impairments were disoriented by the machines. One called her local department, distraught, to say that her kitty wasn’t eating. But, more commonly, people liked the pets so much that the batteries ran out. Caseworkers joked that their clients had loved them to death….(More)”.

How a largely untested AI algorithm crept into hundreds of hospitals


Vishal Khetpal and Nishant Shah at FastCompany: “Last spring, physicians like us were confused. COVID-19 was just starting its deadly journey around the world, afflicting our patients with severe lung infections, strokes, skin rashes, debilitating fatigue, and numerous other acute and chronic symptoms. Armed with outdated clinical intuitions, we were left disoriented by a disease shrouded in ambiguity.

In the midst of the uncertainty, Epic, a private electronic health record giant and a key purveyor of American health data, accelerated the deployment of a clinical prediction tool called the Deterioration Index. Built with a type of artificial intelligence called machine learning and in use at some hospitals prior to the pandemic, the index is designed to help physicians decide when to move a patient into or out of intensive care, and is influenced by factors like breathing rate and blood potassium level. Epic had been tinkering with the index for years but expanded its use during the pandemic. At hundreds of hospitals, including those in which we both work, a Deterioration Index score is prominently displayed on the chart of every patient admitted to the hospital.

The Deterioration Index is poised to upend a key cultural practice in medicine: triage. Loosely speaking, triage is an act of determining how sick a patient is at any given moment to prioritize treatment and limited resources. In the past, physicians have performed this task by rapidly interpreting a patient’s vital signs, physical exam findings, test results, and other data points, using heuristics learned through years of on-the-job medical training.

Ostensibly, the core assumption of the Deterioration Index is that traditional triage can be augmented, or perhaps replaced entirely, by machine learning and big data. Indeed, a study of 392 COVID-19 patients admitted to Michigan Medicine found that the index was moderately successful at discriminating between low-risk patients and those who were at high risk of being transferred to an ICU, getting placed on a ventilator, or dying while admitted to the hospital. But last year’s hurried rollout of the Deterioration Index also sets a worrisome precedent, and it illustrates the potential for such decision-support tools to propagate biases in medicine and change the ways in which doctors think about their patients….(More)”.
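Epic's actual model is proprietary, and its inputs and weights are not public. Still, the general shape the article describes — a learned score over inputs like breathing rate and blood potassium, bucketed into bands that suggest a level of care — can be illustrated with a toy sketch. Every number below (the weights, the baselines, the cutoffs) is invented for the illustration; only the structure is the point.

```python
import math

# Toy illustration only, NOT Epic's Deterioration Index: a made-up logistic
# model maps two vitals to a 0-100 score, and fixed cutoffs turn the score
# into a triage band. Real clinical tools use many more inputs and
# validated, learned parameters.

def risk_score(resp_rate, potassium):
    """Map two vitals to a 0-100 score via an invented logistic model."""
    z = 0.25 * (resp_rate - 16) + 1.5 * abs(potassium - 4.0) - 2.0
    return 100 / (1 + math.exp(-z))

def triage_band(score):
    if score >= 60:
        return "high"      # e.g. consider ICU transfer
    if score >= 30:
        return "medium"    # e.g. increase monitoring
    return "low"           # e.g. routine care
```

Reducing triage to one displayed number is exactly what makes such tools both convenient and worrisome: the thresholds and weights encode judgments that the physician reading the chart never sees.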

Creating Public Value using the AI-Driven Internet of Things


Report by Gwanhoo Lee: “Government agencies seek to deliver quality services in increasingly dynamic and complex environments. However, outdated infrastructures—and a shortage of systems that collect and use massive real-time data—make it challenging for the agencies to fulfill their missions. Governments have a tremendous opportunity to transform public services using the “Internet of Things” (IoT) to provide situation-specific and real-time data, which can improve decision-making and optimize operational effectiveness.

In this report, Professor Lee describes IoT as a network of physical “things” equipped with sensors and devices that enable data transmission and operational control with no or little human intervention. Organizations have recently begun to embrace artificial intelligence (AI) and machine learning (ML) technologies to drive even greater value from IoT applications. AI/ML enhances the data analytics capabilities of IoT by enabling accurate predictions and optimal decisions in new ways. Professor Lee calls this AI/ML-powered IoT the “AI-Driven Internet of Things” (AIoT for short hereafter). AIoT is a natural evolution of IoT as computing, networking, and AI/ML technologies are increasingly converging, enabling organizations to develop as “cognitive enterprises” that capitalize on the synergy across these emerging technologies.

Strategic application of IoT in government is in an early phase. Few U.S. federal agencies have explicitly incorporated IoT in their strategic plans or connected the potential of AI to their evolving IoT activities. The diversity and scale of public services combined with various needs and demands from citizens provide an opportunity to deliver value from implementing AI-driven IoT applications.

Still, IoT is already making the delivery of some public services smarter and more efficient, including public parking, water management, public facility management, safety alerts for the elderly, traffic control, and air quality monitoring. For example, the City of Chicago has deployed a citywide network of air quality sensors mounted on lampposts. These sensors track the presence of several air pollutants, helping the city develop environmental responses that improve the quality of life at a community level. As the cost of sensors decreases while computing power and machine learning capabilities grow, IoT will become more feasible and pervasive across the public sector—with some estimates of a market approaching $5 trillion in the next few years.

Professor Lee’s research aims to develop a framework of alternative models for creating public value with AIoT, validating the framework with five use cases in the public domain. Specifically, this research identifies three essential building blocks to AIoT: sensing through IoT devices, controlling through the systems that support these devices, and analytics capabilities that leverage AI to understand and act on the information accessed across these applications. By combining the building blocks in different ways, the report identifies four models for creating public value:

  • Model 1 utilizes only sensing capability.
  • Model 2 uses sensing capability and controlling capability.
  • Model 3 leverages sensing capability and analytics capability.
  • Model 4 combines all three capabilities.
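The framework above is essentially a mapping from capability combinations to public-value models, which can be made concrete in a few lines. This is our own sketch of the report's scheme, not code from the report; the function and dictionary names are invented.

```python
# Sketch of the report's four-model framework: each public-value model is
# defined by which of the three building blocks -- sensing, controlling,
# analytics -- an AIoT deployment combines. Sensing is common to all four.

MODELS = {
    frozenset({"sensing"}): "Model 1",
    frozenset({"sensing", "controlling"}): "Model 2",
    frozenset({"sensing", "analytics"}): "Model 3",
    frozenset({"sensing", "controlling", "analytics"}): "Model 4",
}

def classify_aiot(capabilities):
    """Return the framework model matching a deployment's capability set."""
    return MODELS.get(frozenset(capabilities), "outside the framework")
```

On this reading, Chicago's lamppost air-quality sensors alone would be Model 1 (sensing only), while a system that also predicts pollution trends from the readings would be Model 3 (sensing plus analytics).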

The analysis of five AIoT use cases in the public transport sector from Germany, Singapore, the U.K., and the United States identifies 10 critical success factors, such as creating public value, using public-private partnerships, engaging with the global technology ecosystem, implementing incrementally, quantifying the outcome, and using strong cybersecurity measures….(More)”.

The Ancient Imagination Behind China’s AI Ambition


Essay by Jennifer Bourne: “Artificial intelligence is a modern technology, but in both the West and the East the aspiration to invent autonomous tools and robots that can think for themselves can be traced back to ancient times. Adrienne Mayor, a historian of science at Stanford, has noted that in ancient Greece, there were myths about tools that helped men become godlike, such as the story of the legendary inventor Daedalus, who fabricated wings for himself and his son to escape from prison. 

Similar myths and stories are to be found in China too, where aspirations for advanced robots also appeared thousands of years ago. In a tale that appears in the Taoist text “Liezi,” which is attributed to the 5th-century BCE philosopher Lie Yukou, a technician named Yan Shi made a humanlike robot that could dance and sing and even dared to flirt with the king’s concubines. The king, angry and fearful, ordered the robot to be dismantled. 

In the Three Kingdoms era (220-280), a politician named Zhuge Liang invented a “fully automated” wheelbarrow (the translation from the Chinese is roughly “wooden ox”) that could reportedly carry over 200 pounds of food supplies and walk 20 miles a day without needing any fuel or manpower. Later, Zhang Zhuo, a scholar who died around 730, wrote a story about a robot that was obedient, polite and could pour wine for guests at parties. In the same collection of stories, Zhang also mentioned a robot monk who wandered around town, asking for alms and bowing to those who gave him something. And in “Extensive Records of the Taiping Era,” published in 978, a technician called Ma Daifeng is said to have invented a robot maid who did household chores for her master.

Imaginative narratives of intelligent robots or autonomous tools can be found throughout agriculture-dominated ancient China, where wealth flowed from a higher capacity for labor. These stories reflect ancient people’s desire to get more artificial hands on deck and to free themselves from intensive farm work….(More)”.