From satisficing to artificing: The evolution of administrative decision-making in the age of the algorithm


Paper by Thea Snow at Data & Policy: “Algorithmic decision tools (ADTs) are being introduced into public sector organizations to support more accurate and consistent decision-making. Whether they succeed turns, in large part, on how administrators use these tools. This is one of the first empirical studies to explore how ADTs are being used by Street Level Bureaucrats (SLBs). The author develops an original conceptual framework and uses in-depth interviews to explore whether SLBs are ignoring ADTs (algorithm aversion); deferring to ADTs (automation bias); or using ADTs together with their own judgment (an approach the author calls “artificing”). Interviews reveal that artificing is the most common use-type, followed by aversion, while deference is rare. Five conditions appear to influence how practitioners use ADTs: (a) understanding of the tool; (b) perception of human judgment; (c) seeing value in the tool; (d) being offered opportunities to modify the tool; and (e) alignment of the tool with expectations….(More)”.

Chief information officers’ perceptions about artificial intelligence


Article by J. Ignacio Criado et al: “This article presents a study of artificial intelligence (AI) policy based on the perceptions, expectations, and challenges/opportunities reported by chief information officers (CIOs). In general, publications about AI in the public sector rely on experiences, cases, ideas, and results from the private sector. Our study is motivated by the need to define a distinctive approach to AI in the public sector, gathering primary (and comparative) data from different countries and assessing the key role of CIOs in framing federal/national AI policies and strategies. This article addresses three research questions spanning three dimensions of analysis: (1) perceptions regarding the concept of AI in the public sector; (2) expectations about the development of AI in the public sector; and (3) challenges and opportunities of AI in the public sector. This exploratory study presents the results of a survey administered to federal/national ministerial government CIOs in Mexico and Spain. Our descriptive (and exploratory) statistical analysis provides an overall view of these dimensions, offering exploratory answers to the study’s research questions. Our data support the existence of different governance models and policy priorities across countries. These results may also inform research in this area and help senior officials assess the national AI policies currently being designed and implemented in different national/federal, regional/state, and local/municipal contexts….(More)”.

Predictive Policing and Artificial Intelligence


Book edited by John McDaniel and Ken Pease: “This edited text draws together the insights of numerous eminent academics worldwide to evaluate the condition of predictive policing and artificial intelligence (AI) as interlocked policy areas. Predictive and AI technologies are growing in prominence at an unprecedented rate. Powerful digital crime mapping tools are being used to identify crime hotspots in real time, as pattern-matching and search algorithms sort through huge police databases populated by growing volumes of data in an effort to identify people liable to experience (or commit) crime, places likely to host it, and variables associated with its solvability. Facial and vehicle recognition cameras are locating criminals as they move, while police services develop strategies informed by machine learning and other kinds of predictive analytics. Many of these innovations are features of modern policing in the UK, the US and Australia, among other jurisdictions.
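To make the "crime hotspot" idea concrete, here is a minimal sketch of one common approach: density-based clustering of recent incident coordinates. The choice of scikit-learn's DBSCAN, the coordinates, and the parameters are the editor's illustrative assumptions, not a description of any system the book examines.

```python
# Hedged sketch: treat dense clusters of incident coordinates as "hotspots".
# Data and parameters are invented for illustration only.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical (lat, lon) coordinates of recent incident reports.
incidents = np.array([
    [51.515, -0.141], [51.516, -0.142], [51.514, -0.140],  # dense cluster
    [51.509, -0.118], [51.510, -0.119],                     # smaller cluster
    [51.530, -0.155],                                       # isolated report
])

# eps of 0.002 degrees is roughly 200 m at this latitude; using Euclidean
# distance on raw lat/lon is itself a simplification.
labels = DBSCAN(eps=0.002, min_samples=2).fit_predict(incidents)
hotspots = {label for label in labels if label != -1}
print(f"{len(hotspots)} hotspot(s) found; label -1 marks isolated incidents")
```

In a deployed system, clusters like these would feed dashboards and patrol allocation, which is precisely where the book's questions about accountability and bias arise.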

AI promises to reduce unnecessary labour, speed up various forms of police work, encourage police forces to more efficiently apportion their resources, and enable police officers to prevent crime and protect people from a variety of future harms. However, the promises of predictive and AI technologies and innovations do not always match reality. They often have significant weaknesses, come at a considerable cost and require challenging trade-offs to be made. Focusing on the UK, the US and Australia, this book explores themes of choice architecture, decision-making, human rights, accountability and the rule of law, as well as future uses of AI and predictive technologies in various policing contexts. The text contributes to ongoing debates on the benefits and biases of predictive algorithms, big data sets, machine learning systems, and broader policing strategies and challenges.

Written in a clear and direct style, this book will appeal to students and scholars of policing, criminology, crime science, sociology, computer science, cognitive psychology and all those interested in the emergence of AI as a feature of contemporary policing….(More)”.

The Protein and the Social Worker: How machine learning can help us tackle society’s biggest challenges


Article by Juan Mateos-Garcia: “The potential for machine learning (ML) to address our toughest health, education and sustainability issues remains unfulfilled. What lessons about what to do – and what not to do – can we learn from other sectors where ML has been applied at scale?

Last year, the UK research lab DeepMind announced that its AI system, AlphaFold 2, can predict a protein’s 3D structure with an unprecedented level of accuracy. This breakthrough could enable rapid advances in drug discovery and environmental applications.

Like almost all AI systems today, AlphaFold 2 is based on ML techniques that learn from data to make predictions. These ‘prediction machines’ are at the heart of internet products and services we use every day, from search engines and social networks to personal assistants and online stores. In years to come, ML is expected to transform other sectors including transportation (through self-driving vehicles), biomedical research (through precision medicine) and manufacturing (through robotics).

But what about fields such as healthy living, early years development or sustainability, where our societies face some of their greatest challenges? Predictive ML techniques could also play an important role there – by helping identify pupils at risk of falling behind, or by personalising interventions to encourage healthier behaviours. However, their potential in these areas is still far from being realised….(More)”.
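As a concrete illustration of the pupil-risk example, here is a minimal sketch of such a "prediction machine". The features, data, library choice (scikit-learn) and threshold are the editor's assumptions, not any deployed system; real deployments raise exactly the transparency and fairness questions the article points to.

```python
# Hedged sketch: flag pupils at risk of falling behind using past records.
# All features, values and the 0.5 threshold are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical training rows: [attendance_rate, prior_grade, homework_rate]
X_train = [[0.95, 72, 0.90], [0.60, 48, 0.40],
           [0.88, 65, 0.80], [0.55, 40, 0.30]]
y_train = [0, 1, 0, 1]  # 1 = pupil fell behind in the following year

model = LogisticRegression().fit(X_train, y_train)

# Rank current pupils by predicted risk so support can be offered early.
current = [[0.70, 55, 0.50], [0.92, 70, 0.85]]
risk = model.predict_proba(current)[:, 1]
flagged = [i for i, p in enumerate(risk) if p > 0.5]
print(f"pupils flagged for early support: {flagged}")
```

The modelling step is the easy part; the open questions the article raises are about data quality, intervention design and accountability.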

From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance


Paper by Sabelo Mhlambi: “What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy, and it is at the core of both the questions about, and the quest for, an artificial or mechanical personhood.

The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes….(More)”.

Court Rules Deliveroo Used ‘Discriminatory’ Algorithm


Gabriel Geiger at Motherboard: “An algorithm used by the popular European food delivery app Deliveroo to rank and offer shifts to riders is discriminatory, an Italian court ruled late last week, in what some experts are calling a historic decision for the gig economy. The case was brought by a group of Deliveroo riders backed by CGIL, Italy’s largest trade union. 

A markedly detailed ordinance written by presiding judge Chiara Zompi gives an intimate look at one of the many often secretive algorithms that gig platforms use to micromanage workers and that can have profound impacts on their livelihoods.

While machine-learning algorithms are central to Deliveroo’s entire business model, the particular algorithm examined by the court was allegedly used to determine the “reliability” of a rider. According to the ordinance, if a rider failed to cancel a shift pre-booked through the app at least 24 hours before its start, their “reliability index” would be negatively affected. Since riders deemed more reliable by the algorithm were the first to be offered shifts in busier time blocks, this effectively meant that riders who couldn’t make their shifts—even if it was because of a serious emergency or illness—would have fewer job opportunities in the future….(More)”
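For readers who want the mechanism in one place, below is a minimal sketch of the booking logic as the ordinance describes it. Every name and number here (the penalty size, the index scale, the ranking rule) is an assumption for illustration; Deliveroo's actual system is proprietary and was not fully disclosed in the ruling.

```python
# Hedged sketch of the "reliability index" logic described in the ordinance.
# Penalty size, starting index and ranking details are invented.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Rider:
    name: str
    reliability_index: float = 1.0  # higher = earlier access to shifts

    def cancel_shift(self, shift_start: datetime, cancelled_at: datetime) -> None:
        # Per the ordinance: cancelling a pre-booked shift less than 24 hours
        # before it starts lowers the index, regardless of the reason.
        if shift_start - cancelled_at < timedelta(hours=24):
            self.reliability_index -= 0.1  # penalty size is an assumption

def offer_busy_slots(riders, busy_slots):
    # Riders ranked higher by the index get first pick of busy time blocks.
    ranked = sorted(riders, key=lambda r: r.reliability_index, reverse=True)
    return {rider.name: slot for rider, slot in zip(ranked, busy_slots)}

alice, bob = Rider("alice"), Rider("bob")
bob.cancel_shift(shift_start=datetime(2020, 11, 7, 18, 0),
                 cancelled_at=datetime(2020, 11, 7, 10, 0))  # 8 hours' notice
print(offer_busy_slots([alice, bob], ["Sat 19:00-22:00"]))  # alice gets it
```

The court's objection is visible even in this sketch: the penalty applies whatever the reason for cancelling, which is what made the ranking indirectly discriminatory.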

Cultivating Trustworthy Artificial Intelligence in Digital Government


Paper by Teresa M. Harrison and Luis Felipe Luna-Reyes: “While there is growing consensus that the analytical and cognitive tools of artificial intelligence (AI) have the potential to transform government in positive ways, it is also clear that AI challenges traditional government decision-making processes and threatens the democratic values within which they are framed. These conditions argue for conservative approaches to AI that focus on cultivating and sustaining public trust. We use the extended Brunswik lens model as a framework to illustrate the distinctions between policy analysis and decision making as we have traditionally understood and practiced them and how they are evolving in the current AI context along with the challenges this poses for the use of trustworthy AI. We offer a set of recommendations for practices, processes, and governance structures in government to provide for trust in AI and suggest lines of research that support them….(More)”.

New York Temporarily Bans Facial Recognition Technology in Schools


Hunton’s Privacy Blog: “On December 22, 2020, New York Governor Andrew Cuomo signed into law legislation that temporarily bans the use or purchase of facial recognition and other biometric identifying technology in public and private schools until at least July 1, 2022. The legislation also directs the New York Commissioner of Education (the “Commissioner”) to conduct a study on whether this technology is appropriate for use in schools.

In his press statement, Governor Cuomo indicated that the legislation comes after concerns were raised about potential risks to students, including issues surrounding misidentification by the technology as well as safety, security and privacy concerns. “This legislation requires state education policymakers to take a step back, consult with experts and address privacy issues before determining whether any kind of biometric identifying technology can be brought into New York’s schools. The safety and security of our children is vital to every parent, and whether to use this technology is not a decision to be made lightly,” the Governor explained.

Key elements of the legislation include:

  • Defining “facial recognition” as “any tool using an automated or semi-automated process that assists in uniquely identifying or verifying a person by comparing and analyzing patterns based on the person’s face,” and “biometric identifying technology” as “any tool using an automated or semi-automated process that assists in verifying a person’s identity based on a person’s biometric information”;
  • Prohibiting the purchase and use of facial recognition and other biometric identifying technology in all public and private elementary and secondary schools until July 1, 2022, or until the Commissioner authorizes the purchase and use of such technology, whichever occurs later; and
  • Directing the Commissioner, in consultation with New York’s Office of Information Technology, Division of Criminal Justice Services, Education Department’s Chief Privacy Officer and other stakeholders, to conduct a study and make recommendations as to the circumstances in which facial recognition and other biometric identifying technology is appropriate for use in schools and what restrictions and guidelines should be enacted to protect privacy, civil rights and civil liberties interests….(More)”.

Augmented Reality and the Surveillance Society


Mark Pesce at IEEE Spectrum: “First articulated in a 1965 white paper by Ivan Sutherland, titled “The Ultimate Display,” augmented reality (AR) lay beyond our technical capacities for 50 years. That changed when smartphones began providing people with a combination of cheap sensors, powerful processors, and high-bandwidth networking—the trifecta needed for AR to generate its spatial illusions. Among today’s emerging technologies, AR stands out as particularly demanding—for computational power, for sensed data, and, I’d argue, for attention to the danger it poses.

Unlike virtual-reality (VR) gear, which creates for the user a completely synthetic experience, AR gear adds to the user’s perception of her environment. To do that effectively, AR systems need to know where in space the user is located. VR systems originally relied on expensive and fragile outside-in tracking of user movements, often requiring external sensors to be set up in the room. But the new generation of VR accomplishes this through a set of techniques collectively known as simultaneous localization and mapping (SLAM). These systems harvest a rich stream of observational data—mostly from cameras affixed to the user’s headgear, but sometimes also from sonar, lidar, structured light, and time-of-flight sensors—using those measurements to update a continuously evolving model of the user’s spatial environment.
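To give a feel for what "simultaneous localization and mapping" means, here is a deliberately toy, two-dimensional sketch: each batch of observations both corrects the estimated position (localization) and extends the map (mapping). Real AR systems estimate full six-degree-of-freedom poses from camera features using Kalman filtering or graph optimization; everything below, including the matching threshold, is the editor's simplifying assumption.

```python
# Toy SLAM step: observations are (dx, dy) offsets from the true position.
# Matched observations correct the pose; unmatched ones become new landmarks.
import math

def slam_step(pose, landmark_map, observations, next_id):
    x, y = pose
    matched = []
    for dx, dy in observations:
        guess = (x + dx, y + dy)  # where this landmark appears to be
        # Data association: nearest known landmark within a threshold.
        best = min(landmark_map.items(),
                   key=lambda kv: math.dist(kv[1], guess),
                   default=None)
        if best is not None and math.dist(best[1], guess) < 0.5:
            matched.append((best[1], (dx, dy)))  # localization evidence
        else:
            landmark_map[next_id] = guess        # mapping: new landmark
            next_id += 1
    if matched:
        # Re-estimate the pose as the average implied by matched landmarks.
        x = sum(lx - dx for (lx, _), (dx, _) in matched) / len(matched)
        y = sum(ly - dy for (_, ly), (_, dy) in matched) / len(matched)
    return (x, y), landmark_map, next_id

pose, lmap, nid = (0.0, 0.0), {}, 0
pose, lmap, nid = slam_step(pose, lmap, [(1.0, 0.5), (2.0, -0.3)], nid)  # maps
pose, lmap, nid = slam_step(pose, lmap, [(1.1, 0.5)], nid)  # re-localizes
```

Note that the map of the user's surroundings is a first-class output of the loop, not a by-product, which is where the surveillance concerns discussed next come in.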

For safety’s sake, VR systems must be restricted to certain tightly constrained areas, lest someone blinded by VR goggles tumble down a staircase. AR doesn’t hide the real world, though, so people can use it anywhere. That’s important because the purpose of AR is to add helpful (or perhaps just entertaining) digital illusions to the user’s perceptions. But AR has a second, less appreciated, facet: It also functions as a sophisticated mobile surveillance system.

This second quality is what makes Facebook’s recent Project Aria experiment so unnerving. Nearly four years ago, Mark Zuckerberg announced Facebook’s goal to create AR “spectacles”—consumer-grade devices that could one day rival the smartphone in utility and ubiquity. That’s a substantial technical ask, so Facebook’s research team has taken an incremental approach. Project Aria packs the sensors necessary for SLAM within a form factor that resembles a pair of sunglasses. Wearers collect copious amounts of data, which is fed back to Facebook for analysis. This information will presumably help the company to refine the design of an eventual Facebook AR product.

The concern here is obvious: When it comes to market in a few years, these glasses will transform their users into data-gathering minions for Facebook. Tens, then hundreds of millions of these AR spectacles will be mapping the contours of the world, along with all of its people, pets, possessions, and peccadilloes. The prospect of such intensive surveillance at planetary scale poses some tough questions about who will be doing all this watching and why….(More)”.

The Control Paradox: From AI to Populism


Book by Ezio Di Nucci: “Is technological innovation spinning out of control? During a one-week period in 2018, social media was revealed to have had huge undue influence on the 2016 U.S. presidential election, and the first fatality caused by a self-driving car was recorded. What’s paradoxical about the understandable fear of machines taking control through software, robots, and artificial intelligence is that new technology is often introduced in order to increase our control of a certain task. This is what Ezio Di Nucci calls the “control paradox.”

Di Nucci also brings this notion to bear on politics: we delegate power and control to political representatives in order to improve democratic governance. However, recent populist uprisings have shown that voters feel disempowered and neglected by this system. This lack of direct control within representative democracies could be a motivating factor for populism, and Di Nucci argues that a better understanding of delegation is a possible solution….(More)”.