Paper by Teresa M. Harrison and Luis Felipe Luna-Reyes: “While there is growing consensus that the analytical and cognitive tools of artificial intelligence (AI) have the potential to transform government in positive ways, it is also clear that AI challenges traditional government decision-making processes and threatens the democratic values within which they are framed. These conditions argue for conservative approaches to AI that focus on cultivating and sustaining public trust. We use the extended Brunswik lens model as a framework to illustrate the distinctions between policy analysis and decision making as we have traditionally understood and practiced them, how they are evolving in the current AI context, and the challenges this poses for the use of trustworthy AI. We offer a set of recommendations for practices, processes, and governance structures in government to provide for trust in AI and suggest lines of research that support them….(More)”.
New York Temporarily Bans Facial Recognition Technology in Schools
Hunton’s Privacy Blog: “On December 22, 2020, New York Governor Andrew Cuomo signed into law legislation that temporarily bans the use or purchase of facial recognition and other biometric identifying technology in public and private schools until at least July 1, 2022. The legislation also directs the New York Commissioner of Education (the “Commissioner”) to conduct a study on whether this technology is appropriate for use in schools.
In his press statement, Governor Cuomo indicated that the legislation comes after concerns were raised about potential risks to students, including issues surrounding misidentification by the technology as well as safety, security and privacy concerns. “This legislation requires state education policymakers to take a step back, consult with experts and address privacy issues before determining whether any kind of biometric identifying technology can be brought into New York’s schools. The safety and security of our children is vital to every parent, and whether to use this technology is not a decision to be made lightly,” the Governor explained.
Key elements of the legislation include:
- Defining “facial recognition” as “any tool using an automated or semi-automated process that assists in uniquely identifying or verifying a person by comparing and analyzing patterns based on the person’s face,” and “biometric identifying technology” as “any tool using an automated or semi-automated process that assists in verifying a person’s identity based on a person’s biometric information”;
- Prohibiting the purchase and use of facial recognition and other biometric identifying technology in all public and private elementary and secondary schools until July 1, 2022, or until the Commissioner authorizes the purchase and use of such technology, whichever occurs later; and
- Directing the Commissioner, in consultation with New York’s Office of Information Technology, Division of Criminal Justice Services, Education Department’s Chief Privacy Officer and other stakeholders, to conduct a study and make recommendations as to the circumstances in which facial recognition and other biometric identifying technology is appropriate for use in schools and what restrictions and guidelines should be enacted to protect privacy, civil rights and civil liberties interests….(More)”.
Augmented Reality and the Surveillance Society
Mark Pesce at IEEE Spectrum: “First articulated in a 1965 white paper by Ivan Sutherland, titled “The Ultimate Display,” augmented reality (AR) lay beyond our technical capacities for 50 years. That changed when smartphones began providing people with a combination of cheap sensors, powerful processors, and high-bandwidth networking—the trifecta needed for AR to generate its spatial illusions. Among today’s emerging technologies, AR stands out as particularly demanding—for computational power, for sensed data, and, I’d argue, for attention to the danger it poses.
Unlike virtual-reality (VR) gear, which creates for the user a completely synthetic experience, AR gear adds to the user’s perception of her environment. To do that effectively, AR systems need to know where in space the user is located. VR systems originally used expensive and fragile systems for tracking user movements from the outside in, often requiring external sensors to be set up in the room. But the new generation of VR accomplishes this through a set of techniques collectively known as simultaneous localization and mapping (SLAM). These systems harvest a rich stream of observational data—mostly from cameras affixed to the user’s headgear, but sometimes also from sonar, lidar, structured light, and time-of-flight sensors—using those measurements to update a continuously evolving model of the user’s spatial environment.
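To make that predict/correct/map cycle concrete, here is a toy sketch in Python. It is illustrative only; real AR systems fuse camera features, inertial readings, and depth measurements in probabilistic filters or factor graphs, and the `slam_step` function, the landmark names, and the simple proportional correction below are all invented for the example.

```python
# Toy sketch of the SLAM loop: predict the pose from motion data, correct
# it against landmarks already in the map, then add newly seen landmarks.
# Real systems do this probabilistically; this shows only the structure.

pose = {"x": 0.0, "y": 0.0}   # estimated device position
landmarks = {}                # landmark id -> estimated (x, y) position

def slam_step(odometry, observations, gain=0.5):
    """One predict/correct/map cycle.

    odometry:     (dx, dy) motion estimate since the last step (e.g., IMU)
    observations: {landmark_id: (rel_x, rel_y)} offsets seen by the cameras
    gain:         how strongly re-observed landmarks pull the pose back
    """
    # 1. Predict: dead-reckon the new pose from the motion estimate.
    pose["x"] += odometry[0]
    pose["y"] += odometry[1]

    # 2. Correct: each already-mapped landmark implies where the device
    #    must be; nudge the pose estimate toward that implied position.
    for lid, (rx, ry) in observations.items():
        if lid in landmarks:
            pose["x"] += gain * ((landmarks[lid][0] - rx) - pose["x"])
            pose["y"] += gain * ((landmarks[lid][1] - ry) - pose["y"])

    # 3. Map: place first-time landmarks at their implied world position.
    for lid, (rx, ry) in observations.items():
        if lid not in landmarks:
            landmarks[lid] = (pose["x"] + rx, pose["y"] + ry)

slam_step((1.0, 0.0), {"corner_a": (2.0, 1.0)})  # first sighting: map it
slam_step((1.2, 0.0), {"corner_a": (0.9, 1.0)})  # drifted odometry gets corrected
print(pose, landmarks)  # pose pulled from x=2.2 back toward x=2.1
```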
For safety’s sake, VR systems must be restricted to certain tightly constrained areas, lest someone blinded by VR goggles tumble down a staircase. AR doesn’t hide the real world, though, so people can use it anywhere. That’s important because the purpose of AR is to add helpful (or perhaps just entertaining) digital illusions to the user’s perceptions. But AR has a second, less appreciated, facet: It also functions as a sophisticated mobile surveillance system.
This second quality is what makes Facebook’s recent Project Aria experiment so unnerving. Nearly four years ago, Mark Zuckerberg announced Facebook’s goal to create AR “spectacles”—consumer-grade devices that could one day rival the smartphone in utility and ubiquity. That’s a substantial technical ask, so Facebook’s research team has taken an incremental approach. Project Aria packs the sensors necessary for SLAM within a form factor that resembles a pair of sunglasses. Wearers collect copious amounts of data, which is fed back to Facebook for analysis. This information will presumably help the company to refine the design of an eventual Facebook AR product.
The concern here is obvious: When it comes to market in a few years, these glasses will transform their users into data-gathering minions for Facebook. Tens, then hundreds of millions of these AR spectacles will be mapping the contours of the world, along with all of its people, pets, possessions, and peccadilloes. The prospect of such intensive surveillance at planetary scale poses some tough questions about who will be doing all this watching and why….(More)”.
The Control Paradox: From AI to Populism
Book by Ezio Di Nucci: “Is technological innovation spinning out of control? During a one-week period in 2018, social media was revealed to have had huge undue influence on the 2016 U.S. presidential election, and the first fatality from a self-driving car was recorded. What’s paradoxical about the understandable fear of machines taking control through software, robots, and artificial intelligence is that new technology is often introduced in order to increase our control of a certain task. This is what Ezio Di Nucci calls the “control paradox.”
Di Nucci also brings this notion to bear on politics: we delegate power and control to political representatives in order to improve democratic governance. However, recent populist uprisings have shown that voters feel disempowered and neglected by this system. This lack of direct control within representative democracies could be a motivating factor for populism, and Di Nucci argues that a better understanding of delegation is a possible solution….(More)”.
Is a racially-biased algorithm delaying health care for one million Black people?
Jyoti Madhusoodanan at Nature: “One million Black adults in the United States might be treated earlier for kidney disease if doctors were to remove a controversial ‘race-based correction factor’ from an algorithm they use to diagnose people and decide whether to administer medication, a comprehensive analysis finds.
Critics of the factor question its medical validity and say it potentially perpetuates racial bias — and that the latest study, published on 2 December in JAMA, strengthens growing calls to discontinue its use.
“A population that is marginalized and much less likely to have necessary resources and support is the last group we want to put in a situation where they’re going to have delays in diagnosis and treatment,” says nephrologist Keith Norris at the University of California, Los Angeles, who argues for retiring the correction until there’s clear evidence that it’s necessary.
On the flip side, others say that the correction is based on scientific data that can’t be ignored, although they, too, agree that its basis on race is a problem….(More)”.
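To see what the disputed factor does in practice, here is a minimal sketch of the published 2009 CKD-EPI creatinine equation, the kind of eGFR algorithm at issue. The coefficients are as published; the example patient and the eGFR-below-30 referral cutoff are illustrative assumptions, not data from the JAMA analysis.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black, use_race_factor=True):
    """Estimate GFR (mL/min/1.73 m^2) with the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9       # published sex-specific constants
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black and use_race_factor:
        egfr *= 1.159  # the contested race multiplier: +15.9% for Black patients
    return egfr

# Hypothetical 60-year-old Black man with serum creatinine 2.4 mg/dL.
with_factor = egfr_ckd_epi_2009(2.4, 60, female=False, black=True)
without_factor = egfr_ckd_epi_2009(2.4, 60, female=False, black=True,
                                   use_race_factor=False)
print(f"with race factor:    {with_factor:.1f}")    # ~32.7
print(f"without race factor: {without_factor:.1f}") # ~28.3
```

For this hypothetical patient, the multiplier alone lifts the estimate from roughly 28 to roughly 33 mL/min/1.73 m², keeping him above a cutoff that some guidelines use for nephrology referral or transplant evaluation. That is the mechanism behind the delays in diagnosis and treatment the analysis quantifies.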
Feasibility study on the possible introduction of a mechanism for certifying artificial intelligence tools & services
Press Release by the Council of Europe: “The European Commission for the Efficiency of Justice (CEPEJ) has adopted a feasibility study on the possible establishment of a certification mechanism for artificial intelligence tools and services. The study is based on the CEPEJ Charter on the use of artificial intelligence in judicial systems and their environment, adopted in December 2018. The Council of Europe, if it decides to create such a mechanism, could be a pioneer in this field. After consultation with all member and observer states, this feasibility study will be followed by an action plan that the CEPEJ will prepare and send to the Committee of Ministers for examination in 2021….(Study)”.
Chatbots RESET: A Framework for Governing Responsible Use of Conversational AI in Healthcare
Report by the World Economic Forum: “When a new technology is introduced in healthcare, especially one based on AI, it invites meticulous scrutiny. The COVID-19 pandemic has accelerated the adoption of chatbots in healthcare applications and as a result, careful consideration is required to promote their responsible use. To address these governance challenges, the World Economic Forum has assembled a multistakeholder community, which has co-created Chatbots RESET, a framework for governing the responsible use of chatbots in healthcare. The framework outlined in this paper offers an actionable guide for stakeholders to promote the responsible use of chatbots in healthcare applications…(More)”.
Why Predictive Algorithms are So Risky for Public Sector Bodies
Paper by Madeleine Waller and Paul Waller: “This paper collates multidisciplinary perspectives on the use of predictive analytics in government services. It moves away from the hyped narratives of “AI” or “digital”, and the broad usage of the notion of “ethics”, to focus on highlighting the possible risks of the use of prediction algorithms in public administration. Guidelines for AI use in public bodies are currently available; however, there is little evidence that they are being followed or written into new mandatory regulations. The use of algorithms is not just a question of whether they are fair and safe to use, but of whether they comply with the law and whether they actually work.
Particularly in public services, there is much to consider before implementing predictive analytics algorithms, as flawed use in this context can lead to harmful consequences for citizens, individually and collectively, and for public sector workers. All stages of the implementation process are discussed, from the specification of the problem and the design of the model through to the context of use and the outcomes.
Evidence is drawn from case studies of use in child welfare services, the US justice system and UK public examination grading in 2020. The paper argues that the risks and drawbacks of such technological approaches need to be more comprehensively understood, and tested in the operational setting, before implementation. The paper concludes that while algorithms may be useful in some contexts and help to solve problems, those that attempt to predict real life have a long way to go before they are safe and trusted for use. As “ethics” are located in time, place and social norms, the authors suggest that in the context of public administration, laws on human rights, statutory administrative functions, and data protection — all within the principles of the rule of law — provide the basis for appraising the use of algorithms, with maladministration being the primary concern rather than a breach of “ethics”….(More)”
Review into bias in algorithmic decision-making
Report by the Centre for Data Ethics and Innovation (CDEI) (UK): “Unfair biases, whether conscious or unconscious, can be a problem in many decision-making processes. This review considers the impact that an increasing use of algorithmic tools is having on bias in decision-making, the steps that are required to manage risks, and the opportunities that better use of data offers to enhance fairness. We have focused on the use of algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making…(More)”.
The new SkillsMatch platform tackles skills assessment and matches your skills with training
European Commission: “The European labour market requires new skills to meet the demands of the Digital Age. EU citizens should have the right training, skills and support to empower them to find quality jobs and improve their living standards.
‘Soft skills’ such as confidence, teamwork, self-motivation, networking and presentation skills are considered important for the employability and adaptability of Europe’s citizens. Soft skills shape how we work together, influence the decisions we take every day, and can be more important than hard skills in today’s workplaces. A lack of soft skills is often only discovered once a person is already on the job.
The state-of-the-art SkillsMatch platform helps users match and adapt their soft-skill assets to the demands of the labour market. The project is the first to offer a fully comprehensive platform cataloguing 36 different soft skills and matching them with occupations as well as training opportunities, offering a large number of courses to improve soft skills depending on the chosen occupation.
The platform offers courses such as organisation and personal development, entrepreneurship, business communication and conflict resolution, with a choice of courses in Spanish and English. Moreover, the platform will also provide recognition of new learning and skills through open badges…(More)”.
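The matching step described above can be pictured as a simple overlap score between a user's skill set and the skill profile of each occupation, with courses suggested for the gaps. The sketch below is a guess at the general idea, not the actual SkillsMatch algorithm; all skill, occupation and course names are invented for illustration.

```python
# Hypothetical skills-to-occupation matching: score overlap, suggest courses
# for missing skills. Not SkillsMatch's real data or method.

user_skills = {"teamwork", "self-motivation", "presentation skills"}

occupations = {
    "project coordinator": {"teamwork", "networking", "conflict resolution"},
    "sales associate":     {"confidence", "presentation skills", "networking"},
}

courses = {
    "networking":          "Business Communication",
    "conflict resolution": "Conflict Resolution Basics",
    "confidence":          "Personal Development",
}

def match(user, required):
    """Jaccard-style overlap between the user's skills and an occupation profile."""
    return len(user & required) / len(user | required)

for job, required in occupations.items():
    score = match(user_skills, required)
    gaps = required - user_skills  # skills the occupation needs but the user lacks
    print(f"{job}: match {score:.0%}, "
          f"suggested courses: {[courses[g] for g in sorted(gaps) if g in courses]}")
```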