Is a racially-biased algorithm delaying health care for one million Black people?


Jyoti Madhusoodanan at Nature: “One million Black adults in the United States might be treated earlier for kidney disease if doctors were to remove a controversial ‘race-based correction factor’ from an algorithm they use to diagnose people and decide whether to administer medication, a comprehensive analysis finds.

Critics of the factor question its medical validity and say it potentially perpetuates racial bias — and that the latest study, published on 2 December in JAMA, strengthens growing calls to discontinue its use.

“A population that is marginalized and much less likely to have necessary resources and support is the last group we want to put in a situation where they’re going to have delays in diagnosis and treatment,” says nephrologist Keith Norris at the University of California, Los Angeles, who argues for retiring the correction until there’s clear evidence that it’s necessary.

On the flip side, others say that the correction is based on scientific data that can’t be ignored, although they, too, agree that its basis on race is a problem….(More)”.
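The correction at issue multiplies estimated kidney function (eGFR) by a constant for Black patients. As a minimal sketch — using the constants of the published CKD-EPI 2009 equation, with an invented example patient, and intended only as an illustration rather than the JAMA study's method — removing the factor can move a patient across a staging threshold:

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black, use_race_factor=True):
    """Estimated GFR (mL/min/1.73 m^2) per the CKD-EPI 2009 equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black and use_race_factor:
        egfr *= 1.159  # the race-based correction factor under debate
    return egfr

# A hypothetical 60-year-old Black man with serum creatinine 1.8 mg/dL:
with_factor = egfr_ckd_epi_2009(1.8, 60, female=False, black=True)
without_factor = egfr_ckd_epi_2009(1.8, 60, female=False, black=True,
                                   use_race_factor=False)
```

For this invented patient, the factor lifts eGFR from roughly 40 to roughly 46, carrying him across the conventional boundary of 45 that separates CKD stage 3b from 3a — which is how dropping the factor can translate into earlier diagnosis and treatment.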

Feasibility study on the possible introduction of a mechanism for certifying artificial intelligence tools & services


Press Release by the Council of Europe: “The European Commission for the Efficiency of Justice (CEPEJ) has adopted a feasibility study on the possible establishment of a certification mechanism for artificial intelligence tools and services. The study is based on the CEPEJ Charter on the use of artificial intelligence in judicial systems and their environment, adopted in December 2018. The Council of Europe, if it decides to create such a mechanism, could be a pioneer in this field. After consultation with all member and observer states, this feasibility study will be followed by an action plan that the CEPEJ will prepare and send to the Committee of Ministers for examination in 2021….(Study)”.

Chatbots RESET: A Framework for Governing Responsible Use of Conversational AI in Healthcare


Report by the World Economic Forum: “When a new technology is introduced in healthcare, especially one based on AI, it invites meticulous scrutiny. The COVID-19 pandemic has accelerated the adoption of chatbots in healthcare applications and as a result, careful consideration is required to promote their responsible use. To address these governance challenges, the World Economic Forum has assembled a multistakeholder community, which has co-created Chatbots RESET, a framework for governing the responsible use of chatbots in healthcare. The framework outlined in this paper offers an actionable guide for stakeholders to promote the responsible use of chatbots in healthcare applications…(More)”.

Why Predictive Algorithms are So Risky for Public Sector Bodies


Paper by Madeleine Waller and Paul Waller: “This paper collates multidisciplinary perspectives on the use of predictive analytics in government services. It moves away from the hyped narratives of “AI” or “digital”, and the broad usage of the notion of “ethics”, to focus on highlighting the possible risks of the use of prediction algorithms in public administration. Guidelines for AI use in public bodies are currently available; however, there is little evidence that these are being followed or that they are being written into new mandatory regulations. The use of algorithms is not just a question of whether they are fair and safe to use, but of whether they abide by the law and whether they actually work.

Particularly in public services, there are many things to consider before implementing predictive analytics algorithms, as flawed use in this context can lead to harmful consequences for citizens, individually and collectively, and public sector workers. All stages of the implementation process of algorithms are discussed, from the specification of the problem and model design through to the context of their use and the outcomes.

Evidence is drawn from case studies of use in child welfare services, the US Justice System and UK public examination grading in 2020. The paper argues that the risks and drawbacks of such technological approaches need to be more comprehensively understood, and testing done in the operational setting, before implementing them. The paper concludes that while algorithms may be useful in some contexts and help to solve problems, it seems those relating to predicting real life have a long way to go to being safe and trusted for use. As “ethics” are located in time, place and social norms, the authors suggest that in the context of public administration, laws on human rights, statutory administrative functions, and data protection — all within the principles of the rule of law — provide the basis for appraising the use of algorithms, with maladministration being the primary concern rather than a breach of “ethics”….(More)”

Review into bias in algorithmic decision-making


Report by the Centre for Data Ethics and Innovation (CDEI) (UK): “Unfair biases, whether conscious or unconscious, can be a problem in many decision-making processes. This review considers the impact that an increasing use of algorithmic tools is having on bias in decision-making, the steps that are required to manage risks, and the opportunities that better use of data offers to enhance fairness. We have focused on the use of algorithms in significant decisions about individuals, looking across four sectors (recruitment, financial services, policing and local government), and making cross-cutting recommendations that aim to help build the right systems so that algorithms improve, rather than worsen, decision-making…(More)”.

The new SkillsMatch platform tackles skills assessment and matches your skills with training


European Commission: “The European labour market requires new skills to meet the demands of the Digital Age. EU citizens should have the right training, skills and support to empower them to find quality jobs and improve their living standards.

‘Soft skills’ such as confidence, teamwork, self-motivation, networking and presentation skills are considered important for the employability and adaptability of Europe’s citizens. Soft skills shape how we work together, influence the decisions we take every day, and can matter more than hard skills in today’s workplaces. A lack of soft skills is often only discovered once a person is already on the job.

The state-of-the-art SkillsMatch platform helps users match and adapt their soft-skill assets to the demands of the labour market. The project is the first to offer a fully comprehensive platform cataloguing 36 different soft skills and matching them with occupations as well as training opportunities, offering a large number of courses to improve soft skills depending on the chosen occupation.

The platform proposes courses, such as organisation and personal development, entrepreneurship, business communication and conflict resolution. There is a choice of courses in Spanish and English. Moreover, the platform will also provide recognition of the new learning and skills (open badges)…(More)”.
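The matching idea described above can be sketched in a few lines — note that the skill names, occupations and course titles below are invented stand-ins, not entries from the actual SkillsMatch catalogue of 36 skills:

```python
# Hypothetical occupation-to-skill and skill-to-course mappings.
OCCUPATION_SKILLS = {
    "project manager": {"teamwork", "conflict resolution", "presentation"},
    "sales agent": {"networking", "self-motivation", "presentation"},
}
COURSES = {
    "conflict resolution": "Business communication and conflict resolution",
    "presentation": "Organisation and personal development",
    "networking": "Entrepreneurship",
}

def skill_gaps(user_skills, occupation):
    """Return the occupation's skills the user still lacks, each mapped
    to a suggested course where one exists."""
    missing = OCCUPATION_SKILLS[occupation] - set(user_skills)
    return {skill: COURSES.get(skill) for skill in sorted(missing)}

# A user with two of the three skills a project manager needs is
# pointed at a course for the missing one.
gaps = skill_gaps({"teamwork", "presentation"}, "project manager")
```

The real platform presumably works against a far richer taxonomy, but the core loop — profile the user, diff against an occupation's skill set, recommend training for the gaps — is the same shape.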

Malicious Uses and Abuses of Artificial Intelligence


Report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Trend Micro: “… looking into current and predicted criminal uses of artificial intelligence (AI)… The report provides law enforcers, policy makers and other organizations with information on existing and potential attacks leveraging AI and recommendations on how to mitigate these risks.

“AI promises the world greater efficiency, automation and autonomy. At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology.” said Edvardas Šileris, Head of Europol’s Cybercrime Centre. “This report will help us not only to anticipate possible malicious uses and abuses of AI, but also to prevent and mitigate those threats proactively. This is how we can unlock the potential AI holds and benefit from the positive use of AI systems.”

The report concludes that cybercriminals will leverage AI both as an attack vector and an attack surface. Deepfakes are currently the best-known use of AI as an attack vector. However, the report warns that new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.

For example, AI could be used to support:

  • Convincing social engineering attacks at scale
  • Document-scraping malware to make attacks more efficient
  • Evasion of image recognition and voice biometrics
  • Ransomware attacks, through intelligent targeting and evasion
  • Data pollution, by identifying blind spots in detection rules

The three organizations conclude the report with several recommendations…(More)”.

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020)


Foreword of a Report by the Australian Human Rights Commission: “Artificial intelligence (AI) promises better, smarter decision making.

Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and risk, to determine what sorts of people make the ‘best’ customers… In fact, the use cases for AI are limited only by our imagination.

However, using AI carries with it the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.

Algorithmic bias is a kind of error associated with the use of AI in decision making, and often results in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set that was used to train the AI tool, which could replicate or even make worse existing problems, including societal inequality.

Algorithmic bias can cause real harm. It can lead to a person being unfairly treated, or even suffering unlawful discrimination, on the basis of characteristics such as their race, age, sex or disability.

This project started by simulating a typical decision-making process. In this technical paper, we explore how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how this problem can be addressed.

To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating the problem, however, will be relevant far beyond this specific situation.

Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.

Rigorous design, testing and monitoring can avoid algorithmic bias. This technical paper offers some guidance for companies to ensure that when they use AI, their decisions are fair, accurate and comply with human rights….(More)”
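The report's point about bias creeping in through training data can be made concrete with a toy simulation — everything below is invented for illustration and is not the Commission's electricity-retailer scenario or its method. Two groups of applicants have identical underlying scores, but one group was historically approved less often; a model fit to those labels reproduces the disparity:

```python
import random

random.seed(0)

# Hypothetical history: groups "A" and "B" draw scores from the same
# distribution, but B was approved as if its scores were 40 points
# lower -- a biased label, not a less-qualified applicant.
def make_history(n=10_000):
    rows = []
    for _ in range(n):
        group = random.choice("AB")
        score = random.gauss(600, 50)
        bias = -40 if group == "B" else 0
        approved = (score + bias) > 600
        rows.append((group, score, approved))
    return rows

history = make_history()

# Among applicants in the same narrow score band, a model that simply
# learns the historical labels inherits the disparity wholesale.
def approval_rate(rows, group, lo=590, hi=610):
    band = [approved for g, score, approved in rows
            if g == group and lo <= score <= hi]
    return sum(band) / len(band)

rate_a = approval_rate(history, "A")  # roughly 0.5
rate_b = approval_rate(history, "B")  # 0.0: in this band, no B applicant
                                      # was ever approved historically
```

Equally qualified applicants end up with sharply different outcomes — the "problem with the data set" the foreword describes, which rigorous testing against held-out fairness checks is meant to catch.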

Facial-recognition research needs an ethical reckoning


Editorial in Nature: “…As Nature reports in a series of Features on facial recognition this week, many in the field are rightly worried about how the technology is being used. They know that their work enables people to be easily identified, and therefore targeted, on an unprecedented scale. Some scientists are analysing the inaccuracies and biases inherent in facial-recognition technology, warning of discrimination, and joining the campaigners calling for stronger regulation, greater transparency, consultation with the communities that are being monitored by cameras — and for use of the technology to be suspended while lawmakers reconsider where and how it should be used. The technology might well have benefits, but these need to be assessed against the risks, which is why it needs to be properly and carefully regulated.

Responsible studies

Some scientists are urging a rethink of ethics in the field of facial-recognition research, too. They are arguing, for example, that scientists should not be doing certain types of research. Many are angry about academic studies that sought to study the faces of people from vulnerable groups, such as the Uyghur population in China, whom the government has subjected to surveillance and detained on a mass scale.

Others have condemned papers that sought to classify faces by scientifically and ethically dubious measures such as criminality….

One problem is that AI guidance tends to consist of principles that aren’t easily translated into practice. Last year, the philosopher Brent Mittelstadt at the University of Oxford, UK, noted that at least 84 AI ethics initiatives had produced high-level principles on both the ethical development and deployment of AI (B. Mittelstadt Nature Mach. Intell. 1, 501–507; 2019). These tended to converge around classical medical-ethics concepts, such as respect for human autonomy, the prevention of harm, fairness and explicability (or transparency). But Mittelstadt pointed out that different cultures disagree fundamentally on what principles such as ‘fairness’ or ‘respect for autonomy’ actually mean in practice. Medicine has internationally agreed norms for preventing harm to patients, and robust accountability mechanisms. AI lacks these, Mittelstadt noted. Specific case studies and worked examples would be much more helpful to prevent ethics guidance becoming little more than window-dressing….(More)”.

Evaluating Identity Disclosure Risk in Fully Synthetic Health Data: Model Development and Validation


Paper by Khaled El Emam et al: “There has been growing interest in data synthesis for enabling the sharing of data for secondary analysis; however, there is a need for a comprehensive privacy risk model for fully synthetic data: If the generative models have been overfit, then it is possible to identify individuals from synthetic data and learn something new about them.

Objective: The purpose of this study is to develop and apply a methodology for evaluating the identity disclosure risks of fully synthetic data.

Methods: A full risk model is presented, which evaluates both identity disclosure and the ability of an adversary to learn something new if there is a match between a synthetic record and a real person. We term this “meaningful identity disclosure risk.” The model is applied on samples from the Washington State Hospital discharge database (2007) and the Canadian COVID-19 cases database. Both of these datasets were synthesized using a sequential decision tree process commonly used to synthesize health and social science data.

Results: The meaningful identity disclosure risk for both of these synthesized samples was below the commonly used 0.09 risk threshold (0.0198 and 0.0086, respectively), and 4 times and 5 times lower than the risk values for the original datasets, respectively.

Conclusions: We have presented a comprehensive identity disclosure risk model for fully synthetic data. The results for this synthesis method on 2 datasets demonstrate that synthesis can reduce meaningful identity disclosure risks considerably. The risk model can be applied in the future to evaluate the privacy of fully synthetic data….(More)”.
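The matching logic at the heart of such a risk model can be illustrated with a deliberately naive sketch. The toy records below are invented, and this is a crude stand-in for the paper's model, which additionally weighs population-level match probabilities and whether the adversary actually learns something new from a match:

```python
from collections import Counter

# Toy records: (age_band, sex, region, diagnosis). The first three
# fields are quasi-identifiers an adversary might already know;
# diagnosis is the sensitive attribute.
real = [
    ("40-49", "F", "N", "flu"),
    ("40-49", "F", "N", "flu"),
    ("50-59", "M", "S", "diabetes"),
    ("60-69", "F", "E", "asthma"),
]
synthetic = [
    ("40-49", "F", "N", "flu"),
    ("50-59", "M", "S", "diabetes"),
    ("30-39", "M", "W", "flu"),
]

def naive_disclosure_risk(real, synthetic):
    """Fraction of synthetic records whose quasi-identifiers single out
    exactly one real person. An overfit generator tends to copy rare
    records, pushing this fraction up; a well-fit one keeps it low."""
    qi_counts = Counter(r[:3] for r in real)
    unique_matches = sum(1 for s in synthetic if qi_counts.get(s[:3]) == 1)
    return unique_matches / len(synthetic)

risk = naive_disclosure_risk(real, synthetic)  # 1 of 3 records match
                                               # a unique real person
```

Here one synthetic record singles out a unique real individual, giving a naive risk of about 0.33 — well above a 0.09-style threshold; the paper's contribution is a far more principled version of this estimate for realistic datasets.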