Biased Algorithms Are Easier to Fix Than Biased People


Sendhil Mullainathan in The New York Times: “In one study published 15 years ago, two people applied for a job. Their résumés were about as similar as two résumés can be. One person was named Jamal, the other Brendan.

In a study published this year, two patients sought medical care. Both were grappling with diabetes and high blood pressure. One patient was black, the other was white.

Both studies documented racial injustice: In the first, the applicant with a black-sounding name got fewer job interviews. In the second, the black patient received worse care.

But they differed in one crucial respect. In the first, hiring managers made biased decisions. In the second, the culprit was a computer program.

As a co-author of both studies, I see them as a lesson in contrasts. Side by side, they show the stark differences between two types of bias: human and algorithmic.

Marianne Bertrand, an economist at the University of Chicago, and I conducted the first study: We responded to actual job listings with fictitious résumés, half of which were randomly assigned a distinctively black name.

The study was titled: “Are Emily and Greg more employable than Lakisha and Jamal?”

The answer: Yes, and by a lot. Simply having a white name increased callbacks for job interviews by 50 percent.

I published the other study in the journal “Science” in late October with my co-authors: Ziad Obermeyer, a professor of health policy at the University of California, Berkeley; Brian Powers, a clinical fellow at Brigham and Women’s Hospital; and Christine Vogeli, a professor of medicine at Harvard Medical School. We focused on an algorithm that is widely used in allocating health care services, and has affected roughly a hundred million people in the United States.

To better target care and provide help, health care systems are turning to voluminous data and elaborately constructed algorithms to identify the sickest patients.

We found these algorithms have a built-in racial bias. At similar levels of sickness, black patients were deemed to be at lower risk than white patients. The magnitude of the distortion was immense: Eliminating the algorithmic bias would more than double the number of black patients who would receive extra help. The problem lay in a subtle engineering choice: to measure “sickness,” they used the most readily available data, health care expenditures. But because society spends less on black patients than equally sick white ones, the algorithm understated the black patients’ true needs.

One difference between these studies is the work needed to uncover bias…(More)”.
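
To make the mechanism Mullainathan describes concrete, here is a minimal, purely hypothetical simulation (the data, group labels, and model below are invented for illustration and are not the algorithm from the study). It shows how training a risk score on cost, when one group generates less spending at the same level of sickness, leaves that group under-represented above a fixed cutoff for extra help.

```python
# Purely illustrative sketch (hypothetical data, not the studied algorithm):
# a "risk score" trained to predict health-care COST, used as a proxy for
# SICKNESS, under-scores a group that incurs less spending when equally ill.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B (hypothetical)
sickness = rng.exponential(1.0, n)             # true, unobserved health need
access = np.where(group == 1, 0.7, 1.0)        # assumption: ~30% less spending for group B
visits = rng.poisson(5 * sickness * access)    # utilisation features the model observes
cost = 100 * visits + rng.normal(0, 50, n)     # the label: dollars spent

model = LinearRegression().fit(visits.reshape(-1, 1), cost)
risk_score = model.predict(visits.reshape(-1, 1))

# Fixed cutoff for "extra help"; compare how often equally sick patients in
# each group clear it.
cutoff = np.quantile(risk_score, 0.97)
sickest = sickness > np.quantile(sickness, 0.9)
for g, name in [(0, "group A"), (1, "group B")]:
    share = np.mean((risk_score >= cutoff)[sickest & (group == g)])
    print(f"{name}: share of sickest patients flagged for extra help = {share:.1%}")
```

Under these assumptions the lower-spending group clears the cutoff far less often despite identical sickness, which is the same structural effect the study attributes to using expenditures as the training label.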

Assessing employer intent when AI hiring tools are biased


Report by Caitlin Chin at Brookings: “When it comes to gender stereotypes in occupational roles, artificial intelligence (AI) has the potential to either mitigate historical bias or heighten it. In the case of the Word2vec model, AI appears to do both.

Word2vec is a publicly available algorithmic model built on millions of words scraped from online Google News articles, which computer scientists commonly use to analyze word associations. In 2016, Microsoft and Boston University researchers revealed that the model picked up gender stereotypes existing in online news sources—and furthermore, that these biased word associations were overwhelmingly job related. Upon discovering this problem, the researchers neutralized the biased word correlations in their specific algorithm, writing that “in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society.”

Their study draws attention to a broader issue with artificial intelligence: Because algorithms often emulate the training datasets that they are built upon, biased input datasets could generate flawed outputs. Because many contemporary employers utilize predictive algorithms to scan resumes, direct targeted advertising, or even conduct face- or voice-recognition-based interviews, it is crucial to consider whether popular hiring tools might be susceptible to the same cultural biases that the researchers discovered in Word2vec.

In this paper, I discuss how hiring is a multi-layered and opaque process and how it will become more difficult to assess employer intent as recruitment processes move online. Because intent is a critical aspect of employment discrimination law, I ultimately suggest four ways to include it in the discussion surrounding algorithmic bias….(More)”

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI and Bias,” a series that explores ways to mitigate possible biases and create a pathway toward greater fairness in AI and emerging technologies.
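
The neutralisation step the Word2vec researchers describe can be sketched in a few lines of linear algebra: estimate a gender direction (for example from the difference between “he” and “she”) and remove each occupation word’s component along it. The vectors below are invented four-dimensional toys rather than the real 300-dimensional Google News embeddings, so the sketch only illustrates the geometry of the debiasing operation.

```python
# Toy sketch of embedding "neutralisation" (illustrative values only,
# not real word2vec vectors from the Google News model).
import numpy as np

def unit(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

he    = unit(np.array([ 1.0, 0.2, 0.1, 0.0]))
she   = unit(np.array([-1.0, 0.2, 0.1, 0.0]))
nurse = unit(np.array([-0.6, 0.5, 0.3, 0.2]))   # leans toward "she" in this toy space

gender_direction = unit(he - she)               # a one-dimensional gender subspace

def neutralize(word_vec, direction):
    """Remove the component of word_vec lying along the gender direction."""
    return unit(word_vec - np.dot(word_vec, direction) * direction)

nurse_debiased = neutralize(nurse, gender_direction)

def cosine(a, b):
    return float(np.dot(a, b))                  # unit vectors, so dot product = cosine

print("gender lean before:", cosine(nurse, he) - cosine(nurse, she))                    # clearly nonzero
print("gender lean after: ", cosine(nurse_debiased, he) - cosine(nurse_debiased, she))  # ~0
```

After this projection the toy “nurse” vector sits equidistant from “he” and “she”; the published debiasing method applies a comparable projection to gender-neutral words while handling explicitly gendered word pairs separately.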

Responsible Operations: Data Science, Machine Learning, and AI in Libraries


OCLC Research Position Paper by Thomas Padilla: “Despite greater awareness, significant gaps persist between concept and operationalization in libraries at the level of workflows (managing bias in probabilistic description), policies (community engagement vis-à-vis the development of machine-actionable collections), positions (developing staff who can utilize, develop, critique, and/or promote services influenced by data science, machine learning, and AI), collections (development of “gold standard” training data), and infrastructure (development of systems that make use of these technologies and methods). Shifting from awareness to operationalization will require holistic organizational commitment to responsible operations. The viability of responsible operations depends on organizational incentives and protections that promote constructive dissent…(More)”.

Regulating Artificial Intelligence


Book by Thomas Wischmeyer and Timo Rademacher: “This book assesses the normative and practical challenges for artificial intelligence (AI) regulation, offers comprehensive information on the laws that currently shape or restrict the design or use of AI, and develops policy recommendations for those areas in which regulation is most urgently needed. By gathering contributions from scholars who are experts in their respective fields of legal research, it demonstrates that AI regulation is not a specialized sub-discipline, but affects the entire legal system and thus concerns all lawyers. 

Machine learning-based technology, which lies at the heart of what is commonly referred to as AI, is increasingly being employed to make policy and business decisions with broad social impacts, and therefore runs the risk of causing wide-scale damage. At the same time, AI technology is becoming more and more complex and difficult to understand, making it harder to determine whether or not it is being used in accordance with the law. In light of this situation, even tech enthusiasts are calling for stricter regulation of AI. Legislators, too, are stepping in and have begun to pass AI laws, including the prohibition of automated decision-making systems in Article 22 of the General Data Protection Regulation, the New York City AI transparency bill, and the 2017 amendments to the German Cartel Act and German Administrative Procedure Act. While the belief that something needs to be done is widely shared, there is far less clarity about what exactly can or should be done, or what effective regulation might look like. 

The book is divided into two major parts, the first of which focuses on features common to most AI systems, and explores how they relate to the legal framework for data-driven technologies, which already exists in the form of (national and supra-national) constitutional law, EU data protection and competition law, and anti-discrimination law. In the second part, the book examines in detail a number of relevant sectors in which AI is increasingly shaping decision-making processes, ranging from the notorious social media and the legal, financial and healthcare industries, to fields like law enforcement and tax law, in which we can observe how regulation by AI is becoming a reality….(More)”.

Responsible Artificial Intelligence


Book by Virginia Dignum: “In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. 


Throughout the book the author discusses related work, attentive both to classical philosophical treatments of ethical issues and to their implications for modern, algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens….(More)”.

Facial recognition needs a wider policy debate


Editorial Team of the Financial Times: “In his dystopian novel 1984, George Orwell warned of a future under the ever vigilant gaze of Big Brother. Developments in surveillance technology, in particular facial recognition, mean the prospect is no longer the stuff of science fiction.

In China, the government was this year found to have used facial recognition to track the Uighurs, a largely Muslim minority. In Hong Kong, protesters took down smart lamp posts for fear of their actions being monitored by the authorities. In London, the consortium behind the King’s Cross development was forced to halt the use of two cameras with facial recognition capabilities after regulators intervened. All over the world, companies are pouring money into the technology.

At the same time, governments and law enforcement agencies of all hues are proving willing buyers of a technology that is still evolving — and doing so despite concerns over the erosion of people’s privacy and human rights in the digital age. Flaws in the technology have, in certain cases, led to inaccuracies, in particular when identifying women and minorities.

The news this week that Chinese companies are shaping new standards at the UN is the latest sign that it is time for a wider policy debate. Documents seen by this newspaper revealed Chinese companies have proposed new international standards at the International Telecommunication Union, or ITU, a Geneva-based organisation of industry and official representatives, for things such as facial recognition. Setting standards for what is a revolutionary technology — one recently described as the “plutonium of artificial intelligence” — before a wider debate about its merits and what limits should be imposed on its use, can only lead to unintended consequences.

Crucially, standards ratified in the ITU are commonly adopted as policy by developing nations in Africa and elsewhere — regions where China has long wanted to expand its influence. A case in point is Zimbabwe, where the government has partnered with Chinese facial recognition company CloudWalk Technology. The investment, part of Beijing’s Belt and Road investment in the country, will see CloudWalk technology monitor major transport hubs. It will give the Chinese company access to valuable data on African faces, helping to improve the accuracy of its algorithms….

Progress is needed on regulation. Proposals by the European Commission for laws to give EU citizens explicit rights over the use of their facial recognition data as part of a wider overhaul of regulation governing artificial intelligence are welcome. The move would bolster citizens’ protection above existing restrictions laid out under its general data protection regulation. Above all, policymakers should be mindful that if the technology’s unrestrained rollout continues, it could hold implications for other, potentially more insidious, innovations. Western governments should step up to the mark — or risk having control of the technology’s future direction taken from them….(More)”.

Machine Learning Technologies and Their Inherent Human Rights Issues in Criminal Justice Contexts


Essay by Jamie Grace: “This essay is an introductory exploration of machine learning technologies and their inherent human rights issues in criminal justice contexts. These inherent human rights issues include privacy concerns, the chilling of freedom of expression, problems around potential for racial discrimination, and the rights of victims of crime to be treated with dignity.

This essay is built around three case studies – with the first on the digital ‘mining’ of rape complainants’ mobile phones for evidence for disclosure to defence counsel. This first case study seeks to show how AI or machine learning tech might hypothetically either ease or inflame some of the tensions involved for human rights in this context. The second case study is concerned with the human rights challenges of facial recognition of suspects by police forces, using automated algorithms (live facial recognition) in public places. The third case study is concerned with the development of useful self-regulation in algorithmic governance practices in UK policing. This essay concludes with an emphasis on the need for the ‘politics of information’ (Lyon, 2007) to catch up with the ‘politics of public protection’ (Nash, 2010)….(More)”.

Algorithmic Regulation


Book edited by Karen Yeung and Martin Lodge: “As the power and sophistication of ‘big data’ and predictive analytics have continued to expand, so too has policy and public concern about the use of algorithms in contemporary life. This is hardly surprising given our increasing reliance on algorithms in daily life, touching policy sectors from healthcare, transport, finance, consumer retail, manufacturing, education, and employment through to public service provision and the operation of the criminal justice system. This has prompted concerns about the need and importance of holding algorithmic power to account, yet it is far from clear that existing legal and other oversight mechanisms are up to the task. This collection of essays, edited by two leading regulatory governance scholars, offers a critical exploration of ‘algorithmic regulation’, understood both as a means of co-ordinating and regulating social action and decision-making, and in terms of the need for institutional mechanisms through which the power of algorithms and algorithmic systems might themselves be regulated. It offers a unique perspective that is likely to become a significant reference point for the ever-growing debates about the power of algorithms in daily life in the worlds of research, policy and practice. The contributors are drawn from a broad range of disciplinary perspectives including law, public administration, applied philosophy, data science and artificial intelligence.

Taken together, they highlight the rise of algorithmic power, the potential benefits and risks associated with this power, the way in which Sheila Jasanoff’s long-standing claim that ‘technology is politics’ has been thrown into sharp relief by the speed and scale at which algorithmic systems are proliferating, and the urgent need for wider public debate on, and engagement with, their underlying values and value trade-offs, the ways in which they affect individual and collective decision-making and action, and the effective and legitimate mechanisms by and through which algorithmic power is held to account….(More)”.

Artificial Intelligence and National Security


CRS Report: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military.

A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics.

Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges. AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations. While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

Steering AI and Advanced ICTs for Knowledge Societies: a Rights, Openness, Access, and Multi-stakeholder Perspective


Report by UNESCO: “Artificial Intelligence (AI) is increasingly becoming the veiled decision-maker of our times. The diverse technical applications loosely associated with this label drive more and more of our lives. They scan billions of web pages, digital trails and sensor-derived data within microseconds, using algorithms to prepare and produce significant decisions.

AI and its constitutive elements of data, algorithms, hardware, connectivity and storage exponentially increase the power of Information and Communications Technology (ICT). This is a major opportunity for Sustainable Development, although risks also need to be addressed.

It should be noted that the development of AI technology is part of the wider ecosystem of Internet and other advanced ICTs including big data, Internet of Things, blockchains, etc. To assess AI and other advanced ICTs’ benefits and challenges – particularly for communications and information – a useful approach is UNESCO’s Internet Universality ROAM principles. These principles urge that digital development be aligned with human Rights, Openness, Accessibility and Multi-stakeholder governance to guide the ensemble of values, norms, policies, regulations, codes and ethics that govern the development and use of AI….(More)”