Apparent Algorithmic Bias and Algorithmic Learning


Paper by Anja Lambrecht and Catherine E. Tucker: “It is worrying to think that algorithms might discriminate against minority groups and reinforce existing inequality. Typically, such concerns have focused on the idea that the algorithm’s code could reflect bias, or that the data feeding the algorithm might lead it to produce uneven outcomes.

In this paper, we highlight another reason why algorithms might appear biased against minority groups: the length of time algorithms need to learn. If an algorithm has access to less data for particular groups, or accesses this data at differential speeds, it will produce differential outcomes, potentially disadvantaging minority groups.

Specifically, we revisit a classic study which documents that searches on Google for black names were more likely to return ads that highlighted the need for a criminal background check than searches for white names. We show that at least a partial explanation for this finding is that if consumer demand for a piece of information is low, an algorithm accumulates information more slowly and thus takes longer to learn about consumer preferences. Since black names are less common, the algorithm learns about the quality of the underlying ad more slowly, and as a result an ad is more likely to persist for searches next to black names even if the algorithm judges the ad to be of low quality. Therefore, the algorithm may be likely to show an ad — including an undesirable ad — in the context of searches for a disadvantaged group for a longer period of time.

We replicate this result using the context of religious affiliations and present evidence that ads targeted towards searches for religious groups persist for longer for groups that are less searched for. This suggests that the process of algorithmic learning can lead to differential outcomes across those whose characteristics are more common and those who are rarer in society….(More)”.
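
To make the learning-speed mechanism concrete, here is a minimal simulation sketch. It is not from the paper: the traffic volumes, click-through rate and evidence threshold are illustrative assumptions. The point is only that an ad shown against rarer queries takes far longer to accumulate the impressions an algorithm would need before it can confidently judge, and retire, a low-quality ad.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: a high-traffic query group gets 1,000 relevant
# searches per day, a low-traffic group only 50; the ad's true click-through
# rate is the same (low) value for both, and the platform is assumed to need
# 2,000 impressions before it is confident the ad is low quality.
TRUE_CTR = 0.005
DAILY_SEARCHES = {"common names": 1000, "rare names": 50}
MIN_IMPRESSIONS = 2000

for group, searches_per_day in DAILY_SEARCHES.items():
    impressions, days = 0, 0
    while impressions < MIN_IMPRESSIONS:
        impressions += searches_per_day   # the ad is shown on every relevant search
        days += 1
    clicks = rng.binomial(impressions, TRUE_CTR)
    print(f"{group}: ~{days} days to reach {MIN_IMPRESSIONS} impressions "
          f"({clicks} clicks observed)")
```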

MEPs chart path for a European approach to Artificial Intelligence


Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.

The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.

One Parliament initiative on the ethical aspects of AI is led by Spanish Socialist Ibán García del Blanco, who believes a uniform regulatory framework for AI in Europe is necessary to avoid member states adopting different approaches.

“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.

In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.

A new EU agency for Artificial Intelligence?

One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.

“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.

The centrist-liberal Renew group also did not warm up to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that there already exist bodies that could have their remits extended.

In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.

Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.

In this vein, members highlighted the need to work towards joint definitions for Artificial intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.

Our weird behavior during the pandemic is messing with AI models


Will Douglas Heaven at MIT Technology Review: “In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too—and in bulk. The majority of people looking for masks ended up buying the new Amazon #1 Best Seller, “Face Mask, Pack of 50”.

When covid-19 hit, we started buying things we’d never bought before. The shift was sudden: the mainstays of Amazon’s top ten—phone cases, phone chargers, Lego—were knocked off the charts in just a few days. Nozzle, a London-based consultancy specializing in algorithmic advertising for Amazon sellers, captured the rapid change in this simple graph.

It took less than a week at the end of February for the top 10 Amazon search terms in multiple countries to fill up with products related to covid-19. You can track the spread of the pandemic by what we shopped for: the items peaked first in Italy, followed by Spain, France, Canada, and the US, with the UK and Germany lagging slightly behind. “It’s an incredible transition in the space of five days,” says Rael Cline, Nozzle’s CEO. The ripple effects have been seen across retail supply chains.

But they have also affected artificial intelligence, causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should. 

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, “automation is in tailspin.” Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.

What’s clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. “You can never sit and forget when you’re in such extraordinary circumstances,” says Cline….(More)”.
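
As a rough illustration of how such breakdowns might be caught, the sketch below compares live inputs against the data a model was trained on and flags when the two diverge. It is not taken from the article or from Nozzle: the demand figures are made up, and a two-sample Kolmogorov-Smirnov test is just one of several plausible drift checks.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical daily order volumes for one product category: the model was
# trained on "normal" demand, then panic buying shifts the distribution.
train_demand = rng.normal(loc=100, scale=15, size=365)   # pre-pandemic year
recent_demand = rng.normal(loc=400, scale=80, size=30)   # pandemic month

# A two-sample Kolmogorov-Smirnov test is one simple way to flag that live
# data no longer looks like the training data (data drift).
result = ks_2samp(train_demand, recent_demand)
if result.pvalue < 0.01:
    print(f"Drift detected (KS statistic {result.statistic:.2f}): "
          "retrain the model or fall back to manual review")
```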

An Artificial Revolution: On Power, Politics and AI


Book by Ivana Bartoletti: “AI has unparalleled transformative potential to reshape society but without legal scrutiny, international oversight and public debate, we are sleepwalking into a future written by algorithms which encode regressive biases into our daily lives. As governments and corporations worldwide embrace AI technologies in pursuit of efficiency and profit, we are at risk of losing our common humanity: an attack that is as insidious as it is pervasive.

Leading privacy expert Ivana Bartoletti exposes the reality behind the AI revolution, from the low-paid workers who train algorithms to recognise cancerous polyps, to the rise of data violence and the symbiotic relationship between AI and right-wing populism.

Impassioned and timely, An Artificial Revolution is an essential primer to understand the intersection of technology and geopolitical forces shaping the future of civilisation, and the political response that will be required to ensure the protection of democracy and human rights….(More)”.

Examining the Black Box: Tools for Assessing Algorithmic Systems


Report by the Ada Lovelace Institute and DataKind UK: “As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.

This report is primarily aimed at policymakers, to inform more accurate and focused policy conversations. It may also be helpful to anyone who creates, commissions or interacts with an algorithmic system and wants to know what methods or approaches exist to assess and evaluate that system…

Clarifying terms and approaches

Through literature review and conversations with experts from a range of disciplines, we’ve identified four prominent approaches to assessing algorithms that are often referred to by just two terms: algorithm audit and algorithmic impact assessment. But there is not always agreement on what these terms mean among different communities: social scientists, computer scientists, policymakers and the general public have different interpretations and frames of reference.

While there is broad enthusiasm among policymakers for algorithm audits and impact assessments, there is often a lack of detail about the approaches being discussed. This stems both from the confusion of terms and from the differing maturity of the approaches the terms describe.

Clarifying which approach we’re referring to, as well as where further research is needed, will help policymakers and practitioners to do the more vital work of building evidence and methodology to take these approaches forward.

We focus on algorithm audit and algorithmic impact assessment. For each term, we identify two key approaches it can refer to:

  • Algorithm audit
    • Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias
    • Regulatory inspection: a broad approach, focused on an algorithmic system’s compliance with regulation or norms, necessitating a number of different tools and methods; typically performed by regulators or auditing professionals
  • Algorithmic impact assessment
    • Algorithmic risk assessment: assessing possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised)
    • Algorithmic impact evaluation: assessing possible societal impacts of an algorithmic system on the users or population it affects after it is in use…(More)”.
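
As a rough illustration of the first approach, a bias audit in its simplest form compares a system's outcomes across groups. The sketch below is not from the report: the toy decision log, column names and the 80% (four-fifths) threshold are illustrative assumptions.

```python
import pandas as pd

# Toy decision log: one row per case, with the group the subject belongs to
# and whether the algorithmic system produced a positive outcome.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Positive-outcome rate per group, and the ratio between the worst- and
# best-treated group (the "disparate impact" ratio).
rates = decisions.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
if disparate_impact < 0.8:   # assumed threshold, echoing the four-fifths rule
    print(f"Disparate impact ratio {disparate_impact:.2f}: flag for review")
```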

Is Law Computable? Critical Perspectives on Law and Artificial Intelligence


Book edited by Simon Deakin and Christopher Markou: “What does computable law mean for the autonomy, authority, and legitimacy of the legal system? Are we witnessing a shift from Rule of Law to a new Rule of Technology? Should we even build these things in the first place?

This unique volume collects original papers by a group of leading international scholars to address some of the fascinating questions raised by the encroachment of Artificial Intelligence (AI) into more aspects of legal process, administration, and culture. Weighing near-term benefits against the longer-term, and potentially path-dependent, implications of replacing human legal authority with computational systems, this volume pushes back against the more uncritical accounts of AI in law and the eagerness of scholars, governments, and LegalTech developers to overlook the more fundamental – and perhaps ‘bigger picture’ – ramifications of computable law…(More)”

COVID-19 Outbreak Prediction with Machine Learning


Paper by Sina F. Ardabili et al: “Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved.

This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two models showed promising results (i.e., multi-layered perceptron, MLP, and adaptive network-based fuzzy inference system, ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized through integrating machine learning and SEIR models….(More)”.
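
For context, the SIR/SEIR family the paper treats as the baseline is a small set of coupled differential equations. Below is a minimal SEIR sketch; the population size and parameter values are illustrative assumptions, not estimates from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal SEIR compartment model: Susceptible -> Exposed -> Infectious -> Recovered.
# All parameters below are illustrative assumptions.
N = 1_000_000                              # population size
beta, sigma, gamma = 0.5, 1 / 5.1, 1 / 7   # transmission, incubation and recovery rates

def seir(t, y):
    S, E, I, R = y
    new_infections = beta * S * I / N
    return [-new_infections,               # dS/dt
            new_infections - sigma * E,    # dE/dt
            sigma * E - gamma * I,         # dI/dt
            gamma * I]                     # dR/dt

y0 = [N - 10, 0, 10, 0]                    # start with 10 infectious cases
sol = solve_ivp(seir, (0, 180), y0, t_eval=np.arange(0, 181, 30))
for t, infectious in zip(sol.t, sol.y[2]):
    print(f"day {int(t):3d}: ~{int(infectious):,d} infectious")
```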

Doctors are using AI to triage covid-19 patients. The tools may be here to stay


Karen Hao at MIT Technology Review: “The pandemic, in other words, has turned into a gateway for AI adoption in health care—bringing both opportunity and risk. On the one hand, it is pushing doctors and hospitals to fast-track promising new technologies. On the other, this accelerated process could allow unvetted tools to bypass regulatory processes, putting patients in harm’s way.

“At a high level, artificial intelligence in health care is very exciting,” says Chris Longhurst, the chief information officer at UC San Diego Health. “But health care is one of those industries where there are a lot of factors that come into play. A change in the system can have potentially fatal unintended consequences.”

Before the pandemic, health-care AI was already a booming area of research. Deep learning, in particular, has demonstrated impressive results for analyzing medical images to identify diseases like breast and lung cancer or glaucoma at least as accurately as human specialists. Studies have also shown the potential of using computer vision to monitor elderly people in their homes and patients in intensive care units.

But there have been significant obstacles to translating that research into real-world applications. Privacy concerns make it challenging to collect enough data for training algorithms; issues related to bias and generalizability make regulators cautious about granting approvals. Even for applications that do get certified, hospitals rightly have their own intensive vetting procedures and established protocols. “Physicians, like everybody else—we’re all creatures of habit,” says Albert Hsiao, a radiologist at UCSD Health who is now trialing his own covid detection algorithm based on chest x-rays. “We don’t change unless we’re forced to change.”

As a result, AI has been slow to gain a foothold. “It feels like there’s something there; there are a lot of papers that show a lot of promise,” said Andrew Ng, a leading AI practitioner, in a recent webinar on its applications in medicine. But “it’s not yet as widely deployed as we wish.”…

In addition to the speed of evaluation, Durand identifies something else that may have encouraged hospitals to adopt AI during the pandemic: they are thinking about how to prepare for the inevitable staff shortages that will arise after the crisis. Traumatic events like a pandemic are often followed by an exodus of doctors and nurses. “Some doctors may want to change their way of life,” he says. “What’s coming, we don’t know.”…(More)”

National AI Strategies from a human rights perspective


Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future….

Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or concrete assessment of how various AI applications impact human rights. In all but a few cases, they also lacked depth or specificity on how human rights should be protected in the context of AI, in contrast to the level of specificity on other issues such as economic competitiveness or innovation advantage.

The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the risks associated with those impacts, should be core to a national strategy. Each section should consider the risks and opportunities AI provides as related to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regards to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remediation processes available for victims of human rights violations relating to AI. The strategy should assess whether those processes need revision in light of the particular nature of AI as a technology, or whether capacity-building is needed for those involved so that they are able to receive complaints concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes at them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.

The imperative of interpretable machines


Julia Stoyanovich, Jay J. Van Bavel & Tessa V. West at Nature: “As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.

We are in the midst of a global trend to regulate the use of algorithms, artificial intelligence (AI) and automated decision systems (ADS). As reported by the One Hundred Year Study on Artificial Intelligence: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Major cities, states and national governments are establishing task forces, passing laws and issuing guidelines about responsible development and use of technology, often starting with its use in government itself, where there is, at least in theory, less friction between organizational goals and societal values.

In the United States, New York City has made a public commitment to opening the black box of the government’s use of technology: in 2018, an ADS task force was convened, the first of its kind in the nation, and charged with providing recommendations to New York City’s government agencies for how to become transparent and accountable in their use of ADS. In a 2019 report, the task force recommended using ADS where they are beneficial, reduce potential harm and promote fairness, equity, accountability and transparency [2]. Can these principles become policy in the face of the apparent lack of trust in the government’s ability to manage AI in the interest of the public? We argue that overcoming this mistrust hinges on our ability to engage in substantive multi-stakeholder conversations around ADS, bringing with it the imperative of interpretability — allowing humans to understand and, if necessary, contest the computational process and its outcomes.

Remarkably little is known about how humans perceive and evaluate algorithms and their outputs, what makes a human trust or mistrust an algorithm [3], and how we can empower humans to exercise agency — to adopt or challenge an algorithmic decision. Consider, for example, scoring and ranking — data-driven algorithms that prioritize entities such as individuals, schools, or products and services. These algorithms may be used to determine creditworthiness and desirability for college admissions or employment. Scoring and ranking are as ubiquitous and powerful as they are opaque. Despite their importance, members of the public often know little about why one person is ranked higher than another by a résumé screening or a credit scoring tool, how the ranking process is designed and whether its results can be trusted.

As an interdisciplinary team of scientists in computer science and social psychology, we propose a framework that forms connections between interpretability and trust, and develops actionable explanations for a diversity of stakeholders, recognizing their unique perspectives and needs. We focus on three questions (Box 1) about making machines interpretable: (1) what are we explaining, (2) to whom are we explaining and for what purpose, and (3) how do we know that an explanation is effective? By asking — and charting the path towards answering — these questions, we can promote greater trust in algorithms, and improve fairness and efficiency of algorithm-assisted decision making…(More)”.
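
As a toy illustration of the kind of actionable explanation the authors call for, a scoring tool built on a simple linear model can at least report how much each feature contributed to an individual's result. The feature names, weights and values below are purely hypothetical.

```python
import numpy as np

# Hypothetical linear scoring model: the weights and the applicant's
# standardized feature values are made up for illustration only.
features = ["income", "years_employed", "missed_payments"]
weights = np.array([0.3, 0.5, -0.8])      # model coefficients
applicant = np.array([1.2, 0.4, 2.0])     # one individual's feature values

# Per-feature contributions explain why this individual got this score.
contributions = weights * applicant
score = contributions.sum()

for name, value in sorted(zip(features, contributions), key=lambda x: -abs(x[1])):
    print(f"{name:>16}: {value:+.2f}")
print(f"{'total score':>16}: {score:+.2f}")
```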