Is Law Computable? Critical Perspectives on Law and Artificial Intelligence


Book edited by Simon Deakin and Christopher Markou: “What does computable law mean for the autonomy, authority, and legitimacy of the legal system? Are we witnessing a shift from Rule of Law to a new Rule of Technology? Should we even build these things in the first place?

This unique volume collects original papers by a group of leading international scholars to address some of the fascinating questions raised by the encroachment of Artificial Intelligence (AI) into more aspects of legal process, administration, and culture. Weighing near-term benefits against the longer-term, and potentially path-dependent, implications of replacing human legal authority with computational systems, this volume pushes back against the more uncritical accounts of AI in law and the eagerness of scholars, governments, and LegalTech developers to overlook the more fundamental – and perhaps ‘bigger picture’ – ramifications of computable law…(More)”

COVID-19 Outbreak Prediction with Machine Learning


Paper by Sina F. Ardabili et al: “Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved.

This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among the wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research. The paper further suggests that real novelty in outbreak prediction can be realized through integrating machine learning and SEIR models….(More)”.
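
To make the MLP-versus-compartmental-model comparison concrete, here is a minimal sketch (not the authors' code) that fits a small MLP to a cumulative case-count series and contrasts its extrapolation with a simple SIR simulation; the case numbers, population size, and transmission parameters below are illustrative assumptions only.

```python
# Minimal sketch: MLP curve-fitting vs. a simple SIR model (illustrative data only).
import numpy as np
from scipy.integrate import odeint
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative cumulative case counts for 30 days (hypothetical numbers).
days = np.arange(30)
cases = 50 * np.exp(0.15 * days) + rng.normal(0, 20, size=days.shape)

# --- MLP: learn cases as a function of time, then extrapolate 10 days ahead.
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
mlp.fit(days.reshape(-1, 1), cases)
mlp_forecast = mlp.predict(np.arange(30, 40).reshape(-1, 1))  # naive extrapolation

# --- SIR: integrate dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I.
def sir(y, t, beta, gamma, N):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

N, beta, gamma = 1_000_000, 0.3, 0.1          # assumed population and rates
sir_curve = odeint(sir, [N - 50, 50, 0], np.arange(40), args=(beta, gamma, N))
sir_forecast = sir_curve[30:, 1] + sir_curve[30:, 2]  # cumulative infections (I + R)

print(mlp_forecast[:3], sir_forecast[:3])
```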

Doctors are using AI to triage covid-19 patients. The tools may be here to stay


Karen Hao at MIT Technology Review: “The pandemic, in other words, has turned into a gateway for AI adoption in health care—bringing both opportunity and risk. On the one hand, it is pushing doctors and hospitals to fast-track promising new technologies. On the other, this accelerated process could allow unvetted tools to bypass regulatory processes, putting patients in harm’s way.

“At a high level, artificial intelligence in health care is very exciting,” says Chris Longhurst, the chief information officer at UC San Diego Health. “But health care is one of those industries where there are a lot of factors that come into play. A change in the system can have potentially fatal unintended consequences.”

Before the pandemic, health-care AI was already a booming area of research. Deep learning, in particular, has demonstrated impressive results for analyzing medical images to identify diseases like breast and lung cancer or glaucoma at least as accurately as human specialists. Studies have also shown the potential of using computer vision to monitor elderly people in their homes and patients in intensive care units.
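
As a rough illustration of the kind of image-based diagnostic model described above, the following sketch defines a small convolutional network that maps a placeholder chest X-ray tensor to a disease probability; the architecture, input size, and data are assumptions for demonstration, not any hospital's deployed system.

```python
# Hypothetical sketch of a tiny image classifier for chest X-rays (placeholder data).
import torch
import torch.nn as nn

class TinyXrayClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Two small convolution blocks followed by a linear classification head.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 56 * 56, 1))

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

model = TinyXrayClassifier()
fake_xray = torch.randn(1, 1, 224, 224)   # stand-in for a preprocessed chest X-ray
print(model(fake_xray))                   # probability-like output in (0, 1)
```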

But there have been significant obstacles to translating that research into real-world applications. Privacy concerns make it challenging to collect enough data for training algorithms; issues related to bias and generalizability make regulators cautious to grant approvals. Even for applications that do get certified, hospitals rightly have their own intensive vetting procedures and established protocols. “Physicians, like everybody else—we’re all creatures of habit,” says Albert Hsiao, a radiologist at UCSD Health who is now trialing his own covid detection algorithm based on chest x-rays. “We don’t change unless we’re forced to change.”

As a result, AI has been slow to gain a foothold. “It feels like there’s something there; there are a lot of papers that show a lot of promise,” said Andrew Ng, a leading AI practitioner, in a recent webinar on its applications in medicine. But “it’s not yet as widely deployed as we wish.”…

In addition to the speed of evaluation, Durand identifies something else that may have encouraged hospitals to adopt AI during the pandemic: they are thinking about how to prepare for the inevitable staff shortages that will arise after the crisis. Traumatic events like a pandemic are often followed by an exodus of doctors and nurses. “Some doctors may want to change their way of life,” he says. “What’s coming, we don’t know.”…(More)”

National AI Strategies from a human rights perspective


Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future….

Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or a concrete assessment of how various AI applications impact human rights. In all but a few cases, they also lack depth or specificity on how human rights should be protected in the context of AI, in contrast to the level of specificity on other issues such as economic competitiveness or innovation advantage.

The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights – and how to mitigate the risks associated with those impacts – should be core to a national strategy. Each section should consider the risks and opportunities AI provides as related to human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regards to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available to victims of human rights violations relating to AI. The strategy should assess whether those processes need revision in light of the particular nature of AI as a technology, or whether those involved need capacity-building so that they are able to receive complaints concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes at them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.

The imperative of interpretable machines


Julia Stoyanovich, Jay J. Van Bavel & Tessa V. West at Nature: “As artificial intelligence becomes prevalent in society, a framework is needed to connect interpretability and trust in algorithm-assisted decisions, for a range of stakeholders.

We are in the midst of a global trend to regulate the use of algorithms, artificial intelligence (AI) and automated decision systems (ADS). As reported by the One Hundred Year Study on Artificial Intelligence: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Major cities, states and national governments are establishing task forces, passing laws and issuing guidelines about responsible development and use of technology, often starting with its use in government itself, where there is, at least in theory, less friction between organizational goals and societal values.

In the United States, New York City has made a public commitment to opening the black box of the government’s use of technology: in 2018, an ADS task force was convened, the first of its kind in the nation, and charged with providing recommendations to New York City’s government agencies for how to become transparent and accountable in their use of ADS. In a 2019 report, the task force recommended using ADS where they are beneficial, reduce potential harm and promote fairness, equity, accountability and transparency [2]. Can these principles become policy in the face of the apparent lack of trust in the government’s ability to manage AI in the interest of the public? We argue that overcoming this mistrust hinges on our ability to engage in substantive multi-stakeholder conversations around ADS, bringing with it the imperative of interpretability — allowing humans to understand and, if necessary, contest the computational process and its outcomes.

Remarkably little is known about how humans perceive and evaluate algorithms and their outputs, what makes a human trust or mistrust an algorithm [3], and how we can empower humans to exercise agency — to adopt or challenge an algorithmic decision. Consider, for example, scoring and ranking — data-driven algorithms that prioritize entities such as individuals, schools, or products and services. These algorithms may be used to determine creditworthiness and desirability for college admissions or employment. Scoring and ranking are as ubiquitous and powerful as they are opaque. Despite their importance, members of the public often know little about why one person is ranked higher than another by a résumé screening or a credit scoring tool, how the ranking process is designed and whether its results can be trusted.
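
To illustrate what such a scoring-and-ranking pipeline might look like, and one simple way its output could be explained, here is a hypothetical sketch; the features, data, and model are invented for demonstration and are not the tools the authors discuss.

```python
# Hypothetical scoring-and-ranking sketch with a per-feature contribution breakdown
# as one simple form of explanation. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, 3))                       # synthetic applicant data
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Score and rank applicants by predicted probability of repayment.
scores = model.predict_proba(scaler.transform(X))[:, 1]
ranking = np.argsort(-scores)

# Explain one applicant's score as coefficient * standardized feature value.
applicant = scaler.transform(X)[ranking[0]]
contributions = dict(zip(features, model.coef_[0] * applicant))
print(f"top-ranked applicant score={scores[ranking[0]]:.2f}", contributions)
```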

As an interdisciplinary team of scientists in computer science and social psychology, we propose a framework that forms connections between interpretability and trust, and develops actionable explanations for a diversity of stakeholders, recognizing their unique perspectives and needs. We focus on three questions (Box 1) about making machines interpretable: (1) what are we explaining, (2) to whom are we explaining and for what purpose, and (3) how do we know that an explanation is effective? By asking — and charting the path towards answering — these questions, we can promote greater trust in algorithms, and improve fairness and efficiency of algorithm-assisted decision making…(More)”.

Global AI Ethics Consortium


About: “…The newly founded Global AI Ethics Consortium (GAIEC) on Ethics and the Use of Data and Artificial Intelligence in the Fight Against COVID-19 and other Pandemics aims to:

  1. Support immediate needs for expertise related to the COVID-19 crisis and the emerging ethical questions related to the use of AI in managing the pandemic.
  2. Create a repository that includes avenues of communication for sharing and disseminating current research, new research opportunities, and past research findings.
  3. Coordinate internal funding and research initiatives to allow for maximum opportunities to pursue vital research related to health crises and the ethical use of AI.
  4. Discuss research findings and opportunities for new areas of collaboration.

Read the Statement of Purpose and find out more about the Global AI Ethics Consortium and its founding members: Christoph Lütge (TUM Institute for Ethics in Artificial Intelligence, Technical University of Munich), Jean-Gabriel Ganascia (LIP6-CNRS, Sorbonne Université), Mark Findlay (Centre for AI and Data Governance, Law School, Singapore Management University), Ken Ito and Kan Hiroshi Suzuki (The University of Tokyo), Jeannie Marie Paterson (Centre for AI and Digital Ethics, University of Melbourne), Huw Price (Leverhulme Centre for the Future of Intelligence, University of Cambridge), Stefaan G. Verhulst (The GovLab, New York University), Yi Zeng (Research Center for AI Ethics and Safety, Beijing Academy of Artificial Intelligence), and Adrian Weller (The Alan Turing Institute).

If you or your organization is interested in the GAIEC — Global AI Ethics Consortium please contact us at [email protected]…(More)”.

The significance of algorithmic selection for everyday life: The Case of Switzerland


University of Zurich: “This project empirically investigates the significance of automated algorithmic selection (AS) applications on the Internet for everyday life in Switzerland. It is the first countrywide, representative empirical study in the emerging interdisciplinary field of critical algorithm studies which assesses growing social, economic and political implications of algorithms in various life domains. The project is based on an innovative mix of methods comprising qualitative interviews and a representative Swiss online survey, combined with a passive metering (tracking) of Internet use.

  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Use and Assigned Relevance of Algorithmic-Selection Applications in Switzerland. Report 1 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]
  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Awareness of Algorithmic Selection and Attitudes in Switzerland. Report 2 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]
  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Awareness of Risks Related to Algorithmic Selection in Switzerland. Report 3 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]
  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Coping Practices Related to Algorithmic Selection in Switzerland. Report 4 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]…(More)”.

A guide to healthy skepticism of artificial intelligence and coronavirus


Alex Engler at Brookings: “The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic’s spread. Unfortunately, much of it has failed to be appropriately skeptical about the claims of AI’s value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI.

Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in consideration of related risks. In fact, the COVID-19 AI-hype has been diverse enough to cover the greatest hits of exaggerated claims around AI. And so, framed around examples from the COVID-19 outbreak, here are eight considerations for a skeptic’s approach to AI claims….(More)”.

The explanation game: a formal framework for interpretable machine learning


Paper by David S. Watson & Luciano Floridi: “We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions….(More)”
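
As an informal illustration of the Pareto-frontier idea (not the authors' formal framework), the sketch below keeps only candidate explanations that are not dominated on all three of the accuracy, simplicity, and relevance axes; the candidate scores are invented.

```python
# Minimal sketch of a three-dimensional Pareto frontier over candidate explanations.
import numpy as np

def pareto_frontier(scores: np.ndarray) -> np.ndarray:
    """Return indices of candidates not dominated on every dimension (higher is better)."""
    keep = []
    for i, s in enumerate(scores):
        dominated = any(
            np.all(other >= s) and np.any(other > s)
            for j, other in enumerate(scores) if j != i
        )
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Hypothetical candidate explanations: (accuracy, simplicity, relevance) in [0, 1].
candidates = np.array([
    [0.90, 0.20, 0.70],
    [0.70, 0.80, 0.60],
    [0.60, 0.60, 0.50],   # dominated by the second candidate
    [0.85, 0.50, 0.90],
])
print(pareto_frontier(candidates))   # indices of the non-dominated explanations
```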

Responding to COVID-19 with AI and machine learning


Paper by Mihaela van der Schaar et al: “…AI and machine learning can use data to make objective and informed recommendations, and can help ensure that scarce resources are allocated as efficiently as possible. Doing so will save lives and can help reduce the burden on healthcare systems and professionals….

1. Managing limited resources

AI and machine learning can help us identify people who are at highest risk of being infected by the novel coronavirus. This can be done by integrating electronic health record data with a multitude of “big data” pertaining to human-to-human interactions (from cellular operators, traffic, airlines, social media, etc.). This will make allocation of resources like testing kits more efficient, as well as inform how we, as a society, respond to this crisis over time….
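
A minimal sketch of this idea, assuming hypothetical EHR-style and mobility-style features are available, might look like the following: a risk model is trained and a limited stock of test kits is allocated to the people with the highest predicted risk. All column names, data, and thresholds are illustrative.

```python
# Hypothetical risk-scoring sketch for prioritizing scarce test kits (synthetic data).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 1000
data = pd.DataFrame({
    "age": rng.integers(18, 90, n),                 # EHR-style features
    "chronic_conditions": rng.integers(0, 4, n),
    "daily_contacts": rng.poisson(8, n),            # mobility-style features
    "travel_last_14d": rng.integers(0, 2, n),
})
# Hypothetical labels: whether the person later tested positive.
label = (0.02 * data["daily_contacts"] + 0.3 * data["travel_last_14d"]
         + rng.normal(0, 0.3, n) > 0.4).astype(int)

model = GradientBoostingClassifier().fit(data, label)
risk = model.predict_proba(data)[:, 1]

test_kits = 50
prioritized = np.argsort(-risk)[:test_kits]   # allocate kits to highest-risk people
print(data.iloc[prioritized].head())
```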

2. Developing a personalized treatment course for each patient 

As mentioned above, COVID-19 symptoms and disease evolution vary widely from patient to patient in terms of severity and characteristics. A one-size-fits-all approach for treatment doesn’t work. We also are a long way off from mass-producing a vaccine. 

Machine learning techniques can help determine the most efficient course of treatment for each individual patient on the basis of observational data about previous patients, including their characteristics and treatments administered. We can use machine learning to answer key “what-if” questions about each patient, such as “What if we wait a couple of hours before putting them on a ventilator?” or “Would the outcome for this patient be better if we switched them from supportive care to an experimental treatment earlier?”
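
One simple way to approach such “what-if” questions from observational data is a T-learner, sketched below with synthetic patients; this is only an illustration of the general idea, not the authors' methodology.

```python
# Sketch of a T-learner: fit separate outcome models for treated and untreated
# patients and compare their predictions per patient. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 5))                  # patient characteristics
treated = rng.integers(0, 2, n)              # 1 = hypothetical early intervention
# Synthetic outcome with a heterogeneous treatment effect.
outcome = X[:, 0] + treated * (1.0 + X[:, 1]) + rng.normal(0, 0.5, n)

model_treated = RandomForestRegressor(random_state=0).fit(X[treated == 1], outcome[treated == 1])
model_control = RandomForestRegressor(random_state=0).fit(X[treated == 0], outcome[treated == 0])

# Estimated individual effect: predicted outcome if treated minus if not treated.
ite = model_treated.predict(X) - model_control.predict(X)
print("estimated effect for first five patients:", np.round(ite[:5], 2))
```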

3. Informing policies and improving collaboration

…It’s hard to get a clear sense of which decisions result in the best outcomes. In such a stressful situation, it’s also hard for decision-makers to be aware of the outcomes of decisions being made by their counterparts elsewhere. 

Once again, data-driven AI and machine learning can provide objective and usable insights that far exceed the capabilities of existing methods. We can gain valuable insight into what the differences between policies are, why policies are different, which policies work better, and how to design and adopt improved policies….

4. Managing uncertainty

….We can use an area of machine learning called transfer learning to account for differences between populations, substantially reducing bias while still extracting usable data that can be applied from one population to another.
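
As one concrete and deliberately simple instance of correcting for population differences, the sketch below uses importance weighting for covariate shift: a domain classifier estimates how closely each source-population patient resembles the target population, and those weights are used when fitting the outcome model. The data and model choices are assumptions for illustration, not the authors' method.

```python
# Sketch of importance weighting for covariate shift between two populations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_source = rng.normal(loc=0.0, size=(1000, 4))        # labelled source population
y_source = (X_source[:, 0] + rng.normal(0, 0.5, 1000) > 0).astype(int)
X_target = rng.normal(loc=0.5, size=(500, 4))          # unlabelled target population

# Domain classifier: distinguish source (0) from target (1) examples.
domain = LogisticRegression().fit(
    np.vstack([X_source, X_target]),
    np.concatenate([np.zeros(1000), np.ones(500)]),
)
p_target = domain.predict_proba(X_source)[:, 1]
weights = p_target / (1 - p_target)                     # density-ratio estimates

# Outcome model fitted on the source data, reweighted towards the target population.
outcome_model = LogisticRegression().fit(X_source, y_source, sample_weight=weights)
print(outcome_model.predict_proba(X_target[:3]))
```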

We can also use methods to make us aware of the degree of uncertainty of any given conclusion or recommendation generated from machine learning. This means that decision-makers can be provided with confidence estimates that tell them how confident they can be about a recommended course of action.
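
A minimal sketch of how such confidence estimates might be produced is a bootstrap ensemble, where the spread of the ensemble's predictions gives a rough interval around a recommendation; the data and model below are arbitrary placeholders.

```python
# Sketch of uncertainty estimation via a bootstrap ensemble (placeholder data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.utils import resample

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))
y = X[:, 0] * 2 + rng.normal(0, 0.5, 300)

ensemble = []
for seed in range(50):                               # 50 bootstrap resamples
    Xb, yb = resample(X, y, random_state=seed)
    ensemble.append(DecisionTreeRegressor(random_state=seed).fit(Xb, yb))

x_new = rng.normal(size=(1, 3))
preds = np.array([m.predict(x_new)[0] for m in ensemble])
print(f"prediction {preds.mean():.2f} with 90% interval "
      f"[{np.percentile(preds, 5):.2f}, {np.percentile(preds, 95):.2f}]")
```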

5. Expediting clinical trials

Randomized clinical trials (RCTs) are generally used to judge the relative effectiveness of a new treatment. However, these trials can be slow and costly, and may fail to uncover specific subgroups for which a treatment may be most effective. A specific problem posed by COVID-19 is that subjects selected for RCTs tend not to be elderly or to have other underlying conditions; as we know, COVID-19 has a particularly severe impact on both of those patient groups….

The AI and machine learning techniques I’ve mentioned above do not require further peer review or further testing. Many have already been implemented on a smaller scale in real-world settings. They are essentially ready to go, with only slight adaptations required….(More) (Full Paper)”.