In AI We Trust: Power, Illusion and Control of Predictive Algorithms


Book by Helga Nowotny: “One of the most persistent concerns about the future is whether it will be dominated by the predictive algorithms of AI – and, if so, what this will mean for our behaviour, for our institutions and for what it means to be human. AI changes our experience of time and the future and challenges our identities, yet we are blinded by its efficiency and fail to understand how it affects us.

At the heart of our trust in AI lies a paradox: we leverage AI to increase control over the future and uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future. This happens when we forget that we humans have created the digital technologies to which we attribute agency. These developments also challenge the narrative of progress, which played such a central role in modernity and is based on the hubris of total control. We are now moving into an era where this control is limited as AI monitors our actions, posing the threat of surveillance, but also offering the opportunity to reappropriate control and transform it into care.

As we try to adjust to a world in which algorithms, robots and avatars play an ever-increasing role, we need to understand better the limitations of AI and how their predictions affect our agency, while at the same time having the courage to embrace the uncertainty of the future….(More)”.

Towards intellectual freedom in an AI Ethics Global Community


Paper by Christoph Ebell et al: “The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics Researchers who argue for the protection and freedom of this research community. Corporate, as well as academic research settings, involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals. We have herein identified issues that arise at the intersection of information technology, socially encoded behaviors, and biases, and individual researchers’ work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationships between corporate interests and the early years of AI Ethics research. We propose several possible actions we can take collectively to support researchers throughout the field of AI Ethics, especially those from marginalized groups who may experience even more barriers in speaking out and having their research amplified. We promote the global community of AI Ethics researchers and the evolution of standards accepted in our profession guiding a technological future that makes life better for all….(More)”.

Administrative Law in the Automated State


Paper by Cary Coglianese: “In the future, administrative agencies will rely increasingly on digital automation powered by machine learning algorithms. Can U.S. administrative law accommodate such a future? Not only might a highly automated state readily meet longstanding administrative law principles, but the responsible use of machine learning algorithms might perform even better than the status quo in terms of fulfilling administrative law’s core values of expert decision-making and democratic accountability. Algorithmic governance clearly promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, algorithms might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were smarter and more accountable, it might risk being less empathic. Although the degree of empathy in existing human-driven bureaucracies should not be overstated, a large-scale shift to government by algorithm will pose a new challenge for administrative law: ensuring that an automated state is also an empathic one….(More)”.

How we mapped billions of trees in West Africa using satellites, supercomputers and AI


Martin Brandt and Kjeld Rasmussen in The Conversation: “The possibility that vegetation cover in semi-arid and arid areas was retreating has long been an issue of international concern. In the 1930s it was first theorized that the Sahara was expanding and woody vegetation was on the retreat. In the 1970s, spurred by the “Sahel drought”, focus was on the threat of “desertification”, caused by human overuse and/or climate change. In recent decades, the potential impact of climate change on the vegetation has been the main concern, along with the feedback of vegetation on the climate, associated with the role of the vegetation in the global carbon cycle.

Using high-resolution satellite data and machine-learning techniques at supercomputing facilities, we have now been able to map billions of individual trees and shrubs in West Africa. The goal is to better understand the real state of vegetation coverage and evolution in arid and semi-arid areas.

Finding a shrub in the desert – from space

Since the 1970s, satellite data have been used extensively to map and monitor vegetation in semi-arid areas worldwide. Images are available in “high” spatial resolution (with NASA’s satellites Landsat MSS and TM, and ESA’s satellites Spot and Sentinel) and “medium or low” spatial resolution (NOAA AVHRR and MODIS).

To accurately analyse vegetation cover at continental or global scale, it is necessary to use the highest-resolution images available – with a resolution of 1 metre or less – and up until now the costs of acquiring and analysing the data have been prohibitive. Consequently, most studies have relied on moderate- to low-resolution data. This has not allowed for the identification of individual trees, and therefore these studies only yield aggregate estimates of vegetation cover and productivity, mixing herbaceous and woody vegetation.

In a new study covering a large part of the semi-arid Sahara-Sahel-Sudanian zone of West Africa, published in Nature in October 2020, an international group of researchers was able to overcome these limitations. By combining an immense amount of high-resolution satellite data, advanced computing capacities, machine-learning techniques and extensive field data gathered over decades, we were able to identify individual trees and shrubs with a crown area of more than 3 m² with great accuracy. The result is a database of 1.8 billion trees in the region studied, available to all interested….(More)”

Supercomputing, machine learning, satellite data and field assessments make it possible to map billions of individual trees in West Africa. Martin Brandt, Author provided
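
To make the crown-area threshold concrete, the minimal Python sketch below is offered as an illustration, not the authors' actual pipeline. It assumes a binary canopy mask has already been predicted by a segmentation model at an assumed ground resolution of 0.5 m, then labels connected canopy pixels as candidate crowns and keeps those at or above the 3 m² cutoff mentioned above; the detection model itself is not shown. The resolution, function names and toy data are assumptions made for the example.

```python
# Illustrative sketch only, not the study's pipeline: assumes a binary canopy
# mask already predicted by a segmentation model at 0.5 m ground resolution.
import numpy as np
from scipy import ndimage

PIXEL_SIZE_M = 0.5                 # assumed metres per pixel
PIXEL_AREA_M2 = PIXEL_SIZE_M ** 2  # 0.25 m^2 per pixel
MIN_CROWN_AREA_M2 = 3.0            # crown-area threshold cited in the study

def count_crowns(canopy_mask: np.ndarray) -> int:
    """Label connected canopy pixels as candidate crowns; keep those >= 3 m^2."""
    labels, _ = ndimage.label(canopy_mask)
    sizes = np.bincount(labels.ravel())[1:]   # pixels per crown (index 0 is background)
    areas_m2 = sizes * PIXEL_AREA_M2
    return int(np.sum(areas_m2 >= MIN_CROWN_AREA_M2))

# Toy mask with two blobs: a 16-pixel crown (4 m^2, counted) and a
# 4-pixel blob (1 m^2, discarded as below the threshold).
mask = np.zeros((10, 10), dtype=bool)
mask[1:5, 1:5] = True
mask[7:9, 7:9] = True
print(count_crowns(mask))  # -> 1
```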

How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals


Paper by Eric Wu et al: “Medical artificial-intelligence (AI) algorithms are being increasingly proposed for the assessment and care of patients. Although the academic community has started to develop reporting guidelines for AI clinical trials, there are no established best practices for evaluating commercially available algorithms to ensure their reliability and safety. The path to safe and robust clinical AI requires that important regulatory questions be addressed. Are medical devices able to demonstrate performance that can be generalized to the entire intended population? Are commonly faced shortcomings of AI (overfitting to training data, vulnerability to data shifts, and bias against underrepresented patient subgroups) adequately quantified and addressed?

In the USA, the US Food and Drug Administration (FDA) is responsible for approving commercially marketed medical AI devices. The FDA releases publicly available information on approved devices in the form of a summary document that generally contains information about the device description, indications for use, and performance data of the device’s evaluation study. The FDA has recently called for improvement of test-data quality, improvement of trust and transparency with users, monitoring of algorithmic performance and bias on the intended population, and testing with clinicians in the loop. To understand the extent to which these concerns are addressed in practice, we have created an annotated database of FDA-approved medical AI devices and systematically analyzed how these devices were evaluated before approval. Additionally, we have conducted a case study of pneumothorax-triage devices and found that evaluating deep-learning models at a single site alone, which is often done, can mask weaknesses in the models and lead to worse performance across sites.

Fig. 1: Breakdown of 130 FDA-approved medical AI devices by body area.

Devices are categorized by risk level (square, high risk; circle, low risk). Blue indicates that a multi-site evaluation was reported; otherwise, symbols are gray. Red outline indicates a prospective study (key, right margin). Numbers in key indicate the number of devices with each characteristic….(More)”.
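
As a rough illustration of the kind of tally such an annotated database enables, the hedged Python sketch below counts how many devices report a multi-site evaluation or a prospective study. The column names and toy rows are assumptions made for the example, not the study's actual data or code.

```python
# Hedged sketch, not the authors' analysis: hypothetical rows standing in for
# an annotated database of FDA-approved medical AI devices.
import pandas as pd

devices = pd.DataFrame(
    {
        "device": ["Device A", "Device B", "Device C", "Device D"],
        "num_eval_sites": [1, 3, 1, 2],           # sites in the reported evaluation
        "prospective_study": [False, False, True, False],
    }
)

summary = pd.DataFrame(
    {
        "multi_site_eval": devices["num_eval_sites"] > 1,
        "prospective": devices["prospective_study"],
    }
).agg(["sum", "mean"])  # counts and shares of devices with each property

print(summary)
```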

The Norms of Algorithmic Credit Scoring


Paper by Nikita Aggarwal: “This article examines the growth of algorithmic credit scoring and its implications for the regulation of consumer credit markets in the UK. It constructs a frame of analysis for the regulation of algorithmic credit scoring, bound by the core norms underpinning UK consumer credit and data protection regulation: allocative efficiency, distributional fairness and consumer privacy (as autonomy). Examining the normative trade-offs that arise within this frame, the article argues that existing data protection and consumer credit frameworks do not achieve an appropriate normative balance in the regulation of algorithmic credit scoring. In particular, the growing reliance on consumers’ personal data by lenders due to algorithmic credit scoring, coupled with the ineffectiveness of existing data protection remedies, has created a data protection gap in consumer credit markets that presents a significant threat to consumer privacy and autonomy. The article makes recommendations for filling this gap through institutional and substantive regulatory reforms….(More)”.

The (Im)possibility of Fairness: Different Value Systems Require Different Mechanisms For Fair Decision Making


Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian at Communications of the ACM: “Automated decision-making systems (often machine learning-based) now commonly determine criminal sentences, hiring choices, and loan applications. This widespread deployment is concerning, since these systems have the potential to discriminate against people based on their demographic characteristics. Current sentencing risk assessments are racially biased, and job advertisements discriminate on gender. These concerns have led to an explosive growth in fairness-aware machine learning, a field that aims to enable algorithmic systems that are fair by design.

To design fair systems, we must agree precisely on what it means to be fair. One such definition is individual fairness: individuals who are similar (with respect to some task) should be treated similarly (with respect to that task). Simultaneously, a different definition states that demographic groups should, on the whole, receive similar decisions. This group fairness definition is inspired by civil rights law in the U.S. and U.K. Other definitions state that fair systems should err evenly across demographic groups. Many of these definitions have been incorporated into machine learning pipelines.

In this article, we introduce a framework for understanding these different definitions of fairness and how they relate to each other. Crucially, our framework shows these definitions and their implementations correspond to different axiomatic beliefs about the world. We present two such worldviews and will show they are fundamentally incompatible. First, one can believe the observation processes that generate data for machine learning are structurally biased. This belief provides a justification for seeking non-discrimination. When one believes that demographic groups are, on the whole, fundamentally similar, group fairness mechanisms successfully guarantee the top-level goal of non-discrimination: similar groups receiving similar treatment. Alternatively, one can assume the observed data generally reflects the true underlying reality about differences between people. These worldviews are in conflict; a single algorithm cannot satisfy either definition of fairness under both worldviews. Thus, researchers and practitioners ought to be intentional and explicit about worldviews and value assumptions: the systems they design will always encode some belief about the world….(More)”.
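
For a concrete, deliberately simplified sense of how such definitions become checks, the Python sketch below computes per-group positive-decision rates (the demographic-parity view of group fairness) and per-group error rates (the equal-error view) on invented toy data. Individual fairness would additionally require a task-specific similarity metric to compare the treatment of similar individuals, which the sketch does not attempt. All names and numbers are assumptions for illustration, not from the article.

```python
# Toy illustration of two group-level fairness checks named in the text.
import numpy as np

group = np.array(["a", "a", "a", "b", "b", "b"])   # demographic group per person
y_true = np.array([1, 0, 1, 1, 0, 0])              # true outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1])              # model decisions

for g in np.unique(group):
    mask = group == g
    positive_rate = y_pred[mask].mean()                    # demographic-parity view
    error_rate = (y_pred[mask] != y_true[mask]).mean()     # equal-error-rates view
    print(f"group {g}: positive rate={positive_rate:.2f}, error rate={error_rate:.2f}")
```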

Who Is Making Sure the A.I. Machines Aren’t Racist?


Cade Metz at the New York Times: “Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google….(More)”.

Intellectual Property and Artificial Intelligence


A literature review by the Joint Research Center: “Artificial intelligence has entered into the sphere of creativity and ingenuity. Recent headlines refer to paintings produced by machines, music performed or composed by algorithms or drugs discovered by computer programs. This paper discusses the possible implications of the development and adoption of this new technology in the intellectual property framework and presents the opinions expressed by practitioners and legal scholars in recent publications. The literature review, although not intended to be exhaustive, reveals a series of questions that call for further reflection. These concern the protection of artificial intelligence by intellectual property, the use of data to feed algorithms, the protection of the results generated by intelligent machines as well as the relationship between ethical requirements of transparency and explainability and the interests of rights holders….(More)”.

Machine Learning Shows Social Media Greatly Affects COVID-19 Beliefs


Jessica Kent at HealthITAnalytics: “Using machine learning, researchers found that people’s biases about COVID-19 and its treatments are exacerbated when they read tweets from other users, a study published in JMIR showed.

The analysis also revealed that scientific events, like scientific publications, and non-scientific events, like speeches from politicians, equally influence health belief trends on social media.

The rapid spread of COVID-19 has resulted in an explosion of accurate and inaccurate information related to the pandemic – mainly across social media platforms, researchers noted.

“In the pandemic, social media has contributed to much of the information and misinformation and bias of the public’s attitude toward the disease, treatment and policy,” said corresponding study author Yuan Luo, chief Artificial Intelligence officer at the Institute for Augmented Intelligence in Medicine at Northwestern University Feinberg School of Medicine.

“Our study helps people to realize and re-think the personal decisions that they make when facing the pandemic. The study sends an ‘alert’ to the audience that the information they encounter daily might be right or wrong, and guides them to pick the information endorsed by solid scientific evidence. We also wanted to provide useful insight for scientists or healthcare providers, so that they can more effectively broadcast their voice to targeted audiences.”…(More)”.