The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations


Paper by Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi: “In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment….(More)”.
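The carbon footprint the authors assess is typically estimated with a simple energy-times-intensity calculation. The sketch below is a hypothetical back-of-the-envelope version, assuming illustrative figures for GPU power draw, data-centre overhead (PUE), and grid carbon intensity; none of these numbers come from the paper.

```python
def training_co2_kg(gpu_power_watts, num_gpus, hours, pue=1.5,
                    grid_kg_co2_per_kwh=0.4):
    """Rough estimate of CO2 emissions (kg) for one training run.

    pue: Power Usage Effectiveness, the data-centre overhead multiplier.
    grid_kg_co2_per_kwh: carbon intensity of the local electricity grid.
    Both defaults are illustrative assumptions, not measured values.
    """
    energy_kwh = gpu_power_watts * num_gpus * hours / 1000 * pue
    return energy_kwh * grid_kg_co2_per_kwh

# e.g. 8 GPUs drawing 300 W each, trained for 240 hours:
print(round(training_co2_kg(300, 8, 240), 1))  # 345.6 kg CO2
```

The same arithmetic underlies the paper's point about evidence for trade-offs: the estimate is highly sensitive to the grid-intensity term, which varies by an order of magnitude between regions.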

Facial Recognition Technology: Responsible Use Principles and the Legislative Landscape


Report by James Lewis: “…Criticism of FRT is too often based on a misunderstanding about the technology. A good starting point to change this is to clarify the distinction between FRT and facial characterization. FRT compares two images and asks how likely it is that one image is the same as the other. The best FRT is more accurate than humans at matching images. In contrast, “facial analysis” or “facial characterization” examines an image and then tries to characterize it by gender, age, or race. Much of the critique of FRT is actually about facial characterization. Claims about FRT inaccuracy are either out of date or mistakenly talking about facial characterization. Of course, accuracy depends on how FRT is used. When picture quality is poor, accuracy is lower but often still better than the average human. A 2021 report by the National Institute of Standards and Technology (NIST) found that accuracy had improved dramatically and that more accurate systems were less likely to make errors based on race or gender. This confusion hampers the development of effective rules.
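The report's distinction can be made concrete: face recognition is a one-to-one comparison of two images (in practice, a similarity score between two face embeddings checked against a threshold), whereas facial characterization is a classifier over a single image. A minimal sketch, with made-up embeddings and a hypothetical threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    # Face *recognition* (verification): compare two images' embeddings
    # and ask how likely they depict the same person.
    return cosine_similarity(emb1, emb2) >= threshold

# Facial *characterization* would instead be a classifier over ONE image,
#   attributes = characterize(embedding)  ->  {"age": ..., "gender": ...}
# which is the distinct, less reliable task the report says draws most criticism.

print(same_person([0.1, 0.9, 0.2], [0.12, 0.88, 0.25]))  # True
```

Nothing here reflects a real FRT system's internals; it only illustrates why accuracy claims about one task do not transfer to the other.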

Some want to ban FRT, but it will continue to be developed and deployed because of the convenience for consumers and the benefits to public safety. Continued progress in sensors and artificial intelligence (AI) will increase availability and performance of the technologies used for facial recognition. Stopping the development of FRT would require stopping the development of AI, and that is neither possible nor in the national interest. This report provides a list of guardrails to guide the development of law and regulation for civilian use….(More)”.

Addressing bias in big data and AI for health care: A call for open science


Paper by Natalia Norori et al: “Bias in the medical field can be dissected along three directions: data-driven, algorithmic, and human. Bias in AI algorithms for health care can have catastrophic consequences by propagating deeply rooted societal biases. This can result in misdiagnosing certain patient groups, like gender and ethnic minorities, that have a history of being underrepresented in existing datasets, further amplifying inequalities.

Open science practices can assist in moving toward fairness in AI for health care. These include (1) participant-centered development of AI algorithms and participatory science; (2) responsible data sharing and inclusive data standards to support interoperability; and (3) code sharing, including sharing of AI algorithms that can synthesize underrepresented data to address bias. Future research needs to focus on developing standards for AI in health care that enable transparency and data sharing, while at the same time preserving patients’ privacy….(More)”.
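Point (3), synthesizing underrepresented data, spans techniques from simple resampling to generative models. As a stand-in illustration only (the paper does not prescribe this method), here is a naive rebalancing step that resamples every group up to the size of the largest:

```python
import random

def oversample_minority(records, group_key):
    """Naively rebalance a dataset by resampling (with replacement)
    each group up to the size of the largest group. A crude stand-in
    for real synthesis methods such as SMOTE or generative models."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# A toy dataset where group "B" is underrepresented 8:2
data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
balanced = oversample_minority(data, "group")
print(len(balanced))  # 16: both groups brought up to 8 records
```

Naive duplication can itself encode bias (it repeats the same few minority records), which is exactly why the authors call for sharing the synthesis code so such choices are open to scrutiny.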


Beyond the individual: governing AI’s societal harm


Paper by Nathalie A. Smuha: “In this paper, I distinguish three types of harm that can arise in the context of artificial intelligence (AI): individual harm, collective harm and societal harm. Societal harm is often overlooked, yet not reducible to the two former types of harm. Moreover, mechanisms to tackle individual and collective harm raised by AI are not always suitable to counter societal harm. As a result, policymakers’ gap analysis of the current legal framework for AI not only risks being incomplete, but proposals for new legislation to bridge these gaps may also inadequately protect societal interests that are adversely impacted by AI. By conceptualising AI’s societal harm, I argue that a shift in perspective is needed beyond the individual, towards a regulatory approach of AI that addresses its effects on society at large. Drawing on a legal domain specifically aimed at protecting a societal interest—environmental law—I identify three ‘societal’ mechanisms that EU policymakers should consider in the context of AI. These concern (1) public oversight mechanisms to increase accountability, including mandatory impact assessments with the opportunity to provide societal feedback; (2) public monitoring mechanisms to ensure independent information gathering and dissemination about AI’s societal impact; and (3) the introduction of procedural rights with a societal dimension, including a right to access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm. Finally, I consider to what extent the European Commission’s new proposal for an AI regulation takes these mechanisms into consideration, before offering concluding remarks….(More)”.

Americans Need a Bill of Rights for an AI-Powered World


Article by Eric Lander and Alondra Nelson: “…Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

Of course, enumerating the rights is just a first step. What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this “bill of rights,” or adopting new laws and regulations to fill gaps. States might choose to adopt similar practices….(More)”.

Algorithms Are Not Enough: Creating General Artificial Intelligence


Book by Herbert L. Roitblat: “Since the inception of artificial intelligence, we have been warned about the imminent arrival of computational systems that can replicate human thought processes. Before we know it, computers will become so intelligent that humans will be lucky to be kept as pets. And yet, although artificial intelligence has become increasingly sophisticated—with such achievements as driverless cars and humanless chess-playing—computer science has not yet created general artificial intelligence. In Algorithms Are Not Enough, Herbert Roitblat explains how artificial general intelligence may be possible and why a robopocalypse is neither imminent nor likely.

Existing artificial intelligence, Roitblat shows, has been limited to solving path problems, in which the entire problem consists of navigating a path of choices—finding specific solutions to well-structured problems. Human problem-solving, on the other hand, includes problems that consist of ill-structured situations, including the design of problem-solving paths themselves. These are insight problems, and insight is an essential part of intelligence that has not been addressed by computer science. Roitblat draws on cognitive science, including psychology, philosophy, and history, to identify the essential features of intelligence needed to achieve general artificial intelligence.

Roitblat describes current computational approaches to intelligence, including the Turing Test, machine learning, and neural networks. He identifies building blocks of natural intelligence, including perception, analogy, ambiguity, common sense, and creativity. General intelligence can create new representations to solve new problems, but current computational intelligence cannot. The human brain, like the computer, uses algorithms; but general intelligence, he argues, is more than algorithmic processes…(More)”.

Statistics and Data Science for Good


Introduction to Special Issue of Chance by Caitlin Augustin, Matt Brems, and Davina P. Durgana: “One lesson that our team has taken from the past 18 months is that no individual, no team, and no organization can be successful on their own. We’ve been grateful and humbled to witness incredible collaboration—taking on forms of resource sharing, knowledge exchange, and reimagined outcomes. Some advances, like breakthrough medicine, have been widely publicized. Other advances have received less fanfare. All of these advances are in the public interest and demonstrate how collaborations can be done “for good.”

In reading this issue, we hope that you realize the power of diverse multidisciplinary collaboration; you recognize the positive social impact that statisticians, data scientists, and technologists can have; and you learn that this isn’t limited to companies with billions of dollars or teams of dozens of people. You, our reader, can get involved in similar positive social change.

This special edition of CHANCE focuses on using data and statistics for the public good and on highlighting collaborations and innovations that have been sparked by partnerships between pro bono institutions and social impact partners. We recognize that the “pro bono” or “for good” field is vast, and we welcome all actors working in the public interest into the big tent.

Through the focus of this edition, we hope to demonstrate how new or novel collaborations might spark meaningful and lasting positive change in communities, sectors, and industries. Anchored by work led through Statistics Without Borders and DataKind, this edition features reporting on projects that touch on many of the United Nations Sustainable Development Goals (SDGs).

Pro bono volunteerism is one way of democratizing access to high-skill, high-expense services that are often unattainable for social impact organizations. Statistics Without Borders (founded in 2008), DataKind (founded in 2012), and numerous other volunteer organizations began with this model in mind: If there was an organizing or galvanizing body that could coordinate the myriad requests for statistical, data science, machine learning, or data engineering help, there would be a ready supply of talented individuals who would want to volunteer to see those projects through. Or, put another way, “If you build it, they will come.”

Doing pro bono work requires more than positive intent. Plenty of well-meaning organizations and individuals charitably donate their time, their energy, their expertise, only to have an unintended adverse impact. To do work for good, ethics is an important part of the projects. In this issue, you’ll notice the writers’ attention to independent review boards (IRBs), respecting client and data privacy, discussing ethical considerations of methods used, and so on.

While no single publication can fully capture the great work of pro bono organizations working in “data for good,” we hope readers will be inspired to contribute to open source projects, solve problems in a new way, or even volunteer themselves for a future cohort of projects. We’re thrilled that this special edition represents programs, partners, and volunteers from around the world. You will learn about work that is truly representative of the SDGs, such as international health organizations’ work in Uganda, political justice organizations in Kenya, and conservationists in Madagascar, to name a few.

Several articles describe projects that are contextualized with the SDGs. While achieving many goals is interconnected, such as the intertwining of economic attainment and reducing poverty, we hope that calling out key themes here will whet your appetite for exploration.

  • Multiple articles focused on tackling aspects of SDG 3: Ensuring healthy lives and promoting well-being for people at all ages.
  • An article tackling SDG 8: Promote sustained, inclusive, and sustainable economic growth; full and productive employment; and decent work for all.
  • Several articles touching on SDG 9: Build resilient infrastructure; promote inclusive and sustainable industrialization; and foster innovation; one is a reflection on building and sustaining free and open source software as a public good.
  • A handful of articles highlighting the needs for capacity-building and systems-strengthening aligned to SDG 16: Promote peaceful and inclusive societies for sustainable development; provide access to justice for all; and build effective, accountable, and inclusive institutions at all levels.
  • An article about migration along the southern borders of the United States addressing multiple issues related to poverty (SDG 1), opportunity (SDG 10), and peace and justice (SDG 16)….(More)”

Gathering Strength, Gathering Storms


The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report: “In the five years since we released the first AI100 report, much has been written about the state of artificial intelligence and its influences on society. Nonetheless, AI100 remains unique in its combination of two key features. First, it is written by a Study Panel of core multi-disciplinary researchers in the field—experts who create artificial intelligence algorithms or study their influence on society as their main professional activity, and who have been doing so for many years. The authors are firmly rooted within the field of AI and provide an “insider’s” perspective. Second, it is a longitudinal study, with reports by such Study Panels planned once every five years, for at least one hundred years.

This report, the second in that planned series of studies, is being released five years after the first report. Published on September 1, 2016, the first report was covered widely in the popular press and is known to have influenced discussions on governmental advisory boards and workshops in multiple countries. It has also been used in a variety of artificial intelligence curricula.

In preparation for the second Study Panel, the Standing Committee commissioned two study-workshops held in 2019. These workshops were a response to feedback on the first AI100 report. Through them, the Standing Committee aimed to engage a broader, multidisciplinary community of scholars and stakeholders in its next study. The goal of the workshops was to draw on the expertise of computer scientists and engineers, scholars in the social sciences and humanities (including anthropologists, economists, historians, media scholars, philosophers, psychologists, and sociologists), law and public policy experts, and representatives from business management as well as the private and public sectors…(More)”.

Old Cracks, New Tech


Paper for the Oxford Commission on AI & Good Governance: “Artificial intelligence (AI) systems are increasingly touted as solutions to many complex social and political issues around the world, particularly in developing countries like Kenya. Yet AI has also exacerbated cleavages and divisions in society, in part because those who build the technology often do not have a strong understanding of the politics of the societies in which the technology is deployed.

In her new report ‘Old Cracks, New Tech: Artificial Intelligence, Human Rights, and Good Governance in Highly Fragmented and Socially Stratified Societies: The Case of Kenya’, writer and activist Nanjala Nyabola explores the Kenyan government’s policy on AI and blockchain technology and evaluates its success. Commissioned by the Oxford Commission on AI & Good Governance (OxCAIGG), the report highlights lessons learnt from the Kenyan experience and sets out four key recommendations to help government officials and policymakers ensure good governance in AI in public and private contexts in Kenya.

The report recommends:

  • Conducting a deeper and more wide-ranging analysis of the political implications of existing and proposed applications of AI in Kenya, including comparisons with other countries where similar technology has been deployed.
  • Carrying out a comprehensive review of ongoing implementations of AI in both private and public contexts in Kenya in order to identify existing legal and policy gaps.
  • Conducting deeper legal research into developing meaningful legislation to govern the development and deployment of AI technology in Kenya. In particular, a framework for the implementation of the Data Protection Act (2019) vis-à-vis AI and blockchain technology is urgently required.
  • Arranging training for local political actors and researchers on the risks and opportunities for AI to empower them to independently evaluate proposed interventions with due attention to the local context…(More)”.

Harms of AI


Paper by Daron Acemoglu: “This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy’s most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI’s promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment – to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient….(More)”.