Technology rules? The advent of new technologies in the justice system


Report by The Justice and Home Affairs Committee (House of Lords): “In recent years, and without many of us realising it, Artificial Intelligence has begun to permeate every aspect of our personal and professional lives. We live in a world of big data; more and more decisions in society are being taken by machines using algorithms built from that data, be it in healthcare, education, business, or consumerism. Our Committee has limited its investigation to only one area–how these advanced technologies are used in our justice system. Algorithms are being used to improve crime detection, aid the security categorisation of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline.

We began our work on the understanding that Artificial Intelligence (AI), used correctly, has the potential to improve people’s lives through greater efficiency, improved productivity, and in finding solutions to often complex problems. But while acknowledging the many benefits, we were taken aback by the proliferation of Artificial Intelligence tools potentially being used without proper oversight, particularly by police forces across the country. Facial recognition may be the best known of these new technologies, but in reality there are many more already in use, with more being developed all the time.

When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties. At what point could someone be imprisoned on the basis of technology that cannot be explained? Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening. Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.

Public bodies and all 43 police forces are free to individually commission whatever tools they like or buy them from companies eager to get in on the burgeoning AI market. And the market itself is worryingly opaque. We were told that public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality–despite the fact that many of these systems will be harvesting, and relying on, data from the general public.

This is particularly concerning in light of evidence we heard of dubious selling practices and claims made by vendors as to their products’ effectiveness, which are often untested and unproven…(More)”.

Artificial Intelligence for Children


WEF Toolkit: “Children and youth are surrounded by AI in many of the products they use in their daily lives, from social media to education technology, video games, smart toys and speakers. AI determines the videos children watch online, their curriculum as they learn, and the way they play and interact with others.

This toolkit, produced by a diverse team of youth, technologists, academics and business leaders, is designed to help companies develop trustworthy artificial intelligence (AI) for children and youth and to help parents, guardians, children and youth responsibly buy and safely use AI products.

AI can be used to educate and empower children and youth and have a positive impact on society. But children and youth can be especially vulnerable to the potential risks posed by AI, including bias, cybersecurity and lack of accessibility. AI must be designed inclusively to respect the rights of the child user. Child-centric design can protect children and youth from the potential risks posed by the technology.

AI technology must be created so that it is both innovative and responsible. Responsible AI is safe, ethical, transparent, fair, accessible and inclusive. Designing responsible and trusted AI is good for consumers, businesses and society. Parents, guardians and adults all have the responsibility to carefully select ethically designed AI products and help children use them safely.

What is at stake? AI will determine the future of play, childhood, education and societies. Children and youth represent the future, so everything must be done to support them to use AI responsibly and address the challenges of the future.

This toolkit aims to help responsibly design, consume and use AI. It is designed to help companies, designers, parents, guardians, children and youth make sure that AI respects the rights of children and has a positive impact on their lives…(More)”.

Humans in the Loop


Paper by Rebecca Crootof, Margot E. Kaminski and W. Nicholson Price II: “From lethal drones to cancer diagnostics, complex and artificially intelligent algorithms are increasingly integrated into decisionmaking that affects human lives, raising challenging questions about the proper allocation of decisional authority between humans and machines. Regulators commonly respond to these concerns by putting a “human in the loop”: using law to require or encourage including an individual within an algorithmic decisionmaking process.

Drawing on our distinctive areas of expertise with algorithmic systems, we take a bird’s eye view to make three generalizable contributions to the discourse. First, contrary to the popular narrative, the law is already profoundly (and problematically) involved in governing algorithmic systems. Law may explicitly require or prohibit human involvement and law may indirectly encourage or discourage human involvement, all without regard to what we know about the strengths and weaknesses of human and algorithmic decisionmakers and the particular quirks of hybrid human-machine systems. Second, we identify “the MABA-MABA trap,” wherein regulators are tempted to address a panoply of concerns by “slapping a human in it” based on presumptions about what humans and algorithms are respectively better at doing, often without realizing that the new hybrid system needs its own distinct regulatory interventions. Instead, we suggest that regulators should focus on what they want the human to do—what role the human is meant to play—and design regulations to allow humans to play these roles successfully. Third, borrowing concepts from systems engineering and existing law regulating railroads, nuclear reactors, and medical devices, we highlight lessons for regulating humans in the loop as well as alternative means of regulating human-machine systems going forward….(More)”.

The ethical imperative to identify and address data and intelligence asymmetries


Article by Stefaan Verhulst in AI & Society: “The insight that knowledge, resulting from having access to (privileged) information or data, is power is more relevant today than ever before. The data age has redefined the very notion of knowledge and information (as well as power), leading to a greater reliance on dispersed and decentralized datasets as well as to new forms of innovation and learning, such as artificial intelligence (AI) and machine learning (ML). As Thomas Piketty (among others) has shown, we live in an increasingly stratified world, and our society’s socio-economic asymmetries are often grafted onto data and information asymmetries. As we have documented elsewhere, data access is fundamentally linked to economic opportunity, improved governance, better science and citizen empowerment. The need to address data and information asymmetries—and their resulting inequalities of political and economic power—is therefore emerging as among the most urgent ethical challenges of our era, yet often not recognized as such.

Even as awareness grows of this imperative, society and policymakers lag in their understanding of the underlying issue. Just what are data asymmetries? How do they emerge, and what form do they take? And how do data asymmetries accelerate information and other asymmetries? What forces and power structures perpetuate or deepen these asymmetries, and vice versa? I argue that it is a mistake to treat this problem as homogenous. In what follows, I suggest the beginning of a taxonomy of asymmetries. Although closely related, each one emerges from a different set of contingencies, and each is likely to require different policy remedies. The focus of this short essay is to start outlining these different types of asymmetries. Further research could deepen and expand the proposed taxonomy as well as help define solutions that are contextually appropriate and fit for purpose….(More)”.

How Native Americans Are Trying to Debug A.I.’s Biases


Alex V. Cipolle in The New York Times: “In September 2021, Native American technology students in high school and college gathered at a conference in Phoenix and were asked to create photo tags — word associations, essentially — for a series of images.

One image showed ceremonial sage in a seashell; another, a black-and-white photograph circa 1884, showed hundreds of Native American children lined up in uniform outside the Carlisle Indian Industrial School, one of the most prominent boarding schools run by the American government during the 19th and 20th centuries.

For the ceremonial sage, the students chose the words “sweetgrass,” “sage,” “sacred,” “medicine,” “protection” and “prayers.” They gave the photo of the boarding school tags with a different tone: “genocide,” “tragedy,” “cultural elimination,” “resiliency” and “Native children.”

The exercise was for the workshop Teaching Heritage to Artificial Intelligence Through Storytelling at the annual conference for the American Indian Science and Engineering Society. The students were creating metadata that could train a photo recognition algorithm to understand the cultural meaning of an image.

The workshop presenters — Chamisa Edmo, a technologist and citizen of the Navajo Nation, who is also Blackfeet and Shoshone-Bannock; Tracy Monteith, a senior Microsoft engineer and member of the Eastern Band of Cherokee Indians; and the journalist Davar Ardalan — then compared these answers with those produced by a major image recognition app.

For the ceremonial sage, the app’s top tag was “plant,” but other tags included “ice cream” and “dessert.” The app tagged the school image with “human,” “crowd,” “audience” and “smile” — the last a particularly odd descriptor, given that few of the children are smiling.

The image recognition app botched its task, Mr. Monteith said, because it didn’t have proper training data. Ms. Edmo explained that tagging results are often “outlandish” and “offensive,” recalling how one app identified a Native American person wearing regalia as a bird. And yet similar image recognition apps have identified with ease a St. Patrick’s Day celebration, Ms. Ardalan noted as an example, because of the abundance of data on the topic….(More)”.
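
A minimal sketch of how such culturally informed tags could be organised as machine-readable training metadata, drawing on the tags reported above; the schema, file names, and Python structure are illustrative assumptions rather than the workshop’s actual format:

```python
# Illustrative sketch: culturally informed tags as training metadata.
# The schema and file names are assumptions, not the workshop's actual format.
training_examples = [
    {
        "image": "ceremonial_sage.jpg",
        "tags": ["sweetgrass", "sage", "sacred", "medicine",
                 "protection", "prayers"],
    },
    {
        "image": "carlisle_school_circa_1884.jpg",
        "tags": ["genocide", "tragedy", "cultural elimination",
                 "resiliency", "Native children"],
    },
]

# Metadata like this could supplement generic labels ("plant", "crowd") when
# fine-tuning an image recognition model, supplying the cultural context that
# the off-the-shelf app described above lacked.
for example in training_examples:
    print(f'{example["image"]}: {", ".join(example["tags"])}')
```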

The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean


OECD Report: “Governments can use artificial intelligence (AI) to design better policies and make better and more targeted decisions, enhance communication and engagement with citizens, and improve the speed and quality of public services. The Latin America and the Caribbean (LAC) region is seeking to leverage the immense potential of AI to promote the digital transformation of the public sector. The OECD, in collaboration with CAF, Development Bank of Latin America, prepared this report to help national governments in the LAC region understand the current regional baseline of activities and capacities for AI in the public sector; to identify specific approaches and actions they can take to enhance their ability to use this emerging technology for efficient, effective and responsive governments; and to collaborate across borders in pursuit of a regional vision for AI in the public sector. This report incorporates a stocktaking of each country’s strategies and commitments around AI in the public sector, including their alignment with the OECD AI Principles. It also includes an analysis of efforts to build key governance capacities and put in place critical enablers for AI in the public sector. It concludes with a series of recommendations for governments in the LAC region….(More)”.

The Global Politics of Artificial Intelligence


Book edited by Maurizio Tinnirello: “Technologies such as artificial intelligence have led to significant advances in science and medicine, but have also facilitated new forms of repression, policing and surveillance. AI policy has become without doubt a significant issue of global politics.

The Global Politics of Artificial Intelligence tackles some of the issues linked to AI development and use, contributing to a better understanding of the global politics of AI. This is an area where enormous work still needs to be done, and the contributors to this volume provide significant input into this field of study, to policymakers, academics, and society at large. Each of the chapters in this volume works as a freestanding contribution, and provides an accessible account of a particular issue linked to AI from a political perspective. Contributors to the volume come from many different areas of expertise and many parts of the world, and range from emergent to established authors…(More)”.

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence


NIST Report: “As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI)….(More)”

The 2022 AI Index: Industrialization of AI and Mounting Ethical Concerns


Blog by Daniel Zhang, Jack Clark, and Ray Perrault: “The field of artificial intelligence (AI) is at a critical crossroad, according to the 2022 AI Index, an annual study of AI impact and progress at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) led by an independent and interdisciplinary group of experts from across academia and industry: 2021 saw the globalization and industrialization of AI intensify, while the ethical and regulatory issues of these technologies multiplied….

The new report shows several key advances in AI in 2021: 

  • Private investment in AI has more than doubled since 2020, in part due to larger funding rounds. In 2020, there were four funding rounds worth $500 million or more; in 2021, there were 15.
  • AI has become more affordable and higher performing. The cost to train an image classification system has decreased by 63.6% and training times have improved by 94.4% since 2018. The median price of robotic arms has also decreased fourfold in the past six years.
  • The United States and China have dominated cross-country research collaborations on AI as the total number of AI publications continues to grow. The two countries had the greatest number of cross-country collaborations in AI papers in the last decade, producing 2.7 times more joint papers in 2021 than between the United Kingdom and China—the second highest on the list.
  • The number of AI patents filed has soared—more than 30 times higher than in 2015, showing a compound annual growth rate of 76.9%.
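
As a quick arithmetic check on the patent bullet above, a 76.9% compound annual growth rate does compound to roughly a 30-fold increase; the sketch below assumes the growth spans the six years from 2015 to 2021, which is an inference from the dates given:

```python
# CAGR sanity check for the patent figures above.
# Assumption: "more than 30 times higher than in 2015" spans 2015-2021, i.e. 6 years.
years = 6
growth_multiple = 30.0

cagr = growth_multiple ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~76.3%, close to the reported 76.9%

# Conversely, compounding the reported rate over the same window:
print(f"Implied multiple: {1.769 ** years:.1f}x")  # ~30.7x
```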

At the same time, the report also highlights growing research and concerns on ethical issues as well as regulatory interests associated with AI in 2021: 

  • Large language and multimodal language-vision models are excelling on technical benchmarks, but just as their performance increases, so do their ethical issues, like the generation of toxic text.
  • Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in publications on related topics over the past four years.
  • Industry has increased its involvement in AI ethics, with 71% more publications affiliated with industry at top conferences from 2018 to 2021. 
  • The United States has seen a sharp increase in the number of proposed bills related to AI; lawmakers proposed 130 laws in 2021, compared with just 1 in 2015. However, the number of bills passed remains low, with only 2% ultimately becoming law in the past six years.
  • Globally, AI regulation continues to expand. Since 2015, 18 times more bills related to AI were passed into law in legislatures of 25 countries around the world, and mentions of AI in legislative proceedings also grew 7.7 times in the past six years….(More)”

The need to represent: How AI can help counter gender disparity in the news


Blog by Sabrina Argoub: “For the first in our new series of JournalismAI Community Workshops, we decided to look at three recent projects that demonstrate how AI can help raise awareness on issues with misrepresentation of women in the news. 

The Political Misogynistic Discourse Monitor is a web application and API that journalists from AzMina, La Nación, CLIP, and DataCrítica developed to uncover hate speech against women on Twitter.

When Women Make Headlines is an analysis by The Pudding of the (mis)representation of women in news headlines, and how it has changed over time. 

In the AIJO project, journalists from eight different organisations worked together to identify and mitigate biases in gender representation in news. 

We invited Bàrbara Libório of AzMina, Sahiti Sarva of The Pudding, and Delfina Arambillet of La Nación to walk us through their projects and share insights on what they learned and how they taught the machine to recognise what constitutes bias and hate speech….(More)”.
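
For readers curious what “teaching the machine” can look like in practice, below is a minimal, generic sketch of a baseline text classifier for flagging misogynistic discourse. It is an illustrative assumption, not the actual pipeline behind the Political Misogynistic Discourse Monitor, which was built on much larger annotated datasets of tweets:

```python
# Generic baseline sketch: flagging misogynistic discourse in short texts.
# Illustrative only; not the actual model behind the Political Misogynistic
# Discourse Monitor described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = misogynistic, 0 = not.
texts = [
    "women should stay out of politics",
    "great debate between the candidates tonight",
    "she only got elected because of quotas",
    "the committee published its annual report today",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression; real projects annotate
# thousands of examples and increasingly use transformer-based models.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# With so little data the prediction is unreliable; this only shows the workflow.
print(model.predict(["why is a woman moderating this debate"]))
```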