How Native Americans Are Trying to Debug A.I.’s Biases


Alex V. Cipolle in The New York Times: “In September 2021, Native American technology students in high school and college gathered at a conference in Phoenix and were asked to create photo tags — word associations, essentially — for a series of images.

One image showed ceremonial sage in a seashell; another, a black-and-white photograph circa 1884, showed hundreds of Native American children lined up in uniform outside the Carlisle Indian Industrial School, one of the most prominent boarding schools run by the American government during the 19th and 20th centuries.

For the ceremonial sage, the students chose the words “sweetgrass,” “sage,” “sacred,” “medicine,” “protection” and “prayers.” They gave the photo of the boarding school tags with a different tone: “genocide,” “tragedy,” “cultural elimination,” “resiliency” and “Native children.”

The exercise was for the workshop Teaching Heritage to Artificial Intelligence Through Storytelling at the annual conference for the American Indian Science and Engineering Society. The students were creating metadata that could train a photo recognition algorithm to understand the cultural meaning of an image.

The workshop presenters — Chamisa Edmo, a technologist and citizen of the Navajo Nation, who is also Blackfeet and Shoshone-Bannock; Tracy Monteith, a senior Microsoft engineer and member of the Eastern Band of Cherokee Indians; and the journalist Davar Ardalan — then compared these answers with those produced by a major image recognition app.

For the ceremonial sage, the app’s top tag was “plant,” but other tags included “ice cream” and “dessert.” The app tagged the school image with “human,” “crowd,” “audience” and “smile” — the last a particularly odd descriptor, given that few of the children are smiling.

The image recognition app botched its task, Mr. Monteith said, because it didn’t have proper training data. Ms. Edmo explained that tagging results are often “outlandish” and “offensive,” recalling how one app identified a Native American person wearing regalia as a bird. And yet similar image recognition apps have identified with ease a St. Patrick’s Day celebration, Ms. Ardalan noted as an example, because of the abundance of data on the topic….(More)”.
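The metadata exercise the excerpt describes can be made concrete with a minimal sketch (the file paths and encoding scheme below are our own illustration, not the workshop's actual pipeline): each image is paired with the community-authored tags quoted above, and those tags become multi-label targets of the kind a photo-recognition model could be fine-tuned on.

```python
# Minimal sketch of culturally informed image metadata as training targets.
# File paths are hypothetical; the tags are the ones the students chose.

from dataclasses import dataclass


@dataclass
class TaggedImage:
    path: str
    tags: list


# Community-authored annotations from the workshop (paths illustrative).
annotations = [
    TaggedImage("images/ceremonial_sage.jpg",
                ["sweetgrass", "sage", "sacred", "medicine",
                 "protection", "prayers"]),
    TaggedImage("images/carlisle_school_1884.jpg",
                ["genocide", "tragedy", "cultural elimination",
                 "resiliency", "Native children"]),
]


def build_label_vocab(records):
    """Collect distinct tags into a stable index, as a multi-label
    classifier would need for its output layer."""
    vocab = sorted({tag for rec in records for tag in rec.tags})
    return {tag: i for i, tag in enumerate(vocab)}


def encode(record, vocab):
    """One-hot multi-label target vector for one annotated image."""
    target = [0] * len(vocab)
    for tag in record.tags:
        target[vocab[tag]] = 1
    return target


vocab = build_label_vocab(annotations)
targets = [encode(rec, vocab) for rec in annotations]
```

The point of the exercise is visible even in this toy form: the label space itself ("sacred," "genocide") encodes cultural knowledge that a generic tagger trained on web-scraped data simply does not have.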

The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean


OECD Report: “Governments can use artificial intelligence (AI) to design better policies and make better and more targeted decisions, enhance communication and engagement with citizens, and improve the speed and quality of public services. The Latin America and the Caribbean (LAC) region is seeking to leverage the immense potential of AI to promote the digital transformation of the public sector. The OECD, in collaboration with CAF, Development Bank of Latin America, prepared this report to help national governments in the LAC region understand the current regional baseline of activities and capacities for AI in the public sector; to identify specific approaches and actions they can take to enhance their ability to use this emerging technology for efficient, effective and responsive governments; and to collaborate across borders in pursuit of a regional vision for AI in the public sector. This report incorporates a stocktaking of each country’s strategies and commitments around AI in the public sector, including their alignment with the OECD AI Principles. It also includes an analysis of efforts to build key governance capacities and put in place critical enablers for AI in the public sector. It concludes with a series of recommendations for governments in the LAC region….(More)”.

The Global Politics of Artificial Intelligence


Book edited by Maurizio Tinnirello: “Technologies such as artificial intelligence have led to significant advances in science and medicine, but have also facilitated new forms of repression, policing and surveillance. AI policy has become without doubt a significant issue of global politics.

The Global Politics of Artificial Intelligence tackles some of the issues linked to AI development and use, contributing to a better understanding of the global politics of AI. This is an area where enormous work still needs to be done, and the contributors to this volume provide significant input into this field of study, to policymakers, academics, and society at large. Each of the chapters in this volume works as a freestanding contribution, and provides an accessible account of a particular issue linked to AI from a political perspective. Contributors to the volume come from many different areas of expertise and parts of the world, and range from emergent to established authors…(More)”.

Towards a Standard for Identifying and Managing Bias in Artificial Intelligence


NIST Report: “As individuals and communities interact in and with an environment that is increasingly virtual they are often vulnerable to the commodification of their digital exhaust. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people’s lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI)….(More)”

The 2022 AI Index: Industrialization of AI and Mounting Ethical Concerns


Blog by Daniel Zhang, Jack Clark, and Ray Perrault: “The field of artificial intelligence (AI) is at a critical crossroads, according to the 2022 AI Index, an annual study of AI impact and progress at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) led by an independent and interdisciplinary group of experts from across academia and industry: 2021 saw the globalization and industrialization of AI intensify, while the ethical and regulatory issues of these technologies multiplied….

The new report shows several key advances in AI in 2021: 

  • Private investment in AI has more than doubled since 2020, in part due to larger funding rounds. In 2020, there were four funding rounds worth $500 million or more; in 2021, there were 15.
  • AI has become more affordable and higher performing. The cost to train an image classification model has decreased by 63.6% and training times have improved by 94.4% since 2018. The median price of robotic arms has also decreased fourfold in the past six years.
  • The United States and China have dominated cross-country research collaborations on AI as the total number of AI publications continues to grow. The two countries had the greatest number of cross-country collaborations in AI papers in the last decade, producing 2.7 times more joint papers in 2021 than between the United Kingdom and China—the second highest on the list.
  • The number of AI patents filed has soared—more than 30 times higher than in 2015, showing a compound annual growth rate of 76.9%.
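The patent figure in the last bullet can be sanity-checked with a line of arithmetic (our own check, not part of the report): a 76.9% compound annual growth rate over the six years from 2015 to 2021 implies a total growth multiple of (1 + 0.769)^6 ≈ 30.6, which matches "more than 30 times higher."

```python
# Sanity check (our arithmetic, not from the AI Index itself): does a 76.9%
# compound annual growth rate over 2015-2021 reproduce the "more than 30
# times higher" patent-filing figure?

def growth_multiple(cagr: float, years: int) -> float:
    """Total growth factor implied by compounding a rate over `years` years."""
    return (1 + cagr) ** years

multiple = growth_multiple(0.769, 2021 - 2015)
# multiple ≈ 30.6, consistent with "more than 30 times higher" than 2015
```

The two numbers in the bullet are therefore mutually consistent rather than independent claims.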

At the same time, the report also highlights growing research and concerns on ethical issues as well as regulatory interests associated with AI in 2021: 

  • Large language and multimodal language-vision models are excelling on technical benchmarks, but just as their performance increases, so do their ethical issues, like the generation of toxic text.
  • Research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in publications on related topics over the past four years.
  • Industry has increased its involvement in AI ethics, with 71% more publications affiliated with industry at top conferences from 2018 to 2021. 
  • The United States has seen a sharp increase in the number of proposed bills related to AI; lawmakers proposed 130 laws in 2021, compared with just 1 in 2015. However, the number of bills passed remains low, with only 2% ultimately becoming law in the past six years.
  • Globally, AI regulation continues to expand. Since 2015, 18 times more bills related to AI were passed into law in legislatures of 25 countries around the world and mentions of AI in legislative proceedings also grew 7.7 times in the past six years….(More)”

The need to represent: How AI can help counter gender disparity in the news


Blog by Sabrina Argoub: “For the first in our new series of JournalismAI Community Workshops, we decided to look at three recent projects that demonstrate how AI can help raise awareness on issues with misrepresentation of women in the news. 

The Political Misogynistic Discourse Monitor is a web application and API that journalists from AzMina, La Nación, CLIP, and DataCrítica developed to uncover hate speech against women on Twitter.

When Women Make Headlines is an analysis by The Pudding of the (mis)representation of women in news headlines, and how it has changed over time. 

In the AIJO project, journalists from eight different organisations worked together to identify and mitigate biases in gender representation in news. 

We invited Bàrbara Libório of AzMina, Sahiti Sarva of The Pudding, and Delfina Arambillet of La Nación, to walk us through their projects and share insights on what they learned and how they taught the machine to recognise what constitutes bias and hate speech….(More)”.

Mapping of exposed water tanks and swimming pools based on aerial images can help control dengue


Press Release by Fundação de Amparo à Pesquisa do Estado de São Paulo: “Brazilian researchers have developed a computer program that locates swimming pools and rooftop water tanks in aerial photographs with the aid of artificial intelligence to help identify areas vulnerable to infestation by Aedes aegypti, the mosquito that transmits dengue, zika, chikungunya and yellow fever. 

The innovation, which can also be used as a public policy tool for dynamic socio-economic mapping of urban areas, resulted from research and development work by professionals at the University of São Paulo (USP), the Federal University of Minas Gerais (UFMG) and the São Paulo State Department of Health’s Endemic Control Superintendence (SUCEN), as part of a project supported by FAPESP. An article about it is published in the journal PLOS ONE.

“Our work initially consisted of creating a model based on aerial images and computer science to detect water tanks and pools, and to use them as a socio-economic indicator,” said Francisco Chiaravalloti Neto, last author of the article. He is a professor in the Epidemiology Department at USP’s School of Public Health (FSP), with a first degree in engineering. 

As the article notes, previous research had already shown that dengue tends to be most prevalent in deprived urban areas, so that prevention of dengue, zika and other diseases transmitted by the mosquito can be made considerably more effective by use of a relatively dynamic socio-economic mapping model, especially given the long interval between population censuses in Brazil (ten years or more). 

“This is one of the first steps in a broader project,” Chiaravalloti Neto said. Among other aims, he and his team plan to detect other elements of the images and quantify real infestation rates in specific areas so as to be able to refine and validate the model. 

“We want to create a flow chart that can be used in different cities to pinpoint at-risk areas without the need for inspectors to call on houses, buildings and other breeding sites, as this is time-consuming and a waste of the taxpayer’s money,” he added…(More)”.

The New Fire: War, Peace, and Democracy in the Age of AI


Book by Ben Buchanan and Andrew Imbrie: “Artificial intelligence is revolutionizing the modern world. It is ubiquitous—in our homes and offices, in the present and most certainly in the future. Today, we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. As AI policy experts Ben Buchanan and Andrew Imbrie show in The New Fire, few choices are more urgent—or more fascinating—than how we harness this technology and for what purpose.

The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny…(More)”.

Artificial Intelligence and Democratic Values


Introduction to Special Issue of the Turkish Policy Quarterly (TPQ): “…Artificial intelligence has fast become part of everyday life, and we wanted to understand how it fits into democratic values. It was important for us to ask how we can ensure that AI and digital policies will promote broad social inclusion, which relies on fundamental rights, democratic institutions, and the rule of law. There seems to be no shortage of principles and concepts that support the fair and responsible use of AI systems, yet it’s difficult to determine how to efficiently manage or deploy those systems today.

Merve Hickok and Marc Rotenberg, two TPQ Advisory Board members, wrote the lead article for this issue. In a world where data means power, vast amounts of data are collected every day by both private companies and government agencies, which then use this data to fuel complex systems for automated decision-making now broadly described as “Artificial Intelligence.” Activities managed with these AI systems range from policing to military, to access to public services and resources such as benefits, education, and employment. The expected benefits from having national talent, capacity, and capabilities to develop and deploy these systems also drive a lot of national governments to prioritize AI and digital policies. A crucial question for policymakers is how to reap the benefits while reducing the negative impacts of these sociotechnical systems on society.

Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO, has written an article entitled “Ethics of AI and Democracy: UNESCO’s Recommendation’s Insights”. In it, she examines the ways in which artificial intelligence is affecting democratic processes, democratic values, and the political and social behavior of citizens. The article notes that the use of AI, and its potential abuse by some government entities as well as by big private corporations, poses a serious threat to rights-based democratic institutions, processes, and norms. UNESCO announced a remarkable consensus agreement among 193 member states creating the first-ever global standard on the ethics of AI, one that could serve as a blueprint for national AI legislation and a global AI ethics benchmark.

Paul Nemitz, Principal Adviser on Justice Policy at the EU Commission, addresses the question of what drives democracy. In his view, technology has undoubtedly shaped democracy. However, technology as well as legal rules regarding technology have shaped and have been shaped by democracy. This is why he says it is essential to develop and use technology according to democratic principles. He writes that there are libertarians today who purposefully design technological systems in such a way that challenges democratic control. It is, however, clear that there is enough counterpower and engagement, at least in Europe, to keep democracy functioning, as long as we work together to create rules that are sensible for democracy’s future and confirm democracy’s supremacy over technology and business interests.

Research associate at the University of Oxford and Professor at European University Cyprus, Paul Timmers, writes about how AI challenges sovereignty and democracy. AI is wonderful. AI is scary. AI is the path to paradise. AI is the path to hell.  What do we make of these contradictory images when, in a world of AI, we seek to both protect sovereignty and respect democratic values? Neither a techno-utopian nor a dystopian view of AI is helpful. The direction of travel must be global guidance and national or regional AI law that stresses end-to-end accountability and AI transparency, while recognizing practical and fundamental limits.

Tania Sourdin, Dean of Newcastle Law School, Australia, asks: what if judges were replaced by AI? She believes that although AI will increasingly be used to support judges when making decisions in most jurisdictions, there will also be attempts over the next decade to totally replace judges with AI. Increasingly, we are seeing a shift towards Judge AI and, to a certain extent, towards AI that supports judges, which raises concerns related to democratic values, structures, and what judicial independence means. The reason for this may be partly due to the systems used being set up to support a legal interpretation that fails to allow for a nuanced and contextual view of the law.

Pam Dixon, Executive Director of the World Privacy Forum, writes about biometric technologies. She says that biometric technologies encompass many types, or modalities, of biometrics today, such as face recognition, iris recognition, fingerprint recognition, and DNA recognition, both separately and in combination. A growing body of law and regulations seeks to mitigate the risks associated with biometric technologies as they are increasingly understood as a technology of concern based on scientific data.

We invite you to learn more about how our world is changing. As a way to honor this milestone, we have assembled a list of articles from around the world from some of the best experts in their field. This issue would not be possible without the assistance of many people. In addition to the contributing authors, there were many other individuals who contributed greatly. TPQ’s team is proud to present you with this edition….(More)” (Full list)

The Staggering Ecological Impacts of Computation and the Cloud


Essay by Steven Gonzalez Monserrate: “While in technical parlance the “Cloud” might refer to the pooling of computing resources over a network, in popular culture, “Cloud” has come to signify and encompass the full gamut of infrastructures that make online activity possible, everything from Instagram to Hulu to Google Drive. Like a puffy cumulus drifting across a clear blue sky, refusing to maintain a solid shape or form, the Cloud of the digital is elusive, its inner workings largely mysterious to the wider public, an example of what MIT cybernetician Norbert Wiener once called a “black box.” But just as the clouds above us, however formless or ethereal they may appear to be, are in fact made of matter, the Cloud of the digital is also relentlessly material.

To get at the matter of the Cloud we must unravel the coils of coaxial cables, fiber optic tubes, cellular towers, air conditioners, power distribution units, transformers, water pipes, computer servers, and more. We must attend to its material flows of electricity, water, air, heat, metals, minerals, and rare earth elements that undergird our digital lives. In this way, the Cloud is not only material, but is also an ecological force. As it continues to expand, its environmental impact increases, even as the engineers, technicians, and executives behind its infrastructures strive to balance profitability with sustainability. Nowhere is this dilemma more visible than in the walls of the infrastructures where the content of the Cloud lives: the factory libraries where data is stored and computational power is pooled to keep our cloud applications afloat….

To quell this thermodynamic threat, data centers overwhelmingly rely on air conditioning, a mechanical process that refrigerates the gaseous medium of air, so that it can displace or lift perilous heat away from computers. Today, power-hungry computer room air conditioners (CRACs) or computer room air handlers (CRAHs) are staples of even the most advanced data centers. In North America, most data centers draw power from “dirty” electricity grids, especially in Virginia’s “data center alley,” the site of 70 percent of the world’s internet traffic in 2019. To cool, the Cloud burns carbon, what Jeffrey Moro calls an “elemental irony.” In most data centers today, cooling accounts for greater than 40 percent of electricity usage….(More)”.
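The "greater than 40 percent" cooling figure implies a substantial overhead ratio, which a line of back-of-the-envelope arithmetic makes visible (our illustration, with the simplifying assumption that everything besides cooling is IT load).

```python
# Back-of-the-envelope illustration (our arithmetic, not from the essay):
# if cooling takes a given share of a facility's electricity, the implied
# PUE-style overhead ratio follows directly. Assumes, for simplicity, that
# all non-cooling load is IT load.

def overhead_ratio(cooling_share: float) -> float:
    """Total facility power per unit of non-cooling (IT) power."""
    return 1.0 / (1.0 - cooling_share)


ratio = overhead_ratio(0.40)
# ratio = 1 / 0.6 ≈ 1.67: under this assumption, every kilowatt-hour of
# computing drags roughly two-thirds of a kilowatt-hour of cooling with it
```

Real facilities also spend power on lighting, conversion losses, and other overheads, so the true ratio is higher still; the sketch only shows how heavy a 40 percent cooling share already is.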