Understanding Bias in Facial Recognition Technologies


Paper by David Leslie: “Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities. Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and... (More >)

High-Stakes AI Decisions Need to Be Automatically Audited


Oren Etzioni and Michael Li in Wired: “…To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. Those hypothetical cases can be either synthetic or real—allowing automated, instantaneous, fine-grained interrogation of the model. It’s a straightforward way to monitor AI systems for signs of bias or brittleness: What happens if we change the gender of a defendant? What happens if the loan applicants reside in a historically minority neighborhood? Auditable AI has several advantages over explainable AI. Having a neutral third-party investigate these questions is a far better check on... (More >)
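The counterfactual probing Etzioni and Li describe — querying a black-box model with hypothetical cases that differ only in a sensitive attribute — can be sketched in a few lines. The model below is a made-up placeholder, not any real system from the article; the point is the audit loop, which needs only query access:

```python
# Illustrative sketch of "auditable AI": probe a black-box model with
# counterfactual queries that flip one attribute and compare the outputs.
# risk_model is a hypothetical stand-in, not a real scoring system.

def risk_model(case: dict) -> float:
    """Hypothetical black-box scorer (placeholder logic for illustration)."""
    score = 0.3
    if case.get("prior_offenses", 0) > 2:
        score += 0.4
    if case.get("gender") == "male":  # an unwanted dependency the audit should surface
        score += 0.1
    return min(score, 1.0)

def counterfactual_audit(model, case: dict, attribute: str, values) -> dict:
    """Query the model on copies of `case` that differ only in `attribute`."""
    results = {}
    for v in values:
        probe = dict(case, **{attribute: v})  # copy the case, swap one field
        results[v] = model(probe)
    return results

case = {"gender": "male", "prior_offenses": 1}
scores = counterfactual_audit(risk_model, case, "gender", ["male", "female"])
disparity = abs(scores["male"] - scores["female"])
```

Because the audit only needs to call the model, a neutral third party could run it externally — which is the advantage over explainability approaches that require access to the model's internals.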

When Do We Trust AI’s Recommendations More Than People’s?


Chiara Longoni and Luca Cian in Harvard Business Review: “More and more companies are leveraging technological advances in machine learning, natural language processing, and other forms of artificial intelligence to provide relevant and instant recommendations to consumers. From Amazon to Netflix to REX Real Estate, firms are using AI recommenders to enhance the customer experience. AI recommenders are also increasingly used in the public sector to guide people to essential services. For example, the New York City Department of Social Services uses AI to give citizens recommendations on disability benefits, food assistance, and health insurance. However, simply offering AI... (More >)


UK passport photo checker shows bias against dark-skinned women


Maryam Ahmed at BBC News: “Women with darker skin are more than twice as likely to be told their photos fail UK passport rules when they submit them online than lighter-skinned men, according to a BBC investigation. One black student said she was wrongly told her mouth looked open each time she uploaded five different photos to the government website. This shows how “systemic racism” can spread, Elaine Owusu said. The Home Office said the tool helped users get their passports more quickly. “The indicative check [helps] our customers to submit a photo that is right the first time,”... (More >)
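The BBC's “more than twice as likely” finding is a failure-rate ratio between demographic groups. A minimal sketch of that check, with invented counts standing in for the BBC's actual data:

```python
# Sketch of the disparity measure behind "more than twice as likely":
# the ratio of photo-rejection rates between two groups.
# Counts below are hypothetical, for illustration only.

def failure_rate(failed: int, total: int) -> float:
    """Fraction of submitted photos flagged as failing the rules."""
    return failed / total

# hypothetical submission outcomes per group
rate_darker_skinned_women = failure_rate(failed=44, total=200)  # 22%
rate_lighter_skinned_men = failure_rate(failed=18, total=200)   # 9%

rate_ratio = rate_darker_skinned_women / rate_lighter_skinned_men
# a ratio above 2.0 corresponds to "more than twice as likely"
```

The same rate-ratio comparison, run across skin-tone and gender groups, is a standard first test for disparate impact in an automated checker.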

Lessons learned from AI ethics principles for future actions


Paper by Merve Hickok: “As the use of artificial intelligence (AI) systems became significantly more prevalent in recent years, the concerns on how these systems collect, use and process big data also increased. To address these concerns and advocate for ethical and responsible development and implementation of AI, non-governmental organizations (NGOs), research centers, private companies, and governmental agencies published more than 100 AI ethics principles and guidelines. This first wave was followed by a series of suggested frameworks, tools, and checklists that attempt a technical fix to issues brought up in the high-level principles. Principles are important to create... (More >)

Blockchain Chicken Farm: And Other Stories of Tech in China’s Countryside


Book by Xiaowei R. Wang: “In Blockchain Chicken Farm, the technologist and writer Xiaowei Wang explores the political and social entanglements of technology in rural China. Their discoveries force them to challenge the standard idea that rural culture and people are backward, conservative, and intolerant. Instead, they find that rural China has not only adapted to rapid globalization but has actually innovated the technology we all use today. From pork farmers using AI to produce the perfect pig, to disruptive luxury counterfeits and the political intersections of e-commerce villages, Wang unravels the ties between globalization, technology, agriculture, and... (More >)

AI Localism


Today, The GovLab is excited to launch a new platform which seeks to monitor, analyze and guide how AI is being governed in cities around the world: AI Localism. AI Localism refers to the actions taken by local decision-makers to address the use of AI within a city or community. AI Localism has often emerged because of gaps left by incomplete state, national or global governance frameworks. “AI Localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies... (More >)

Amsterdam and Helsinki launch algorithm registries to bring transparency to public deployments of AI


Khari Johnson at Venture Beat: “Amsterdam and Helsinki today launched AI registries to detail how each city government uses algorithms to deliver services, some of the first major cities in the world to do so. An AI Register for each city was introduced in beta today as part of the Next Generation Internet Policy Summit, organized in part by the European Commission and the city of Amsterdam. The Amsterdam registry currently features a handful of algorithms, but it will be extended to include all algorithms following the collection of feedback at the virtual conference to lay out a European... (More >)

Metrics at Work: Journalism and the Contested Meaning of Algorithms


Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders. Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among... (More >)

Why Modeling the Spread of COVID-19 Is So Damn Hard


Matthew Hutson at IEEE Spectrum: “…Researchers say they’ve learned a lot of lessons modeling this pandemic, lessons that will carry over to the next. The first set of lessons is all about data. Garbage in, garbage out, they say. Jarad Niemi, an associate professor of statistics at Iowa State University who helps run the forecast hub used by the CDC, says it’s not clear what we should be predicting. Infections, deaths, and hospitalization numbers each have problems, which affect their usefulness not only as inputs for the model but also as outputs. It’s hard to know the true number... (More >)