Automating Society Report 2020


Bertelsmann Stiftung: “When launching the first edition of this report, we decided to call it “Automating Society”, as ADM systems in Europe were mostly new, experimental, and unmapped – and, above all, the exception rather than the norm.

This situation has changed rapidly, as clearly shown by the more than 100 use cases of automated decision-making systems in 16 European countries compiled by a research network for the 2020 edition of the Automating Society report by Bertelsmann Stiftung and AlgorithmWatch. The report shows that even though algorithmic systems are increasingly being used by public administrations and private companies, there is still a lack of transparency, oversight, and competence.

The stubborn opacity surrounding the ever-increasing use of ADM systems has made it all the more urgent that we continue to increase our efforts. Therefore, we have added four countries (Estonia, Greece, Portugal, and Switzerland) to the 12 we already analyzed in the previous edition of this report, bringing the total to 16 countries. While far from exhaustive, this allows us to provide a broader picture of the ADM scenario in Europe. Considering the impact these systems may have on everyday life, and how profoundly they challenge our intuitions – if not our norms and rules – about the relationship between democratic governance and automation, we believe this is an essential endeavor….(More)”.

Algorithm Tips


About: “Algorithm Tips is here to help you start investigating algorithmic decision-making power in society.

This site offers a database of leads that you can search and filter. It’s a curated set of algorithms being used across the US government at the federal, state, and local levels. You can subscribe to alerts for when new algorithms matching your interests are found. For details on our curation methodology, see here.

We also provide resources such as example investigations, methodological tips, and guidelines for public records requests related to algorithms.

Finally, we blog about some of the more interesting examples of algorithms we’ve uncovered in our research….(More)”.

Understanding Bias in Facial Recognition Technologies


Paper by David Leslie: “Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities.

Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring.

In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments. And, I explain the ways in which the use of biased FDRTs can lead to distributional and recognitional injustices. I also describe how certain complacent attitudes of innovators and users toward redressing these harms raise serious concerns about expanding future adoption. The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies….(More)”.

High-Stakes AI Decisions Need to Be Automatically Audited


Oren Etzioni and Michael Li in Wired: “…To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. Those hypothetical cases can be either synthetic or real—allowing automated, instantaneous, fine-grained interrogation of the model. It’s a straightforward way to monitor AI systems for signs of bias or brittleness: What happens if we change the gender of a defendant? What happens if the loan applicants reside in a historically minority neighborhood?
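
A counterfactual audit of this kind is straightforward to sketch. The snippet below is a minimal illustration under stated assumptions: the model, feature names, and decision threshold are hypothetical toys, and a real audit would query a deployed system’s interface rather than a local function.

```python
# A minimal counterfactual audit: query a black-box model with paired
# hypothetical cases that differ only in one sensitive attribute, and
# measure how often the decision changes. Everything here is illustrative.

from typing import Callable, Dict, List

Case = Dict[str, object]

def counterfactual_flip_rate(
    predict: Callable[[Case], int],  # black-box model: case -> decision (0/1)
    cases: List[Case],
    attribute: str,
    value_a: object,
    value_b: object,
) -> float:
    """Fraction of cases whose decision changes when `attribute` is
    switched from value_a to value_b, all other features held equal."""
    flips = 0
    for case in cases:
        case_a = {**case, attribute: value_a}
        case_b = {**case, attribute: value_b}
        if predict(case_a) != predict(case_b):
            flips += 1
    return flips / len(cases)

# A toy risk model that (improperly) keys on gender.
def toy_model(case: Case) -> int:
    score = case["prior_offenses"] * 2 + (3 if case["gender"] == "male" else 0)
    return int(score > 4)  # 1 = flagged as high risk

cases = [{"prior_offenses": n, "gender": "male"} for n in range(1, 4)]
rate = counterfactual_flip_rate(toy_model, cases, "gender", "male", "female")
print(f"Decisions that flip when gender is changed: {rate:.0%}")
```

Because the audit only needs query access, the auditor never sees model internals, which is exactly why, as the authors note below, it sidesteps trade-secret objections.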

Auditable AI has several advantages over explainable AI. First, having a neutral third party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. Second, it means the producers of the software do not have to expose the trade secrets of their proprietary systems and data sets. Thus, AI audits will likely face less resistance.

Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations. Say Netflix recommends The Twilight Zone because I watched Stranger Things. Will it also recommend other science fiction horror shows? Does it recommend The Twilight Zone to everyone who’s watched Stranger Things?

Early examples of auditable AI are already having a positive impact. The ACLU recently revealed that Amazon’s auditable facial-recognition algorithms were nearly twice as likely to misidentify people of color. There is growing evidence that public audits can improve model accuracy for under-represented groups.

In the future, we can envision a robust ecosystem of auditing systems that provide insights into AI. We can even imagine “AI guardians” that build external models of AI systems based on audits. Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces.

Auditable AI is not a panacea. If an AI system is performing a cancer diagnostic, the patient will still want an accurate and understandable explanation, not just an audit. Such explanations are the subject of ongoing research and will hopefully be ready for commercial use in the near future. But in the meantime, auditable AI can increase transparency and combat bias….(More)”.

When Do We Trust AI’s Recommendations More Than People’s?


Chiara Longoni and Luca Cian in Harvard Business Review: “More and more companies are leveraging technological advances in machine learning, natural language processing, and other forms of artificial intelligence to provide relevant and instant recommendations to consumers. From Amazon to Netflix to REX Real Estate, firms are using AI recommenders to enhance the customer experience. AI recommenders are also increasingly used in the public sector to guide people to essential services. For example, the New York City Department of Social Services uses AI to give citizens recommendations on disability benefits, food assistance, and health insurance.

However, simply offering AI assistance won’t necessarily lead to more successful transactions. In fact, there are cases when AI’s suggestions and recommendations are helpful and cases when they might be detrimental. When do consumers trust the word of a machine, and when do they resist it? Our research suggests that the key factor is whether consumers are focused on the functional and practical aspects of a product (its utilitarian value) or focused on the experiential and sensory aspects of a product (its hedonic value).

In an article in the Journal of Marketing — based on data from over 3,000 people who took part in 10 experiments — we provide evidence for what we call the word-of-machine effect: the circumstances in which people prefer AI recommenders to human ones.

The word-of-machine effect.

The word-of-machine effect stems from a widespread belief that AI systems are more competent than humans at dispensing advice when utilitarian qualities are desired, and less competent when hedonic qualities are desired. Importantly, the word-of-machine effect rests on a lay belief that does not necessarily correspond to reality: humans are not necessarily less competent than AI at assessing and evaluating utilitarian attributes, and, conversely, AI is not necessarily less competent than humans at assessing and evaluating hedonic attributes….(More)”.

UK passport photo checker shows bias against dark-skinned women


Maryam Ahmed at BBC News: “Women with darker skin are more than twice as likely as lighter-skinned men to be told their photos fail UK passport rules when they submit them online, according to a BBC investigation.

One black student said she was wrongly told her mouth looked open each time she uploaded five different photos to the government website.

This shows how “systemic racism” can spread, Elaine Owusu said.

The Home Office said the tool helped users get their passports more quickly.

“The indicative check [helps] our customers to submit a photo that is right the first time,” said a spokeswoman.

“Over nine million people have used this service and our systems are improving.

“We will continue to develop and evaluate our systems with the objective of making applying for a passport as simple as possible for all.”

Skin colour

The passport application website uses an automated check to detect poor quality photos which do not meet Home Office rules. These include having a neutral expression, a closed mouth and looking straight at the camera.

BBC research found this check to be less accurate on darker-skinned people.

More than 1,000 photographs of politicians from across the world were fed into the online checker.

The results indicated:

  • Dark-skinned women are told their photos are poor quality 22% of the time, while the figure for light-skinned women is 14%
  • Dark-skinned men are told their photos are poor quality 15% of the time, while the figure for light-skinned men is 9%

Photos of women with the darkest skin were four times more likely to be graded poor quality than photos of women with the lightest skin….(More)”.
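
The aggregation behind findings like these is simple to reproduce. Below is a minimal sketch, assuming one record per submitted photo labeled with skin tone, gender, and the checker’s verdict; the field names and sample rows are hypothetical stand-ins, not the BBC’s actual dataset.

```python
# Tallying automated-checker outcomes by demographic group, in the spirit
# of the BBC's methodology. Rows are hypothetical: one per photo submitted
# to the checker, labeled with skin tone, gender, and the verdict returned.

from collections import defaultdict

results = [
    {"skin": "dark", "gender": "female", "verdict": "poor_quality"},
    {"skin": "dark", "gender": "female", "verdict": "accepted"},
    {"skin": "light", "gender": "male", "verdict": "accepted"},
    # ... one row per photo fed into the checker
]

totals = defaultdict(int)
flagged = defaultdict(int)
for row in results:
    group = (row["skin"], row["gender"])
    totals[group] += 1
    if row["verdict"] == "poor_quality":
        flagged[group] += 1

for group in sorted(totals):
    rate = flagged[group] / totals[group]
    print(f"{group[0]}-skinned {group[1]}: {rate:.0%} flagged poor quality")
```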

Lessons learned from AI ethics principles for future actions


Paper by Merve Hickok: “As the use of artificial intelligence (AI) systems became significantly more prevalent in recent years, concerns about how these systems collect, use, and process big data also increased. To address these concerns and advocate for ethical and responsible development and implementation of AI, non-governmental organizations (NGOs), research centers, private companies, and governmental agencies published more than 100 AI ethics principles and guidelines. This first wave was followed by a series of suggested frameworks, tools, and checklists that attempt a technical fix to issues brought up in the high-level principles. Principles are important to create a common understanding of priorities and are the groundwork for future governance and opportunities for innovation. However, a review of these documents based on their country of origin and funding entities shows that private companies from the US-West axis dominate the conversation. Several cases have surfaced in the meantime that demonstrate biased algorithms and their impact on individuals and society. The field of AI ethics is urgently calling for tangible action to move from high-level abstractions and conceptual arguments towards applying ethics in practice and creating accountability mechanisms. However, lessons must be learned from the shortcomings of AI ethics principles to ensure that future investments, collaborations, standards, codes, or legislation reflect the diversity of voices and incorporate the experiences of those who are already impacted by biased algorithms….(More)”.

Blockchain Chicken Farm: And Other Stories of Tech in China’s Countryside


Book by Xiaowei R. Wang: “In Blockchain Chicken Farm, the technologist and writer Xiaowei Wang explores the political and social entanglements of technology in rural China. Their discoveries force them to challenge the standard idea that rural culture and people are backward, conservative, and intolerant. Instead, they find that rural China has not only adapted to rapid globalization but has actually innovated the technology we all use today. From pork farmers using AI to produce the perfect pig, to disruptive luxury counterfeits and the political intersections of e-commerce villages, Wang unravels the ties between globalization, technology, agriculture, and commerce in unprecedented fashion. Accompanied by humorous “Sinofuturist” recipes that frame meals as they transform under new technology, Blockchain Chicken Farm is an original and probing look into innovation, connectivity, and collaboration in the digitized rural world.

FSG Originals × Logic dissects the way technology functions in everyday lives. The titans of Silicon Valley, for all their utopian imaginings, never really had our best interests at heart: recent threats to democracy, truth, privacy, and safety, as a result of tech’s reckless pursuit of progress, have shown as much. We present an alternate story, one that delights in capturing technology in all its contradictions and innovation, across borders and socioeconomic divisions, from history through the future, beyond platitudes and PR hype, and past doom and gloom. Our collaboration features four brief but provocative forays into the tech industry’s many worlds, and aspires to incite fresh conversations about technology focused on nuanced and accessible explorations of the emerging tools that reorganize and redefine life today….(More)”.

AI Localism


Today, The GovLab is excited to launch a new platform that seeks to monitor, analyze, and guide how AI is being governed in cities around the world: AI Localism.

AI Localism refers to the actions taken by local decision-makers to address the use of AI within a city or community. AI Localism has often emerged because of gaps left by incomplete state, national, or global governance frameworks.

“AI Localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies for local conditions, policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and accountability.”

The initial AI Localism projects include:

The Ethics and Practice of AI Localism at a Time of Covid-19 and Beyond – In collaboration with the TUM School of Governance and the University of Melbourne, The GovLab will conduct a comparative review of current practices worldwide to gain a better understanding of successful AI Localism in the context of COVID-19, so as to inform and guide local leaders and city officials toward best practices.

Responsible AI at the Local Level – Together with the NYU Center for Responsible AI, The GovLab will seek to develop an interactive repository and a set of training modules on Responsible AI approaches at the local level.

Join us as we seek to understand and develop new forms of governance to guide local leaders toward responsible AI implementation, or share any effort you are working on to establish responsible AI at the local level, by visiting: http://ailocalism.org

Amsterdam and Helsinki launch algorithm registries to bring transparency to public deployments of AI


Khari Johnson at Venture Beat: “Amsterdam and Helsinki today launched AI registries to detail how each city government uses algorithms to deliver services, some of the first major cities in the world to do so. An AI Register for each city was introduced in beta today as part of the Next Generation Internet Policy Summit, organized in part by the European Commission and the city of Amsterdam. The Amsterdam registry currently features a handful of algorithms, but it will be extended to include all algorithms following the collection of feedback at the virtual conference to lay out a European vision of the future of the internet, according to a city official.

Each algorithm cited in the registry lists the datasets used to train a model, a description of how the algorithm is used, how humans utilize its predictions, and how the algorithm was assessed for potential bias or risks. The registry also gives citizens a way to provide feedback on the algorithms their local government uses, along with the name, city department, and contact information of the person responsible for the deployment of a particular algorithm. A complete algorithmic registry can empower citizens, giving them a way to evaluate, examine, or question governments’ applications of AI.
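
The fields the article describes amount to a simple schema. Here is a hypothetical sketch of what one registry entry might look like; the field names and example values are illustrative, not the actual Amsterdam or Helsinki data model.

```python
# A hypothetical schema for one algorithm-registry entry, mirroring the
# fields described in the article. Names and values are illustrative only.

from dataclasses import dataclass
from typing import List

@dataclass
class RegistryEntry:
    name: str                     # algorithm / service name
    department: str               # owning city department
    description: str              # how the algorithm is used
    training_datasets: List[str]  # datasets used to train the model
    human_oversight: str          # how humans use the prediction
    risk_assessment: str          # how bias and risks were assessed
    contact_person: str           # official responsible for deployment
    feedback_url: str             # where residents can comment

entry = RegistryEntry(
    name="Parking permit triage",
    department="Mobility",
    description="Prioritizes permit applications for manual review.",
    training_datasets=["historical permit decisions, 2015-2019"],
    human_oversight="Caseworkers make the final decision on every application.",
    risk_assessment="Checked for disparate error rates across districts.",
    contact_person="data.officer@example.city",
    feedback_url="https://example.city/registry/parking-permits/feedback",
)
```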

In a previous development in the U.S., New York City created an automated decision systems task force in 2017 to document and assess city use of algorithms. At the time, it was the first city in the U.S. to do so. However, following the release of a report last year, commissioners on the task force complained about a lack of transparency and an inability to access information about algorithms used by city government agencies….

In a statement accompanying the announcement, Helsinki City Data project manager Pasi Rautio said the registry is also aimed at increasing public trust in the kinds of artificial intelligence “with the greatest possible openness.”…(More)”.