AI Procurement in a Box


Toolbox by the World Economic Forum: “AI Procurement in a Box is a practical guide that helps governments rethink the procurement of artificial intelligence (AI) with a focus on innovation, efficiency and ethics. Developing a new approach to the acquisition of emerging technologies such as AI will not only accelerate the adoption of AI in the administration, but also drive the development of ethical standards in AI development and deployment. Innovative procurement approaches have the potential to foster innovation, create competitive markets for AI systems and uphold public trust in the public-sector adoption of AI.

AI has the potential to vastly improve government operations and meet the needs of citizens in new ways, ranging from intelligently automating administrative processes to generating insights for public policy developments and improving public service delivery, for example, through personalized healthcare. Many public institutions are lagging behind in harnessing this powerful technology because of challenges related to data, skills and ethical deployment.

Public procurement can be an important driver of government adoption of AI. This means not only ensuring that AI-driven technologies offering the best value for money are purchased, but also driving the ethical development and deployment of innovative AI systems….(More)”.

Using Algorithms to Address Trade-Offs Inherent in Predicting Recidivism


Paper by Jennifer L. Skeem and Christopher Lowenkamp: “Although risk assessment has increasingly been used as a tool to help reform the criminal justice system, some stakeholders are adamantly opposed to using algorithms. The principal concern is that any benefits achieved by safely reducing rates of incarceration will be offset by costs to racial justice claimed to be inherent in the algorithms themselves. But fairness tradeoffs are inherent to the task of predicting recidivism, whether the prediction is made by an algorithm or human.

Based on a matched sample of 67,784 Black and White federal supervisees assessed with the Post Conviction Risk Assessment (PCRA), we compare how three alternative strategies for “debiasing” algorithms affect these tradeoffs, using arrest for a violent crime as the criterion. These candidate algorithms all strongly predict violent re-offending (AUCs = .71-.72), but vary in their association with race (r = .00-.21) and shift tradeoffs between balance in positive predictive value and false positive rates. Providing algorithms with access to race (rather than omitting race or ‘blinding’ its effects) can maximize calibration and minimize imbalanced error rates. Implications for policymakers with value preferences for efficiency vs. equity are discussed…(More)”.
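
To make the tradeoff concrete, the two error metrics at stake can be computed per group from a classifier’s outputs. Below is a minimal sketch, on synthetic data rather than the PCRA sample; the function name and toy numbers are illustrative only:

```python
import numpy as np

def group_metrics(y_true, y_pred, group):
    """Positive predictive value (PPV) and false positive rate (FPR) per group."""
    out = {}
    for g in np.unique(group):
        m = group == g
        tp = int(np.sum((y_pred == 1) & (y_true == 1) & m))
        fp = int(np.sum((y_pred == 1) & (y_true == 0) & m))
        tn = int(np.sum((y_pred == 0) & (y_true == 0) & m))
        out[g] = {
            "PPV": tp / max(tp + fp, 1),  # of those flagged high-risk, share who re-offended
            "FPR": fp / max(fp + tn, 1),  # of non-re-offenders, share wrongly flagged
        }
    return out

# Synthetic stand-in data -- NOT the PCRA sample from the paper
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)
y_true = rng.binomial(1, 0.3, size=1000)   # 1 = later arrested for a violent crime
y_pred = rng.binomial(1, 0.35, size=1000)  # 1 = classified as high-risk
print(group_metrics(y_true, y_pred, group))
```

A classifier can be recalibrated so that PPV is equal across groups, or thresholded so that FPRs match, but in general not both at once, which is the tradeoff the paper examines.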

Selected Readings on AI for Development


By Dominik Baumann, Jeremy Pesner, Alexandra Shaw, Stefaan Verhulst, Michelle Winowatan, Andrew Young, Andrew J. Zahuranec

As part of an ongoing effort to build a knowledge base for the field of improving governance through technology, The GovLab publishes a series of Selected Readings, which provide an annotated and curated collection of recommended works on themes such as open data, data collaboration, and civic technology. 

In this edition, we explore selected literature on AI and Development. This piece was developed in the context of The GovLab’s collaboration with Agence Française de Développement (AFD) on the use of emerging technology for development. To suggest additional readings on this or any other topic, please email info@thelivinglib.org. All our Selected Readings can be found here.

Context: In recent years, public discourse on artificial intelligence (AI) has focused on its potential for improving the way businesses, governments, and societies make (automated) decisions. Simultaneously, several AI initiatives have raised concerns about human rights, including the possibility of discrimination and privacy breaches. Between these two opposing perspectives is a discussion on how stakeholders can maximize the benefits of AI for society while minimizing the risks that might arise from the use of this technology.

While the majority of AI initiatives today come from the private sector, international development actors increasingly experiment with AI-enabled programs. These initiatives focus on, for example, climate modelling, urban mobility, and disease transmission. These early efforts demonstrate the promise of AI for supporting more efficient, targeted, and impactful development efforts. Yet, the intersection of AI and development remains nascent, and questions remain regarding how this emerging technology can deliver on its promise while mitigating risks to intended beneficiaries.

Readings are listed in alphabetical order.

2030Vision. AI and the Sustainable Development Goals: the State of Play

  • This document for 2030Vision broadly assesses AI research and initiatives against the Sustainable Development Goals (SDGs) to identify gaps and potential that can be further explored or scaled. 
  • It specifically reviews the current applications of AI in two SDG sectors, food/agriculture and healthcare.
  • The paper recommends enhancing multi-sector collaboration among businesses, governments, civil society, academia and others to ensure technology can best address the world’s most pressing challenges.

Andersen, Lindsey. Artificial Intelligence in International Development: Avoiding Ethical Pitfalls. Journal of Public & International Affairs (2019). 

  • Investigating the ethical implications of AI in the international development sector, the author argues that the involvement of many different stakeholders and AI-technology providers results in ethical issues concerning fairness and inclusion, transparency, explainability and accountability, data limitations, and privacy and security.
  • The author recommends the information communication technology for development (ICT4D) community adopt the Principles for Digital Development to ensure the ethical implementation of AI in international development projects.
  • The Principles for Digital Development include: 1) design with the user; 2) understand the ecosystem; 3) design for scale; 4) build for sustainability; 5) be data driven; 6) use open standards, open data, open source, and open innovation; and 7) reuse and improve.

Arun, Chinmayi. AI and the Global South: Designing for Other Worlds in Markus D. Dubber, Frank Pasquale, and Sunit Das (eds.), The Oxford Handbook of Ethics of AI, Oxford University Press, Forthcoming (2019).

  • This chapter interrogates the impact of AI’s application in the Global South and raises concerns about such initiatives.
  • Arun argues AI’s deployment in the Global South may result in discrimination, bias, oppression, exclusion, and bad design. She further argues it can be especially harmful to vulnerable communities in places that do not have strong respect for human rights.
  • The paper concludes by outlining the international human rights laws that can mitigate these risks. It stresses the importance of a human rights-centric, inclusive, empowering context-driven approach in the use of AI in the Global South.

Best, Michael. Artificial Intelligence (AI) for Development Series: Module on AI, Ethics and Society. International Telecommunication Union (2018). 

  • This working paper is intended to help ICT policymakers or regulators consider the ethical challenges that emerge within AI applications.
  • The author identifies a four-pronged framework of analysis (risks, rewards, connections, and key questions to consider) that can guide policymaking in the fields of: 1) livelihood and work; 2) diversity, non-discrimination and freedom from bias; 3) data privacy and minimization; and 4) peace and security.
  • The paper also includes a table of policies and initiatives undertaken by national governments and tech companies around AI, along with the set of values (mentioned above) explicitly considered.

International Development Innovation Alliance (2019). Artificial Intelligence and International Development: An Introduction

  • Results for Development, a nonprofit organization working in the international development sector, developed a report in collaboration with the AI and Development Working Group within the International Development Innovation Alliance (IDIA). The report provides a brief overview of AI and how this technology may impact the international development sector.
  • The report provides examples of AI-powered applications and initiatives that support the SDGs, including eradicating hunger, promoting gender equality, and encouraging climate action.
  • It also provides a collection of supporting resources and case studies for development practitioners interested in using AI.

Paul, Amy, Craig Jolley, and Aubra Anthony. Reflecting the Past, Shaping the Future: Making AI Work for International Development. United States Agency for International Development (2018). 

  • This report outlines the potential of machine learning (ML) and artificial intelligence in supporting development strategy. It also details some of the common risks that can arise from the use of these technologies.
  • The document contains examples of ML and AI applications to support the development sector and recommends good practices in handling such technologies. 
  • It concludes by recommending broad, shared governance, the use of fair and balanced data, and the continued involvement of local populations and development practitioners.

Pincet, Arnaud, Shu Okabe, and Martin Pawelczyk. Linking Aid to the Sustainable Development Goals – a machine learning approach. OECD Development Co-operation Working Papers (2019). 

  • The authors apply ML and semantic analysis to data sourced from the OECD’s Creditor Reporting System to map aid funding to particular SDGs.
  • The researchers find that “Good Health and Well-Being” is the most targeted SDG, which they dub the “SDG darling.”
  • The authors argue that mapping the relationships between reported aid flows and the SDGs can help ensure equitable funding across the different goals; a toy keyword-matching sketch of this text-to-SDG mapping idea appears below.
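
As a purely illustrative gloss on that mapping idea (the keyword profiles and function below are hypothetical; the paper itself applies machine learning and semantic analysis to OECD Creditor Reporting System records):

```python
import re

# Hypothetical keyword profiles for two SDGs -- not the paper's actual features
SDG_KEYWORDS = {
    "SDG 3 (Good Health and Well-Being)": {"health", "hospital", "disease", "vaccine", "clinic"},
    "SDG 4 (Quality Education)": {"school", "education", "teacher", "literacy", "curriculum"},
}

def map_project_to_sdgs(description):
    """Score a free-text aid project description against each SDG keyword profile."""
    tokens = set(re.findall(r"[a-z]+", description.lower()))
    return {sdg: len(tokens & keywords) for sdg, keywords in SDG_KEYWORDS.items()}

print(map_project_to_sdgs("Support for rural clinic construction and vaccine delivery"))
# -> SDG 3 scores 2 ("clinic", "vaccine"); SDG 4 scores 0
```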

Quinn, John, Vanessa Frias-Martinez, and Lakshminarayanan Subramanian. Computational Sustainability and Artificial Intelligence in the Developing World. Association for the Advancement of Artificial Intelligence (2014). 

  • These researchers suggest three different areas—health, food security, and transportation—in which AI applications can uniquely benefit the developing world. They argue the lack of technological infrastructure in these regions makes AI especially useful and valuable, as it can efficiently analyze data and provide solutions.
  • The paper provides examples of applications within the three themes, including disease surveillance, identification of drought and agricultural trends, modeling of commuting patterns, and traffic congestion monitoring.

Smith, Matthew and Sujaya Neupane. Artificial intelligence and human development: toward a research agenda (2018).

  • The authors highlight potential beneficial applications for AI in a development context, including healthcare, agriculture, governance, education, and economic productivity.
  • They also discuss the risks and downsides of AI, which include the “black boxing” of algorithms, bias in decision making, potential for extreme surveillance, undermining democracy, potential for job and tax revenue loss, vulnerability to cybercrime, and unequal wealth gains towards the already-rich.
  • They recommend further research projects on these topics that are interdisciplinary, locally conducted, and designed to support practice and policy.

Tomašev, Nenad, et al. AI for social good: unlocking the opportunity for positive impact. Nature Communications (2020).

  • This paper takes stock of what the authors term the AI for Social Good movement (AI4SG), which “aims to establish interdisciplinary partnerships centred around AI applications towards SDGs.”  
  • Drawing on a multidisciplinary expert seminar on the topic, the authors present 10 recommendations for creating successful AI4SG collaborations: “1) Expectations of what is possible with AI need to be well grounded. 2) There is value in simple solutions. 3) Applications of AI need to be inclusive and accessible, and reviewed at every stage for ethics and human rights compliance. 4) Goals and use cases should be clear and well-defined. 5) Deep, long-term partnerships are required to solve large problems successfully. 6) Planning needs to align incentives, and factor in the limitations of both communities. 7) Establishing and maintaining trust is key to overcoming organisational barriers. 8) Options for reducing the development cost of AI solutions should be explored. 9) Improving data readiness is key. 10) Data must be processed securely, with utmost respect for human rights and privacy.”

United Nations Educational, Scientific and Cultural Organization (UNESCO) (2019). Artificial Intelligence for Sustainable Development: Synthesis Report, Mobile Learning Week 2019

  • In this report, UNESCO assesses the findings from Mobile Learning Week (MLW) 2019. The three main conclusions were: 1) the world is facing a learning crisis; 2) education drives sustainable development; and 3) sustainable development can only be achieved if we harness the potential of AI. 
  • Questions around four major themes dominated the MLW 2019 sessions: 1) how to guarantee inclusive and equitable use of AI in education; 2) how to harness AI to improve learning; 3) how to increase skills development; and 4) how to ensure transparent and auditable use of education data. 
  • To move forward, UNESCO advocates for more international cooperation and stakeholder involvement, creation of education and AI standards, and development of national policies to address educational gaps and risks. 

Vinuesa, Ricardo, et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nature Communications (2020). 

  • This report analyzes how AI can both help meet the demands of some SDGs and inhibit progress toward others. It highlights a critical research gap concerning the extent to which AI impacts sustainable development in the medium and long term. 
  • Through their analysis, Vinuesa and his co-authors argue AI has the potential to positively impact the environment, society, and the economy, but can also inhibit progress in each of these areas.
  • The authors recognize that although AI enables efficiency and productivity, it can also increase inequality and hinder achievement of the 2030 Agenda. They suggest adequate policy and regulation are needed to ensure fast and equitable development of AI technologies that can address the SDGs. 

An Introduction to Ethics in Robotics and AI


Book by Christoph Bartneck, Christoph Lütge, Alan Wagner and Sean Welsh: “This open access book introduces the reader to the foundations of AI and ethics. It discusses issues of trust, responsibility, liability, privacy and risk. It focuses on the interaction between people and the AI systems and Robotics they use. Designed to be accessible for a broad audience, reading this book does not require prerequisite technical, legal or philosophical expertise. Throughout, the authors use examples to illustrate the issues at hand and conclude the book with a discussion on the application areas of AI and Robotics, in particular autonomous vehicles, automatic weapon systems and biased algorithms. A list of questions and further readings is also included for students willing to explore the topic further….(More)”.

Eye-catching advances in some AI fields are not real


Matthew Hutson at Science: “Artificial intelligence (AI) just seems to get smarter and smarter. Each iPhone learns your face, voice, and habits better than the last, and the threats AI poses to privacy and jobs continue to grow. The surge reflects faster chips, more data, and better algorithms. But some of the improvement comes from tweaks rather than the core innovations their inventors claim—and some of the gains may not exist at all, says Davis Blalock, a computer science graduate student at the Massachusetts Institute of Technology (MIT). Blalock and his colleagues compared dozens of approaches to improving neural networks—software architectures that loosely mimic the brain. “Fifty papers in,” he says, “it became clear that it wasn’t obvious what the state of the art even was.”

The researchers evaluated 81 pruning algorithms, programs that make neural networks more efficient by trimming unneeded connections. All claimed superiority in slightly different ways. But they were rarely compared properly—and when the researchers tried to evaluate them side by side, there was no clear evidence of performance improvements over a 10-year period. The result, presented in March at the Machine Learning and Systems conference, surprised Blalock’s Ph.D. adviser, MIT computer scientist John Guttag, who says the uneven comparisons themselves may explain the stagnation. “It’s the old saw, right?” Guttag said. “If you can’t measure something, it’s hard to make it better.”
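
Pruning itself is conceptually simple; the contested part, as the study shows, is how competing methods are compared. As a point of reference, here is a minimal sketch of global magnitude pruning, a common baseline in this literature (illustrative only, not the method of any particular paper in the comparison):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of a weight matrix."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0     # may prune a few extra on ties
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
print(magnitude_prune(W, sparsity=0.5))  # roughly half the entries become zero
```

The claimed improvements Blalock’s team examined are variations on this theme (different scoring rules, schedules, and retraining regimes), which is precisely why uneven evaluation setups can mask whether any of them beats the simple baseline.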

Researchers are waking up to the signs of shaky progress across many subfields of AI. A 2019 meta-analysis of information retrieval algorithms used in search engines concluded the “high-water mark … was actually set in 2009.” Another study in 2019 reproduced seven neural network recommendation systems, of the kind used by media streaming services. It found that six failed to outperform much simpler, nonneural algorithms developed years before, when the earlier techniques were fine-tuned, revealing “phantom progress” in the field. In another paper posted on arXiv in March, Kevin Musgrave, a computer scientist at Cornell University, took a look at loss functions, the part of an algorithm that mathematically specifies its objective. Musgrave compared a dozen of them on equal footing, in a task involving image retrieval, and found that, contrary to their developers’ claims, accuracy had not improved since 2006. “There’s always been these waves of hype,” Musgrave says….(More)”.

Policy Priority Inference


Turing Institute: “…Policy Priority Inference builds on a behavioural computational model, taking into account the learning process of public officials, coordination problems, incomplete information, and imperfect governmental monitoring mechanisms. The approach is a unique mix of economic theory, behavioural economics, network science and agent-based modelling. The data that feeds the model for a specific country (or a sub-national unit, such as a state) includes measures of the country’s development indicators (DIs) and how they have moved over the years, specified government policy goals in relation to DIs, the quality of government monitoring of expenditure, and the quality of the country’s rule of law.

From these data alone – and, crucially, with no specific information on government expenditure, which is rarely made available – the model can infer the transformative resources a country has historically allocated towards its SDGs, and assess the importance of the interlinkages between DIs. Importantly, it can also reveal where previously hidden inefficiencies lie.

How does it work? The researchers modelled the socioeconomic mechanisms of the policy-making process using agent-computing simulation. They created a simulator featuring an agent called “Government”, which makes decisions about how to allocate public expenditure, and agents called “Bureaucrats”, each of which is essentially a policy-maker linked to a single DI. If a Bureaucrat is allocated some resource, they will use a portion of it to improve their DI, with the rest lost to some degree of inefficiency (in reality, inefficiencies range from simple corruption to poor quality policies and inefficient government departments).

How much resource a Bureaucrat puts towards moving their DI depends on that agent’s experience: if becoming inefficient pays off, they’ll keep doing it. During the process, Government monitors the Bureaucrats, occasionally punishing inefficient ones, who may then improve their behaviour. In the model, a Bureaucrat’s chance of getting caught is linked to the quality of a government’s real-world monitoring of expenditure, and the extent to which they are punished is reflected in the strength of that country’s rule of law.
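
As a rough illustration of this agent loop (not the Turing Institute’s actual model, whose update rules and calibration are far more sophisticated; every name and number below is hypothetical), a minimal sketch might look like this:

```python
import random

class Bureaucrat:
    """A policy-maker responsible for one development indicator (DI)."""
    def __init__(self):
        self.di = 0.0          # level of this agent's development indicator
        self.diversion = 0.5   # fraction of each allocation lost to inefficiency

    def spend(self, allocation):
        self.di += allocation * (1 - self.diversion)  # the rest is lost to inefficiency

class Government:
    """Allocates the budget and occasionally monitors and punishes inefficiency."""
    def __init__(self, monitoring_quality, rule_of_law):
        self.monitoring_quality = monitoring_quality  # chance of catching inefficiency
        self.rule_of_law = rule_of_law                # how strongly punishment bites

    def step(self, bureaucrats, budget):
        allocation = budget / len(bureaucrats)
        for b in bureaucrats:
            b.spend(allocation)
            if random.random() < self.monitoring_quality * b.diversion:
                # Caught: reduce diversion in proportion to the rule of law
                b.diversion *= (1 - self.rule_of_law)
            else:
                # Inefficiency paid off unpunished, so the agent leans into it
                b.diversion = min(1.0, b.diversion * 1.05)

random.seed(0)
agents = [Bureaucrat() for _ in range(10)]
government = Government(monitoring_quality=0.3, rule_of_law=0.5)
for _ in range(100):
    government.step(agents, budget=10.0)
print([round(b.di, 1) for b in agents])  # DI levels after 100 budget cycles
```

Running such a simulation forward under different allocation rules, and tuning it until simulated DI trajectories match the historical ones, is the basic logic behind the inference described below.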

[Diagram of the Policy Priority Inference model: using data on a country or state’s development indicators and its governance, Policy Priority Inference techniques can model how a government and its policy-makers allocate “transformational resources” to reach their sustainable development goals.]

When the historical movements of a country’s DIs are reproduced through the internal workings of the model, the researchers have a powerful proxy for the real-world relationships between government activity, the movement of DIs, and the effects of the interlinkages between DIs, all of which are unique to that country. “Once we can match outcomes, we can discern something that’s going on in reality. But the fact that the method is matching the dynamics of real-world development indicators is just one of multiple ways that we validate our results,” Guerrero notes. This proxy can then be used to project which policy areas should be prioritised in future to best achieve the government’s specified development goals, including predictions of likely timescales.

What’s more, in combination with techniques from evolutionary computation, the model can identify DIs that are linked to large positive spillover effects. These DIs are dubbed “accelerators”. Targeting government resources at such development accelerators fosters not only more rapid results, but also more generalised development…(More)”.

AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings


Paper by Maciej Kuziemski and Gianluca Misuraca: “The rush to understand new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet, the public sector’s predicament is a tragic double bind: its obligations to protect citizens from potential algorithmic harms are at odds with the temptation to increase its own efficiency – or in other words – to govern algorithms, while governing by algorithms. Whether such a dual role is even possible has been a matter of debate. The challenge stems from algorithms’ intrinsic properties: they are distinct from other digital solutions long embraced by governments, and they create externalities that rule-based programming lacks.

As the pressures to deploy automated decision-making systems in the public sector become prevalent, this paper aims to examine how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, investigating the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, “optimising” employment services in Poland, and personalising the digital service experience in Finland, the paper advocates the need for a common framework to evaluate the potential impact of the use of AI in the public sector.

In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectations for governments to play a more prevalent role in the digital society and to ensure that the potential of technology is harnessed, while negative effects are controlled and possibly avoided. This is of particular importance in light of the current COVID-19 emergency, where AI and the underpinning regulatory framework of data ecosystems have become crucial policy issues: as more and more innovations are based on large-scale data collection from digital devices and the real-time accessibility of information and services, the contact and relationships between institutions and citizens could strengthen – or undermine – trust in governance systems and democracy….(More)”.

Apparent Algorithmic Bias and Algorithmic Learning


Paper by Anja Lambrecht and Catherine E. Tucker: “It is worrying to think that algorithms might discriminate against minority groups and reinforce existing inequality. Typically, such concerns have focused on the idea that the algorithm’s code could reflect bias, or the data that feeds the algorithm might lead the algorithm to produce uneven outcomes.

In this paper, we highlight another reason why algorithms might appear biased against minority groups: the length of time algorithms need to learn. If an algorithm has access to less data for particular groups, or accesses this data at differential speeds, it will produce differential outcomes, potentially disadvantaging minority groups.

Specifically, we revisit a classic study which documents that searches on Google for black names were more likely to return ads that highlighted the need for a criminal background check than searches for white names. We show that at least a partial explanation for this finding is that if consumer demand for a piece of information is low, an algorithm accumulates information at a lesser speed and thus takes longer to learn about consumer preferences. Since black names are less common, the algorithm learns about the quality of the underlying ad more slowly, and as a result an ad is more likely to persist for searches next to black names even if the algorithm judges the ad to be of low-quality. Therefore, the algorithm may be likely to show an ad — including an undesirable ad — in the context of searches for a disadvantaged group for a longer period of time.
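
The mechanism can be illustrated with a stylized simulation: a platform keeps serving an ad until it has accumulated enough impressions to judge its quality, so the same low-quality ad persists far longer where search volume is low. This sketch is illustrative only, with hypothetical thresholds, and is not the authors’ model:

```python
import random

def rounds_until_ad_dropped(traffic_per_round, true_ctr=0.02, quality_bar=0.05,
                            min_impressions=50, max_rounds=10_000):
    """Rounds until the platform has enough evidence to drop a low-quality ad.

    The platform keeps serving the ad until its click-through estimate, based on
    at least `min_impressions` impressions, falls below `quality_bar`.
    """
    impressions, clicks = 0, 0
    for t in range(1, max_rounds + 1):
        for _ in range(traffic_per_round):
            impressions += 1
            clicks += random.random() < true_ctr  # True counts as 1
        if impressions >= min_impressions and clicks / impressions < quality_bar:
            return t
    return max_rounds

random.seed(1)
trials = 200
common = sum(rounds_until_ad_dropped(100) for _ in range(trials)) / trials
rare = sum(rounds_until_ad_dropped(2) for _ in range(trials)) / trials
print(f"frequently searched group: bad ad gone after ~{common:.0f} rounds")
print(f"rarely searched group:     bad ad gone after ~{rare:.0f} rounds")
```

The same low-quality ad disappears almost immediately for the high-traffic group but lingers for many rounds in the low-traffic one, purely because evidence accumulates more slowly.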

We replicate this result using the context of religious affiliations and present evidence that ads targeted towards searches for religious groups persists for longer for groups that are less searched for. This suggests that the process of algorithmic learning can lead to differential outcomes across those whose characteristics are more common and those who are rarer in society….(More)”.

MEPs chart path for a European approach to Artificial Intelligence


Samuel Stolton at Euractiv: “As part of a series of debates in Parliament’s Legal Affairs Committee on Tuesday afternoon, MEPs exchanged ideas concerning several reports on Artificial Intelligence, covering ethics, civil liability, and intellectual property.

The reports represent Parliament’s recommendations to the Commission on the future for AI technology in the bloc, following the publication of the executive’s White Paper on Artificial Intelligence, which stated that high-risk technologies in ‘critical sectors’ and those deemed to be of ‘critical use’ should be subjected to new requirements.

One Parliament initiative on the ethical aspects of AI, led by Spanish Socialist Ibán García del Blanco, argues that a uniform regulatory framework for AI in Europe is necessary to avoid member states adopting divergent approaches.

“We felt that regulation is important to make sure that there is no restriction on the internal market. If we leave scope to the member states, I think we’ll see greater legal uncertainty,” García del Blanco said on Tuesday.

In the context of the current public health crisis, García del Blanco also said the use of certain biometric applications and remote recognition technologies should be proportionate, while respecting the EU’s data protection regime and the EU Charter of Fundamental Rights.

A new EU agency for Artificial Intelligence?

One of the most contested areas of García del Blanco’s report was his suggestion that the EU should establish a new agency responsible for overseeing compliance with future ethical principles in Artificial Intelligence.

“We shouldn’t get distracted by the idea of setting up an agency, European Union citizens are not interested in setting up further bodies,” said the conservative EPP’s shadow rapporteur on the file, Geoffroy Didier.

The centrist-liberal Renew group also did not warm to the idea of establishing a new agency for AI, with MEP Stephane Sejourne saying that existing bodies could have their remits extended.

In the previous mandate, as part of a 2017 resolution on Civil Law Rules on Robotics, Parliament had called upon the Commission to ‘consider’ whether an EU Agency for Robotics and Artificial Intelligence could be worth establishing in the future.

Another point of divergence consistently raised by MEPs on Tuesday was the lack of harmony in key definitions related to Artificial Intelligence across different Parliamentary texts, which could create legal loopholes in the future.

In this vein, members highlighted the need to work towards joint definitions for Artificial intelligence operations, in order to ensure consistency across Parliament’s four draft recommendations to the Commission….(More)”.

Our weird behavior during the pandemic is messing with AI models


Will Douglas Heaven at MIT Technology Review: “In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too—and in bulk. The majority of people looking for masks ended up buying the new Amazon #1 Best Seller, “Face Mask, Pack of 50”.

When covid-19 hit, we started buying things we’d never bought before. The shift was sudden: the mainstays of Amazon’s top ten—phone cases, phone chargers, Lego—were knocked off the charts in just a few days. Nozzle, a London-based consultancy specializing in algorithmic advertising for Amazon sellers, captured the rapid change in this simple graph.

It took less than a week at the end of February for the top 10 Amazon search terms in multiple countries to fill up with products related to covid-19. You can track the spread of the pandemic by what we shopped for: the items peaked first in Italy, followed by Spain, France, Canada, and the US. The UK and Germany lag slightly behind. “It’s an incredible transition in the space of five days,” says Rael Cline, Nozzle’s CEO. The ripple effects have been seen across retail supply chains.

But they have also affected artificial intelligence, causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more. Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should. 
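
One common way teams notice that “normal has changed” is to test whether live data still looks like the data a model was trained on. Here is a minimal sketch of such a drift check, using a two-sample Kolmogorov-Smirnov test on a single feature; the scenario and numbers are invented:

```python
import numpy as np
from scipy import stats

def drift_alert(train_sample, live_sample, alpha=0.01):
    """Flag drift between training-era and live data with a two-sample KS test."""
    statistic, p_value = stats.ks_2samp(train_sample, live_sample)
    return p_value < alpha, statistic

rng = np.random.default_rng(0)
pre_pandemic = rng.normal(loc=10, scale=2, size=5000)  # e.g. daily orders of an item
pandemic = rng.normal(loc=25, scale=8, size=5000)      # demand after behavior shifted
drifted, statistic = drift_alert(pre_pandemic, pandemic)
print(f"drift detected: {drifted} (KS statistic = {statistic:.2f})")
```

When such a check fires, the options are roughly those described next: retrain on fresh data, or fall back to manual overrides until behavior stabilizes.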

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, “automation is in tailspin.” Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.

What’s clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. “You can never sit and forget when you’re in such extraordinary circumstances,” says Cline….(More)”.