Research Shows Political Acumen, Not Just Analytical Skills, is Key to Evidence-Informed Policymaking


Press Release: “Results for Development (R4D) has released a new study unpacking how evidence translators play a key and somewhat surprising role in ensuring policymakers have the evidence they need to make informed decisions. Translators — who can be evidence producers, policymakers, or intermediaries such as journalists, advocates and expert advisors — identify, filter, interpret, adapt, contextualize and communicate data and evidence for the purposes of policymaking.

The study, Translators’ Role in Evidence-Informed Policymaking, provides a better understanding of who translators are and how different factors influence translators’ ability to promote the use of evidence in policymaking. This research shows that translation is an essential function and that, absent individuals or organizations taking up the translator role, evidence translation and evidence-informed policymaking often do not take place.

“We began this research assuming that translators’ technical skills and analytical prowess would prove to be among the most important factors in predicting when and how evidence made its way into public sector decision making,” Nathaniel Heller, executive vice president for integrated strategies at Results for Development, said. “Surprisingly, that turned out not to be the case, and other ‘soft’ skills play a far larger role in translators’ efficacy than we had imagined.”

Key findings include:

  • Translator credibility and reputation are crucial to the ability to gain access to policymakers and to promote the uptake of evidence.
  • Political savvy and stakeholder engagement are among the most critical skills for effective translators.
  • Conversely, analytical skills and the ability to adapt, transform and communicate evidence were identified as being less important stand-alone translator skills.
  • Evidence translation is most effective when initiated by those in power or when translators place those in power at the center of their efforts.

The study includes a definitional and theoretical framework as well as a set of research questions about key enabling and constraining factors that might affect evidence translators’ influence. It also focuses on two cases in Ghana and Argentina to validate and debunk some of the intellectual frameworks around policy translators that R4D and others in the field have already developed. The first case focuses on Ghana’s blue-ribbon commission formed by the country’s president in 2015, which was tasked with reviewing Ghana’s national health insurance scheme. The second case looks at Buenos Aires’ 2016 government-led review of the city’s right to information regime….(More)”.

Ontario is trying a wild experiment: Opening access to its residents’ health data


Dave Gershgorn at Quartz: “The world’s most powerful technology companies have a vision for the future of healthcare. You’ll still go to your doctor’s office, sit in a waiting room, and explain your problem to someone in a white coat. But instead of relying solely on their own experience and knowledge, your doctor will consult an algorithm that’s been trained on the symptoms, diagnoses, and outcomes of millions of other patients. Instead of a radiologist reading your x-ray, a computer will be able to detect minute differences and instantly identify a tumor or lesion. Or at least that’s the goal.

AI systems like these, currently under development by companies including Google and IBM, can’t read textbooks and journals, attend lectures, or do rounds—they need millions of real-life examples to understand all the different variations between one patient and another. In general, AI is only as good as the data it’s trained on, but medical data is exceedingly private—most developed countries have strict health data protection laws, such as HIPAA in the United States….

These approaches, which favor companies with considerable resources, are pretty much the only way to get large troves of health data in the US because the American health system is so disparate. Healthcare providers keep personal files on each of their patients, and can only transmit them to other accredited healthcare workers at the patient’s request. There’s no single place where all health data exists. It’s more secure, but less efficient for analysis and research.

Ontario, Canada, might have a solution, thanks to its single-payer healthcare system. All of Ontario’s health data exists in a few enormous caches under government control. (After all, the government needs to keep track of all the bills it’s paying.) Similar structures exist elsewhere in Canada, such as in Quebec, but Toronto, which has become a major hub for AI research, wants to lead the charge in providing this data to businesses.

Until now, the only people allowed to study this data were government organizations or researchers who partnered with the government to study disease. But Ontario has now entrusted the MaRS Discovery District—a cross between a tech incubator and WeWork—to build a platform for approved companies and researchers to access this data, dubbed Project Spark. The project, initiated by MaRS and Canada’s University Health Network, began exploring how to share this data after both organizations expressed interest to the government about giving broader health data access to researchers and companies looking to build healthcare-related tools.

Project Spark’s goal is to create an API, or a way for developers to request information from the government’s data cache. This could be used to create an app for doctors to access the full medical history of a new patient. Ontarians could access their health records at any time through similar software, and catalog health issues as they occur. Or researchers, like the ones trying to build AI to assist doctors, could request a different level of access that provides anonymized data on Ontarians who meet certain criteria. If you wanted to study every Ontarian who had Alzheimer’s disease over the last 40 years, that data would be only an authorization and a few lines of code away.
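To make the idea concrete, here is a minimal sketch of what such a request might look like from a developer’s side. Project Spark has not published an API specification, so the endpoint, parameters, and authorization scheme below are hypothetical placeholders, not the actual interface.

```python
import requests

# Hypothetical endpoint and parameters -- Project Spark has not published an
# API specification, so these names are illustrative only.
API_BASE = "https://api.projectspark.example/v1"
ACCESS_TOKEN = "researcher-token-issued-after-approval"

def fetch_anonymized_cohort(condition, start_year, end_year):
    """Request de-identified records for patients matching a condition."""
    response = requests.get(
        f"{API_BASE}/cohorts",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={
            "condition": condition,        # e.g. a diagnosis code or label
            "diagnosed_after": start_year,
            "diagnosed_before": end_year,
            "anonymized": "true",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # "Every Ontarian who had Alzheimer's disease over the last 40 years"
    cohort = fetch_anonymized_cohort("alzheimers", 1978, 2018)
    print(f"Records returned: {len(cohort.get('records', []))}")
```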

There are currently 100 companies lined up to get access to the data, which comprises health records from Ontario’s 14 million residents. (MaRS won’t say who the companies are.) …(More)”

AI Nationalism


Blog by Ian Hogarth: “The central prediction I want to make and defend in this post is that continued rapid progress in machine learning will drive the emergence of a new kind of geopolitics; I have been calling it AI Nationalism. Machine learning is an omni-use technology that will come to touch all sectors and parts of society.

The transformation of both the economy and the military by machine learning will create instability at the national and international level, forcing governments to act. AI policy will become the single most important area of government policy. An accelerated arms race will emerge between key countries, and we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent. I use Google, DeepMind and the UK as a specific example of this issue.

This arms race will potentially speed up the pace of AI development and shorten the timescale for getting to AGI. Although there will be many common aspects to this techno-nationalist agenda, there will also be important state-specific policies. There is a difference between predicting that something will happen and believing this is a good thing. Nationalism is a dangerous path, particularly when the international order and international norms will be in flux as a result. In the concluding section I discuss how a period of AI Nationalism might transition to one of global cooperation in which AI is treated as a global public good….(More)”.

Activating Agency or Nudging?


Article by Michael Walton: “Two ideas in development – activating agency of citizens and using “nudges” to change their behavior – seem diametrically opposed in spirit: activating latent agency at the ground level versus top-down designs that exploit people’s behavioral responses. Yet both start from a psychological focus and a belief that changes in people’s behavior can lead to “better” outcomes, for the individuals involved and for society. So how should we think of these contrasting sets of ideas? When should each approach be used?…

Let’s compare the two approaches with respect to diagnostic frame, practice and ethics.

Diagnostic frame.  

The common ground is recognition that people use short-cuts for decision-making, in ways that can hurt their own interests. Both approaches emphasize that decision-making is particularly tough for poor people, given the sheer weight of daily problem-solving. In behavioral economics, one core idea is that we have limited mental “bandwidth,” and this form of scarcity hampers decision-making. However, in the “agency” tradition, there is much more emphasis on unearthing and working with the origins of the prevailing mental models, with respect to social exclusion, stigmatization, and the typically unequal economic and cultural relations with more powerful groups in a society. One approach works more with symptoms, the other with root causes.

Implications for practice.  

The two approaches on display in Cerrito both concern social gains, and both involve a role for an external actor. But here the contrast is sharp. In the “nudge” approach the external actor is a beneficent technocrat, trying out alternative offers to poor (or non-poor) people to improve outcomes. A vivid example is alternative messages to taxpayers in Guatemala that induce varying improvements in tax payments. In the “agency” approach the essence of the interaction is between a front-line worker and an individual or family, with a co-created diagnosis and plan, designed around goals and specific actions that the poor person chooses. This is akin to what anthropologist Arjun Appadurai termed increasing the “capacity to aspire,” and can extend to greater engagement in civic and political life.

Ethics.

In both approaches, ethics is central. As implied by the distinction between nudging for social good and nudging for electoral gain, some form of ethical regulation is surely needed. In “action to activate agency,” the central ethical issue is maintaining equality in design between activist and citizen, and explicit ownership of any decisions.

What does this imply?

To some degree this is a question of domain of action. Nudging is most appropriate where there is a fully supported political and social program and the issue is how to make it work (as in paying taxes). The agency approach has a broader ambition, but starts from domains that are potentially within an individual’s control once the sources of “ineffective” or inhibited behavior are tackled, including via front-line interactions with public or private actors….(More)”.

Do Delivery Units Deliver?: Assessing Government Innovations


Technical note by Mariano Lafuente and Sebastián González, prepared as part of the Inter-American Development Bank’s (IDB) agenda on the Center of Government: “… analyzes how delivery units (DUs) have been adapted by Latin American and Caribbean governments, the degree to which they have contributed to meeting governments’ priority goals between 2007 and 2018, and the lessons learned along the way. The analysis, which draws lessons from 14 governments in the region, shows that the implementation of the DU model has varied as it has been tailored to each country’s context and that, under certain preconditions, it has contributed to: (i) improved management using specific tools in contexts where institutional development is low; and (ii) attaining results that have a direct impact on citizens. The objective of this document is to serve as a guide for governments interested in applying similar management models as well as to set out an agenda for the future of DUs in the region….(More)”.

New Technologies Won’t Reduce Scarcity, but Here’s Something That Might


Vasilis Kostakis and Andreas Roos at the Harvard Business Review: “In a book titled Why Can’t We All Just Get Along?, MIT scientists Henry Lieberman and Christopher Fry discuss why we have wars, mass poverty, and other social ills. They argue that we cannot cooperate with each other to solve our major problems because our institutions and businesses are saturated with a competitive spirit. But Lieberman and Fry have some good news: modern technology can address the root of the problem. They believe that we compete when there is scarcity, and that recent technological advances, such as 3D printing and artificial intelligence, will end widespread scarcity. Thus, a post-scarcity world, premised on cooperation, would emerge.

But can we really end scarcity?

We believe that the post-scarcity vision of the future is problematic because it reflects an understanding of technology and the economy that could worsen the problems it seeks to address. This is the bad news. Here’s why:

New technologies come to consumers as finished products that can be exchanged for money. What consumers often don’t understand is that the monetary exchange hides the fact that many of these technologies exist at the expense of other humans and local environments elsewhere in the global economy….

The good news is that there are alternatives. The wide availability of networked computers has allowed new community-driven and open-source business models to emerge. For example, consider Wikipedia, a free and open encyclopedia that has displaced the Encyclopedia Britannica and Microsoft Encarta. Wikipedia is produced and maintained by a community of dispersed enthusiasts driven primarily by motives other than profit maximization. Furthermore, in the realm of software, consider GNU/Linux, on which the world’s top 500 supercomputers and the majority of websites run, or the Apache Web Server, the leading software in the web-server market. Wikipedia, Apache and GNU/Linux demonstrate how non-coercive cooperation around globally shared resources (i.e., a commons) can produce artifacts as innovative as, if not more innovative than, those produced by industrial capitalism.

In the same way, the emergence of networked micro-factories is giving rise to new open-source business models in the realm of design and manufacturing. Such spaces may be makerspaces, fab labs, or other co-working spaces, equipped with local manufacturing technologies, such as 3D printers and CNC machines, or with traditional low-tech tools and crafts. Moreover, such spaces often offer collaborative environments where people can meet in person, socialize and co-create.

This is the context in which a new mode of production is emerging. This mode builds on the confluence of the digital commons of knowledge, software, and design with local manufacturing technologies. It can be codified as “design global, manufacture local,” following the logic that what is light (knowledge, design) becomes global, while what is heavy (machinery) is local, and ideally shared. Design global, manufacture local (DGML) demonstrates how a technology project can leverage the digital commons to engage the global community in its development, celebrating new forms of cooperation. Unlike large-scale industrial manufacturing, the DGML model emphasizes applications that are small-scale, decentralized, resilient, and locally controlled. DGML could recognize the scarcities posed by finite resources and organize material activities accordingly. First, it minimizes the need to ship materials over long distances, because a considerable part of the manufacturing takes place locally. Second, local manufacturing makes maintenance easier and encourages manufacturers to design products to last as long as possible. Last, DGML optimizes the sharing of knowledge and design, as there are no patent costs to pay….(More)”

Big Data against Child Obesity


European Commission: “Childhood and adolescent obesity is a major global and European public health problem. Current public actions are detached from local needs, mostly consisting of indiscriminate blanket policies and single-element strategies, which limits their efficacy and effectiveness. The need for community-targeted actions has long been obvious, but the lack of a monitoring and evaluation framework, and the methodological inability to objectively quantify local community characteristics in a reasonable timeframe, have hindered them.

[Figure: BigO policy planner]

Big Data based Platform

Technological achievements in mobile and wearable electronics and Big Data infrastructures make it possible to engage European citizens in the data collection process and to reshape policies at the regional, national and European level. In BigO, this will be facilitated through the development of a platform that quantifies behavioural community patterns using Big Data provided by wearables and eHealth devices.

Estimate child obesity through community data

BigO has set detailed scientific, technological, validation and business objectives in order to build a system that collects Big Data on children’s behaviour and helps plan health policies against obesity. In addition, during the project, BigO will reach out to more than 25,000 schoolchildren and age-matched obese children and adolescents as sources of community data. Comprehensive models of the obesity prevalence dependence matrix will be created, allowing data-driven predictions of the effectiveness of specific policies on a community and real-time monitoring of the population’s response, supported by powerful real-time data visualisations….(More)
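The project description above stays at a high level; as a rough illustration of the kind of community-level quantification it describes, the sketch below aggregates invented, anonymized wearable readings into per-community behavioural indicators. The field names and values are assumptions, not BigO’s actual data model.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical anonymized records as they might arrive from wearables; BigO's
# actual data model is not public, so these fields are illustrative only.
records = [
    {"community": "school_A", "daily_steps": 6200, "sedentary_minutes": 540},
    {"community": "school_A", "daily_steps": 8900, "sedentary_minutes": 410},
    {"community": "school_B", "daily_steps": 4100, "sedentary_minutes": 660},
    {"community": "school_B", "daily_steps": 5300, "sedentary_minutes": 605},
]

def community_behaviour_profile(rows):
    """Summarise behavioural indicators per community from individual records."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["community"]].append(row)
    return {
        community: {
            "n": len(members),
            "mean_daily_steps": mean(m["daily_steps"] for m in members),
            "mean_sedentary_minutes": mean(m["sedentary_minutes"] for m in members),
        }
        for community, members in grouped.items()
    }

if __name__ == "__main__":
    for community, profile in community_behaviour_profile(records).items():
        print(community, profile)
```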

Ten Reasons Not to Measure Impact—and What to Do Instead


Essay by Mary Kay Gugerty & Dean Karlan in the Stanford Social Innovation Review: “Good impact evaluations—those that answer policy-relevant questions with rigor—have improved development knowledge, policy, and practice. For example, the NGO Living Goods conducted a rigorous evaluation to measure the impact of its community health model based on door-to-door sales and promotions. The evidence of impact was strong: Their model generated a 27-percent reduction in child mortality. This evidence subsequently persuaded policy makers, replication partners, and major funders to support the rapid expansion of Living Goods’ reach to five million people. Meanwhile, rigorous evidence continues to further validate the model and help to make it work even better.

Of course, not all rigorous research offers such quick and rosy results. Consider the many studies required to discover a successful drug and the lengthy process of seeking regulatory approval and adoption by the healthcare system. The same holds true for fighting poverty: Innovations for Poverty Action (IPA), a research and policy nonprofit that promotes impact evaluations for finding solutions to global poverty, has conducted more than 650 randomized controlled trials (RCTs) since its inception in 2002. These studies have sometimes provided evidence about how best to use scarce resources (e.g., give away bed nets for free to fight malaria), as well as how to avoid wasting them (e.g., don’t expand traditional microcredit). But the vast majority of studies did not paint a clear picture that led to immediate policy changes. Developing an evidence base is more like building a mosaic: Each individual piece does not make the picture, but bit by bit a picture becomes clearer and clearer.

How do these investments in evidence pay off? IPA estimated the benefits of its research by looking at its return on investment—the benefit from the scale-up of its demonstrated large-scale successes divided by the total costs since IPA’s founding. The ratio was 74x—a huge result. But this is far from a precise measure of impact, since IPA cannot establish what would have happened had IPA never existed. (Yes, IPA recognizes the irony of advocating for RCTs while being unable to subject its own operations to that standard. Yet IPA’s approach is intellectually consistent: Many questions and circumstances do not call for RCTs.)

Even so, a simple thought exercise helps to demonstrate the potential payoff. IPA never works alone—all evaluations and policy engagements are conducted in partnership with academics and implementing organizations, and increasingly with governments. Moving from an idea to the research phase to policy takes multiple steps and actors, often over many years. But even if IPA deserves only 10 percent of the credit for the policy changes behind the benefits calculated above, the ratio of benefits to costs is still 7.4x. That is a solid return on investment.
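The attribution-discounted figure in that thought exercise is simple arithmetic, restated below in a short Python snippet; both numbers (the 74x ratio and the 10 percent credit share) come straight from the excerpt.

```python
# Figures quoted in the excerpt above.
roi_full_credit = 74.0   # benefit/cost ratio if IPA receives full credit
credit_share = 0.10      # thought exercise: IPA deserves 10% of the credit

roi_adjusted = roi_full_credit * credit_share
print(f"Attribution-adjusted benefit/cost ratio: {roi_adjusted:.1f}x")  # -> 7.4x
```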

Despite the demonstrated value of high-quality impact evaluations, a great deal of money and time has been wasted on poorly designed, poorly implemented, and poorly conceived impact evaluations. Perhaps some studies had too small a sample or paid insufficient attention to establishing causality and collecting quality data, and hence any results should be ignored; others perhaps failed to engage stakeholders appropriately, and as a consequence useful results were never put to use.

The push for more and more impact measurement can not only lead to poor studies and wasted money, but also distract and take resources from collecting data that can actually help improve the performance of an effort. To address these difficulties, we wrote a book, The Goldilocks Challenge, to help guide organizations in designing “right-fit” evidence strategies. The struggle to find the right fit in evidence resembles the predicament that Goldilocks faces in the classic children’s fable. Goldilocks, lost in the forest, finds an empty house with a large number of options: chairs, bowls of porridge, and beds of all sizes. She tries each but finds that most do not suit her: The porridge is too hot or too cold, the bed too hard or too soft—she struggles to find options that are “just right.” Like Goldilocks, the social sector has to navigate many choices and challenges to build monitoring and evaluation systems that fit its needs. Some will push for more and more data; others will not push for enough….(More)”.

The 2018 Atlas of Sustainable Development Goals: an all-new visual guide to data and development


World Bank Data Team: “We’re pleased to release the 2018 Atlas of Sustainable Development Goals. With over 180 maps and charts, the new publication shows the progress societies are making towards the 17 SDGs.

It’s filled with annotated data visualizations, which can be reproducibly built from source code and data. You can view the SDG Atlas online, download the PDF publication (30 MB), and access the data and source code behind the figures.

This Atlas would not be possible without the efforts of statisticians and data scientists working in national and international agencies around the world. It is produced in collaboration with professionals across the World Bank’s data and research groups, and our sectoral global practices.

Trends and analysis for the 17 SDGs

The Atlas draws on World Development Indicators, a database of over 1,400 indicators for more than 220 economies, many going back over 50 years. For example, the chapter on SDG4 includes data from the UNESCO Institute for Statistics on education and its impact around the world.
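The World Development Indicators that the Atlas draws on are also available programmatically through the World Bank’s public API. The sketch below pulls one indicator series as an example; the indicator code shown (net primary school enrollment) is an illustrative choice and not necessarily a series used in the Atlas itself.

```python
import requests

# Query the World Bank API (v2) for one World Development Indicators series.
# SE.PRM.NENR is used here as an example code for net primary school enrollment.
INDICATOR = "SE.PRM.NENR"
URL = f"https://api.worldbank.org/v2/country/all/indicator/{INDICATOR}"

response = requests.get(
    URL,
    params={"format": "json", "date": "2010:2016", "per_page": 500},
    timeout=30,
)
response.raise_for_status()
metadata, observations = response.json()  # v2 responses are [metadata, data]

# Print a few non-missing observations: country, year, value.
for obs in observations[:10]:
    if obs["value"] is not None:
        print(obs["country"]["value"], obs["date"], obs["value"])
```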

Throughout the Atlas, data are presented by country, region and income group and often disaggregated by sex, wealth and geography.

The Atlas also explores new data from scientists and researchers where standards for measuring SDG targets are still being developed. For example, the chapter on SDG14 features research led by Global Fishing Watch, published this year in Science. Their team tracked over 70,000 industrial fishing vessels from 2012 to 2016, processing 22 billion automatic identification system (AIS) messages to map and quantify fishing around the world….(More)”.
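Mapping and quantifying fishing from AIS position reports is, at heart, a large-scale spatial aggregation exercise. The toy sketch below bins a handful of made-up vessel positions into a one-degree grid; it only illustrates the gridding step, not Global Fishing Watch’s actual pipeline, which also uses machine learning to classify fishing behaviour.

```python
from collections import Counter

# Toy AIS-style position reports (vessel id, latitude, longitude). The real
# pipeline processes billions of such messages; these values are invented.
positions = [
    ("vessel_1", 43.2, -52.7),
    ("vessel_1", 43.3, -52.6),
    ("vessel_2", -34.8, 18.4),
    ("vessel_3", 43.1, -52.9),
]

CELL_DEG = 1.0  # grid resolution in degrees

def grid_cell(lat, lon, cell=CELL_DEG):
    """Snap a position to the lower-left corner of its grid cell."""
    return (int(lat // cell) * cell, int(lon // cell) * cell)

# Count position reports per grid cell as a crude proxy for activity.
effort = Counter(grid_cell(lat, lon) for _, lat, lon in positions)
for (lat, lon), count in effort.most_common():
    print(f"cell ({lat:+.1f}, {lon:+.1f}): {count} position reports")
```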

How Policymakers Can Foster Algorithmic Accountability


Report by Joshua New and Daniel Castro: “Increased automation with algorithms, particularly through the use of artificial intelligence (AI), offers opportunities for the public and private sectors to complete increasingly complex tasks with a level of productivity and effectiveness far beyond that of humans, generating substantial social and economic benefits in the process. However, many believe an increased use of algorithms will lead to a host of harms, including exacerbating existing biases and inequalities, and have therefore called for new public policies, such as establishing an independent commission to regulate algorithms or requiring companies to explain publicly how their algorithms make decisions. Unfortunately, all of these proposals would lead to less AI use, thereby hindering social and economic progress.

Policymakers should reject these proposals and instead support algorithmic decision-making by promoting policies that ensure its robust development and widespread adoption. Like any new technology, there are strong incentives among both developers and adopters to improve algorithmic decision-making and ensure its applications do not contain flaws, such as bias, that reduce their effectiveness. Thus, rather than establish a master regulatory framework for all algorithms, policymakers should do what they have always done with regard to technology regulation: enact regulation only where it is required, targeting specific harms in particular application areas through dedicated regulatory bodies that are already charged with oversight of that particular sector. To accomplish this, regulators should pursue algorithmic accountability—the principle that an algorithmic system should employ a variety of controls to ensure the operator (i.e., the party responsible for deploying the algorithm) can verify it acts in accordance with its intentions, as well as identify and rectify harmful outcomes. Adopting this framework would both promote the vast benefits of algorithmic decision-making and minimize harmful outcomes, while also ensuring laws that apply to human decisions can be effectively applied to algorithmic decisions….(More)”.
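One concrete instance of the “variety of controls” the report’s algorithmic-accountability principle calls for is a routine outcome-disparity check run by the operator. The sketch below computes approval rates by group and a disparate-impact ratio over an invented decision log; it is offered as one illustrative control, not a procedure prescribed by the report.

```python
from collections import defaultdict

# Hypothetical decision log from an automated system: (group, approved?).
# A real audit would use the operator's own logged decisions and protected
# attributes; these records are invented for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(log):
    """Approval rate per group from a list of (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in log:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio
print(rates)
print(f"Disparate-impact ratio: {ratio:.2f} (values below ~0.8 often flag review)")
```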