These 3 barriers make it hard for policymakers to use the evidence that development researchers produce


Michael Callen, Adnan Khan, Asim I. Khwaja, Asad Liaqat and Emily Myers at the Monkey Cage/Washington Post: “In international development, the “evidence revolution” has generated a surge in policy research over the past two decades. We now have a clearer idea of what works and what doesn’t. In India, performance pay for teachers works: students in schools where bonuses were on offer got significantly higher test scores. In Kenya, charging small fees for malaria bed nets doesn’t work — and is actually less cost-effective than free distribution. The American Economic Association’s registry for randomized controlled trials now lists 1,287 studies in 106 countries, many of which are testing policies that very well may be expanded.

But can policymakers put this evidence to use?

Here’s how we did our research

We assessed the constraints that keep policymakers from acting on evidence. We surveyed a total of 1,509 civil servants in Pakistan and 108 in India as part of a program called Building Capacity to Use Research Evidence (BCURE), carried out by Evidence for Policy Design (EPoD) at Harvard Kennedy School and funded by the British government. We found that simply presenting evidence to policymakers doesn’t necessarily improve their decision-making. The link between evidence and policy is complicated by several factors.

1. There are serious constraints in policymakers’ ability to interpret evidence….

2. Organizational and structural barriers get in the way of using evidence….

3. When presented with quantitative vs. qualitative evidence, policymakers update their beliefs in unexpected ways....(More)

Automation Beyond the Physical: AI in the Public Sector


Ben Miller at Government Technology: “…The technology is, by nature, broadly applicable. If a thing involves data — “data” itself being a nebulous word — then it probably has room for AI. AI can help manage the data, analyze it and find patterns that humans might not have thought of. When it comes to big data, or data sets so big that they become difficult for humans to manually interact with, AI leverages the speedy nature of computing to find relationships that might otherwise be proverbial haystack needles.

One early area of government application is in customer service chatbots. As state and local governments started putting information on websites in the past couple of decades, they found that they could use those portals as a means of answering questions that constituents used to have to call an office to ask.

Ideally that resulted in a cyclical victory: Government offices didn’t have as many calls to answer, so they could devote more time and resources to other functions. And when somebody did call in, their call might be answered faster.

With chatbots, governments are betting they can answer even more of those questions. When he was the chief technology and innovation officer of North Carolina, Eric Ellis oversaw the setup of a system that did just that for IT help desk calls.

Turned out, more than 80 percent of the help desk’s calls were people who wanted to change their passwords. For something like that, where the process is largely the same each time, a bot can speed up the process with a little help from AI. Then, just like with the government Web portal, workers are freed up to respond to the more complicated calls faster….
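
The password-reset pattern described above is, at its core, intent routing: classify the incoming request, automate the routine case, and hand everything else to a human. A minimal sketch of that idea (the intents, keywords and canned responses are illustrative assumptions, not the configuration of North Carolina's or any other real system):

```python
# Minimal intent-routing sketch for a government help-desk chatbot.
# Intents, keywords and responses are illustrative assumptions only.

INTENT_KEYWORDS = {
    "password_reset": ["password", "reset", "locked out", "forgot my login"],
    "vpn_access": ["vpn", "remote access"],
}

AUTOMATED_RESPONSES = {
    "password_reset": "I can help with that. A reset link has been sent to your registered email.",
    "vpn_access": "Please confirm your employee ID and I will open a VPN access request.",
}


def classify(message: str) -> str:
    """Return the first intent whose keywords appear in the message, else 'escalate'."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "escalate"


def respond(message: str) -> str:
    intent = classify(message)
    if intent == "escalate":
        return "Connecting you with a human agent."
    return AUTOMATED_RESPONSES[intent]


if __name__ == "__main__":
    print(respond("I forgot my password and I'm locked out"))   # handled automatically
    print(respond("My printer is making a grinding noise"))     # escalated to a person
```

Real deployments typically swap the keyword lookup for a trained intent classifier, but the routing logic stays the same: automate the high-volume, repetitive case and free staff for the harder calls.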

Others are using AI to recognize and report objects in photographs and videos — guns, waterfowl, cracked concrete, pedestrians, semi-trucks, everything. Others are using AI to help translate between languages dynamically. Some want to use it to analyze the tone of emails. Some are using it to try to keep up with cybersecurity threats even as they morph and evolve. After all, if AI can learn to beat professional poker players, then why can’t it learn how digital black hats operate?
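
For the object-recognition use cases, the usual pattern is to run a pre-trained detector over incoming images or video frames and flag candidates for human review. A hedged sketch using OpenCV's built-in HOG pedestrian detector, pedestrians being one of the object classes mentioned above; the image path is a placeholder:

```python
# Sketch: flag pedestrians in a still image with OpenCV's pre-trained
# HOG + linear SVM people detector. "street.jpg" is a placeholder path.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("street.jpg")
if image is None:
    raise FileNotFoundError("street.jpg not found")

# detectMultiScale returns bounding boxes (x, y, w, h) and confidence weights.
boxes, _weights = hog.detectMultiScale(image, winStride=(8, 8), padding=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    cv2.rectangle(image, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)

cv2.imwrite("street_annotated.jpg", image)
print(f"Flagged {len(boxes)} candidate pedestrians for review")
```

Agencies doing this at scale would more likely use a modern neural detector, but the detect-and-report loop is the same.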

Castro sees another use for the technology, a more introspective one. The problem is this: The government workforce is a lot older than the private sector, and that can make it hard to create culture change. According to U.S. Census Bureau data, about 27 percent of public-sector workers are millennials, compared with 38 percent in the private sector.

“The traditional view [of government work] is you fill out a lot of forms, there are a lot of boring meetings. There’s a lot of bureaucracy in government,” Castro said. “AI has the opportunity to change a lot of that, things like filling out forms … going to routine meetings and stuff.”

As AI becomes more and more ubiquitous, people who work both inside and with government are coming up with an ever-expanding list of ways to use it. Here’s an inexhaustive list of specific use cases — some of which are already up and running and some of which are still just ideas….(More)”.

Civic Tech in the Global South


New book edited by Tiago Peixoto: “Civic Tech in the Global South comprises one study and three field evaluations of civic tech initiatives in developing countries. The study reviews evidence on the use of 23 digital platforms designed to amplify citizen voices to improve service delivery, highlighting citizen uptake and the degree of responsiveness by public service providers. The three field evaluations cover an SMS-based polling platform run by UNICEF in Uganda, a complaints-management system run by the water sector in Kenya, and an internet-based participatory budgeting program in Brazil. Based on these experiences, the authors examine: i) the extent to which technologies have promoted inclusiveness, ii) the effect of these initiatives on public service delivery, and iii) the extent to which these effects can be attributed to technology….(More)”.

The Mobility Space Report: What the Street!?


What the Street!? grew out of the question “How do new and old mobility concepts change our cities?”, raised by Michael Szell and Stephan Bogner during their residency at moovel lab. With the support of the lab team, they set out to wrangle data from cities around the world to develop and design this unique Mobility Space Report.

What the Street!? was built from open-source software and resources. Thanks to the OpenStreetMap contributors and many other open components, we put together the puzzle of urban mobility space….

If you take a snapshot of Berlin from space at a typical time of day, you see 60,000 cars on the streets and 1,200,000 cars parked. Why are so many cars parked? Because cars are used only 36 minutes per day, while 95% of the time they just stand around unused. In Berlin, these 1.2 million parking spots take up the area of 64,000 playgrounds, or the area of 4 Central Parks.
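
The back-of-the-envelope arithmetic behind the Central Park comparison is easy to reproduce; a sketch, assuming roughly 12 m² per parking spot (the per-spot area is our assumption, not a figure from the project):

```python
# Rough arithmetic behind the "4 Central Parks" comparison.
# The per-spot area (~12 m^2) is an assumed figure; Central Park covers about 3.41 km^2.
parked_cars = 1_200_000
area_per_spot_m2 = 12            # assumption: a typical on-street parking spot
central_park_km2 = 3.41

total_parking_km2 = parked_cars * area_per_spot_m2 / 1_000_000
print(f"Total parking area: {total_parking_km2:.1f} km^2")                      # ~14.4 km^2
print(f"Central Park equivalents: {total_parking_km2 / central_park_km2:.1f}")  # ~4.2
```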

If you look around the world, wasted public space is not particular to Berlin – many cities have the same problem. But why is so much space wasted in the first place? How “fair” is the distribution of space towards other forms of mobility like bikes and trams? Is there an arrogance of space? If so, how could we raise awareness or even improve the situation?

Who “owns” the city?

Let us first look at how much space there is in a city for moving around, and how it is allocated between bikes, rails, and cars. With What the Street!? – The Mobility Space Report, we set out to provide a public tool for exploring this urban mobility space and to answer our questions systematically, interactively, and above all, in a fun way. Inspired by recently developed techniques in data visualization of unrolling, packing, and ordering irregular shapes, we packed and rolled all mobility spaces into rectangular bins to visualize the areas they take up.

How do you visualize the total area taken by parking spaces? – You pack them tightly.
How do you visualize the total area taken by streets and tracks? – You roll them up tightly.…(More)”.
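
A toy version of the “pack them tightly” step is a shelf heuristic: fill a strip of fixed width row by row and see how tall it has to grow. The sketch below uses assumed parking-spot dimensions and is a simplified stand-in for the project’s actual packing algorithm:

```python
# Toy shelf-packing sketch: place equal-sized rectangles (e.g. parking spots,
# with assumed 2 m x 5 m dimensions) row by row into a strip of fixed width
# and report how tall the strip grows. A simplified stand-in for the project's
# packing visualisation, not its actual algorithm.

def shelf_pack(n_rects: int, rect_w: float, rect_h: float, bin_w: float):
    """Return (positions, bin_height) for n_rects packed left to right in shelves."""
    positions = []
    x, y = 0.0, 0.0
    for _ in range(n_rects):
        if x + rect_w > bin_w:   # current shelf is full: start a new row
            x = 0.0
            y += rect_h
        positions.append((x, y))
        x += rect_w
    return positions, y + rect_h


if __name__ == "__main__":
    spots, height = shelf_pack(n_rects=10_000, rect_w=2.0, rect_h=5.0, bin_w=500.0)
    print(f"10,000 spots fill a 500 m x {height:.0f} m rectangle")   # 500 m x 200 m
```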

Making Sense of Corruption


Book by Bo Rothstein and Aiysha Varraich: “Corruption is a serious threat to prosperity, democracy and human well-being, with mounting empirical evidence highlighting its detrimental effects on society. Yet defining this threat has resulted in profound disagreement, producing a multidimensional concept. Tackling this important and provocative topic, the authors provide an accessible and systematic analysis of how our understanding of corruption has evolved. They identify gaps in the research and make connections between related concepts such as clientelism, patronage, patrimonialism, particularism and state capture. A fundamental issue discussed is how the opposite of corruption should be defined. By arguing for the possibility of a universal understanding of corruption, and specifically of what corruption is not, the authors present an innovative solution to this problem. This book provides an accessible overview of corruption, allowing scholars and students alike to see the far-reaching place it has within academic research….(More)”.

Massive Ebola data site planned to combat outbreaks


Amy Maxmen at Nature: “More than 11,000 people died when Ebola tore through West Africa between 2014 and 2016, and yet clinicians still lack data that would enable them to reliably identify the disease when a person first walks into a clinic. To fill that gap and others before the next outbreak hits, researchers are developing a platform to organize and share Ebola data that have so far been scattered beyond reach.

The information system is coordinated by the Infectious Diseases Data Observatory (IDDO), an international research network based at the University of Oxford, UK, and is expected to launch by the end of the year. …

During the outbreak, for example, a widespread rumour claimed that the plague was an experiment conducted by the West, which led some people to resist going to clinics and helped Ebola to spread.

Merson and her collaborators want to avoid the kind of data fragmentation that hindered efforts to stop the outbreak in Liberia, Guinea and Sierra Leone. As the Ebola crisis was escalating in October 2014, she visited treatment units in the three countries to advise on research. Merson found tremendous variation in practices, which complicated attempts to merge and analyse the information. For instance, some record books listed lethargy and hiccups as symptoms, whereas others recorded fatigue but not hiccups.
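
The fragmentation Merson describes, with different sites recording overlapping but non-identical symptom lists, is typically tackled by mapping each site’s terms onto a shared vocabulary before records are merged. A minimal sketch (the term mappings are illustrative assumptions, not the IDDO’s actual data dictionary):

```python
# Minimal sketch of harmonising symptom records from different treatment units
# onto a shared vocabulary before merging. The term mappings are illustrative
# assumptions, not the IDDO's actual data dictionary.

SITE_TERM_MAP = {
    "site_a": {"lethargy": "fatigue", "hiccups": "hiccups"},
    "site_b": {"fatigue": "fatigue"},   # this site never recorded hiccups
}


def harmonise(site: str, record: dict) -> dict:
    """Map one site's raw symptom flags onto the shared vocabulary."""
    mapping = SITE_TERM_MAP[site]
    shared = {}
    for raw_term, present in record.items():
        # Keep unmapped terms under their raw name so they can be reviewed,
        # rather than silently dropping or guessing.
        shared[mapping.get(raw_term, raw_term)] = present
    return shared


if __name__ == "__main__":
    print(harmonise("site_a", {"lethargy": True, "hiccups": False}))
    # {'fatigue': True, 'hiccups': False}
    print(harmonise("site_b", {"fatigue": True}))
    # {'fatigue': True} -- no hiccups field: missing data, not an absent symptom
```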

“People were just collecting what they could,” she recalls. Non-governmental organizations “were keeping their data private; academics take a year to get it out; and West Africa had set up surveillance but they were siloed from the international systems”, she says. …

In July 2015, the IDDO received pilot funds from the UK charity the Wellcome Trust to pool anonymized data from the medical records of people who contracted Ebola — and those who survived it — as well as data from clinical trials and public health projects during outbreaks in West Africa, Uganda and the Democratic Republic of Congo. The hope is that a researcher could search for data to help in diagnosing, treating and understanding the disease. The platform would also provide a home for new data as they emerge. A draft research agenda lists questions that the information might answer, such as how long the virus can survive outside the human body, and what factors are associated with psychological issues in those who survive Ebola.

One sensitive issue is deciding who will control the data. …It’s vital that these discussions happen now, in a period of relative calm, says Jeremy Farrar, director of the Wellcome Trust in London. When the virus emerges again, clinicians, scientists, and regulatory boards will need fast access to data so as not to repeat mistakes made last time. “We need to sit down and make sure we have a data platform in place so that we can respond to a new case of Ebola in hours and days, and not in months and years,” he says. “A great danger is that the world will move on and forget the horror of Ebola in West Africa.”…(More)”

The Internet of Us


Book by Michael P. Lynch: “We used to say “seeing is believing”; now, googling is believing. With 24/7 access to nearly all of the world’s information at our fingertips, we no longer trek to the library or the encyclopedia shelf in search of answers. We just open our browsers, type in a few keywords and wait for the information to come to us. Now firmly established as a pioneering work of modern philosophy, The Internet of Us has helped revolutionize our understanding of what it means to be human in the digital age. Indeed, demonstrating that knowledge based on reason plays an essential role in society and that there is more to “knowing” than just acquiring information, leading philosopher Michael P. Lynch shows how our digital way of life makes us value some ways of processing information over others, and thus risks distorting the greatest traits of mankind. Charting a path from Plato’s cave to Google Glass, Lynch offers a necessary guide to navigating the philosophical quagmire that is the “Internet of Things.”…(More)”.

How to Regulate Artificial Intelligence


Oren Etzioni in the New York Times: “…we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.

First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford….

My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information…(More)”

Data-Driven Policy Making: The Policy Lab Approach


Paper by Anne Fleur van Veenstra and Bas Kotterink: “Societal challenges such as migration, poverty, and climate change can be considered ‘wicked problems’ for which no optimal solution exists. To address such problems, public administrations increasingly aim for data-driven policy making. Data-driven policy making aims to make optimal use of sensor data and to collaborate with citizens to co-create policy. However, few public administrations have realized this so far. Therefore, in this paper an approach for data-driven policy making is developed that can be used in the setting of a Policy Lab. A Policy Lab is an experimental environment in which stakeholders collaborate to develop and test policy. Based on the literature, we first identify innovations in data-driven policy making. Subsequently, we map these innovations to the stages of the policy cycle. We found that most innovations are concerned with using new data sources in traditional statistics and that methodologies capturing the benefits of data-driven policy making are still under development. Further research should focus on policy experimentation while developing new methodologies for data-driven policy making at the same time….(More)”.
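
The paper’s central move, mapping data-driven innovations onto stages of the policy cycle, can be pictured as a simple lookup structure. The stage names below follow a common version of the policy cycle, and the example innovations are our own illustrative assumptions, not the authors’ actual mapping:

```python
# Illustrative lookup of data-driven innovations by policy-cycle stage.
# Stage names follow a common version of the policy cycle; the example
# innovations are assumptions for illustration, not the paper's mapping.
POLICY_CYCLE = {
    "agenda_setting": ["social media monitoring", "citizen sensing"],
    "policy_design": ["open data reuse", "predictive modelling"],
    "implementation": ["administrative and sensor data pipelines"],
    "evaluation": ["Policy Lab experiments", "real-time dashboards"],
}

for stage, innovations in POLICY_CYCLE.items():
    print(f"{stage}: {', '.join(innovations)}")
```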

Dictionaries and crowdsourcing, wikis and user-generated content


Living Reference Work Entry by Michael Rundell: “It is tempting to dismiss crowdsourcing as a largely trivial recent development which has nothing useful to contribute to serious lexicography. This temptation should be resisted. When applied to dictionary-making, the broad term “crowdsourcing” in fact describes a range of distinct methods for creating or gathering linguistic data. A provisional typology is proposed, distinguishing three approaches which are often lumped under the heading “crowdsourcing.” These are: user-generated content (UGC), the wiki model, and what is referred to here as “crowdsourcing proper.” Each approach is explained, and examples are given of their applications in linguistic and lexicographic projects. The main argument of this chapter is that each of these methods – if properly understood and carefully managed – has significant potential for lexicography. The strengths and weaknesses of each model are identified, and suggestions are made for exploiting them in order to facilitate or enhance different operations within the process of developing descriptions of language. Crowdsourcing – in its various forms – should be seen as an opportunity rather than as a threat or diversion….(More)”.