How to keep good research from dying a bad death: Strategies for co-creating research with impact


Blog post by Bridget Konadu Gyamfi and Bethany Park: “Researchers are often invested in disseminating the results of their research to the practitioners and policymakers who helped enable it—but disseminating a paper, developing a brief, or even holding an event may not truly empower decision-makers to make changes based on the research.

Disseminate results in stages and determine next steps

Mapping evidence to real-world decisions and processes in order to determine the right course of action can be complex. Together with our partners, we gather the troops—researchers, implementers, and IPA’s research and policy team—and discuss the implications of the research for policy and practice.

This staged dissemination is critically important: having private discussions first helps partners digest the results and think through their reactions in a lower-stakes setting. We help the partners think about not only the results, but how their stakeholders will respond to the results, and how we can support their ongoing learning, whether results are “good” or not as hoped. Later, we hold larger dissemination events to inform the public. But we try to work closely with researchers and implementers to think through next steps right after results are available—before the window of opportunity passes.

Identify & prioritize policy opportunities

Many of our partners have already written smart advice about how to identify policy opportunities (windows, openings, etc.), so there’s no need for us to restate all that great thinking (go read it!). However, we are frequently asked how we prioritize policy opportunities, and we do have a clear internal process for making that decision. Here are our criteria:

  1. A body of evidence to build on: A single study rarely presents the best policy opportunity. This is a generalization, of course, and there are exceptions, but typically our policy teams pay the most attention to bodies of evidence that are converging on a consensus. These are the opportunities for which we feel most able to recommend next steps related to policy and practice—there is a clearer message to communicate and research conclusions we can state with greater confidence.
  2. Relationships to open doors: Our long-term in-country presence and deep involvement with partners through research projects means that we have many relationships and doors open to us. Yet some of these relationships are stronger than others, and some partners are more influential in the processes we want to impact. We use stakeholder mapping tools to clarify who is invested and who has influence. We also track our stakeholder outreach to make sure our relationships stay strong and mutually beneficial.
  3. A concrete decision or process that we can influence: This is the typical understanding of a “policy opening,” and it’s an important one. What are the partner’s priorities, felt needs, and open questions? Where do those create opportunities for our influence? If the evidence would indicate one course of action, but that course isn’t even an option our partner would consider or be able to consider (for cost or other practical reasons), we have to give the opportunity a pass.
  4. Implementation funding: In the countries where we work, even when we have strong relationships, strong evidence, and the partner is open to influence, there is still one crucial ingredient missing: implementation funding. Addressing this constraint means getting evidence-based programming onto the agenda of major donors.

Get partners on board

Forming a coalition of partners and funders who will work with us as we move forward is crucial. As a research and policy organization, we can’t scale effective solutions alone—nor is that the specialty we want to develop, since there are others to fill that role. We need partners like Evidence Action Beta to help us pressure-test solutions as they move towards scale, or partners like Living Goods, who already have nationwide networks of community health workers who can reach communities efficiently and effectively. And we need governments who are willing to make public investments and decisions based on evidence….(More)”.

Impact of a nudging intervention and factors associated with vegetable dish choice among European adolescents


Paper by Q. Dos Santos et al.: “To test the impact of a nudge strategy (the ‘dish of the day’ strategy) and the factors associated with vegetable dish choice upon food selection by European adolescents in a real foodservice setting.

A cross-sectional quasi-experimental study was implemented in restaurants in four European countries: Denmark, France, Italy, and the United Kingdom. In total, 360 individuals aged 12-19 years were allocated to control or intervention groups and asked to select from meat-based, fish-based, or vegetable-based meals. All three dishes were identical in appearance (balls of similar size and weight) and were served with the same sauce (tomato) and side dishes (pasta and salad). In the intervention condition, the vegetable-based option was presented as the “dish of the day,” and the numbers of dishes chosen by each group were compared using the Pearson chi-square test. Multivariate logistic regression was run to assess associations between choice of the vegetable-based dish and its potential associated factors (adherence to the Mediterranean diet, food neophobia, attitudes towards nudging for vegetables, the food choice questionnaire, the human values scale, social norms, self-estimated health, country, gender, and allocation to the control or intervention group). All analyses were run in SPSS 22.0.
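
To make the analysis pipeline concrete, here is a minimal sketch of the two tests described above. The paper reports using SPSS 22.0; the Python code, data values, and column names below are illustrative stand-ins, not the study’s actual dataset or syntax.

```python
# Minimal sketch of the reported analysis on hypothetical data (not the study's dataset).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 360  # sample size reported in the abstract

# Hypothetical individual-level records with stand-in values.
df = pd.DataFrame({
    "group": rng.choice(["control", "intervention"], size=n),
    "dish": rng.choice(["meat", "fish", "vegetable"], size=n),
    "natural_dim": rng.normal(3.5, 0.8, size=n),     # "natural" dimension of the food choice questionnaire
    "social_norms": rng.normal(3.5, 0.8, size=n),
    "nudge_attitude": rng.normal(3.5, 0.8, size=n),
    "male": rng.integers(0, 2, size=n),
})
df["chose_veg"] = (df["dish"] == "vegetable").astype(int)

# Pearson chi-square: dish choice (meat/fish/vegetable) by group.
table = pd.crosstab(df["group"], df["dish"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Multivariate logistic regression on choosing the vegetable-based dish.
fit = smf.logit(
    "chose_veg ~ group + natural_dim + social_norms + nudge_attitude + male",
    data=df,
).fit(disp=False)
print(fit.summary())
```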

The nudging strategy (dish of the day) did not make a difference in the choice of the vegetable-based option among the adolescents tested (p = 0.80 for Denmark and France; p = 0.69 and p = 0.53 for Italy and the UK, respectively). However, the “natural” dimension of the food choice questionnaire, social norms, and attitudes towards vegetable nudging were all positively associated with the choice of the vegetable-based dish. Being male was negatively associated with choosing the vegetable-based dish.

The “dish of the day” strategy did not work under the study conditions. Choice of the vegetable-based dish was predicted by the “natural” dimension, social norms, gender, and attitudes towards vegetable nudging. An understanding of the factors related to choosing vegetable-based dishes is necessary for the development and implementation of public policy interventions aiming to increase the consumption of vegetables among adolescents….(More)”

Show me the Data! A Systematic Mapping on Open Government Data Visualization


Paper by André Eberhardt and Milene Selbach Silveira: “Over the past several years, many government organizations have adopted Open Government Data policies to make their data publicly available. Although governments have had success in publishing their data, the availability of datasets is not enough for people to make use of them, due to a lack of technical expertise such as programming skills and knowledge of data management. In this scenario, visualization techniques can be applied to Open Government Data to help address this problem.

In this sense, we analyzed previously published papers related to Open Government Data visualization in order to provide an overview of how visualization techniques are being applied to Open Government Data and of the most common challenges when dealing with it. A systematic mapping study was conducted to survey the papers published in this area. The study found 775 papers and, after applying all inclusion and exclusion criteria, 32 papers were selected. Among other results, we found that datasets related to transportation are the ones most used, and that maps are the most used visualization technique. Finally, we report that data quality is the main challenge reported by studies that applied visualization techniques to Open Government Data…(More)”.
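
To make the mapping’s most common pattern concrete (a transport dataset rendered as a map), here is a hedged sketch; the folium library, file name, and column names are assumptions chosen for illustration, not tools or data named in the paper.

```python
# Illustrative sketch: plot stops from an open transport dataset on a web map.
# "stops.csv" and its columns (stop_name, stop_lat, stop_lon) are hypothetical.
import pandas as pd
import folium

stops = pd.read_csv("stops.csv")  # e.g. a GTFS-style stops file from an open data portal

# Center the map on the mean coordinates and add one marker per stop.
m = folium.Map(
    location=[stops["stop_lat"].mean(), stops["stop_lon"].mean()],
    zoom_start=12,
)
for _, row in stops.iterrows():
    folium.CircleMarker(
        location=[row["stop_lat"], row["stop_lon"]],
        radius=3,
        tooltip=row["stop_name"],
    ).add_to(m)

m.save("stops_map.html")  # interactive map viewable in a browser
```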

Urban Computing


Book by Yu Zheng: “…Urban computing brings powerful computational techniques to bear on such urban challenges as pollution, energy consumption, and traffic congestion. Using today’s large-scale computing infrastructure and data gathered from sensing technologies, urban computing combines computer science with urban planning, transportation, environmental science, sociology, and other areas of urban studies, tackling specific problems with concrete methodologies in a data-centric computing framework. This authoritative treatment of urban computing offers an overview of the field, fundamental techniques, advanced models, and novel applications.

Each chapter acts as a tutorial that introduces readers to an important aspect of urban computing, with references to relevant research. The book outlines key concepts, sources of data, and typical applications; describes four paradigms of urban sensing in sensor-centric and human-centric categories; introduces data management for spatial and spatio-temporal data, from basic indexing and retrieval algorithms to cloud computing platforms; and covers introductory and advanced topics in mining knowledge from urban big data, beginning with fundamental data mining algorithms and progressing to advanced machine learning techniques. Urban Computing provides students, researchers, and application developers with an essential handbook to an evolving interdisciplinary field….(More)”
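
To give a flavor of the “basic indexing and retrieval algorithms” mentioned above, here is a toy uniform-grid index for point data with a range query. It is an illustrative sketch only, not code from the book.

```python
# Toy uniform-grid spatial index: bucket points by cell, answer rectangle queries.
from collections import defaultdict

class GridIndex:
    def __init__(self, cell_size: float):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # (cell_x, cell_y) -> [(x, y, payload), ...]

    def _cell(self, x: float, y: float):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, x: float, y: float, payload):
        self.cells[self._cell(x, y)].append((x, y, payload))

    def range_query(self, xmin, ymin, xmax, ymax):
        """Return payloads of all points inside the axis-aligned rectangle."""
        cx0, cy0 = self._cell(xmin, ymin)
        cx1, cy1 = self._cell(xmax, ymax)
        hits = []
        for cx in range(cx0, cx1 + 1):          # visit only the overlapping cells
            for cy in range(cy0, cy1 + 1):
                for x, y, payload in self.cells.get((cx, cy), []):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(payload)
        return hits

# Example: index taxi pick-up points, then retrieve those in a small region.
index = GridIndex(cell_size=0.01)  # cells of ~0.01 degrees, for illustration
index.insert(116.40, 39.90, "trip-1")
index.insert(116.42, 39.91, "trip-2")
index.insert(116.60, 39.99, "trip-3")
print(index.range_query(116.39, 39.89, 116.43, 39.92))  # expected: ['trip-1', 'trip-2']
```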

This is how AI bias really happens—and why it’s so hard to fix


Karen Hao at MIT Technology Review: “Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.

But it’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.

How AI bias happens

We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process. For the purposes of this discussion, we’ll focus on three key stages.


Framing the problem. The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer’s creditworthiness, but “creditworthiness” is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that “those decisions are made for various business reasons other than fairness or discrimination,” explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn’t the company’s intention.
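
As a hypothetical illustration of the framing step, the sketch below derives two different training labels from the same (invented) loan history, one framing “creditworthy” as repayment and the other as profitability. All names and figures are made up.

```python
# Same loan history, two framings of "creditworthiness" -> two different labels.
import pandas as pd

loans = pd.DataFrame({
    "customer":       ["A", "B", "C", "D"],
    "repaid_in_full": [True, False, True, False],
    "interest_paid":  [900.0, 2400.0, 300.0, 150.0],  # interest and fees collected
    "principal_lost": [0.0, 1200.0, 0.0, 800.0],      # written off on default
})

# Framing 1: creditworthy = repaid the loan.
loans["label_repayment"] = loans["repaid_in_full"].astype(int)

# Framing 2: creditworthy = profitable to the lender, even if the loan defaulted.
loans["label_profit"] = (loans["interest_paid"] > loans["principal_lost"]).astype(int)

print(loans[["customer", "label_repayment", "label_profit"]])
# Customer B defaults yet counts as "creditworthy" under the profit framing:
# a model trained on label_profit can learn to favor loans that collect fees
# before default, i.e. the predatory pattern described above.
```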

Collecting the data. There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.
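
A hypothetical simulation of the first failure mode: a simple classifier trained on a sample dominated by one group, then evaluated separately on each group. The data are synthetic; the point is only the pattern of per-group accuracy.

```python
# Synthetic illustration: unrepresentative training data -> unequal error rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, group):
    """Toy data in which the informative feature differs by group."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int) if group == 0 else (X[:, 1] > 0).astype(int)
    return X, y

# Training set dominated by group 0 (analogous to far more light-skinned face photos).
X0, y0 = sample(950, group=0)
X1, y1 = sample(50, group=1)
model = LogisticRegression().fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

# Evaluate on balanced held-out sets, one per group.
for g in (0, 1):
    Xt, yt = sample(2000, group=g)
    print(f"group {g} accuracy: {model.score(Xt, yt):.2f}")
# Expected pattern: high accuracy for the well-represented group,
# close to chance for the underrepresented one.
```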

Preparing the data. Finally, it is possible to introduce bias during the data preparation stage, which involves selecting which attributes you want the algorithm to consider. (This is not to be confused with the problem-framing stage. You can use the same attributes to train a model for very different goals or use very different attributes to train a model for the same goal.) In the case of modeling creditworthiness, an “attribute” could be the customer’s age, income, or number of paid-off loans. In the case of Amazon’s recruiting tool, an “attribute” could be the candidate’s gender, education level, or years of experience. This is what people often call the “art” of deep learning: choosing which attributes to consider or ignore can significantly influence your model’s prediction accuracy. But while its impact on accuracy is easy to measure, its impact on the model’s bias is not.
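
A hypothetical sketch of that trade-off: on synthetic data whose historical labels encode a hiring bias, keeping a gender attribute scores better against those labels but produces a large gap in who gets a positive prediction, while dropping it closes the gap at some cost in measured accuracy. The data and attribute names are invented.

```python
# Synthetic illustration: attribute choice moves accuracy and group disparity differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

skill = rng.normal(size=n)         # genuinely job-relevant attribute
male = rng.integers(0, 2, size=n)  # independent of skill in this toy example
noise = rng.normal(scale=0.5, size=n)
# Historical hiring decisions favored men regardless of skill.
hired = (skill + 1.5 * male + noise > 1.0).astype(int)

feature_sets = {
    "with gender attribute": np.column_stack([skill, male]),
    "without gender attribute": skill.reshape(-1, 1),
}
for name, X in feature_sets.items():
    model = LogisticRegression().fit(X, hired)
    pred = model.predict(X)
    accuracy = (pred == hired).mean()
    gap = pred[male == 1].mean() - pred[male == 0].mean()  # selection-rate gap
    print(f"{name}: accuracy = {accuracy:.2f}, male-female selection gap = {gap:+.2f}")
# The model with the attribute looks better on accuracy alone while selecting men
# far more often; a team tracking only accuracy would prefer it. In real data,
# other attributes often proxy for the removed one, which is part of why
# dropping a column rarely fixes the bias.
```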

Why AI bias is hard to fix

Given that context, some of the challenges of mitigating bias may already be apparent to you. Here we highlight four main ones….(More)”

Fact-Based Policy: How Do State and Local Governments Accomplish It?


Report and Proposal by Justine Hastings: “Fact-based policy is essential to making government more effective and more efficient, and many states could benefit from more extensive use of data and evidence when making policy. Private companies have taken advantage of declining computing costs and vast data resources to solve problems in a fact-based way, but state and local governments have not made as much progress….

Drawing on her experience in Rhode Island, Hastings proposes that states build secure, comprehensive, integrated databases, and that they transform those databases into data lakes that are optimized for developing insights. Policymakers can then use the insights from this work to sharpen policy goals, create policy solutions, and measure progress against those goals. Policymakers, computer scientists, engineers, and economists will work together to build the data lake and analyze the data to generate policy insights….(More)”.
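
As a rough, hypothetical sketch of the integration step (file names, column names, and the hashing scheme are assumptions for illustration, not Rhode Island’s actual design), linking de-identified extracts from two agencies might look like this:

```python
# Illustrative sketch: link two agency extracts on a hashed identifier and
# answer a simple cross-program question. All names and schemas are hypothetical.
import hashlib
import pandas as pd

def hash_id(raw_id: str, salt: str = "agency-shared-salt") -> str:
    """Replace a direct identifier with a salted hash before data leave the agency."""
    return hashlib.sha256((salt + raw_id).encode()).hexdigest()

# Extracts prepared by each agency, identifiers already hashed with hash_id().
snap = pd.read_csv("dhs_snap_enrollment.csv")   # columns: person_hash, enrolled_month
wages = pd.read_csv("dlt_quarterly_wages.csv")  # columns: person_hash, quarter, wages

# Integrated view: benefit receipt alongside reported earnings for the same person.
linked = snap.merge(wages, on="person_hash", how="left")

# Example policy question: what share of enrollees had any reported wages?
share_working = linked.groupby("person_hash")["wages"].sum().gt(0).mean()
print(f"Share of enrollees with reported wages: {share_working:.1%}")
```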

Bureaucracy vs. Democracy


Philip Howard in The American Interest: “…For 50 years since the 1960s, modern government has been rebuilt on what I call the “philosophy of correctness.” The person making the decision must be able to demonstrate its correctness by compliance with a precise rule or metric, or by objective evidence in a trial-type proceeding. All day long, Americans are trained to ask themselves, “Can I prove that what I’m about to do is legally correct?”

In the age of individual rights, no one talks about the rights of institutions. But the disempowerment of institutional authority in the name of individual rights has led, ironically, to the disempowerment of individuals at every level of responsibility. Instead of striding confidently toward their goals, Americans tiptoe through legal minefields. In virtually every area of social interaction—schools, healthcare, business, public agencies, public works, entrepreneurship, personal services, community activities, nonprofit organizations, churches and synagogues, candor in the workplace, children’s play, speech on campus, and more—studies and reports confirm all the ways that sensible choices are prevented, delayed, or skewed by overbearing regulation, by an overemphasis on objective metrics, or by legal fear of violating someone’s alleged rights.

A Three-Part Indictment of Modern Bureaucracy

Reformers have promised to rein in bureaucracy for 40 years, and it’s only gotten more tangled. Public anger at government has escalated at the same time, and particularly in the past decade.  While there’s a natural reluctance to abandon a bureaucratic structure that is well-intended, public anger is unlikely to be mollified until there is change, and populist solutions do not bode well for the future of democracy.  Overhauling operating structures to permit practical governing choices would re-energize democracy as well as relieve the pressures Americans feel from Big Brother breathing down their necks.

Viewed in hindsight, the operating premise of modern bureaucracy was utopian and designed to fail. Here’s the three-part indictment of why we should abandon it.

1. The Economic Dysfunction of Modern Bureaucracy

Regulatory programs are indisputably wasteful, and frequently extract costs that exceed benefits. The total cost of compliance is high, about $2 trillion for federal regulation alone….

2. Bureaucracy Causes Cognitive Overload

The complex tangle of bureaucratic rules impairs a human’s ability to focus on the actual problem at hand. The phenomenon of the unhelpful bureaucrat, famously depicted in fiction by Dickens, Balzac, Kafka, Gogol, Heller, and others, has generally been characterized as a cultural flaw of the bureaucratic personality. But studies of cognitive overload suggest that the real problem is that people who are thinking about rules actually have diminished capacity to think about solving problems. This overload not only impedes drawing on what Daniel Kahneman calls “system 2” thinking (questioning assumptions and reflecting on long-term implications); it also impedes access to what he calls “system 1” thinking (drawing on instincts and heuristics to make intuitive judgments)….

3. Bureaucracy Subverts the Rule of Law

The purpose of law is to enhance freedom. By prohibiting bad conduct, such as crime or pollution, law liberates each of us to focus our energies on accomplishment instead of self-protection. Societies that protect property rights and the sanctity of contracts enjoy far greater economic opportunity and output than those that do not enforce the rule of law….(More)”.

Institutions as Social Theory


Blog post by Titus Alexander: “The natural sciences comprise a set of institutions and methods designed to improve our understanding of the physical world. One of the most powerful things science does is to produce theories – models of reality – that are used by others to change the world. The benefits of using science are so great that societies have created many channels to develop and use research to improve the human condition.

Social scientists also seek to improve the human condition. However, the channels from research to application are often weak, and most social research is buried in academic papers and books. Some will inform policy via think tanks, civil servants, or pressure groups, but practitioners and politicians often prefer their own judgement and prejudices, using research only when it suits them. But a working example – the institution as the method – has more influence than a research paper. The evidence is tangible, like an experiment in natural science, and includes all the complexities of real life. It demonstrates its reliability over time and provides proof of what works.

Reflexivity is key to social science

In the physical sciences the investigator is separate from the subject of investigation and she or he has no influence on what they observe. Generally, theories in the human sciences cannot provide this kind of detached explanation, because societies are reflexive. When we study human behaviour we also influence it. People change what they do in response to being studied. They use theories to change their own behaviour or the behaviour of others. Many scholars and practitioners have explored reflexivity, including Albert Bandura, Pierre Bourdieu, and the financier George Soros. Anthony Giddens called it the ‘double hermeneutic’.

The fact that society is reflexive is the key to effective social science. Like scientists, societies create systematic detachment to increase objectivity in decision-making, through advisers, boards, regulators, opinion polls and so on. Peer-reviewed social science research is a form of detachment, but it is often so detached as to be irrelevant….(More)”.

Hundreds of Bounty Hunters Had Access to AT&T, T-Mobile, and Sprint Customer Location Data for Years


Joseph Cox at Motherboard: “In January, Motherboard revealed that AT&T, T-Mobile, and Sprint were selling their customers’ real-time location data, which trickled down through a complex network of companies until eventually ending up in the hands of at least one bounty hunter. Motherboard was also able to purchase the real-time location of a T-Mobile phone on the black market from a bounty hunter source for $300. In response, telecom companies said that this abuse was a fringe case.

In reality, it was far from an isolated incident.

Around 250 bounty hunters and related businesses had access to AT&T, T-Mobile, and Sprint customer location data, with one bail bond firm using the phone location service more than 18,000 times, and others using it thousands or tens of thousands of times, according to internal documents obtained by Motherboard from a company called CerCareOne, a now-defunct location data seller that operated until 2017. The documents list not only the companies that had access to the data, but specific phone numbers that were pinged by those companies.

In some cases, the data sold was more sensitive than that offered by the service used by Motherboard last month, which estimated a location based on the cell phone towers the phone connected to. CerCareOne sold cell phone tower data, but it also sold highly sensitive and accurate GPS data to bounty hunters, an unprecedented move that meant users could locate someone accurately enough to see where they were inside a building. This company operated in near-total secrecy for over five years by making its customers agree to “keep the existence of CerCareOne.com confidential,” according to a terms of use document obtained by Motherboard.

Some of these bounty hunters then resold location data to those unauthorized to handle it, according to two independent sources familiar with CerCareOne’s operations.

The news shows how widely available Americans’ sensitive location data was to bounty hunters. This ease of access dramatically increased the risk of abuse….(More)”.

Artificial Intelligence and National Security


Report by the Congressional Research Service: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and the militaries of other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semi-autonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI.

In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and better-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics. Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges.

AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations.

While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.