Behavior Change for Good Initiative


“At the Behavior Change for Good Initiative, we know that solving the mystery of enduring behavior change offers an enormous opportunity to improve lives. We unite an interdisciplinary team of scientists with leading practitioners in education, healthcare, and consumer financial services, all of whom seek to address the question: How can we make behavior change stick?…

We are developing an interactive digital platform to improve daily decisions about health, education, and savings. For the first time, a world-class team of scientific experts will be able to continually test and improve a behavior change program by seamlessly incorporating the latest insights from their research into massive random-assignment experiments. This interactive digital platform seeks to improve the daily health, education, and savings decisions of millions…(More)”.

Friends with Academic Benefits


The new or interesting story isn’t just that Valerie, Betsy, and Steve’s friends had different social and academic impacts, but that they had various types of friendship networks. My research points to the importance of network structure—that is, the relationships among their friends—for college students’ success. Different network structures result from students’ experiences—such as race- and class-based marginalization on this predominantly White campus—and shape students’ experiences by helping or hindering them academically and socially.

I used social network techniques to analyze the friendship networks of 67 MU students and found they clumped into three distinctive types—tight-knitters, compartmentalizers, and samplers. Tight-knitters have one densely woven friendship group in which nearly all their friends are friends with one another. Compartmentalizers’ friends form two to four clusters, where friends know each other within clusters but rarely across them. And samplers make a friend or two from a variety of places, but the friends remain unconnected to each other. As shown in the figures, tight-knitters’ networks resemble a ball of yarn, compartmentalizers’ a bow-tie, and samplers’ a daisy. In these network maps, the person I interviewed is at the center and every other dot represents a friend, with lines representing connections among friends (that is, whether the person I interviewed believed that the two people knew each other). During the interviews, participants defined what friendship meant to them and listed as many friends as they liked (ranging from three to 45).
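
The classification logic is simple enough to sketch in code. Below is a minimal, illustrative sketch of sorting ego networks into the three types described above, using the networkx library and arbitrary density cut-offs; both the library choice and the thresholds are assumptions for illustration, not details from the study itself.

```python
import networkx as nx

def classify_ego_network(friends, ties):
    """friends: list of friend names; ties: pairs of friends the interviewee
    believes know each other (the ego is excluded, as in the network maps)."""
    g = nx.Graph()
    g.add_nodes_from(friends)
    g.add_edges_from(ties)

    density = nx.density(g)  # share of possible friend-to-friend ties present
    # Clusters = groups of at least two friends who know each other.
    clusters = [c for c in nx.connected_components(g) if len(c) > 1]

    if density >= 0.5 and len(clusters) <= 1:
        return "tight-knitter"      # one densely woven group (ball of yarn)
    if 2 <= len(clusters) <= 4:
        return "compartmentalizer"  # a few internally connected clusters (bow-tie)
    return "sampler"                # friends largely unconnected (daisy)

# Example: five friends, none of whom know one another -> "sampler"
print(classify_ego_network(["a", "b", "c", "d", "e"], []))
```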

The students’ friendship network types influenced how friends mattered for their academic and social successes and failures. Like Valerie, most Black and Latina/o students were tight-knitters. Their dense friendship networks provided a sense of home as a minority on a predominantly White campus. Tight-knit networks could provide academic support and motivation (as they did for Valerie) or pull students down academically if their friends lacked academic skills and motivation. Most White students were compartmentalizers like Betsy, and they succeeded with moderate levels of social support from friends and with social support and academic support from different clusters. Samplers came from a range of class and race backgrounds. Like Steve, samplers typically succeeded academically without relying on their friends. Friends were fun people who neither helped nor hurt them academically. Socially, however, samplers reported feeling lonely and lacking social support….(More)”.

How open contracting helped fix Colombia’s biggest school meal program


Open Contracting Stories: “In the early hours of the morning, in an industrial area of Colombia’s capital, Bogotá, a warehouse hums with workers, their faces barely visible under white masks and hair nets. The walls are stacked with colored plastic crates. Filled with various fruit, cereals, drinks, and desserts, they will be packed into refrigerated trucks and delivered to public schools all over Bogotá before most children have settled in for their first classes. A similar operation is underway in five other warehouses across the city, as part of a $170 million program to ensure fresh, nutritious food reaches more than 800,000 hungry students between the ages of four and 18 every day.

Food delivery and its quality were not always so streamlined. High poverty rates in the city mean that many children consume their main meal of the day at school. And getting those refreshments to schools at over 700 locations each day is a huge logistical challenge. With a population of nearly nine million, Bogotá is one of the largest cities in Latin America and one of the most traffic-congested cities in the world.

Then there’s the notorious inefficiency and corruption in the provision of school meals across Colombia. Suppliers throughout the country have regularly been accused of failing to deliver food or inflating prices in scandals that made national headlines. In the city of Cartagena, chicken breasts sent to schools cost four times as much as those at markets, and 30 million meals reportedly never reached the children. In the Amazonas region, an investigation by the Comptroller General found the price of a food contract was inflated by more than 297 million pesos (US$100,000), including pasta purchased at more than three times the market rate….

The solution, based on the pilot and these conversations, was to divide the process in two to cut out the middlemen and reduce transaction costs. The first part was sourcing the food. The second was to organize the assembly and distribution of the snacks to every school.

Suppliers are now selected through a tender for a framework agreement that sets the general conditions and price caps, while quantities and final prices are established when a purchase is needed.

“In a normal contract, we say, for example, ‘you will give me five apples and they will cost 100.’ In a framework agreement, we say ‘you will provide me apples for one year at a maximum price of X’, and each time we put up a purchase order, we have several suppliers and capped prices. So they bid on purchase orders when needed,” explains Penagos.

Each food item has several suppliers under this new framework agreement. So if one supplier can’t fulfill the purchase order or has a logistical issue, another supplier can take over. This prevents a situation where suppliers have so much bargaining power that they can set their own prices and conditions knowing that the administration can’t refuse because it would mean the children don’t receive the food.
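
Here is a minimal sketch of how a purchase order might be awarded under such a framework agreement: several pre-approved suppliers, a capped unit price, and bids collected only when a purchase is needed. The supplier names, prices, and selection rule below are illustrative assumptions, not details of Bogotá’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    supplier: str
    unit_price: float   # offered price per unit
    can_fulfill: bool   # whether the supplier can deliver this order

def award_purchase_order(bids, price_cap):
    """Return the cheapest valid bid at or below the framework's price cap,
    skipping suppliers that cannot fulfill the order."""
    valid = [b for b in bids if b.can_fulfill and b.unit_price <= price_cap]
    if not valid:
        return None  # no supplier met the conditions; re-tender or escalate
    return min(valid, key=lambda b: b.unit_price)

bids = [
    Bid("Supplier A", unit_price=0.95, can_fulfill=False),  # logistical issue
    Bid("Supplier B", unit_price=0.98, can_fulfill=True),
    Bid("Supplier C", unit_price=1.10, can_fulfill=True),   # above the cap
]
winner = award_purchase_order(bids, price_cap=1.00)
print(winner.supplier if winner else "no award")  # -> Supplier B
```

Because each item has several suppliers under the same agreement, the fallback in this sketch is just the next cheapest valid bid, which mirrors the bargaining-power point made above.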

The purchase orders are filled each month on the government’s online marketplace, with the details of the order published for the public to see which supplier won…

Sharing information with the public, parents and potential suppliers was an important part of the plan, too. Details about how the meals were procured became available on a public online platform for all to see, in a way that was easy to understand.

Through a public awareness campaign, Angulo, the education secretary, told the public about the faults in the market that the secretariat had detected. They had changed the process of public contracting to be more transparent….(More)”.

Behavioral Economics: Are Nudges Cost-Effective?


Carla Fried at UCLA Anderson Review: “Behavioral science does not suffer from a lack of academic focus. A Google Scholar search for the term delivers more than three million results.

While there is an abundance of research into how human nature can muck up our decision-making process and the potential for well-placed nudges to help guide us to better outcomes, the field has kept rather mum on a basic question: Are behavioral nudges cost-effective?

That’s an ever more salient question as the art of the nudge is increasingly being woven into public policy initiatives. In 2009, the Obama administration set up a nudge unit within the White House Office of Information and Regulatory Affairs, and a year later the U.K. government launched its own unit. Harvard’s Cass Sunstein, co-author of the book Nudge, headed the U.S. effort. His co-author, the University of Chicago’s Richard Thaler — who won the 2017 Nobel Prize in Economics — helped develop the U.K.’s Behavioural Insights Team. Nudge units are now humming away in other countries, including Germany and Singapore, as well as at the World Bank, various United Nations agencies and the Organisation for Economic Co-operation and Development (OECD).

Given the interest in the potential for behavioral science to improve public policy outcomes, a team of nine experts, including UCLA Anderson’s Shlomo Benartzi, Sunstein and Thaler, set out to explore the cost-effectiveness of behavioral nudges relative to more traditional forms of government interventions.

In addition to conducting their own experiments, the researchers looked at published research that addressed four areas where public policy initiatives aim to move the needle to improve individuals’ choices: saving for retirement, applying to college, energy conservation and flu vaccinations.

For each topic, they culled studies that focused on both nudge approaches and more traditional mandates such as tax breaks, education and financial incentives, and calculated cost-benefit estimates for both types of studies. Research used in this study was published between 2000 and 2015. All cost estimates were inflation-adjusted…
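
As a hypothetical illustration of the kind of cost-effectiveness comparison the researchers ran, consider impact achieved per dollar spent for a nudge versus a traditional incentive. The numbers below are invented for illustration and are not the study’s estimates.

```python
def impact_per_dollar(extra_outcomes, program_cost):
    """Outcomes gained (e.g., additional people enrolled) per dollar spent."""
    return extra_outcomes / program_cost

nudge = impact_per_dollar(extra_outcomes=100, program_cost=1_000)        # e.g., a mailed reminder
incentive = impact_per_dollar(extra_outcomes=400, program_cost=100_000)  # e.g., a financial subsidy

print(f"nudge:     {nudge:.3f} enrollments per dollar")      # 0.100
print(f"incentive: {incentive:.3f} enrollments per dollar")  # 0.004
```

In this stylized example the incentive produces more enrollments in absolute terms, but the nudge delivers far more per dollar, which is the kind of return-on-investment comparison the study reports.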

The study itself should serve as a nudge for governments to consider adding nudging to their policy toolkits, as this approach consistently delivered a high return on investment, relative to traditional mandates and policies….(More)”.

Replicating the Justice Data Lab in the USA: Key Considerations


Blog by Tracey Gyateng and Tris Lumley: “Since 2011, NPC has researched, supported and advocated for the development of impact-focussed Data Labs in the UK. The goal has been to unlock government administrative data so that organisations (primarily nonprofits) who provide a social service can understand the impact of their services on the people who use them.

So far, one of these Data Labs has been developed to measure reoffending outcomes (the Justice Data Lab), and others are currently being piloted for employment and education. Given our seven years of work in this area, we at NPC have decided to reflect on the key factors needed to create a Data Lab with our report: How to Create an Impact Data Lab. This blog outlines these factors, examines whether they are present in the USA, and asks what the next steps should be — drawing on the research undertaken with the Governance Lab….Below we examine the key factors and to what extent they appear to be present within the USA.

Environment: A broad culture that supports impact measurement. Similar to the UK, nonprofits in the USA are increasingly measuring the impact they have had on the participants of their service and sharing the difficulties of undertaking robust, high quality evaluations.

Data: Individual person-level administrative data. A key difference between the two countries is that, in the USA, personal data on social services tends to be held at a local, rather than central, level. In the UK, social services data such as reoffending, education and employment records are collated into central databases. In the USA, the federal government holds limited centrally collated personal data; instead, this data can be found at state/city level….

A leading advocate: A Data Lab project team, and strong networks. Data Labs do not manifest by themselves. They require a lead agency to campaign with, and on behalf of, nonprofits to set out a persuasive case for their development. In the USA, we have developed a partnership with the Governance Lab to seek out opportunities where Data Labs can be established, but given the size of the country, there is scope for further collaborations and/or advocates to be identified and supported.

Customers: Identifiable organisations that would use the Data Lab. Initial discussions with several US nonprofits and academics indicate support for a Data Lab in their context. Broad consultation based on an agreed region and outcome(s) will be needed to fully assess the potential customer base.

Data owners: Engaged civil servants. Generating buy-in and persuading various stakeholders including data owners, analysts and politicians is a critical part of setting up a data lab. While the exact profiles of the right people to approach can only be assessed once a region and outcome(s) of interest have been chosen, there are encouraging signs, such as the passing of the Foundations for Evidence-Based Policymaking Act of 2017 in the House of Representatives, which, among other things, mandates the appointment of “Chief Evaluation Officers” in government departments, suggesting that there is bipartisan support for increased data-driven policy evaluation.

Legal and ethical governance: A legal framework for sharing data. In the UK, all personal data is subject to data protection legislation, which provides standardised governance for how personal data can be processed across the country and within the European Union. A universal data protection framework does not exist within the USA, therefore data sharing agreements between customers and government data-owners will need to be designed for the purposes of Data Labs, unless there are existing agreements that enable data sharing for research purposes. This will need to be investigated at the state/city level of a desired Data Lab.

Funding: Resource and support for driving the set-up of the Data Lab. Most of our policy lab case studies were funded by a mixture of philanthropy and government grants. It is expected that a similar mixed funding model will need to be created to establish Data Labs. One alternative is the model adopted by the Washington State Institute for Public Policy (WSIPP), which was created by the Washington State Legislature and is funded on a project basis, primarily by the state. Additional funding will be needed to enable advocates of a Data Lab to campaign for the service….(More)”.

How Democracy Can Survive Big Data


Colin Koopman in The New York Times: “…The challenge of designing ethics into data technologies is formidable. This is in part because it requires overcoming a century-long ethos of data science: Develop first, question later. Datafication first, regulation afterward. A glimpse at the history of data science shows as much.

The techniques that Cambridge Analytica uses to produce its psychometric profiles are the cutting edge of data-driven methodologies first devised 100 years ago. The science of personality research was born in 1917. That year, in the midst of America’s fevered entry into war, Robert Sessions Woodworth of Columbia University created the Personal Data Sheet, a questionnaire that promised to assess the personalities of Army recruits. The war ended before Woodworth’s psychological instrument was ready for deployment, but the Army had envisioned its use according to the precedent set by the intelligence tests it had been administering to new recruits under the direction of Robert Yerkes, a professor of psychology at Harvard at the time. The data these tests could produce would help decide who should go to the fronts, who was fit to lead and who should stay well behind the lines.

The stakes of those wartime decisions were particularly stark, but the aftermath of those psychometric instruments is even more unsettling. As the century progressed, such tests — I.Q. tests, college placement exams, predictive behavioral assessments — would affect the lives of millions of Americans. Schoolchildren who may have once or twice acted out in such a way as to prompt a psychometric evaluation could find themselves labeled, setting them on an inescapable track through the education system.

Researchers like Woodworth and Yerkes (or their Stanford colleague Lewis Terman, who formalized the first SAT) did not anticipate the deep consequences of their work; they were too busy pursuing the great intellectual challenges of their day, much like Mr. Zuckerberg in his pursuit of the next great social media platform. Or like Cambridge Analytica’s Christopher Wylie, the twentysomething data scientist who helped build psychometric profiles of two-thirds of all Americans by leveraging personal information gained through uninformed consent. All of these researchers were, quite understandably, obsessed with the great data science challenges of their generation. Their failure to consider the consequences of their pursuits, however, is not so much their fault as it is our collective failing.

For the past 100 years we have been chasing visions of data with a singular passion. Many of the best minds of each new generation have devoted themselves to delivering on the inspired data science promises of their day: intelligence testing, building the computer, cracking the genetic code, creating the internet, and now this. We have in the course of a single century built an entire society, economy and culture that runs on information. Yet we have hardly begun to engineer data ethics appropriate for our extraordinary information carnival. If we do not do so soon, data will drive democracy, and we may well lose our chance to do anything about it….(More)”.

Psychographics: the behavioural analysis that helped Cambridge Analytica know voters’ minds


Michael Wade at The Conversation: “Much of the discussion has been on how Cambridge Analytica was able to obtain data on more than 50m Facebook users – and how it allegedly failed to delete this data when told to do so. But there is also the matter of what Cambridge Analytica actually did with the data. In fact the data crunching company’s approach represents a step change in how analytics can today be used as a tool to generate insights – and to exert influence.

For example, pollsters have long used segmentation to target particular groups of voters, such as through categorising audiences by gender, age, income, education and family size. Segments can also be created around political affiliation or purchase preferences. The data analytics machine that presidential candidate Hillary Clinton used in her 2016 campaign – named Ada after the 19th-century mathematician and early computing pioneer – used state-of-the-art segmentation techniques to target groups of eligible voters in the same way that Barack Obama had done four years previously.

Cambridge Analytica was contracted to the Trump campaign and provided an entirely new weapon for the election machine. While it also used demographic segments to identify groups of voters, as Clinton’s campaign had, Cambridge Analytica also segmented using psychographics. As definitions of class, education, employment, age and so on, demographics are informational. Psychographics are behavioural – a means to segment by personality.

This makes a lot of sense. It’s obvious that two people with the same demographic profile (for example, white, middle-aged, employed, married men) can have markedly different personalities and opinions. We also know that adapting a message to a person’s personality – whether they are open, introverted, argumentative, and so on – goes a long way to help getting that message across….

There have traditionally been two routes to ascertaining someone’s personality. You can either get to know them really well – usually over an extended time. Or you can get them to take a personality test and ask them to share it with you. Neither of these methods is realistically open to pollsters. Cambridge Analytica found a third way, with the assistance of two University of Cambridge academics.

The first, Aleksandr Kogan, sold them access to 270,000 personality tests completed by Facebook users through an online app he had created for research purposes. Providing the data to Cambridge Analytica was, it seems, against Facebook’s internal code of conduct, but only now in March 2018 has Kogan been banned by Facebook from the platform. Kogan’s data also came with a bonus: he had reportedly collected Facebook data from the test-takers’ friends – and, at an average of 200 friends per person, that added up to some 50m people.

However, these 50m people had not all taken personality tests. This is where the second Cambridge academic, Michal Kosinski, came in. Kosinski – who is said to believe that micro-targeting based on online data could strengthen democracy – had figured out a way to reverse engineer a personality profile from Facebook activity such as likes. Whether you choose to like pictures of sunsets, puppies or people apparently says a lot about your personality. So much, in fact, that on the basis of 300 likes, Kosinski’s model is able to predict someone’s personality profile with the same accuracy as a spouse….(More)”
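
For readers curious about the mechanics, here is a minimal sketch of the general approach described above: predicting a personality trait from a binary matrix of page likes with a plain linear model trained on synthetic data. It is an illustrative reconstruction under stated assumptions, not Kosinski’s actual pipeline, which the article only summarizes.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2_000, 300

# X[i, j] = 1 if user i liked page j; y = a self-reported trait score
# (e.g., openness) from a personality questionnaire. Both are synthetic here.
X = rng.integers(0, 2, size=(n_users, n_pages))
true_weights = rng.normal(0, 1, size=n_pages)
y = X @ true_weights + rng.normal(0, 1, size=n_users)

# Fit a regularized linear model on one subset of users, evaluate on another.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```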

Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life


Report by Jennifer Kavanagh and Michael D. Rich: “Over the past two decades, national political and civil discourse in the United States has been characterized by “Truth Decay,” defined as a set of four interrelated trends: an increasing disagreement about facts and analytical interpretations of facts and data; a blurring of the line between opinion and fact; an increase in the relative volume, and resulting influence, of opinion and personal experience over fact; and lowered trust in formerly respected sources of factual information. These trends have many causes, but this report focuses on four: characteristics of human cognitive processing, such as cognitive bias; changes in the information system, including social media and the 24-hour news cycle; competing demands on the education system that diminish time spent on media literacy and critical thinking; and polarization, both political and demographic. The most damaging consequences of Truth Decay include the erosion of civil discourse, political paralysis, alienation and disengagement of individuals from political and civic institutions, and uncertainty over national policy.

This report explores the causes and consequences of Truth Decay and how they are interrelated, and examines past eras of U.S. history to identify evidence of Truth Decay’s four trends and observe similarities with and differences from the current period. It also outlines a research agenda, a strategy for investigating the causes of Truth Decay and determining what can be done to address its causes and consequences….(More)”.

The Metric God That Failed


Jerry Muller in PS Long Reads: “Over the past few decades, formal institutions have increasingly been subjected to performance measurements that define success or failure according to narrow and arbitrary metrics. The outcome should have been predictable: institutions have done what they can to boost their performance metrics, often at the expense of performance itself.

…In 1986, the American management guru Tom Peters popularized the organizational theorist Mason Haire’s dictum that “What gets measured gets done,” and with it a credo of measured performance that I call “metric fixation.” In time, the devotees of measured performance would arrive at a naive article of faith that is nonetheless appealing for its mix of optimism and scientism: “Anything that can be measured can be improved.”

In the intervening decades, this faith-based conceit has developed into a dogma about the relationship between measurement and performance. Evangelists of “disruption” and “best practices” have carried the new gospel to ever more distant shores. If you work in health care, education, policing, or the civil service, you have probably been subjected to the policies and practices wrought by metric-centrism.

There are three tenets to the metrical canon. The first holds that it is both possible and desirable to replace judgment – acquired through personal experience and talent – with numerical indicators of comparative performance based on standardized data. Second, making such metrics public and transparent ensures that institutions are held accountable. And, third, the best way to motivate people within organizations is to attach monetary or reputational rewards and penalties to their measured performance….(More)”.

Technology Landscape for Digital Identification


World Bank Report: “Robust, inclusive, and responsible identification systems can increase access to finance, healthcare, education, and other critical services and benefits. Identification systems are also key to improving efficiency and enabling innovation for public- and private-sector services, such as greater efficiency in the delivery of social safety nets and facilitating the development of digital economies. However, the World Bank estimates that more than 1.1 billion individuals do not have official proof of their identity. New technologies provide countries with the opportunity to leapfrog paper-based systems and rapidly establish a robust identification infrastructure. As a result, countries are increasingly adopting nationwide digital identification (ID) programs and leveraging them in other sectors.

Whether a country is enhancing existing ID systems or implementing new systems from the ground up, technology choices are critical to the success of digital identification systems. A number of new technologies are emerging to enable various aspects of the ID lifecycle. For some of these technologies, no large-scale studies have been done; for others, current speculation makes objective evaluations difficult.

This report is a first attempt to develop a comprehensive overview of the current technology landscape for digital identification. It is intended to serve as a framework for understanding the myriad options and considerations of technology in this rapidly advancing agenda; it is in no way intended to provide advice on specific technologies, particularly given the many other considerations and country contexts that need to be taken into account. This report also does not advocate the use of a certain technology from a particular vendor for any particular application.

While some technologies are relatively easy to use and affordable, others are costly or so complex that using them on a large scale presents daunting challenges. This report provides practitioners with an overview of various technologies and advancements that are especially relevant for digital identification systems. It highlights key benefits and challenges associated with each technology. It also provides a framework for assessing each technology on multiple criteria, including the length of time it has been in use, its ease of integration with legacy and future systems, and its interoperability with other technologies. Practitioners and stakeholders are reminded that the technologies associated with ID systems are rapidly evolving, and that this report, prepared in early 2018, is a snapshot in time. Therefore, technology limitations and challenges highlighted in this report today may not be applicable in the years to come….(More)”
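
The report’s multi-criteria assessment lends itself to a simple scoring sketch. The criteria below mirror those named above (time in use, ease of integration, interoperability), but the technologies, scores, and weights are illustrative assumptions rather than the report’s actual ratings.

```python
# Weighted multi-criteria scoring for candidate ID technologies (illustrative only).
CRITERIA_WEIGHTS = {"maturity": 0.4, "integration": 0.3, "interoperability": 0.3}

technologies = {
    "fingerprint biometrics": {"maturity": 5, "integration": 4, "interoperability": 4},
    "iris biometrics":        {"maturity": 4, "integration": 3, "interoperability": 3},
    "mobile ID credentials":  {"maturity": 3, "integration": 4, "interoperability": 3},
}

def weighted_score(scores, weights):
    """Weighted average of 1-5 scores across the assessment criteria."""
    return sum(scores[c] * w for c, w in weights.items())

# Rank the candidates from highest to lowest weighted score.
for name, scores in sorted(technologies.items(),
                           key=lambda kv: weighted_score(kv[1], CRITERIA_WEIGHTS),
                           reverse=True):
    print(f"{name}: {weighted_score(scores, CRITERIA_WEIGHTS):.1f}")
```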