Seven routes to experimentation in policymaking: a guide to applied behavioural science methods


OECD Resource: “…offers guidelines and a visual roadmap to help policymakers choose the most fit-for-purpose evidence collection method for their specific policy challenge.”

Source: Elaboration of the authors: Varazzani, C., Emmerling, T., Brusoni, S., Fontanesi, L., and Tuomaila, H., (2023), “Seven routes to experimentation: A guide to applied behavioural science methods,” OECD Working Papers on Public Governance, OECD Publishing, Paris. Note: The authors elaborated the map based on a previous map ideated, researched, and designed by Laura Castro Soto, Judith Wagner, and Torben Emmerling (sevenroutes.com).

The seven applied behavioural science methods:

  • Randomised Controlled Trials (RCTs) are experiments that can demonstrate a causal relationship between an intervention and an outcome, by randomly assigning individuals to an intervention group and a control group.
  • A/B testing tests two or more manipulations (such as variants of a webpage) to assess which performs better in terms of a specific goal or metric.
  • Difference-in-Differences is a quasi-experimental method that estimates the causal effect of an intervention by comparing changes in outcomes between an intervention group and a control group before and after the intervention (see the illustrative sketch after this list, which also covers RCTs).
  • Before-After studies assess the impact of an intervention or event by comparing outcomes or measurements before and after its occurrence, without a control group.
  • Longitudinal studies collect data from the same individuals or groups over an extended period to assess trends over time.
  • Correlational studies help to investigate the relationship between two or more variables to determine if they vary together (without implying causation).
  • Qualitative studies explore the underlying meanings and nuances of a phenomenon through interviews, focus group sessions, or other exploratory methods based on conversations and observations…(More)”.
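
To make the first and third designs concrete, here is a minimal, hypothetical sketch in Python (ours, not from the OECD paper): it simulates data with a known intervention effect and recovers it once with a simple RCT comparison of group means and once with a difference-in-differences estimate. All numbers and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_effect = 2.0

# Randomised Controlled Trial: random assignment, then compare group means.
treated = rng.integers(0, 2, n).astype(bool)
outcome = 5.0 + true_effect * treated + rng.normal(0.0, 1.0, n)
rct_estimate = outcome[treated].mean() - outcome[~treated].mean()

# Difference-in-Differences: compare before/after changes across groups,
# which nets out both baseline differences and a shared time trend.
group = rng.integers(0, 2, n).astype(bool)        # intervention vs. control
baseline = 5.0 + 1.0 * group                      # groups differ at baseline
before = baseline + rng.normal(0.0, 1.0, n)
after = baseline + 0.5 + true_effect * group + rng.normal(0.0, 1.0, n)
did_estimate = ((after[group].mean() - before[group].mean())
                - (after[~group].mean() - before[~group].mean()))

print(f"RCT estimate of the effect: {rct_estimate:.2f}")   # close to 2.0
print(f"DiD estimate of the effect: {did_estimate:.2f}")   # close to 2.0
```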

Disaster preparedness: Will a “norm nudge” sink or swim?


Article by Jantsje Mol: “In these times of unprecedented climate change, one critical question persists: how do we motivate homeowners to protect their homes and loved ones from the ever-looming threat of flooding? This question led to a captivating behavioral science study, born from a research visit to the Wharton Risk Management and Decision Processes Center in 2019 (currently the Wharton Climate Center). Co-founded and co-directed by the late Howard Kunreuther, the Center has been at the forefront of understanding and mitigating the impact of natural disasters. In this study, we explored the potential of social norms to boost flood preparedness among homeowners. While the results may not align with initial expectations, they shed light on the complexities of human behavior, the significance of meticulous testing, and the enduring legacy of a visionary scholar.

The Power of Social Norms

Before we delve into the results, let’s take a moment to understand what social norms are and why they matter. Social norms dictate what is considered acceptable or expected in a given community. A popular behavioral intervention based on social norms is a norm-nudge: reading information about what others do (say, the energy-saving behavior of neighbors or the tax compliance rates of fellow citizens) may bring one’s own behavior closer to that norm. Norm-nudges are cheap, easy to implement and less prone to political resistance than traditional interventions such as taxes, but they might be ineffective or even backfire. Norm-nudges have been applied to health, finance and the environment, but not yet to the context of natural disaster risk-reduction…(More)”.

The Rapid Growth of Behavioral Science


Article by Steve Wendel: “It’s hard to miss the rapid growth of our field: into new sectors, into new countries, and into new collaborations with other fields. Over the years, I’ve sought to better understand that growth by collecting data about our field and sharing the results. A few weeks ago, I launched the most recent effort – a survey for behavioral science & behavioral design practitioners and one for behavioral researchers around the globe. Here, I’ll share a bit about what we’re seeing so far in the data, and ask for your help to spread it more widely.

First, our field has seen rapid growth since 2008 – which is, naturally, when Thaler and Sunstein’s Nudge first came out. The number of teams and practitioners in the space has grown more or less in tandem, though with a recent slowing in the creation of new teams since 2020. The most productive year was 2019, with 59 new teams starting; the subsequent three years have averaged 28 per year[1].

Behavioral science and design practitioners are also increasingly spread around the world. Just a few years ago, it was difficult to find practitioners outside of BeSci centers in the US, UK, and a few other countries. While we are still heavily concentrated in these areas, there are now active practitioners in 72 countries: from Paraguay to Senegal to Bhutan.

Figure 1: Where practitioners are located. Note – the live and interactive map is available on BehavioralTeams.com.

The majority of practitioners (52%) are in full-time behavioral science or behavioral design roles. The rest are working in other disciplines such as product design and marketing in which they aren’t dedicated to BeSci but have the opportunity to apply it in their work (38%). A minority of individuals have BeSci side jobs (9%).

Among respondents thus far, the most common challenges are making the case for behavioral science with senior leaders in their organizations (63%) and being able to measure the impact of their interventions (65%). Anecdotally, many practitioners in the field complain that they are asked for their recommendations on what to do, but aren’t given the opportunity to follow up and see whether those recommendations were implemented or, when implemented, were actually effective.

The survey asks many more questions about the experiences and backgrounds of practitioners, but we’re still gathering data and will release new results when we have them…(More)”.

The Benefits of Statistical Noise


Article by Ruth Schmidt: “The year was 1999. Chicago’s public housing was in distress, with neglect and gang activity hastening the decline of already depressed neighborhoods. In response, the city launched the Plan for Transformation to offer relief to residents and rejuvenate the city’s public housing system: residents would be temporarily relocated during demolition, after which the real estate would be repurposed for a mixed-income community. Once the building phase was completed, former residents were to receive vouchers to move back into their safer and less stigmatized old neighborhood.

But a billion dollars and over 20 years later, the jury is still out about the plan’s effectiveness and side effects. While many residents do now live in safer, more established communities, many had to move multiple times before settling, or remain in high-poverty, highly segregated neighborhoods. And the idealized notion of former residents as “moving on up” in a free market system rewarded those who knew how to play the game—like private real estate developers—over those with little practice. Some voices were drowned out.

Chicago’s Plan for Transformation shared the same challenges—cost, time, a diverse set of stakeholders—as many similar large-scale civic initiatives. But it also highlights another equally important issue that’s often hidden in plain sight: informational “noise.”

Noise, defined as extraneous data that intrudes on fair and consistent decision-making, is nearly uniformly considered a negative influence on judgment that can lead experts to reach variable findings in contexts as wide-ranging as medicine, public policy, court decisions, and insurance claims. In fact, Daniel Kahneman himself has suggested that for all the attention to bias, noise in decision-making may actually be an equal-opportunity contributor to irrational judgment.

Kahneman and his colleagues have used the metaphor of a target to explain how both noise and bias result in inaccurate judgments, failing to predictably hit the bull’s-eye in different ways. Where bias looks like a tight cluster of shots that all consistently miss the mark, the erratic judgments caused by noise look like a scattershot combination of precise hits and wild misses…(More)”.
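
The target metaphor can also be made quantitative. The short simulation below is our own illustration (not from the article): a “biased” judge whose estimates cluster tightly off-target and a “noisy” judge whose estimates scatter around the true value both end up with a large mean squared error, which decomposes into bias squared plus variance.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0
n = 10_000

# Biased but consistent judge: shots cluster tightly, but off the bull's-eye.
biased_judge = true_value + 15.0 + rng.normal(0.0, 2.0, n)
# Noisy but unbiased judge: shots centre on the bull's-eye, but scatter widely.
noisy_judge = true_value + rng.normal(0.0, 15.0, n)

for label, shots in [("biased judge", biased_judge), ("noisy judge", noisy_judge)]:
    bias = shots.mean() - true_value
    variance = shots.var()
    mse = ((shots - true_value) ** 2).mean()
    print(f"{label}: bias={bias:+.1f}, variance={variance:.1f}, "
          f"MSE={mse:.1f} (~ bias^2 + variance = {bias**2 + variance:.1f})")
```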

Harvard fraud claims fuel doubts over science of behaviour


Article by Andrew Hill and Andrew Jack: “Claims that fraudulent data was used in papers co-authored by a star Harvard Business School ethics expert have fuelled a growing controversy about the validity of behavioural science, whose findings are routinely taught in business schools and applied within companies.

While the professor has not yet responded to details of the claims, the episode is the latest blow to a field that has risen to prominence over the past 15 years and whose findings in areas such as decision-making and team-building are widely put into practice.

Companies from Coca-Cola to JPMorgan Chase have executives dedicated to behavioural science, while governments around the world have also embraced its findings. But well-known principles in the field such as “nudge theory” are now being called into question.

The Harvard episode “is topic number one in business school circles”, said André Spicer, executive dean of London’s Bayes Business School. “There has been a large-scale replication crisis in psychology — lots of the results can’t be reproduced and some of the underlying data has been found to be faked.”…

That cast a shadow over the use of behavioural science by government-linked “nudge units” such as the UK’s Behavioural Insights Team, which was spun off into a company in 2014, and the US Office of Evaluation Sciences.

However, David Halpern, now president of BIT, countered that publication bias is not unique to the field. He said he and his peers use far larger-scale, more representative and robust testing than academic research.

Halpern argued that behavioural research can help to effectively deploy government budgets. “The dirty secret of most governments and organisations is that they spend a lot of money, but have no idea if they are spending in ways that make things better.”

Academics point out that testing others’ results is part of normal scientific practice. The difference with behavioural science is that initial results that have not yet been replicated are often quickly recycled into sensational headlines, popular self-help books and business practice.

“Scientists should be better at pointing out when non-scientists over-exaggerate these things and extrapolate, but they are worried that if they do this they will ruin the positive trend [towards their field],” said Pelle Guldborg Hansen, chief executive of iNudgeyou, a centre for applied behavioural research.

Many consultancies have sprung up to cater to corporate demand for behavioural insights. “What I found was that almost anyone who had read Nudge had a licence to set up as a behavioural scientist,” said Nuala Walsh, who formed the Global Association of Applied Behavioural Scientists in 2020 to try to set some standards…(More)”.

How Leaders in Higher Education Can Embed Behavioral Science in Their Institutions


Essay by Ross E. O’Hara: “…Once we view student success through a behavioral science lens and see the complex systems underlying student decision making, it becomes clear that behavioral scientists work best not as mechanics who repair broken systems, but as engineers who design better systems. Higher education, therefore, needs to diffuse those engineers throughout the organization.

To that end, Hallsworth recommends that organizations change their view of behavioral science “from projects to processes, from commissions to culture.” Only when behavioral science expertise is diffused across units and incorporated into all key organizational functions can a college become behaviorally enabled. So how might higher education go about this transformation?

1. Leverage the faculty

Leaders with deep expertise in behavioral science are likely already employed in social and behavioral sciences departments. Consider ways to focus their energy inward to tackle institutional challenges, perhaps using their own classrooms or departments as testing grounds. As they find promising solutions, build the infrastructure to disseminate and implement those ideas college- and system-wide. Unlike higher education’s normal approach—giving faculty additional unpaid and underappreciated committee work—provide funding and recognition that incentivize faculty to make higher education policy an important piece of their academic portfolio.

2. Practice cross-functional training

I have spent the past several years providing colleges with behavioral science professional development, but too often this work is focused on a single functional unit, like academic advisors or faculty. Instead, create trainings that include representatives from across campus (e.g., enrollment; financial aid; registrar; student affairs). Not only will this diffuse behavioral science knowledge across the institution, but it will bring together the key players that impact student experience and make it easier for them to see the adaptive system that determines whether a student graduates or withdraws.

3. Let behavioral scientists be engineers

Whether you look for faculty or outside consultants, bring behavioral science experts into conversations early. From redesigning college-to-career pathways to building a new cafeteria, behavioral scientists can help gather and interpret student voices, foresee and circumvent behavioral challenges, and identify measurable and meaningful evaluation metrics. The impact of their expertise will be even greater when they work in an environment with a diffuse knowledge of behavioral science already in place…(More)”

Tap into the Wisdom of Your ‘Inner Crowd’


Essay by Emir Efendić and Philippe Van de Calseyde: “Take your best guess for the questions below. Without looking up the answers, jot down your guess in your notes app or on a piece of paper. 

  1. What is the weight of the Liberty Bell? 
  2. Saudi Arabia consumes what percentage of the oil it produces? 
  3. What percent of the world’s population lives in China, India, and the European Union combined?

Next, we want you to take a second guess at these questions. But here’s the catch: this time, try answering from the perspective of a friend with whom you often disagree. (For us, it’s the colleague with whom we shared an office in grad school, ever the contrarian.) How would your friend answer these questions? Write down the second guesses.

Now, the correct answers. The Liberty Bell weighs 2,080 pounds, and, when we conducted the study in 2021, Saudi Arabia consumed 32.5 percent of the oil it produced, and 43.2 percent of the world’s population lived in China, India, and the European Union combined.

For the final step, compare your first guess with the average of both your guesses.

If you’re like most of the participants in our experiment, averaging the two guesses for each question brings you closer to the answer. Why this works has to do with the fascinating way in which people make estimates and how principles of aggregation can be used to improve numerical estimates.

A lot of research has shown that the aggregate of individual judgments can be quite accurate, in what has been termed the “wisdom of the crowds.” What makes a crowd so wise? Its wisdom relies on a relatively simple principle: when people’s guesses are sufficiently diverse and independent, averaging judgments increases accuracy by canceling out errors across individuals.

Interestingly, research suggests that the same principles underlying wise crowds also apply when multiple estimates from a single person are averaged—a phenomenon known as the “wisdom of the inner crowd.” As it turns out, the average guess of the same person is often more accurate than each individual guess on its own.

Although effective, multiple guesses from a single person do suffer from a major drawback. They are typically quite similar to one another, as people tend to anchor on their first guess when generating a second guess…(More)”.
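
Both points, that averaging cancels independent errors and that correlated guesses from the same head cancel less of them, can be seen in a short simulation. The sketch below is our own illustration, not from the essay; the error sizes and the anchoring weight are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
true_value = 2080.0            # e.g. the Liberty Bell's weight in pounds
n_people, n_trials = 100, 10_000

def mean_abs_error(estimates):
    return float(np.abs(estimates - true_value).mean())

# Wisdom of the crowd: many independent, noisy guesses averaged together.
crowd = true_value + rng.normal(0.0, 500.0, (n_trials, n_people))
print("single guess        :", round(mean_abs_error(crowd[:, 0])))
print("crowd average       :", round(mean_abs_error(crowd.mean(axis=1))))

# Wisdom of the inner crowd: a second guess partly anchored on the first,
# so its error is correlated with the first guess and cancels less.
first = true_value + rng.normal(0.0, 500.0, n_trials)
fresh = true_value + rng.normal(0.0, 500.0, n_trials)
second = 0.6 * first + 0.4 * fresh
print("first guess alone   :", round(mean_abs_error(first)))
print("average of own two  :", round(mean_abs_error((first + second) / 2)))
```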

Psychological Processes in Social Media: Why We Click


Book by Rosanna Guadagno: “Incorporating relevant theory and research from psychology (social, cognitive, clinical, developmental, and personality), mass communication, and media studies, Psychological Processes in Social Media: Why We Click examines both the positive and negative psychological impact of social media use. The book covers a broad range of topics such as research methods, social influence and the viral spread of information, the use of social media in political movements, prosocial behavior, trolling and cyberbullying, friendship and romantic relationships, and much more. Emphasizing the integration of theory and application throughout, Psychological Processes in Social Media: Why We Click offers an illuminating look at the psychological implications and processes around the use of social media…(More)”.

When What’s Right Is Also Wrong: The Pandemic As A Corporate Social Responsibility Paradox


Article by Heidi Reed: “When the COVID-19 pandemic first hit, businesses were faced with difficult decisions where making the ‘right choice’ just wasn’t possible. For example, if a business chose to shut down, it might protect employees from catching COVID, but at the same time, it would leave them without a paycheck. This was particularly true in the U.S. where the government played a more limited role in regulating business behavior, leaving managers and owners to make hard choices.

In this way, the pandemic is a societal paradox in which the social objectives of public health and economic prosperity are both interdependent and contradictory. How does the public judge businesses then when they make decisions favoring one social objective over another? To answer this question, I qualitatively surveyed the American public at the start of the COVID-19 crisis about what they considered to be responsible and irresponsible business behavior in response to the pandemic. Analyzing their answers led me to create the 4R Model of Moral Sensemaking of Competing Social Problems.

The 4R Model relies on two dimensions: the extent to which people prioritize one social problem over another and the extent to which they exhibit psychological discomfort (i.e., cognitive dissonance). In the first mode, Reconcile, people view the problems as compatible. There is thus no need to prioritize and no resulting dissonance. These people think, “Businesses can just convert to making masks to help the cause and still make a profit.”

The second mode, Resign, similarly does not prioritize one problem over another; however, the problems are seen as competing, suggesting a high level of cognitive dissonance. These people might say, “It’s dangerous to stay open, but if the business closes, people will lose their jobs. Both decisions are bad.”

In the third mode, Ranking, people use prioritizing to reduce cognitive dissonance. These people say things like, “I understand people will be fired, but it’s more important to stop the virus.”

In the fourth and final mode, Rectify, people start by ranking but show signs of lingering dissonance as they acknowledge the harm created by prioritizing one problem over another. Unlike with the Resign mode, they try to find ways to reduce this harm. A common response in this mode would be, “Businesses should shut down, but they should also try to help employees file for unemployment.”
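
Read as a 2x2, the model maps its two dimensions straight onto the four modes. The toy lookup below is our own sketch of that structure, not code from the study; the function name and boolean inputs are invented for illustration.

```python
def classify_4r(prioritizes_one_problem: bool, feels_dissonance: bool) -> str:
    """Return the 4R mode implied by the model's two dimensions."""
    if not prioritizes_one_problem:
        # Problems seen as compatible (no dissonance) or as competing (dissonance).
        return "Resign" if feels_dissonance else "Reconcile"
    # Prioritizing either resolves the dissonance (Ranking) or leaves some
    # behind, prompting attempts to repair the harm (Rectify).
    return "Rectify" if feels_dissonance else "Ranking"

print(classify_4r(False, False))  # Reconcile
print(classify_4r(True, True))    # Rectify
```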

The 4R model has strong implications for other grand challenges where there may be competing social objectives such as in addressing climate change. To this end, the typology helps corporate social responsibility (CSR) decision-makers understand how they may be judged when businesses are forced to re- or de-prioritize CSR dimensions. In other words, it helps us understand how people make moral sense of business behavior when the right thing to do is paradoxically also the wrong thing…(More)”

Systems Thinking, Big Data and Public Policy


Article by Mauricio Covarrubias: “Systems thinking and big data analysis are two fundamental tools in the formulation of public policies due to their potential to provide a more comprehensive and evidence-based understanding of the problems and challenges that a society faces.

Systems thinking is important in the formulation of public policies because it allows for a holistic and integrated approach to addressing the complex challenges and issues that a society faces. According to Ilona Kickbusch and David Gleicher, “Addressing wicked problems requires a high level of systems thinking. If there is a single lesson to be drawn from the first decade of the 21st century, it is that surprise, instability and extraordinary change will continue to be regular features of our lives.”

Public policies often involve multiple stakeholders, interrelated factors and unintended consequences, which require a deep understanding of how the system as a whole operates. Systems thinking enables policymakers to identify the key factors that influence a problem and how they relate to each other, enabling them to develop solutions that more effectively address the issues. Instead of trying to address a problem in isolation, systems thinking considers the problem as part of a whole and seeks solutions that address the root causes.

Additionally, systems thinking helps policymakers anticipate the unintended consequences of their decisions and actions. By understanding how different components of the system interact, they can predict the possible side effects of a policy in other areas. This can help avoid decisions that have unintended consequences…(More)”.