Give People Choices, Not Edicts


Peter Orszag and Cass Sunstein in Bloomberg: “Over the past few years, many nations have adopted policies that promise to improve people’s lives while preserving their freedom of choice. These approaches, informed by behavioral economics, are sometimes called nudges.
Nudges include disclosure policies, as in the idea that borrowers should “know before they owe.” They include simplification, as in recent reductions in the paperwork requirements for the Free Application for Federal Student Aid.
Nudges include default rules, which establish what happens if people do nothing at all — as with automatic enrollment in a savings plan. They also include reminders, such as text messages informing people they are about to go over their monthly allowance of mobile-phone minutes.
When the two of us worked in the Obama administration, we were interested in approaches of this kind, because the evidence suggests they work. For example, the Credit Card Accountability Responsibility and Disclosure Act of 2009 imposes numerous disclosure requirements, which are helping to save consumers more than $20 billion in annual late fees and overuse charges.
In the U.S. and other nations, automatic enrollment has significantly increased participation in savings plans. A recent study found that in Denmark, automatic enrollment has had a larger impact than significant tax incentives in getting people to save. The study found that 99 percent of the retirement contributions made in response to tax incentives would have been saved anyway; by contrast, the bulk of the contributions made by people who were automatically enrolled in a retirement plan represented a net addition to saving.
Big Benefits
In an economically challenging time, the nudge approach can deliver major benefits without imposing big costs on the public or private sector. And, like a GPS, nudges still have the virtue of allowing people to go their own way. If informed consumers want to run a risk, they can do that. A nudge isn’t a shove. Yet this approach to government has stirred up objections from both the right and the left.
What makes it legitimate for public officials to nudge people they are supposed to serve? Whenever government acts, isn’t there a risk of error, bias and overreaching?
These are good questions, and some nudges should be avoided. But the whole point of the approach is to preserve freedom of choice, and being nudged is part of the human condition. Both private and public institutions are inevitably engaged in nudging, simply because they design the background against which people make choices, and no choice is ever made without a background.
Whenever the government is designing applications and forms, its choices affect people’s decisions. Complexity produces different results from simplicity. Many laws require disclosure from the government or the private sector, and this can occur in different ways. The architecture of disclosure (including which items are placed first, font size, color, readability) is likely to influence what people select.
Life would be impossible to navigate without default rules. Computers, mobile phones, health-care plans and mortgages come with defaults, which you can change if you wish. An employer might say that you must opt in to be enrolled in a savings plan, or alternatively that you must opt out if you don’t want to participate. In either case, a default rule is involved.
Some skeptics (especially on the left) object that nudges may be ineffective or even counterproductive. In their view, coercion is often both necessary and justified. The objections are most pointed, as New York University School of Law professors Ryan Bubb and Richard Pildes argue in a forthcoming article in the Harvard Law Review, when nudges are seen as affirmatively harmful.
Automatic Enrollment
An example involves automatic enrollment in savings plans, which both of us have supported. Critics point out that if employers choose a low contribution rate, automatic enrollment can decrease employees’ total savings — a perverse effect. That observation, however, is a reason for smarter nudging, not for coercion, and is thus not a persuasive critique of nudges in general. One smarter approach in this area is “automatic escalation,” a complement to automatic enrollment.
With automatic escalation, as time goes on and people earn more money, a higher share of their wages goes into savings — unless they opt out. The objection that nudges reduce retirement savings collapses.
And guess what? A survey from Towers Watson & Co. found that in 2012, 71 percent of plans with automatic enrollment included escalation. In 2009, 50 percent did. So much for the critique that contributions in these plans are fixed at their initial levels.
To be sure, coercion might turn out to be justified when the benefits clearly outweigh the costs. But behaviorally informed approaches, which maintain freedom of choice, have growing appeal. As we continue to learn what works, we will identify numerous ways to improve people’s lives while avoiding the costs and the rigidity of more heavy-handed alternatives.”
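To make the default-rule mechanics described in the excerpt above concrete, here is a minimal Python sketch of automatic enrollment paired with automatic escalation. The 3 percent default rate, 1-point annual step, and 10 percent cap are hypothetical figures chosen for illustration, not taken from the article, and the participant can always opt out.

```python
# Minimal sketch (illustrative figures only): how automatic escalation layers
# on top of automatic enrollment while preserving the option to opt out.

def contribution_rate(years_enrolled, default_rate=0.03, step=0.01,
                      cap=0.10, escalate=True, opted_out=False):
    """Share of wages saved under the plan's default rules.

    years_enrolled: years since automatic enrollment (0 = first year).
    escalate: whether the plan pairs enrollment with automatic escalation.
    opted_out: participants can always override the default.
    """
    if opted_out:
        return 0.0  # freedom of choice is preserved: a nudge, not a shove
    if not escalate:
        return default_rate  # rate stays at the (possibly low) initial default
    return min(default_rate + step * years_enrolled, cap)


if __name__ == "__main__":
    for year in range(6):
        print(f"year {year}: "
              f"enrollment only {contribution_rate(year, escalate=False):.0%}, "
              f"with escalation {contribution_rate(year):.0%}")
```

Under these placeholder numbers, the default contribution rises from 3 percent toward the 10 percent cap over seven years unless the employee opts out, which is the sense in which contributions are not fixed at their initial levels.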

Selected Readings on Smart Disclosure


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of smart disclosure was originally published in 2013.

While much attention is paid to open data, data transparency need not be managed by a simple On/Off switch: It’s often desirable to make specific data available to the public or individuals in targeted ways. A prime example is the use of government data in Smart Disclosure, which provides consumers with data they need to make difficult marketplace choices in health care, financial services, and other important areas. Governments collect two kinds of data that can be used for Smart Disclosure: First, governments collect information on services of high interest to consumers, and are increasingly releasing this kind of data to the public. In the United States, for example, the Department of Health and Human Services collects and releases online data on health insurance options, while the Department of Education helps consumers understand the true cost (after financial aid) of different colleges. Second, state, local, or national governments hold information on consumers themselves that can be useful to them. In the U.S., for example, the Blue Button program was launched to help veterans easily access their own medical records.

Annotated Selected Reading List (in alphabetical order)

Better Choices: Better Deals: Report on Progress in the Consumer Empowerment Strategy. United Kingdom: Department for Business, Innovation & Skills, December 2012. http://bit.ly/17MqnL3.

  • The report details the progress made through the United Kingdom’s consumer empowerment strategy, Better Choices: Better Deals. The plan seeks to mitigate knowledge imbalances through information disclosure programs and targeted nudges.
  • The empowerment strategy’s four sections demonstrate the potential benefits of Smart Disclosure: 1. The power of information; 2. The power of the crowd; 3. Helping the vulnerable; and 4. A new approach to Government working with business.
Braunstein, Mark L. “Empowering the Patient.” In Health Informatics in the Cloud, 67–79. SpringerBriefs in Computer Science. New York: Springer, 2013. https://bit.ly/2UB4jTU.
  • This book discusses the application of computing to healthcare delivery, public health, and community-based clinical research.
  • Braunstein asks and seeks to answer critical questions such as: Who should make the case for smart disclosure when the needs of consumers are not being met? What role do non-profits play in the conversation on smart disclosure especially when existing systems (or lack thereof) of information provision do not work or are unsafe?

Brodi, Elisa. “Product-Attribute Information” and “Product-Use Information”: Smart Disclosure and New Policy Implications for Consumers’ Protection. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, September 4, 2012. http://bit.ly/17hssEK.

  • This paper from the Research Area of the Bank of Italy’s Law and Economics Department “surveys the literature on product use information and analyzes whether and to what extent Italian regulator is trying to ensure consumers’ awareness as to their use pattern.” Rather than focusing on the type of information governments can release to citizens, Brodi proposes that governments require private companies to provide valuable use pattern information to citizens to inform decision-making.
  • The form of regulation proposed by Brodi and other proponents “is based on a basic concept: consumers can be protected if companies are forced to disclose data on the customers’ consumption history through electronic files.”
National Science and Technology Council. Smart Disclosure and Consumer Decision Making: Report of the Task Force on Smart Disclosure. Task Force on Smart Disclosure: Information and Efficiency in Consumer Markets. Washington, DC: United States Government: Executive Office of the President, May 30, 2013. http://1.usa.gov/1aamyoT.
  • This inter-agency report comprehensively describes the smart disclosure approaches being used across the federal government. It highlights the importance of making data available not only to consumers but also to innovators who can build better options for consumers.
  • In addition to providing context about government policies that guide smart disclosure initiatives, the report raises questions about what parties have influence in this space.

“Policies in Practice: The Download Capability.” Markle Connecting for Health Work Group on Consumer Engagement, August 2010. http://bit.ly/HhMJyc.

  • This report from the Markle Connecting for Health Work Group on Consumer Engagement — the creator of the Blue Button system for downloading personal health records — features a “set of privacy and security practices to help people download their electronic health records.”
  • To help make health information easily accessible for all citizens, the report lists a number of important steps:
    • Make the download capability a common practice
    • Implement sound policies and practices to protect individuals and their information
    • Collaborate on sample data sets
    • Support the download capability as part of Meaningful Use and qualified or certified health IT
    • Include the download capability in procurement requirements.
  • The report also describes the rationale for the development of the Blue Button — perhaps the best known example of Smart Disclosure currently in existence — and the targeted release of health information in general:
    • Individual access to information is rooted in fair information principles and law
    • Patients need and want the information
    • The download capability would encourage innovation
    • A download capability frees data sources from having to make many decisions about the user interface
    • A download capability would hasten the path to standards and interoperability.
Sayogo, Djoko Sigit, and Theresa A. Pardo. “Understanding Smart Data Disclosure Policy Success: The Case of Green Button.” In Proceedings of the 14th Annual International Conference on Digital Government Research, 72–81. New York: ACM, 2013. http://bit.ly/1aanf1A.
  • This paper from the Proceedings of the 14th Annual International Conference on Digital Government Research explores the implementation of the Green Button Initiative, analyzing qualitative data from interviews with experts involved in Green Button development and implementation.
  • Moving beyond the specifics of the Green Button initiative, the authors raise questions on the motivations and success factors facilitating successful collaboration between public and private organizations to support smart disclosure policy.

Thaler, Richard H., and Will Tucker. “Smarter Information, Smarter Consumers.” Harvard Business Review, January–February 2013 (The Big Idea). http://bit.ly/18gimxw.

  • In this article, Thaler and Tucker make three key observations regarding the challenges related to smart disclosure:
    • “We are constantly confronted with information that is highly important but extremely hard to navigate or understand.”
    • “Repeated attempts to improve disclosure, including efforts to translate complex contracts into “plain English,” have met with only modest success.”
    • “There is a fundamental difficulty of explaining anything complex in simple terms. Most people find it difficult to write instructions explaining how to tie a pair of shoelaces.”

Behavioural Public Policy


New book edited by Adam Oliver (Cambridge University Press): “How can individuals best be encouraged to take more responsibility for their well-being and their environment or to behave more ethically in their business transactions? Across the world, governments are showing a growing interest in using behavioural economic research to inform the design of nudges which, some suggest, might encourage citizens to adopt beneficial patterns of behaviour. In this fascinating collection, leading academic economists, psychologists and philosophers reflect on how behavioural economic findings can be used to help inform the design of policy initiatives in the areas of health, education, the environment, personal finances and worker remuneration. Each chapter is accompanied by a shorter ‘response’ that provides critical commentary and an alternative perspective. This accessible book will interest academic researchers, graduate students and policy-makers across a range of disciplinary perspectives.”

When Nudges Fail: Slippery Defaults


New paper by Lauren E. Willis: “Inspired by the success of “automatic enrollment” in increasing participation in defined contribution retirement savings plans, policymakers have put similar policy defaults in place in a variety of other contexts, from checking account overdraft coverage to home-mortgage escrows. Internet privacy appears poised to be the next arena. But how broadly applicable are the results obtained in the retirement savings context? Evidence from other contexts indicates two problems with this approach: the defaults put in place by the law are not always sticky, and the people who opt out may be those who would benefit the most from the default. Examining the new default for consumer checking account overdraft coverage reveals that firms can systematically undermine each of the mechanisms that might otherwise operate to make defaults sticky. Comparing the retirement-savings default to the overdraft default, four boundary conditions on the use of defaults as a policy tool are apparent: policy defaults will not be sticky when (1) motivated firms oppose them, (2) these firms have access to the consumer, (3) consumers find the decision environment confusing, and (4) consumer preferences are uncertain. Due to constitutional and institutional constraints, government regulation of the libertarian-paternalism variety is unlikely to be capable of overcoming these bounds. Therefore, policy defaults intended to protect individuals when firms have the motivation and means to move consumers out of the default are unlikely to be effective unless accompanied by substantive regulation. Moreover, the same is likely to be true of “nudges” more generally, when motivated firms oppose them.”

What Government Can and Should Learn From Hacker Culture


In The Atlantic: “Can the open-source model work for federal government? Not in every way—for security purposes, the government’s inner workings will never be completely open to the public. Even in the inner workings of government, fears of triggering the next WikiLeaks or Snowden scandal may scare officials away from being more open with one another. While not every area of government can be more open, there are a few areas ripe for change.

Perhaps the most glaring need for an open-source approach is in information sharing. Today, among and within several federal agencies, a culture of reflexive and unnecessary information withholding prevails. This knee-jerk secrecy can backfire with fatal consequences, as seen in the 1998 embassy bombings in Africa, the 9/11 attacks, and the Boston Marathon bombings. What’s most troubling is that decades after the dangers of withholding information were identified, the problem persists.
What’s preventing reform? The answer starts with the government’s hierarchical structure—though an information-is-power mentality and “need to know” Cold War-era culture contribute too. To improve the practice of information sharing, government needs to change the structure of information sharing. Specifically, it needs to flatten the hierarchy.
Former Obama Administration regulation czar Cass Sunstein’s “nudge” approach shows how this could work. In his book Simpler: The Future of Government, he describes how making even small changes to an environment can effect significant changes in behavior. While Sunstein focuses on regulations, the broader lesson is clear: Change the environment to encourage better behavior, and people tend to exhibit better behavior. Without such strict adherence to the many tiers of the hierarchy, those working within it could be nudged toward sharing information rather than having to fight to do so.
One example of where this worked is the State Department’s annual Religious Engagement Report (RER). In 2011, the office in charge of the RER decided that instead of having every embassy submit its data via email, the data would be posted on a secure wiki. On the surface, this was a decision to change an information-sharing procedure. But it also changed the information-sharing culture. Instead of sharing information only along the supervisor-subordinate axis, it created a norm of sharing laterally, among colleagues.
Another advantage to flattening information-sharing hierarchies is that it reduces the risk of creating “single points of failure,” to quote technology scholar Beth Noveck. The massive amounts of data now available to us may need massive amounts of eyeballs in order to spot patterns of problems—small pools of supervisors atop the hierarchy cannot be expected to shoulder those burdens alone. And while having the right tech tools to share information is part of the solution—as the wiki was for the RER—it’s not enough. Leadership must also create a culture that nudges staff to use these tools, even if that means relinquishing a degree of its own power.
Finally, a more open work culture would help connect interested parties across government to let them share the hard work of bringing new ideas to fruition. Government is filled with examples of interesting new projects that stall in their infancy. Creating a large pool of collaborators dedicated to a project increases the likelihood that when one torchbearer burns out, others in the agency will pick up for them.
When Linus Torvalds released Linux, it was considered, in Eric S. Raymond’s words, “subversive” and “a distinct shock.” Could the federal government withstand such a shock?
Evidence suggests it can—and the transformation is already happening in small ways. One of the winners of the Harvard Kennedy School’s Innovations in Government award is State’s Consular Team India (CTI), which won for joining their embassy and four consular posts—each of which used to have its own distinct set of procedures—into a single, more effective unit that could deliver standardized services. As CTI describes it, “this is no top-down bureaucracy”; rather, the team shares “a common base of information and shared responsibilities.” They flattened the hierarchy, and not only lived, but thrived.”

Making government simpler is complicated


Mike Konczal in The Washington Post: “Here’s something a politician would never say: “I’m in favor of complex regulations.” But what would the opposite mean? What would it mean to have “simple” regulations?

There are two definitions of “simple” that have come to dominate liberal conversations about government. One is the idea that we should make use of “nudges” in regulation. The other is the idea that we should avoid “kludges.” As it turns out, however, these two definitions conflict with each other—and the battle between them will dominate conversations about the state in the years ahead.

The case for “nudges”

The first definition of a “simple” regulation is one emphasized in Cass Sunstein’s recent book titled Simpler: The Future of Government. A simple policy is one that simply “nudges” people into one choice or another using a variety of default rules, disclosure requirements, and other market structures. Think, for instance, of rules that require fast-food restaurants to post calories on their menus, or a mortgage that has certain terms clearly marked in disclosures.

These sorts of regulations are deemed “choice preserving.” Consumers are still allowed to buy unhealthy fast-food meals or sign up for mortgages they can’t reasonably afford. The regulations are just there to inform people about their choices. These rules are designed to keep the market “free,” where all possibilities are ultimately possible, although there are rules to encourage certain outcomes.
In his book, however, Sunstein adds that there’s another very different way to understand the term “simple.” What most people mean when they think of simple regulations is a rule that is “simple to follow.” Usually a rule is simple to follow because it outright excludes certain possibilities and thus ensures others. Which means, by definition, it limits certain choices.

The case against “kludges”
This second definition of simple plays a key role in political scientist Steve Teles’ excellent recent essay, “Kludgeocracy in America.” For Teles, a “kludge” is a “clumsy but temporarily effective” fix for a policy problem. (The term comes from computer science.) These kludges tend to pile up over time, making government cumbersome and inefficient overall.
Teles focuses on several ways that kludges are introduced into policy, with a particularly sharp focus on overlapping jurisdictions and the related mess of federal and state overlap in programs. But, without specifically invoking it, he also suggests that a reliance on “nudge” regulations can lead to more kludges.
After all, a non-kludge policy proposal is one that will be simple to follow and will clearly cause a certain outcome, with an obvious causal chain. This is in contrast to a web of “nudges” and incentives designed to try to guide certain outcomes.

Why “nudges” aren’t always simpler
The distinction between the two is clear if we take a specific example core to both definitions: retirement security.
For Teles, “one of the often overlooked benefits of the Social Security program… is that recipients automatically have taxes taken out of their paychecks, and, then without much effort on their part, checks begin to appear upon retirement. It’s simple and direct. By contrast, 401(k) retirement accounts… require enormous investments of time, effort, and stress to manage responsibly.”

Yet 401(k)s are the ultimate fantasy laboratory for nudge enthusiasts. A whole cottage industry has grown up around figuring out ways to default people into certain contributions, designing the architecture of investment choices, and trying to guide people effortlessly and painlessly into saving.
Each approach emphasizes different things. If you want to focus your energy on making people better consumers and market participants, expending the government’s resources and energy on 401(k)s is a good choice. If you want to focus on providing retirement security directly, expanding Social Security is a better choice.
The first is “simple” in that it doesn’t exclude any possibility but encourages market choices. The second is “simple” in that it is easy to follow, and the result is simple as well: a certain amount of security in old age is provided directly. This second approach understands the government as playing a role in stopping certain outcomes, and providing for the opposite of those outcomes, directly….

Why it’s hard to create “simple” regulations
Like all supposed binaries, this is really a continuum. Taxes, for instance, sit somewhere in the middle of the two definitions of “simple.” They tend to preserve the market as it is but raise (or lower) the price of certain goods, influencing choices.
And reforms and regulations are often most effective when there’s a combination of these two types of “simple” rules.
Consider an important new paper, “Regulating Consumer Financial Products: Evidence from Credit Cards,” by Sumit Agarwal, Souphala Chomsisengphet, Neale Mahoney and Johannes Stroebel. The authors analyze the CARD Act of 2009, which regulated credit cards. They found that the nudge-type disclosure rules “increased the number of account holders making the 36-month payment value by 0.5 percentage points.” However, more direct regulations on fees had an even bigger effect, saving U.S. consumers $20.8 billion per year with no notable reduction in credit access…..
The balance between these two approaches of making regulations simple will be front and center as liberals debate the future of government, whether they’re trying to pull back on the “submerged state” or consider the implications for privacy. The debate over the best way for government to be simple is still far from over.”

Our Privacy Problem is a Democracy Problem in Disguise


Evgeny Morozov in MIT Technology Review: “Intellectually, at least, it’s clear what needs to be done: we must confront the question not only in the economic and legal dimensions but also in a political one, linking the future of privacy with the future of democracy in a way that refuses to reduce privacy either to markets or to laws. What does this philosophical insight mean in practice?

First, we must politicize the debate about privacy and information sharing. Articulating the existence—and the profound political consequences—of the invisible barbed wire would be a good start. We must scrutinize data-intensive problem solving and expose its occasionally antidemocratic character. At times we should accept more risk, imperfection, improvisation, and inefficiency in the name of keeping the democratic spirit alive.
Second, we must learn how to sabotage the system—perhaps by refusing to self-track at all. If refusing to record our calorie intake or our whereabouts is the only way to get policy makers to address the structural causes of problems like obesity or climate change—and not just tinker with their symptoms through nudging—information boycotts might be justifiable. Refusing to make money off your own data might be as political an act as refusing to drive a car or eat meat. Privacy can then reëmerge as a political instrument for keeping the spirit of democracy alive: we want private spaces because we still believe in our ability to reflect on what ails the world and find a way to fix it, and we’d rather not surrender this capacity to algorithms and feedback loops.
Third, we need more provocative digital services. It’s not enough for a website to prompt us to decide who should see our data. Instead it should reawaken our own imaginations. Designed right, sites would not nudge citizens to either guard or share their private information but would reveal the hidden political dimensions to various acts of information sharing. We don’t want an electronic butler—we want an electronic provocateur. Instead of yet another app that could tell us how much money we can save by monitoring our exercise routine, we need an app that can tell us how many people are likely to lose health insurance if the insurance industry has as much data as the NSA, most of it contributed by consumers like us. Eventually we might discern such dimensions on our own, without any technological prompts.
Finally, we have to abandon fixed preconceptions about how our digital services work and interconnect. Otherwise, we’ll fall victim to the same logic that has constrained the imagination of so many well-meaning privacy advocates who think that defending the “right to privacy”—not fighting to preserve democracy—is what should drive public policy. While many Internet activists would surely argue otherwise, what happens to the Internet is of only secondary importance. Just as with privacy, it’s the fate of democracy itself that should be our primary goal.

Why Nudge?: The Politics of Libertarian Paternalism


New and forthcoming book by Cass Sunstein: “Based on a series of pathbreaking lectures given at Yale University in 2012, this powerful, thought-provoking work by national best-selling author Cass R. Sunstein combines legal theory with behavioral economics to make a fresh argument about the legitimate scope of government, bearing on obesity, smoking, distracted driving, health care, food safety, and other highly volatile, high-profile public issues. Behavioral economists have established that people often make decisions that run counter to their best interests—producing what Sunstein describes as “behavioral market failures.” Sometimes we disregard the long term; sometimes we are unrealistically optimistic; sometimes we do not see what is in front of us. With this evidence in mind, Sunstein argues for a new form of paternalism, one that protects people against serious errors but also recognizes the risk of government overreaching and usually preserves freedom of choice.
Against those who reject paternalism of any kind, Sunstein shows that “choice architecture”—government-imposed structures that affect our choices—is inevitable, and hence that a form of paternalism cannot be avoided. He urges that there are profoundly moral reasons to ensure that choice architecture is helpful rather than harmful—and that it makes people’s lives better and longer.”

Nudge Nation: A New Way to Prod Students Into and Through College


Ben Wildavsky at EducationSector: “Thanks in part to Thaler and Sunstein’s work, the power of nudges has become well-established—including on many college campuses, where students around the country are beginning the fall semester. While online education and software-driven pedagogy on college campuses have received a good deal of attention, a less visible set of technology-driven initiatives also has gained a foothold: behavioral nudges designed to keep students on track to succeed. Just as e-commerce entrepreneurs have drawn on massive troves of consumer data to create algorithms for firms such as Netflix and Amazon, which unbundle the traditional storefront consumer experience through customized, online delivery, architects of campus technology nudges also rely on data analytics or data mining to improve the student experience.

By giving students information-driven suggestions that lead to smarter actions, technology nudges are intended to tackle a range of problems surrounding the process by which students begin college and make their way to graduation.
New approaches are certainly needed….
There are many reasons for low rates of persistence and graduation, including financial problems, the difficulty of juggling non-academic responsibilities such as work and family, and, for some first-generation students, culture shock. But academic engagement and success are major contributors. That’s why colleges are using behavioral nudges, drawing on data analytics and behavioral psychology, to focus on problems that occur along the academic pipeline:
• Poor student organization around the logistics of going to college
• Unwise course selections that increase the risk of failure and extend time to degree
• Inadequate information about academic progress and the need for academic help
• Unfocused support systems that identify struggling students but don’t directly engage with them
• Difficulty tapping into counseling services
These new ventures, whether originating within colleges or created by outside entrepreneurs, are doing things with data that just couldn’t be done in the past—creating giant databases of student course records, for example, to find patterns of success and failure that result when certain kinds of students take certain kinds of courses.”
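As a rough illustration of the kind of pattern-finding described above, here is a minimal Python/pandas sketch; the enrollment records, column names, and risk threshold are hypothetical placeholders standing in for the much larger course databases the article mentions.

```python
# Hypothetical sketch: mine student course records for pass-rate patterns by
# student segment and course, then flag risky combinations that might trigger
# an advising nudge (all data and thresholds are made up for illustration).
import pandas as pd

# Each row is one enrollment outcome.
records = pd.DataFrame({
    "segment": ["first_gen", "first_gen", "transfer", "transfer", "first_gen"],
    "course":  ["MATH101",   "MATH101",   "MATH101",  "ENG102",   "ENG102"],
    "passed":  [False,       False,       True,       True,       True],
})

# Pass rate for every (segment, course) pair observed in the records.
pass_rates = (
    records.groupby(["segment", "course"])["passed"]
    .mean()
    .rename("pass_rate")
    .reset_index()
)

# Combinations below a chosen threshold become candidates for a nudge,
# e.g. a registration-time suggestion to take a preparatory course first.
RISK_THRESHOLD = 0.5
risky = pass_rates[pass_rates["pass_rate"] < RISK_THRESHOLD]
print(risky)
```

In practice such systems join records across many terms and institutions; the sketch only shows the shape of the analysis: aggregate outcomes by student and course attributes, then attach a nudge to the risky combinations.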

Public Policies, Made to Fit People


Richard Thaler in the New York Times: “I HAVE written here before about the potential gains to government from involving social and behavioral scientists in designing public policies. My enthusiasm comes in part from my experiences as an academic adviser to the Behavioral Insights Team created in Britain by Prime Minister David Cameron.

Thus I was pleased to hear reports that the White House is building a similar initiative here in the United States. Maya Shankar, a cognitive scientist and senior policy adviser at the White House Office of Science and Technology Policy, is coordinating this cross-agency group, called the Social and Behavioral Science Team; it is part of a larger effort to use evidence and innovation to promote government performance and efficiency. I am among a number of academics who have shared ideas with the administration about how research findings in social and behavioral science can improve policy.

It makes sense for social scientists to become more involved in policy, because many of society’s most challenging problems are, in essence, behavioral. Using social scientists’ findings to create plausible interventions, then testing their efficacy with randomized controlled trials, can improve — and sometimes save — people’s lives, all while reducing the need for more government spending to fix problems later.

Here are three examples of social science issues that have attracted the team’s attention…
THE 30-MILLION-WORD GAP One of society’s thorniest problems is that children from poor families start school lagging badly behind their more affluent classmates in readiness. By the age of 3, children from affluent families have vocabularies that are roughly double those of children from poor families, according to research published in 1995….
DOMESTIC VIOLENCE The team will primarily lend support and expertise to federal agency initiatives. One example concerns the effort to reduce domestic violence, a problem for which there is no quick fix….
HEALTH COMPLIANCE One reason for high health care costs is that patients fail to follow their treatment regimen….”