Imagining the Next Decade of Behavioral Science


Evan Nesterak at the Behavioral Scientist: “If you asked Richard Thaler in 2010 what he thought would become of the then very new field of behavioral science over the next decade, he would have been wrong, at least for the most part. Could he have predicted the expansion of behavioral economics research? Probably. The Nobel Prize? Maybe. The nearly 300 and counting behavioral teams in governments, businesses, and other organizations around the world? Not a chance. 

When we asked him a year and a half ago to sum up the 10 years since the publication of Nudge, he replied “Am I too old to just say OMG? … [Cass Sunstein and I] would never have anticipated one “nudge unit” much less 200…. Every once in a while, one of us will send the other an email that amounts to just ‘wow.’”

As we closed last year (and the last decade), we put out a call to help us imagine the next decade of behavioral science. We asked you to share your hopes and fears, predictions and warnings, open questions and big ideas. 

We received over 120 submissions from behavioral scientists around the world. We picked the most thought-provoking submissions and curated them below.

We’ve organized the responses into three sections. The first section, Promises and Pitfalls, houses the responses about the field as a whole—its identity, purpose, and values. In that section, you’ll find authors challenging the field to be bolder. You’ll also find ideas to unite the field, which in its growth has felt for some like the “Wild West.” Ethical concerns are also top of mind. “Behavioral science has confronted ethical dilemmas before … but never before has the essence of the field been so squarely in the wheelhouse of corporate interests,” writes Phillip Goff.

In the second section, we’ve placed the ideas about specific domains. This includes “Technology: Nightmare or New Norm,” where Tania Ramos considers the possibility of a behaviorally optimized tech dystopia. In “The Future of Work,” Laszlo Bock imagines that well-timed, intelligent nudges will foster healthier company cultures, and Jon Jachimowicz emphasizes the importance of passion in an economy increasingly dominated by A.I. In “Climate Change: Targeting Individuals and Systems,” behavioral scientists grapple with how the field can pull its weight in this existential fight. You’ll also find sections on building better governments, health care at the digital frontier and final mile, and the next steps for education. 

The third and final section gets the most specific of all. Here you’ll find commentary on the opportunities (and obligations) for research and application. For instance, George Loewenstein suggests we pay more attention to attention—an increasingly scarce resource. Others, on the application side, ponder how behavioral science will influence the design of our neighborhoods and wonder what it will take to bring behavioral science into the courtroom. The section closes with ideas on the future of intervention design and ways we can continue to master our methods….(More)”.

How does participating in a deliberative citizens panel on healthcare priority setting influence the views of participants?


Paper by Vivian Reckers-Droog et al: “A deliberative citizens panel was held to obtain insight into criteria considered relevant for healthcare priority setting in the Netherlands. Our aim was to examine whether and how panel participation influenced participants’ views on this topic. Participants (n = 24) deliberated on eight reimbursement cases in September and October, 2017. Using Q methodology, we identified three distinct viewpoints before (T0) and after (T1) panel participation. At T0, viewpoint 1 emphasised that access to healthcare is a right and that prioritisation should be based solely on patients’ needs. Viewpoint 2 acknowledged scarcity of resources and emphasised the importance of treatment-related health gains. Viewpoint 3 focused on helping those in need, favouring younger patients, patients with a family, and treating diseases that heavily burden the families of patients. At T1, viewpoint 1 had become less opposed to prioritisation and more considerate of costs. Viewpoint 2 supported out-of-pocket payments more strongly. A new viewpoint 3 emerged that emphasised the importance of cost-effectiveness and that prioritisation should consider patient characteristics, such as their age. Participants’ views partly remained stable, specifically regarding equal access and prioritisation based on need and health gains. Notable changes concerned increased support for prioritisation, consideration of costs, and cost-effectiveness. Further research into the effects of deliberative methods is required to better understand how they may contribute to the legitimacy of and public support for allocation decisions in healthcare….(More)”.

Artificial intelligence, geopolitics, and information integrity


Report by John Villasenor: “Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics necessarily requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote–or impede–information integrity….(More)”.

Life and the Law in the Era of Data-Driven Agency


Book edited by Mireille Hildebrandt and Kieron O’Hara: “This ground-breaking and timely book explores how big data, artificial intelligence and algorithms are creating new types of agency, and the impact that this is having on our lives and the rule of law. Addressing the issues in a thoughtful, cross-disciplinary manner, the authors examine the ways in which data-driven agency is transforming democratic practices and the meaning of individual choice.

Leading scholars in law, philosophy, computer science and politics analyse the latest innovations in data science and machine learning, assessing the actual and potential implications of these technologies. They investigate how this affects our understanding of such concepts as agency, epistemology, justice, transparency and democracy, and advocate a precautionary approach that takes the effects of data-driven agency seriously without taking it for granted….(More)”.

Artificial Morality


Essay by Bruce Sterling: “This is an essay about lists of moral principles for the creators of Artificial Intelligence. I collect these lists, and I have to confess that I find them funny.

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done.

What I find comical is a programmer’s approach to morality — the urge to carefully type out some moral code before raising unholy hell. Many professions other than programming have stern ethical standards: lawyers and doctors, for instance. Are lawyers and doctors evil? It depends. If a government is politically corrupt, then a nation’s lawyers don’t escape that moral stain. If a health system is slaughtering the population with mis-prescribed painkillers, then doctors can’t look good, either.

So if AI goes south, for whatever reason, programmers are just bound to look culpable and sinister. Careful lists of moral principles will not avert that moral judgment, no matter how many earnest efforts they make to avoid bias, to build and test for safety, to provide feedback for user accountability, to design for privacy, to consider the colossal abuse potential, and to eschew any direct involvement in AI weapons, AI spyware, and AI violations of human rights.

I’m not upset by moral judgments in well-intentioned manifestos, but it is an odd act of otherworldly hubris. Imagine if car engineers claimed they could build cars fit for all genders and races, test cars for safety-first, mirror-plate the car windows and the license plates for privacy, and make sure that military tanks and halftracks and James Bond’s Aston-Martin spy-mobile were never built at all. Who would put up with that presumptuous behavior? Not a soul, not even programmers.

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

I’m not a cynic about morality per se, but everybody’s got some. The military, the spies, the secret police and organized crime, they all have their moral codes. Military AI flourishes worldwide, and so does cyberwar AI, and AI police-state repression….(More)”.

What if you ask and they say yes? Consumers' willingness to disclose personal data is stronger than you think


Grzegorz Mazurek and Karolina Małagocka at Business Horizons: “Technological progress—including the development of online channels and universal access to the internet via mobile devices—has advanced both the quantity and the quality of data that companies can acquire. Private information such as this may be considered a type of fuel to be processed through the use of technologies, and represents a competitive market advantage.

This article describes situations in which consumers tend to disclose personal information to companies and explores factors that encourage them to do so. The empirical studies and examples of market activities described herein illustrate to managers just how rewards work and how important contextual integrity is to customer digital privacy expectations. Companies’ success in obtaining client data depends largely on three Ts: transparency, type of data, and trust. These three Ts—which, combined, constitute a main T (i.e., the transfer of personal data)—deserve attention when seeking customer information that can be converted to competitive advantage and market success….(More)”.

The Information Trade: How Big Tech Conquers Countries, Challenges Our Rights, and Transforms Our World


Book by Alexis Wichowski: “… considers the unchecked rise of tech giants like Facebook, Google, Amazon, Apple, Microsoft, and Tesla—what she calls “net states”—and their unavoidable influence in our lives. Rivaling nation states in power and capital, today’s net states are reaching into our physical world, inserting digital services into our lived environments in ways both unseen and, at times, unknown to us. They are transforming the way the world works, putting our rights up for grabs, from personal privacy to national security.  

Combining original reporting and insights drawn from more than 100 interviews with technology and government insiders, including Microsoft president Brad Smith, Google CEO Eric Schmidt, the former Federal Trade Commission chair under President Obama, and the managing director of Jigsaw—Google’s Department of Counter-terrorism against extremism and cyber-attacks—The Information Trade explores what happens when we give up our personal freedom and individual autonomy in exchange for an easy, plugged-in existence, and shows what we can do to control our relationship with net states before they irreversibly change our future….(More)

How to use evidence in policymaking


Inês Prates at apolitical: “…Evidence should feed into policymaking; there is no doubt about that. However, the truth is that using evidence in policy is often a very complex process and the stumbling blocks along the way are numerous.

The world has never had a larger wealth of data and information, and that is a great opportunity to open up public debate and democratise access to knowledge. At the same time, however, we are currently living in a “post-truth” era, where personal beliefs can trump scientific knowledge.

Technology and digital platforms have given populists room to question well-established facts and evidence and to dangerously spread misinformation, while accusing scientists and policymakers of elitism for their own political gain.

Another challenge is that political interests can strategically manipulate or select (“cherry-pick”) evidence that justifies prearranged positions. A stark example of this is the evidence “cherry-picking” done by climate change sceptics who choose restricted time periods (for example, 8 to 12 years) that may not show a global temperature increase.
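The statistical trick behind this kind of cherry-picking can be made concrete with a toy calculation. The sketch below fits a trend line to a synthetic temperature series built from a steady warming trend plus a multi-decade oscillation (an illustrative assumption, not real climate data), once over the full record and once over a short, carefully chosen window:

```python
import math

# Hypothetical toy series (not real data): a steady +0.02 degrees/year
# warming trend plus a 30-year natural oscillation of amplitude 0.3.
years = list(range(60))
temps = [0.02 * t + 0.3 * math.sin(2 * math.pi * t / 30) for t in years]

def slope(t, y):
    """Ordinary least-squares slope of y against t."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    return sum((a - tm) * (b - ym) for a, b in zip(t, y)) / \
           sum((a - tm) ** 2 for a in t)

full = slope(years, temps)                  # trend over the whole 60 years
window = slope(years[10:21], temps[10:21])  # cherry-picked 11-year window

print(f"60-year trend: {full:+.3f}/yr, 11-year window: {window:+.3f}/yr")
```

The full-record slope is positive, while the slope over the window placed on the oscillation’s downswing comes out negative, even though the underlying warming trend never changed. That is precisely why short windows can be chosen to “show” no warming.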

In addition, to unlock the benefits of evidence-informed policy, we need to bridge the “policy-research gap”. Policymakers are not always aware of the latest evidence on an issue. Very often, critical decisions are made under a lot of pressure and the very nature of democracy makes policy complex and messy, making it hard to systematically integrate evidence into the process.

At the same time, researchers may be oblivious to what the most pressing policy challenges are, or how to communicate actionable insights to a non-expert audience. This constructive guide provides tips on how scientists can handle the most challenging aspects of engaging with policymakers.

Institutions like the European Commission’s in-house science service, the Joint Research Centre (JRC), sit precisely at the intersection between science and policy. Researchers from the JRC work together with policymakers on several key policy challenges. A nice example is their work on the scarcity of critical raw materials needed for the EU’s energy transition, using a storytelling tool to raise the awareness of non-experts on an extremely complex issue.

Lastly, we cannot forget about the importance of the buy-in from the public. Although policymakers can willingly ignore or manipulate evidence, they have very little incentive to ignore the will of a critical mass. Let us go back to the climate movement; it is hard to dismiss the influence of the youth-led worldwide protests on world leaders and their climate policy efforts.

Using evidence in policymaking is key to solving the world’s most pressing climate and environmental challenges. To do so effectively, we need to connect and establish trust between government, researchers and the public…(More)”.

10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade


Future of Privacy Forum: “Today, FPF is publishing a white paper co-authored by CEO Jules Polonetsky and hackylawyER Founder Elizabeth Renieris to help corporate officers, nonprofit leaders, and policymakers better understand privacy risks that will grow in prominence during the 2020s, as well as rising technologies that will be used to help manage privacy through the decade. Leaders must understand the basics of technologies like biometric scanning, collaborative robotics, and spatial computing in order to assess how existing and proposed policies, systems, and laws will address them, and to support appropriate guidance for the implementation of new digital products and services.

The white paper, Privacy 2020: 10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade, identifies ten technologies that are likely to create increasingly complex data protection challenges. Over the next decade, privacy considerations will be driven by innovations in tech linked to human bodies, health, and social networks; infrastructure; and computing power. The white paper also highlights ten developments that can enhance privacy – providing cause for optimism that organizations will be able to manage data responsibly. Some of these technologies are already in general use, some will soon be widely deployed, and others are nascent….(More)”.

Incentive Competitions and the Challenge of Space Exploration


Article by Matthew S. Williams: “Bill Joy, the famed computer engineer who co-founded Sun Microsystems in 1982, once said, “No matter who you are, most of the smartest people work for someone else.” This has come to be known as “Joy’s Law” and is one of the inspirations for concepts such as “crowdsourcing”.

Increasingly, government agencies, research institutions, and private companies are looking to the power of the crowd to find solutions to problems. Challenges are created and prizes offered – that, in basic terms, is an “incentive competition.”

The basic idea of an incentive competition is pretty straightforward. When confronted with a particularly daunting problem, you appeal to the general public to provide possible solutions and offer a reward for the best one. Sounds simple, doesn’t it?

But in fact, this concept flies in the face of conventional problem-solving, which is for companies to recruit people with knowledge and expertise and solve all problems in-house. This kind of thinking underlies most of our government and business models, but has some significant limitations….

Another benefit to crowdsourcing is the way it takes advantage of the exponential growth in human population in the past few centuries. Between 1650 and 1800, the global population doubled, to reach about 1 billion. It took another 127 years (until 1927) before it doubled again to reach 2 billion.

However, it only took forty-seven years for the population to double again and reach 4 billion (1974), and just twenty-five more for it to reach 6 billion (1999). As of 2020, the global population has reached 7.8 billion, and the growth trend is expected to continue for some time.
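The shrinking doubling intervals above can be checked with a few lines of arithmetic; the dates used here are the commonly cited approximate milestones (including an assumed ~0.5 billion in 1650, implied by the doubling between 1650 and 1800):

```python
# Commonly cited world-population milestones: (year, population in billions).
milestones = [
    (1650, 0.5),
    (1800, 1.0),
    (1927, 2.0),
    (1974, 4.0),
    (1999, 6.0),
    (2020, 7.8),
]

# Years elapsed between each successive milestone.
gaps = [y1 - y0 for (y0, _), (y1, _) in zip(milestones, milestones[1:])]

for ((y0, p0), (y1, p1)), gap in zip(zip(milestones, milestones[1:]), gaps):
    print(f"{p0}B ({y0}) -> {p1}B ({y1}): {gap} years")
```

The intervals between doublings shrink from 150 years to 127, then 47, then 25, which is the accelerating growth the essay is pointing to.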

This growth has paralleled another trend, the rapid development of new ideas in science and technology. Between 1650 and 2020, humanity has experienced multiple technological revolutions, in what is a comparatively very short space of time….(More)”.