Imagining the Next Decade of Behavioral Science


Evan Nesterak at the Behavioral Scientist: “If you asked Richard Thaler in 2010 what he thought would become of the then very new field of behavioral science over the next decade, he would have been wrong, at least for the most part. Could he have predicted the expansion of behavioral economics research? Probably. The Nobel Prize? Maybe. The nearly 300 and counting behavioral teams in governments, businesses, and other organizations around the world? Not a chance.

When we asked him a year and a half ago to sum up the 10 years since the publication of Nudge, he replied, “Am I too old to just say OMG? … [Cass Sunstein and I] would never have anticipated one “nudge unit” much less 200…. Every once in a while, one of us will send the other an email that amounts to just ‘wow.’”

As we closed last year (and the last decade), we put out a call to help us imagine the next decade of behavioral science. We asked you to share your hopes and fears, predictions and warnings, open questions and big ideas. 

We received over 120 submissions from behavioral scientists around the world. We picked the most thought-provoking submissions and curated them below.

We’ve organized the responses into three sections. The first section, Promises and Pitfalls, houses the responses about the field as a whole—its identity, purpose, and values. In that section, you’ll find authors challenging the field to be bolder. You’ll also find ideas to unite the field, which in its growth has felt for some like the “Wild West.” Ethical concerns are also top of mind. “Behavioral science has confronted ethical dilemmas before … but never before has the essence of the field been so squarely in the wheelhouse of corporate interests,” writes Phillip Goff.

In the second section, we’ve placed the ideas about specific domains. This includes “Technology: Nightmare or New Norm,” where Tania Ramos considers the possibility of a behaviorally optimized tech dystopia. In “The Future of Work,” Laszlo Bock imagines that well-timed, intelligent nudges will foster healthier company cultures, and Jon Jachimowicz emphasizes the importance of passion in an economy increasingly dominated by A.I. In “Climate Change: Targeting Individuals and Systems,” behavioral scientists grapple with how the field can pull its weight in this existential fight. You’ll also find sections on building better governments, health care at the digital frontier and final mile, and the next steps for education.

The third and final section gets the most specific of all. Here you’ll find commentary on the opportunities (and obligations) for research and application. For instance, George Loewenstein suggests we pay more attention to attention—an increasingly scarce resource. Others, on the application side, ponder how behavioral science will influence the design of our neighborhoods and wonder what it will take to bring behavioral science into the courtroom. The section closes with ideas on the future of intervention design and ways we can continue to master our methods….(More)”.

How does participating in a deliberative citizens panel on healthcare priority setting influence the views of participants?


Paper by Vivian Reckers-Droog et al: “A deliberative citizens panel was held to obtain insight into criteria considered relevant for healthcare priority setting in the Netherlands. Our aim was to examine whether and how panel participation influenced participants’ views on this topic. Participants (n = 24) deliberated on eight reimbursement cases in September and October, 2017. Using Q methodology, we identified three distinct viewpoints before (T0) and after (T1) panel participation. At T0, viewpoint 1 emphasised that access to healthcare is a right and that prioritisation should be based solely on patients’ needs. Viewpoint 2 acknowledged scarcity of resources and emphasised the importance of treatment-related health gains. Viewpoint 3 focused on helping those in need, favouring younger patients, patients with a family, and treating diseases that heavily burden the families of patients. At T1, viewpoint 1 had become less opposed to prioritisation and more considerate of costs. Viewpoint 2 supported out-of-pocket payments more strongly. A new viewpoint 3 emerged that emphasised the importance of cost-effectiveness and that prioritisation should consider patient characteristics, such as their age. Participants’ views partly remained stable, specifically regarding equal access and prioritisation based on need and health gains. Notable changes concerned increased support for prioritisation, consideration of costs, and cost-effectiveness. Further research into the effects of deliberative methods is required to better understand how they may contribute to the legitimacy of and public support for allocation decisions in healthcare….(More)”.

Artificial intelligence, geopolitics, and information integrity


Report by John Villasenor: “Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote–or impede–information integrity….(More)”.

Artificial Morality


Essay by Bruce Sterling: “This is an essay about lists of moral principles for the creators of Artificial Intelligence. I collect these lists, and I have to confess that I find them funny.

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done.

What I find comical is a programmer’s approach to morality — the urge to carefully type out some moral code before raising unholy hell. Many professions other than programming have stern ethical standards: lawyers and doctors, for instance. Are lawyers and doctors evil? It depends. If a government is politically corrupt, then a nation’s lawyers don’t escape that moral stain. If a health system is slaughtering the population with mis-prescribed painkillers, then doctors can’t look good, either.

So if AI goes south, for whatever reason, programmers are just bound to look culpable and sinister. Careful lists of moral principles will not avert that moral judgment, no matter how many earnest efforts they make to avoid bias, to build and test for safety, to provide feedback for user accountability, to design for privacy, to consider the colossal abuse potential, and to eschew any direct involvement in AI weapons, AI spyware, and AI violations of human rights.

I’m not upset by moral judgments in well-intentioned manifestos, but it is an odd act of otherworldly hubris. Imagine if car engineers claimed they could build cars fit for all genders and races, test cars for safety-first, mirror-plate the car windows and the license plates for privacy, and make sure that military tanks and halftracks and James Bond’s Aston-Martin spy-mobile were never built at all. Who would put up with that presumptuous behavior? Not a soul, not even programmers.

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

I’m not a cynic about morality per se, but everybody’s got some. The military, the spies, the secret police and organized crime, they all have their moral codes. Military AI flourishes worldwide, and so does cyberwar AI, and AI police-state repression….(More)”.

The Information Trade: How Big Tech Conquers Countries, Challenges Our Rights, and Transforms Our World


Book by Alexis Wichowski: “… considers the unchecked rise of tech giants like Facebook, Google, Amazon, Apple, Microsoft, and Tesla—what she calls “net states”—and their unavoidable influence in our lives. Rivaling nation states in power and capital, today’s net states are reaching into our physical world, inserting digital services into our lived environments in ways both unseen and, at times, unknown to us. They are transforming the way the world works, putting our rights up for grabs, from personal privacy to national security.

Combining original reporting and insights drawn from more than 100 interviews with technology and government insiders, including Microsoft president Brad Smith, Google CEO Eric Schmidt, the former Federal Trade Commission chair under President Obama, and the managing director of Jigsaw—Google’s Department of Counter-terrorism against extremism and cyber-attacks—The Information Trade explores what happens when we give up our personal freedom and individual autonomy in exchange for an easy, plugged-in existence, and shows what we can do to control our relationship with net states before they irreversibly change our future….(More)”.

10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade


Future of Privacy Forum: “Today, FPF is publishing a white paper co-authored by CEO Jules Polonetsky and hackylawyER Founder Elizabeth Renieris to help corporate officers, nonprofit leaders, and policymakers better understand privacy risks that will grow in prominence during the 2020s, as well as rising technologies that will be used to help manage privacy through the decade. Leaders must understand the basics of technologies like biometric scanning, collaborative robotics, and spatial computing in order to assess how existing and proposed policies, systems, and laws will address them, and to support appropriate guidance for the implementation of new digital products and services.

The white paper, Privacy 2020: 10 Privacy Risks and 10 Privacy Enhancing Technologies to Watch in the Next Decade, identifies ten technologies that are likely to create increasingly complex data protection challenges. Over the next decade, privacy considerations will be driven by innovations in tech linked to human bodies, health, and social networks; infrastructure; and computing power. The white paper also highlights ten developments that can enhance privacy – providing cause for optimism that organizations will be able to manage data responsibly. Some of these technologies are already in general use, some will soon be widely deployed, and others are nascent….(More)”.

Incentive Competitions and the Challenge of Space Exploration


Article by Matthew S. Williams: “Bill Joy, the famed computer engineer who co-founded Sun Microsystems in 1982, once said, “No matter who you are, most of the smartest people work for someone else.” This has come to be known as “Joy’s Law” and is one of the inspirations for concepts such as “crowdsourcing”.

Increasingly, government agencies, research institutions, and private companies are looking to the power of the crowd to find solutions to problems. Challenges are created and prizes offered – that, in basic terms, is an “incentive competition.”

The basic idea of an incentive competition is pretty straightforward. When confronted with a particularly daunting problem, you appeal to the general public to provide possible solutions and offer a reward for the best one. Sounds simple, doesn’t it?

But in fact, this concept flies in the face of conventional problem-solving, in which companies recruit people with knowledge and expertise and solve all problems in-house. This kind of thinking underlies most of our government and business models, but it has some significant limitations….

Another benefit to crowdsourcing is the way it takes advantage of the exponential growth in human population over the past few centuries. Between 1650 and 1800, the global population doubled, to reach about 1 billion. It took another one hundred and twenty-seven years (until 1927) before it doubled again to reach 2 billion.

However, it took only forty-seven years for the population to double again and reach 4 billion (1974), and just twenty-five more for it to reach 6 billion (1999). As of 2020, the global population has reached 7.8 billion, and the growth trend is expected to continue for some time.

This growth has paralleled another trend, the rapid development of new ideas in science and technology. Between 1650 and 2020, humanity has experienced multiple technological revolutions, in what is a comparatively very short space of time….(More)”.

Are these the 20 top multi-stakeholder processes in 2020 to advance a digital ecosystem for the planet?


Paper by David Jensen, Karen Bakker and Christopher Reimer: “As outlined in our recent article, The promise and peril of a digital ecosystem for the planet, we propose that the ongoing digital revolution needs to be harnessed to drive a transformation towards global sustainability, environmental stewardship, and human well-being. Public, private and civil society actors must take deliberate action and collaborate to build a global digital ecosystem for the planet. A digital ecosystem that mobilizes hardware, software, and digital infrastructures together with data analytics to generate dynamic, real-time insights that can power various structural transformations is needed to achieve collective sustainability.

The digital revolution must also be used to abolish extreme poverty and reduce inequalities that jeopardize social cohesion and stability. Often, these social inequalities are tied to and overlap with ecological challenges. Ultimately, then, we must do nothing less than direct the digital revolution for planet, people, prosperity and peace.

To achieve this goal, we must embed the vision of a fair digital ecosystem for the planet into all of the key multi-stakeholder processes that are currently unfolding. We aim to do this through two new articles on Medium: a companion article on Building a digital ecosystem for the planet: 20 substantive priorities for 2020, and this one. In the companion article, we identify three primary engagement tracks: system architecture, applications, and governance. Within these three tracks, we outline 20 priorities for the new decade. Building from these priorities, our focus for this article is to identify a preliminary list of the top 20 most important multi-stakeholder processes that we must engage and influence in 2020….(More)”.

Shining light into the dark spaces of chat apps


Sharon Moshavi at Columbia Journalism Review: “News has migrated from print to the web to social platforms to mobile. Now, at the dawn of a new decade, it is heading to a place that presents a whole new set of challenges: the private, hidden spaces of instant messaging apps.  

WhatsApp, Facebook Messenger, Telegram, and their ilk are platforms that journalists cannot ignore — even in the US, where chat-app usage is low. “I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Mark Zuckerberg, Facebook’s CEO, wrote in March 2019. By 2022, three billion people will be using them on a regular basis, according to Statista

But fewer journalists worldwide are using these platforms to disseminate news than they were two years ago, as ICFJ discovered in its 2019 “State of Technology in Global Newsrooms” survey. That’s a particularly dangerous trend during an election year, because messaging apps are potential minefields of misinformation. 

American journalists should take stock of recent elections in India and Brazil, ahead of which misinformation flooded WhatsApp. ICFJ’s “TruthBuzz” projects found coordinated and widespread disinformation efforts using text, videos, and photos on that platform.  

This is particularly troubling given that more people now use these apps as a primary source of information. In Brazil, one in four internet users consult WhatsApp weekly as a news source. A recent report from New York University’s Center for Business and Human Rights warned that WhatsApp “could become a troubling source of false content in the US, as it has been during elections in Brazil and India.” It’s imperative that news media figure out how to map the contours of these opaque, unruly spaces, and deliver fact-based news to those who congregate there….(More)”.

The Gray Spectrum: Ethical Decision Making with Geospatial and Open Source Analysis


Report by The Stanley Center for Peace and Security: “Geospatial and open source analysts face decisions in their work that can directly or indirectly cause harm to individuals, organizations, institutions, and society. Though analysts may try to do the right thing, such ethically informed decisions can be complex. This is particularly true for analysts working on issues related to nuclear nonproliferation or international security, analysts whose decisions on whether to publish certain findings could have far-reaching consequences.

The Stanley Center for Peace and Security and the Open Nuclear Network (ONN) program of One Earth Future Foundation convened a workshop to explore these ethical challenges, identify resources, and consider options for enhancing the ethical practices of geospatial and open source analysis communities.

This Readout & Recommendations brings forward observations from that workshop. It describes ethical challenges that stakeholders from relevant communities face. It concludes with a list of needs participants identified, along with possible strategies for promoting sustaining behaviors that could enhance the ethical conduct of the community of nonproliferation analysts working with geospatial and open source data.

Some Key Findings

  • A code of ethics could serve important functions for the community, including giving moral guidance to practitioners, enhancing public trust in their work, and deterring unethical behavior. Participants in the workshop saw significant value in such a code and offered ideas for developing one.
  • Awareness of ethical dilemmas and strong ethical reasoning skills are essential for sustaining ethical practices, yet professionals in this field might not have easy access to such training. Several approaches could improve ethics education for the field overall, including starting a body of literature, developing model curricula, and offering training for students and professionals.
  • Other stakeholders—governments, commercial providers, funders, organizations, management teams, etc.—should contribute to the discussion on ethics in the community and reinforce sustaining behaviors….(More)”.