Imagining the Next Decade of Behavioral Science


Evan Nesterak at the Behavioral Scientist: “If you asked Richard Thaler in 2010 what he thought would become of the then very new field of behavioral science over the next decade, he would have been wrong, at least for the most part. Could he have predicted the expansion of behavioral economics research? Probably. The Nobel Prize? Maybe. The nearly 300 and counting behavioral teams in governments, businesses, and other organizations around the world? Not a chance.

When we asked him a year and a half ago to sum up the 10 years since the publication of Nudge, he replied, “Am I too old to just say OMG? … [Cass Sunstein and I] would never have anticipated one “nudge unit” much less 200…. Every once in a while, one of us will send the other an email that amounts to just ‘wow.’”

As we closed last year (and the last decade), we put out a call to help us imagine the next decade of behavioral science. We asked you to share your hopes and fears, predictions and warnings, open questions and big ideas. 

We received over 120 submissions from behavioral scientists around the world. We picked the most thought-provoking submissions and curated them below.

We’ve organized the responses into three sections. The first section, Promises and Pitfalls, houses the responses about the field as a whole—its identity, purpose, and values. In that section, you’ll find authors challenging the field to be bolder. You’ll also find ideas to unite the field, which in its growth has felt for some like the “Wild West.” Ethical concerns are also top of mind. “Behavioral science has confronted ethical dilemmas before … but never before has the essence of the field been so squarely in the wheelhouse of corporate interests,” writes Phillip Goff.

In the second section, we’ve placed the ideas about specific domains. This includes “Technology: Nightmare or New Norm,” where Tania Ramos considers the possibility of a behaviorally optimized tech dystopia. In “The Future of Work,” Laszlo Bock imagines that well-timed, intelligent nudges will foster healthier company cultures, and Jon Jachimowicz emphasizes the importance of passion in an economy increasingly dominated by A.I. In “Climate Change: Targeting Individuals and Systems,” behavioral scientists grapple with how the field can pull its weight in this existential fight. You’ll also find sections on building better governments, health care at the digital frontier and final mile, and the next steps for education.

The third and final section gets the most specific of all. Here you’ll find commentary on the opportunities (and obligations) for research and application. For instance, George Loewenstein suggests we pay more attention to attention—an increasingly scarce resource. Others, on the application side, ponder how behavioral science will influence the design of our neighborhoods and wonder what it will take to bring behavioral science into the courtroom. The section closes with ideas on the future of intervention design and ways we can continue to master our methods….(More)”.

Artificial Morality


Essay by Bruce Sterling: “This is an essay about lists of moral principles for the creators of Artificial Intelligence. I collect these lists, and I have to confess that I find them funny.

Nobody but AI mavens would ever tiptoe up to the notion of creating godlike cyber-entities that are much smarter than people. I hasten to assure you — I take that weird threat seriously. If we could wipe out the planet with nuclear physics back in the late 1940s, there must be plenty of other, novel ways to get that done.

What I find comical is a programmer’s approach to morality — the urge to carefully type out some moral code before raising unholy hell. Many professions other than programming have stern ethical standards: lawyers and doctors, for instance. Are lawyers and doctors evil? It depends. If a government is politically corrupt, then a nation’s lawyers don’t escape that moral stain. If a health system is slaughtering the population with mis-prescribed painkillers, then doctors can’t look good, either.

So if AI goes south, for whatever reason, programmers are just bound to look culpable and sinister. Careful lists of moral principles will not avert that moral judgment, no matter how many earnest efforts they make to avoid bias, to build and test for safety, to provide feedback for user accountability, to design for privacy, to consider the colossal abuse potential, and to eschew any direct involvement in AI weapons, AI spyware, and AI violations of human rights.

I’m not upset by moral judgments in well-intentioned manifestos, but it is an odd act of otherworldly hubris. Imagine if car engineers claimed they could build cars fit for all genders and races, test cars for safety-first, mirror-plate the car windows and the license plates for privacy, and make sure that military tanks and halftracks and James Bond’s Aston-Martin spy-mobile were never built at all. Who would put up with that presumptuous behavior? Not a soul, not even programmers.

In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.

I’m not a cynic about morality per se, but everybody’s got some. The military, the spies, the secret police and organized crime, they all have their moral codes. Military AI flourishes worldwide, and so does cyberwar AI, and AI police-state repression….(More)”.

Incentive Competitions and the Challenge of Space Exploration


Article by Matthew S. Williams: “Bill Joy, the famed computer engineer who co-founded Sun Microsystems in 1982, once said, “No matter who you are, most of the smartest people work for someone else.” This has come to be known as “Joy’s Law” and is one of the inspirations for concepts such as “crowdsourcing”.

Increasingly, government agencies, research institutions, and private companies are looking to the power of the crowd to find solutions to problems. Challenges are created and prizes offered – that, in basic terms, is an “incentive competition.”

The basic idea of an incentive competition is pretty straightforward. When confronted with a particularly daunting problem, you appeal to the general public to provide possible solutions and offer a reward for the best one. Sounds simple, doesn’t it?

But in fact, this concept flies in the face of conventional problem-solving, which is for companies to recruit people with knowledge and expertise and solve all problems in-house. This kind of thinking underlies most of our government and business models, but has some significant limitations….

Another benefit to crowdsourcing is the way it takes advantage of the exponential growth in human population in the past few centuries. Between 1650 and 1800, the global population doubled, to reach about 1 billion. It took another one hundred and twenty years (1927) before it doubled again to reach 2 billion.

However, it only took forty-seven years for the population to double again and reach 4 billion (1974), and about twenty-five more for it to reach 6 billion (1999). As of 2020, the global population has reached 7.8 billion, and the growth trend is expected to continue for some time.
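A back-of-the-envelope sketch (in Python, using the approximate figures cited above) shows how sharply the implied annual growth rate rises across these doubling intervals:

```python
# Rough check of the growth rates implied by the doubling intervals above.
# The periods and populations are the approximate figures from the article;
# the formula is the standard compound-growth relation (1 + r) ** years = 2.

doubling_intervals = [
    ("1650-1800", 150),  # ~0.5 billion -> ~1 billion
    ("1800-1927", 127),  # 1 billion -> 2 billion
    ("1927-1974", 47),   # 2 billion -> 4 billion
]

for period, years in doubling_intervals:
    rate = 2 ** (1 / years) - 1
    print(f"{period}: doubling in {years} years ~ {rate * 100:.2f}% growth per year")
```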

This growth has paralleled another trend, the rapid development of new ideas in science and technology. Between 1650 and 2020, humanity has experienced multiple technological revolutions, in what is a comparatively very short space of time….(More)”.

Shining light into the dark spaces of chat apps


Sharon Moshavi at Columbia Journalism Review: “News has migrated from print to the web to social platforms to mobile. Now, at the dawn of a new decade, it is heading to a place that presents a whole new set of challenges: the private, hidden spaces of instant messaging apps.  

WhatsApp, Facebook Messenger, Telegram, and their ilk are platforms that journalists cannot ignore — even in the US, where chat-app usage is low. “I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Mark Zuckerberg, Facebook’s CEO, wrote in March 2019. By 2022, three billion people will be using them on a regular basis, according to Statista.

But fewer journalists worldwide are using these platforms to disseminate news than they were two years ago, as ICFJ discovered in its 2019 “State of Technology in Global Newsrooms” survey. That’s a particularly dangerous trend during an election year, because messaging apps are potential minefields of misinformation. 

American journalists should take stock of recent elections in India and Brazil, ahead of which misinformation flooded WhatsApp. ICFJ’s “TruthBuzz” projects found coordinated and widespread disinformation efforts using text, videos, and photos on that platform.  

It is particularly troubling given that more people now use it as a primary source for information. In Brazil, one in four internet users consult WhatsApp weekly as a news source. A recent report from New York University’s Center for Business and Human Rights warned that WhatsApp “could become a troubling source of false content in the US, as it has been during elections in Brazil and India.” It’s imperative that news media figure out how to map the contours of these opaque, unruly spaces, and deliver fact-based news to those who congregate there….(More)”.

You Are Now Remotely Controlled


Essay by Shoshana Zuboff in The New York Times: “…Only repeated crises have taught us that these platforms are not bulletin boards but hyper-velocity global bloodstreams into which anyone may introduce a dangerous virus without a vaccine. This is how Facebook’s chief executive, Mark Zuckerberg, could legally refuse to remove a faked video of Speaker of the House Nancy Pelosi and later double down on this decision, announcing that political advertising would not be subject to fact-checking.

All of these delusions rest on the most treacherous hallucination of them all: the belief that privacy is private. We have imagined that we can choose our degree of privacy with an individual calculation in which a bit of personal information is traded for valued services — a reasonable quid pro quo. For example, when Delta Air Lines piloted a biometric data system at the Atlanta airport, the company reported that of nearly 25,000 customers who traveled there each week, 98 percent opted into the process, noting that “the facial recognition option is saving an average of two seconds for each customer at boarding, or nine minutes when boarding a wide body aircraft.”

In fact the rapid development of facial recognition systems reveals the public consequences of this supposedly private choice. Surveillance capitalists have demanded the right to take our faces wherever they appear — on a city street or a Facebook page. The Financial Times reported that a Microsoft facial recognition training database of 10 million images plucked from the internet without anyone’s knowledge and supposedly limited to academic research was employed by companies like IBM and state agencies that included the United States and Chinese military. Among these were two Chinese suppliers of equipment to officials in Xinjiang, where members of the Uighur community live in open-air prisons under perpetual surveillance by facial recognition systems.

Privacy is not private, because the effectiveness of these and other private or public surveillance and control systems depends upon the pieces of ourselves that we give up — or that are secretly stolen from us.

Our digital century was to have been democracy’s Golden Age. Instead, we enter its third decade marked by a stark new form of social inequality best understood as “epistemic inequality.” It recalls a pre-Gutenberg era of extreme asymmetries of knowledge and the power that accrues to such knowledge, as the tech giants seize control of information and learning itself. The delusion of “privacy as private” was crafted to breed and feed this unanticipated social divide. Surveillance capitalists exploit the widening inequity of knowledge for the sake of profits. They manipulate the economy, our society and even our lives with impunity, endangering not just individual privacy but democracy itself. Distracted by our delusions, we failed to notice this bloodless coup from above….(More)”.

An AI Epidemiologist Sent the First Warnings of the Wuhan Virus


Eric Niiler at Wired: “On January 9, the World Health Organization notified the public of a flu-like outbreak in China: a cluster of pneumonia cases had been reported in Wuhan, possibly from vendors’ exposure to live animals at the Huanan Seafood Market. The US Centers for Disease Control and Prevention had gotten the word out a few days earlier, on January 6. But a Canadian health monitoring platform had beaten them both to the punch, sending word of the outbreak to its customers on December 31.

BlueDot uses an AI-driven algorithm that scours foreign-language news reports, animal and plant disease networks, and official proclamations to give its clients advance warning to avoid danger zones like Wuhan.

Speed matters during an outbreak, and tight-lipped Chinese officials do not have a good track record of sharing information about diseases, air pollution, or natural disasters. But public health officials at WHO and the CDC have to rely on these very same health officials for their own disease monitoring. So maybe an AI can get there faster. “We know that governments may not be relied upon to provide information in a timely fashion,” says Kamran Khan, BlueDot’s founder and CEO. “We can pick up news of possible outbreaks, little murmurs or forums or blogs of indications of some kind of unusual events going on.”…
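BlueDot has not published its pipeline, so the sketch below is only a toy illustration of the general idea Khan describes: scan incoming report text for co-occurrences of outbreak-related terms and watched place names, then surface anything unusual for human review. The keyword lists, the Report structure, and the flag_reports function are all hypothetical.

```python
import re
from dataclasses import dataclass

# Toy illustration only: real systems use multilingual NLP, disease ontologies,
# airline ticketing data, and human epidemiologists who review every signal.

OUTBREAK_TERMS = {"pneumonia", "outbreak", "unexplained illness", "cluster of cases"}
WATCHED_PLACES = {"wuhan", "hubei"}  # hypothetical watch list


@dataclass
class Report:
    source: str
    text: str


def flag_reports(reports):
    """Return reports that mention both an outbreak term and a watched place."""
    flagged = []
    for report in reports:
        text = report.text.lower()
        has_term = any(term in text for term in OUTBREAK_TERMS)
        has_place = any(re.search(rf"\b{place}\b", text) for place in WATCHED_PLACES)
        if has_term and has_place:
            flagged.append(report)
    return flagged


if __name__ == "__main__":
    sample = [
        Report("local news", "Cluster of cases of pneumonia of unknown cause reported in Wuhan."),
        Report("local news", "New metro line opens in Wuhan."),
    ]
    for r in flag_reports(sample):
        print(f"[ALERT] {r.source}: {r.text}")
```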

The firm isn’t the first to look for an end-run around public health officials, but they are hoping to do better than Google Flu Trends, which was euthanized after overestimating the severity of the 2013 flu season by 140 percent. BlueDot successfully predicted the location of the Zika outbreak in South Florida in a publication in the British medical journal The Lancet….(More)”.

AI Isn’t a Solution to All Our Problems


Article by Griffin McCutcheon, John Malloy, Caitlyn Hall, and Nivedita Mahesh: “From the esoteric worlds of predictive health care and cybersecurity to Google’s e-mail completion and translation apps, the impacts of AI are increasingly being felt in our everyday lived experience. The way it has crept into our lives in such diverse ways and its proficiency in low-level knowledge shows that AI is here to stay. But like any helpful new tool, there are notable flaws and consequences to blindly adopting it.

AI is a tool—not a cure-all to modern problems….

Connecterra is trying to use TensorFlow to address global hunger through AI-enabled efficient farming and sustainable food development. The company uses AI-equipped sensors to track cattle health, helping farmers look for signs of illness early on. But this only benefits one type of farmer: those rearing cattle who are able to afford a device to outfit their entire herd. Applied this way, AI can only improve the productivity of specific resource-intensive dairy farms and is unlikely to meet Connecterra’s goal of ending world hunger.

This solution, and others like it, ignores the wider social context of AI’s application. The belief that AI is a cure-all tool that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous as it prevents other effective solutions from being implemented earlier or even explored. Instead, we need to both build AI responsibly and understand where it can be reasonably applied. 

Challenges with AI are exacerbated because these tools often come to the public as “black boxes”—easy to use but entirely opaque in nature. This shields the user from understanding what biases and risks may be involved, and this lack of public understanding of AI tools and their limitations is a serious problem. We shouldn’t put our complete trust in programs whose workings their creators cannot interpret. These poorly understood conclusions from AI generate risk for individual users, companies, or government projects where these tools are used.

With AI’s pervasiveness and the slow change of policy, where do we go from here? We need a more rigorous system in place to evaluate and manage risk for AI tools….(More)”.

Making Public Transit Fairer to Women Demands Way More Data


Flavie Halais at Wired: “Public transportation is sexist. This may be unintentional or implicit, but it’s also easy to see. Women around the world do more care and domestic work than men, and their resulting mobility habits are hobbled by most transport systems. The demands of running errands and caring for children and other family members mean repeatedly getting on and off the bus, and paying more fares each time. Strollers and shopping bags make travel cumbersome. A 2018 study of New Yorkers found women were harassed on the subway far more frequently than men were, and as a result paid more money to avoid transit in favor of taxis and ride-hail….

What is not measured is not known, and the world of transit data is still largely blind to women and other vulnerable populations. Getting that data, though, isn’t easy. Traditional sources like national censuses and user surveys provide reliable information that serves as the basis for policies and decision-making. But surveys are costly to run, and it can take years for a government to go through the process of adding a question to its national census.

Before pouring resources into costly data collection to find answers about women’s transport needs, cities could first turn to the trove of unconventional gender-disaggregated data that’s already produced. They include data exhaust, or the trail of data we leave behind as a result of our interactions with digital products and services like mobile phones, credit cards, and social media. Last year, researchers in Santiago, Chile, released a report based on their parsing of anonymized call detail records of female mobile phone users, to extract location information and analyze their mobility patterns. They found that women tended to travel to fewer locations than men, and within smaller geographical areas. When researchers cross-referenced location information with census data, they found a higher gender gap among lower-income residents, as poorer women made even shorter trips. And when using data from the local transit agency, they saw that living close to a public transit stop increased mobility for both men and women, but didn’t close the gender gap for poorer residents.
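The article doesn’t reproduce the Santiago team’s methodology, but a minimal sketch of the kind of aggregate it describes (counting how many distinct locations each anonymized user visits in call-detail-style records, then comparing averages across groups) might look like this. The record format, field names, and sample values are assumptions.

```python
from collections import defaultdict
from statistics import mean

# Each record: (anonymized user id, gender label from an aggregate source,
# latitude and longitude of the antenna handling the call). All fields assumed.
records = [
    ("u1", "F", -33.45, -70.66),
    ("u1", "F", -33.45, -70.66),
    ("u1", "F", -33.44, -70.65),
    ("u2", "M", -33.45, -70.66),
    ("u2", "M", -33.40, -70.60),
    ("u2", "M", -33.50, -70.70),
]


def distinct_locations(records):
    """Count the distinct antenna locations seen per user."""
    seen = defaultdict(set)
    gender = {}
    for user, g, lat, lon in records:
        seen[user].add((lat, lon))
        gender[user] = g
    return {user: (gender[user], len(locs)) for user, locs in seen.items()}


def mean_by_gender(per_user):
    """Average the per-user location counts within each gender label."""
    by_gender = defaultdict(list)
    for _, (g, n) in per_user.items():
        by_gender[g].append(n)
    return {g: mean(ns) for g, ns in by_gender.items()}


print(mean_by_gender(distinct_locations(records)))  # e.g. {'F': 2, 'M': 3}
```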

To encourage private companies to share such info, Stefaan Verhulst advocates for data collaboratives, flexible partnerships between data providers and researchers. Verhulst is the head of research and development at GovLab, a research center at New York University that contributed to the research in Santiago. And that’s how GovLab and its local research partner, Universidad del Desarrollo, got access to the phone records owned by the Chilean phone company, Telefónica. Data collaboratives can enhance access to private data without exposing companies to competition or privacy concerns. “We need to find ways to access data according to different shades of openness,” Verhulst says….(More)”.

UK citizens' climate assembly to meet for first time


Sandra Laville in The Guardian: “Ordinary people from across the UK – potentially including climate deniers – will take part in the first ever citizens’ climate assembly this weekend.

Mirroring the model adopted in France by Emmanuel Macron, 110 people from all walks of life will begin deliberations on Saturday to come up with a plan to tackle global heating and meet the government’s target of net-zero emissions by 2050.

The assembly was selected to be a representative sample of the population after a mailout to 30,000 people chosen at random. About 2,000 people responded saying they wanted to be considered for the assembly, and the 110 members were picked by computer.

They come from all age brackets and their selection reflects a 2019 Ipsos Mori poll of how concerned the general population is about climate change, where responses ranged from not at all to very concerned. Of the assembly members, three people are not at all concerned, 16 not very concerned, 36 fairly concerned, 54 very concerned, and one did not know, organisers said.
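The Sortition Foundation’s actual selection algorithm balances several demographic quotas simultaneously; as a simplified, hedged illustration of the single-quota idea, using the concern-level figures quoted above, a Python sketch might look like this (the respondent pool and field names are hypothetical):

```python
import random

# Target seats per concern level, per the figures quoted above (sums to 110).
QUOTAS = {
    "not at all concerned": 3,
    "not very concerned": 16,
    "fairly concerned": 36,
    "very concerned": 54,
    "don't know": 1,
}


def select_assembly(respondents, quotas, seed=None):
    """Randomly fill each quota from the volunteers in that stratum.

    `respondents` is a list of (name, concern_level) pairs, e.g. the roughly
    2,000 people who replied to the mailout. The real process also balances
    age, gender, region, and other characteristics at the same time.
    """
    rng = random.Random(seed)
    selected = []
    for level, seats in quotas.items():
        pool = [name for name, concern in respondents if concern == level]
        if len(pool) < seats:
            raise ValueError(f"not enough volunteers who are '{level}'")
        selected.extend(rng.sample(pool, seats))
    return selected
```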

The selection process meant those chosen could include climate deniers or sceptics, according to Sarah Allan, the head of engagement at Involve, which is running the assembly along with the Sortition Foundation and the e-democracy project mySociety.

“It is really important that it is representative of the UK population,” said Allan. “Those people, just because they’re sceptical of climate change, they’re going to be affected by the steps the government takes to get to net zero by 2050 too and they shouldn’t have their voice denied in that.”

The UK climate assembly differs from the French model in that it was commissioned by six select committees, rather than by the prime minister. Their views, which will be produced in a report in the spring, will be considered by the select committees but there is no guarantee any of the proposals will be taken up by government.

Allan said it was rare for members of a citizens’ assembly to get locked into dissent. She pointed to the success of the Irish citizens’ assembly in 2016, which helped break the deadlock in the abortion debate. “This climate assembly is going to come up with recommendations that are going to be really invaluable in highlighting public preferences,” she said….(More)”.

How Aid Groups Map Refugee Camps That Officially Don't Exist


Abby Sewell at Wired: “On the outskirts of Zahle, a town in Lebanon’s Beqaa Valley, a pair of aid workers carrying clipboards and cell phones walk through a small refugee camp, home to 11 makeshift shelters built from wood and tarps.

A camp resident leading them through the settlement—one of many in the Beqaa, a wide agricultural plain between Beirut and Damascus with scattered villages of cinderblock houses—points out a tent being renovated for the winter. He leads them into the kitchen of another tent, highlighting cracking wood supports and leaks in the ceiling. The aid workers record the number of residents in each tent, as well as the number of latrines and kitchens in the settlement.

The visit is part of an initiative by the Switzerland-based NGO Medair to map the locations of the thousands of informal refugee settlements in Lebanon, a country where even many city buildings have no street addresses, much less tents on a dusty country road.

“I always say that this project is giving an address to people that lost their home, which is giving back part of their dignity in a way,” says Reine Hanna, Medair’s information management project manager, who helped develop the mapping project.

The initiative relies on GIS technology, though the raw data is collected the old-school way, without high-tech mapping aids like drones. Mapping teams criss-cross the country year-round, stopping at each camp to speak to residents and conduct a survey. They enter the coordinates of new camps or changes in the population or facilities of old ones into a database that’s shared with UNHCR, the UN refugee agency, and other NGOs working in the camps. The maps can be accessed via a mobile app by workers heading to the field to distribute aid or respond to emergencies.
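The article doesn’t describe Medair’s schema; as a rough illustration of the kind of record such a survey visit might produce and share with partners as GeoJSON, a sketch could look like the following (all field names and values are assumptions):

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class SettlementSurvey:
    settlement_id: str  # internal reference, i.e. the "address" given to the camp
    latitude: float
    longitude: float
    shelters: int
    residents: int
    latrines: int
    kitchens: int
    survey_date: str    # ISO date of the visit

    def to_geojson_feature(self):
        """Represent the survey as a GeoJSON Feature for sharing with partners."""
        props = asdict(self)
        lon, lat = props.pop("longitude"), props.pop("latitude")
        return {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": props,
        }


survey = SettlementSurvey("ZAH-041", 33.846, 35.902, 11, 64, 4, 9, "2019-11-12")
print(json.dumps(survey.to_geojson_feature(), indent=2))
```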

Lebanon, a small country with an estimated native population of about 4 million, hosts more than 900,000 registered Syrian refugees and potentially hundreds of thousands more unregistered, making it the country with the highest number of refugees per capita in the world.

But there are no official refugee camps run by the government or the UN refugee agency in Lebanon, where refugees are a sensitive subject. The country is not a signatory to the 1951 Refugee Convention, and government officials refer to the Syrians as “displaced,” not “refugees.”

Lebanese officials have been wary of the Syrians settling permanently, as Palestinian refugees did beginning in 1948. Today, more than 70 years later, there are some 470,000 Palestinian refugees registered in Lebanon, though the number living in the country is believed to be much lower….(More)”.

Caption: Maps compiled by UNHCR showing the growth of informal Syrian refugee settlements in the Zahle district of Lebanon’s Beqaa Valley over the past six years. Courtesy of UNHCR.