Harvard fraud claims fuel doubts over science of behaviour


Article by Andrew Hill and Andrew Jack: “Claims that fraudulent data was used in papers co-authored by a star Harvard Business School ethics expert have fuelled a growing controversy about the validity of behavioural science, whose findings are routinely taught in business schools and applied within companies.

While the professor has not yet responded to details of the claims, the episode is the latest blow to a field that has risen to prominence over the past 15 years and whose findings in areas such as decision-making and team-building are widely put into practice.

Companies from Coca-Cola to JPMorgan Chase have executives dedicated to behavioural science, while governments around the world have also embraced its findings. But well-known principles in the field such as “nudge theory” are now being called into question.

The Harvard episode “is topic number one in business school circles”, said André Spicer, executive dean of London’s Bayes Business School. “There has been a large-scale replication crisis in psychology — lots of the results can’t be reproduced and some of the underlying data has been found to be faked.”…

That cast a shadow over the use of behavioural science by government-linked “nudge units” such as the UK’s Behavioural Insights Team, which was spun off into a company in 2014, and the US Office of Evaluation Sciences.

However, David Halpern, now president of BIT, countered that publication bias is not unique to the field. He said he and his peers use far larger-scale, more representative and robust testing than academic research.

Halpern argued that behavioural research can help to effectively deploy government budgets. “The dirty secret of most governments and organisations is that they spend a lot of money, but have no idea if they are spending in ways that make things better.”

Academics point out that testing others’ results is part of normal scientific practice. The difference with behavioural science is that initial results that have not yet been replicated are often quickly recycled into sensational headlines, popular self-help books and business practice.

“Scientists should be better at pointing out when non-scientists over-exaggerate these things and extrapolate, but they are worried that if they do this they will ruin the positive trend [towards their field],” said Pelle Guldborg Hansen, chief executive of iNudgeyou, a centre for applied behavioural research.

Many consultancies have sprung up to cater to corporate demand for behavioural insights. “What I found was that almost anyone who had read Nudge had a licence to set up as a behavioural scientist,” said Nuala Walsh, who formed the Global Association of Applied Behavioural Scientists in 2020 to try to set some standards…(More)”.

Health Care Data Is a Researcher’s Gold Mine


Article by James O’Shaughnessy: “The UK’s National Health Service should aim to become the world’s leading platform for health research and development. We’ve seen some great examples of the potential we have for world-class research during the pandemic, such as the RECOVERY trial and the Covid vaccine platform, and since then through the partnerships with Moderna, Grail, and BioNTech. However, these examples of partnership with industry are often ad hoc arrangements. In general, funding and prestige are concentrated on research labs and early-phase trials, but when it comes to helping health care companies through the commercialization stages of their products, both public and private sector funding is much harder to access. This makes it hard for startups partnering with the NHS to scale their products and sell them on the domestic and international markets.

Instead, we need a systematic approach to leverage our strengths, such as the scale of the NHS, the diversity of our population, and the deep patient phenotyping that our data assets enable. That will give us the opportunity to generate vast amounts of real-world data about health care drugs and technologies—like pricing, performance, and safety—that can prepare companies to scale their innovations and go to market.

To achieve that, there are obstacles to overcome. For instance, setting up research projects is incredibly time-consuming. We have very bureaucratic processes that make the UK one of the slowest places in Europe to set up research studies.

Patients need more access to research. However, there’s really poor information at the moment about where clinical trials are taking place in the country and what kind of patients they are recruiting. We need a clinicaltrials.gov.uk website to give that sort of information.

There’s a significant problem when it comes to the question of patient consent to participate in R&D. Legally, unless patients have said explicitly that they want to be approached for a research project or a clinical trial, they can’t be contacted for that purpose. The catch-22 is that, of course, most patients are not aware of this, and you can’t legally contact them to inform them. We need to allow ethically approved researchers to proactively approach people to take part in studies which might be of benefit to them…(More)”.

How Leaders in Higher Education Can Embed Behavioral Science in Their Institutions


Essay by Ross E. O’Hara: “…Once we view student success through a behavioral science lens and see the complex systems underlying student decision making, it becomes clear that behavioral scientists work best not as mechanics who repair broken systems, but as engineers who design better systems. Higher education, therefore, needs to diffuse those engineers throughout the organization.

To that end, Hallsworth recommends that organizations change their view of behavioral science “from projects to processes, from commissions to culture.” Only when behavioral science expertise is diffused across units and incorporated into all key organizational functions can a college become behaviorally enabled. So how might higher education go about this transformation?

1. Leverage the faculty

Leaders with deep expertise in behavioral science are likely already employed in social and behavioral sciences departments. Consider ways to focus their energy inward to tackle institutional challenges, perhaps using their own classrooms or departments as testing grounds. As they find promising solutions, build the infrastructure to disseminate and implement those ideas college- and system-wide. Unlike higher education’s normal approach—giving faculty additional unpaid and underappreciated committee work—provide funding and recognition that incentivize faculty to make higher education policy an important piece of their academic portfolio.

2. Practice cross-functional training

I have spent the past several years providing colleges with behavioral science professional development, but too often this work is focused on a single functional unit, like academic advisors or faculty. Instead, create trainings that include representatives from across campus (e.g., enrollment; financial aid; registrar; student affairs). Not only will this diffuse behavioral science knowledge across the institution, but it will bring together the key players that impact student experience and make it easier for them to see the adaptive system that determines whether a student graduates or withdraws.

3. Let behavioral scientists be engineers

Whether you look for faculty or outside consultants, bring behavioral science experts into conversations early. From redesigning college-to-career pathways to building a new cafeteria, behavioral scientists can help gather and interpret student voices, foresee and circumvent behavioral challenges, and identify measurable and meaningful evaluation metrics. The impact of their expertise will be even greater when they work in an environment with a diffuse knowledge of behavioral science already in place…(More)”

Barred From Grocery Stores by Facial Recognition


Article by Adam Satariano and Kashmir Hill: “Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: Capture the culprits’ faces.

On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

“It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave…(More)”.
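The article does not describe how Facewatch performs the match itself. As a rough, hypothetical sketch of how watchlist matching of this kind generally works, the snippet below compares a face embedding extracted from camera footage against stored embeddings of previously flagged people and raises an alert above a similarity threshold. The embedding model, the threshold value, and all identifiers are stand-ins for illustration, not Facewatch’s actual system.

```python
import numpy as np

# Hypothetical illustration only: a real deployment would compute embeddings
# with a trained face-recognition model; random vectors stand in for them here.
EMBEDDING_DIM = 128
SIMILARITY_THRESHOLD = 0.8  # assumed; real systems tune this against false alerts

rng = np.random.default_rng(0)

# Shared watchlist: one stored embedding per previously flagged person.
watchlist = {
    "person_001": rng.normal(size=EMBEDDING_DIM),
    "person_002": rng.normal(size=EMBEDDING_DIM),
}

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding):
    """Return the ID of the best watchlist match above the threshold, else None."""
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for person_id, stored in watchlist.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# A camera frame yields an embedding; a match triggers an alert to store staff.
incoming = watchlist["person_001"] + rng.normal(scale=0.05, size=EMBEDDING_DIM)
match = check_against_watchlist(incoming)
if match is not None:
    print(f"ALERT: possible watchlist match ({match}) just entered the store")
```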

Gamifying medical data labeling to advance AI


Article by Zach Winn: “…Duhaime began exploring ways to leverage collective intelligence to improve medical diagnoses. In one experiment, he trained groups of lay people and medical school students whom he describes as “semiexperts” to classify skin conditions, finding that by combining the opinions of the highest performers he could outperform professional dermatologists. He also found that by combining algorithms trained to detect skin cancer with the opinions of experts, he could outperform either method on its own….The DiagnosUs app, which Duhaime developed with Centaur co-founders Zach Rausnitz and Tom Gellatly, is designed to help users test and improve their skills. Duhaime says about half of users are medical school students and the other half are mostly doctors, nurses, and other medical professionals…
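Centaur’s actual aggregation method is not spelled out in the article. A minimal sketch of the general idea, combining several non-expert opinions into one more reliable label, is a weighted vote in which each labeler’s opinion counts in proportion to their past accuracy; the labelers, labels, and weights below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical example: several "semiexpert" opinions on one skin-lesion image,
# each weighted by that labeler's historical accuracy on gold-standard cases.
opinions = [
    ("labeler_a", "melanoma", 0.85),       # (labeler, label, historical accuracy)
    ("labeler_b", "benign_nevus", 0.70),
    ("labeler_c", "melanoma", 0.80),
    ("labeler_d", "melanoma", 0.60),
]

def weighted_vote(opinions):
    """Combine opinions by summing each labeler's accuracy weight per label."""
    scores = defaultdict(float)
    for _, label, accuracy in opinions:
        scores[label] += accuracy
    return max(scores, key=scores.get)

print(weighted_vote(opinions))  # -> "melanoma"
```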

The approach stands in sharp contrast to traditional data labeling and AI content moderation, which are typically outsourced to low-resource countries.

Centaur’s approach produces accurate results, too. In a paper with researchers from Brigham and Women’s Hospital, Massachusetts General Hospital (MGH), and Eindhoven University of Technology, Centaur showed its crowdsourced opinions labeled lung ultrasounds as reliably as experts did…

Centaur has found that the best performers come from surprising places. In 2021, to collect expert opinions on EEG patterns, researchers held a contest through the DiagnosUs app at a conference featuring about 50 epileptologists, each with more than 10 years of experience. The organizers made a custom shirt to give to the contest’s winner, who they assumed would be in attendance at the conference.

But when the results came in, a pair of medical students in Ghana, Jeffery Danquah and Andrews Gyabaah, had beaten everyone in attendance. The highest-ranked conference attendee had come in ninth…(More)”

Tap into the Wisdom of Your ‘Inner Crowd’


Essay by Emir Efendić and Philippe Van de Calseyde: “Take your best guess for the questions below. Without looking up the answers, jot down your guess in your notes app or on a piece of paper. 

  1. What is the weight of the Liberty Bell? 
  2. Saudi Arabia consumes what percentage of the oil it produces? 
  3. What percent of the world’s population lives in China, India, and the European Union combined?

Next, we want you to take a second guess at these questions. But here’s the catch: this time, try answering from the perspective of a friend with whom you often disagree. (For us, it’s the colleague with whom we shared an office in grad school, ever the contrarian.) How would your friend answer these questions? Write down the second guesses. 

Now, the correct answers. The Liberty Bell weighs 2,080 pounds, and, when we conducted the study in 2021, Saudi Arabia consumed 32.5 percent of the oil it produced, and 43.2 percent of the world’s population lived in China, India, and the European Union combined.

For the final step, compare your first guess with the average of both your guesses.

If you’re like most of the participants in our experiment, averaging the two guesses for each question brings you closer to the answer. Why this works has to do with the fascinating way in which people make estimates and how principles of aggregation can be used to improve numerical estimates. 

A lot of research has shown that the aggregate of individual judgments can be quite accurate, in what has been termed the “wisdom of the crowds.” What makes a crowd so wise? Its wisdom relies on a relatively simple principle: when people’s guesses are sufficiently diverse and independent, averaging judgments increases accuracy by canceling out errors across individuals. 

Interestingly, research suggests that the same principles underlying wise crowds also apply when multiple estimates from a single person are averaged—a phenomenon known as the “wisdom of the inner crowd.” As it turns out, the average guess of the same person is often more accurate than each individual guess on its own.

Although effective, multiple guesses from a single person do suffer from a major drawback. They are typically quite similar to one another, as people tend to anchor on their first guess when generating a second guess….(More)”.
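As a rough numerical illustration of both points (the error distributions below are assumed, not taken from the study), the simulation shows that averaging two guesses cuts the expected error substantially when the guesses are independent, and that the benefit shrinks as the second guess becomes more correlated with the first, which is the anchoring problem described above.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 2_080.0           # the Liberty Bell's weight in pounds
n_people = 100_000
noise_sd = 400.0               # assumed spread of individual guesses

def mean_abs_error(estimates):
    return float(np.mean(np.abs(estimates - true_value)))

first = true_value + rng.normal(0, noise_sd, n_people)

for rho in (0.0, 0.5, 0.9):    # correlation between first and second guesses
    shared = rho * (first - true_value)
    independent = np.sqrt(1 - rho**2) * rng.normal(0, noise_sd, n_people)
    second = true_value + shared + independent
    averaged = (first + second) / 2
    print(f"rho={rho:.1f}  first-guess error={mean_abs_error(first):7.1f}  "
          f"averaged error={mean_abs_error(averaged):7.1f}")
```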

How data helped Mexico City reduce high-impact crime by more than 50%


Article by Alfredo Molina Ledesma: “When Claudia Sheinbaum Pardo became Mayor of Mexico City in 2018, she wanted a new approach to tackling the city’s most pressing problems. Crime was at the very top of the agenda – only 7% of the city’s inhabitants considered it a safe place. New policies were needed to turn this around.

Data became a central part of the city’s new strategy. The Digital Agency for Public Innovation was created in 2019 – tasked with using data to help transform the city. To put this into action, the city administration immediately implemented an open data policy and launched their official data platform, Portal de Datos Abiertos. The policy and platform aimed to make data that Mexico City collects accessible to anyone: municipal agencies, businesses, academics, and ordinary people.

“The main objective of the open data strategy of Mexico City is to enable more people to make use of the data generated by the government in a simple and interactive manner,” said Jose Merino, Head of the Digital Agency for Public Innovation. “In other words, what we aim for is to democratize the access and use of information.” To achieve this goal a new tool for interactive data visualization called Sistema Ajolote was developed in open source and integrated into the Open Data Portal…

Information that had never been made public before, such as street-level crime from the Attorney General’s Office, is now accessible to everyone. Academics, businesses and civil society organizations can access the data to create solutions and innovations that complement the city’s new policies. One example is the successful “Hoyo de Crimen” app, which proposes safe travel routes based on the latest street-level crime data, enabling people to avoid crime hotspots as they walk or cycle through the city.
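Neither the portal’s schema nor the app’s routing algorithm is described here. As a toy sketch of the first step such an application might take, the snippet below buckets published street-level incident records into coarse grid cells and counts incidents per cell to surface hotspots; the column names, coordinates, and cell size are invented for illustration.

```python
import pandas as pd

# Invented sample records standing in for street-level incident data published
# on an open data portal; real field names and values would differ.
incidents = pd.DataFrame({
    "lat":  [19.4326, 19.4330, 19.4401, 19.4405, 19.4329],
    "lon":  [-99.1332, -99.1335, -99.1500, -99.1498, -99.1333],
    "type": ["robbery", "robbery", "assault", "robbery", "theft"],
})

# Bucket incidents into a coarse grid (roughly 1 km cells) and count per cell.
CELL = 0.01  # degrees
incidents["cell"] = list(zip((incidents.lat // CELL) * CELL,
                             (incidents.lon // CELL) * CELL))
hotspots = (incidents.groupby("cell").size()
            .sort_values(ascending=False)
            .rename("incident_count"))
print(hotspots)
```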

Since the introduction of the open data policy – which has contributed to a comprehensive crime reduction and social support strategy – high-impact crime in the city has decreased by 53%, and 43% of Mexico City residents now consider the city to be a safe place…(More)”.

Use of AI in social sciences could mean humans will no longer be needed in data collection


Article by Michael Lee: “A team of researchers from four Canadian and American universities say artificial intelligence could replace humans when it comes to collecting data for social science research.

Researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania published an article in the journal Science on June 15 about how AI, specifically large language models (LLMs), could affect their work.

“AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” Igor Grossmann, professor of psychology at Waterloo and a co-author of the article, said in a news release.

Philip Tetlock, a psychology professor at UPenn and article co-author, goes so far as to say that LLMs will “revolutionize human-based forecasting” in just three years.

In their article, the authors pose the question: “How can social science research practices be adapted, even reinvented, to harness the power of foundational AI? And how can this be done while ensuring transparent and replicable research?”

The authors say the social sciences have traditionally relied on methods such as questionnaires and observational studies.

But with the ability of LLMs to pore over vast amounts of text data and generate human-like responses, the authors say this presents a “novel” opportunity for researchers to test theories about human behaviour at a faster rate and on a much larger scale.

Scientists could use LLMs to test theories in a simulated environment before applying them in the real world, the article says, or gather differing perspectives on a complex policy issue and generate potential solutions.
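The article does not prescribe a specific workflow. One minimal sketch of what gathering differing simulated perspectives might look like is below, with query_llm standing in as a placeholder for whichever model API a research team actually uses; the personas and survey question are invented for illustration.

```python
# Hypothetical sketch: simulate survey responses from different personas.
# `query_llm` is a placeholder for an LLM API; it is assumed to take a prompt
# string and return the model's text reply.

PERSONAS = [
    "a retired farmer in rural Saskatchewan",
    "a 24-year-old software developer in Toronto",
    "a single parent working two part-time jobs",
]

QUESTION = "Should the city invest in congestion pricing? Answer in 2-3 sentences."

def simulate_responses(query_llm, question, personas):
    """Collect one simulated answer per persona for later qualitative coding."""
    responses = {}
    for persona in personas:
        prompt = (
            f"You are {persona}. Answer the following survey question "
            f"in your own voice.\n\nQuestion: {question}"
        )
        responses[persona] = query_llm(prompt)
    return responses

def dummy_model(prompt):
    # Stand-in so the sketch runs end to end without any external service.
    return "[simulated answer]"

if __name__ == "__main__":
    for persona, answer in simulate_responses(dummy_model, QUESTION, PERSONAS).items():
        print(f"{persona}: {answer}")
```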

“It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90 per cent chance on that,” Tetlock said. “Of course, how humans react to all of that is another matter.”

One issue the authors identified, however, is that LLMs often learn to exclude sociocultural biases, raising the question of whether models are correctly reflecting the populations they study…(More)”

Three approaches to re-design digital public spaces 


Article by Gianluca Sgueo: “The underlying tenet of so-called “human-centred design” is a public administration capable of delivering a satisfactory (even gratifying) digital experience to every user. Public services, however, are still marked by severe qualitative asymmetries, both nationally and supranationally. In this article we discuss the key shortcomings of digital public spaces, and we explore three approaches to re-design such spaces with the aim to narrow the existing gaps separating the ideal from the actual rendering of human-centred digital government…(More)”.

Better Government Tech Is Possible


Article by Beth Noveck: “In the first four months of the Covid-19 pandemic, government leaders paid management consultants at McKinsey $100 million to model the spread of the coronavirus and build online dashboards to project hospital capacity.

It’s unsurprising that leaders turned to McKinsey for help, given the notorious backwardness of government technology. Our everyday experience with online shopping and search only highlights the stark contrast between user-friendly interfaces and the frustrating inefficiencies of government websites—or worse yet, the ongoing need to visit a government office to submit forms in person. The 2016 animated movie Zootopia depicts literal sloths running the DMV, a scene that was guaranteed to get laughs given our low expectations of government responsiveness.

More seriously, these doubts are reflected in the plummeting levels of public trust in government. From early Healthcare.gov failures to the more recent implosions of state unemployment websites, policymaking without attention to the technology that puts the policy into practice has led to disastrous consequences.

The root of the problem is that the government, the largest employer in the US, does not keep its employees up-to-date on the latest tools and technologies. When I served in the Obama White House as the nation’s first deputy chief technology officer, I had to learn constitutional basics and watch annual training videos on sexual harassment and cybersecurity. But I was never required to take a course on how to use technology to serve citizens and solve problems. In fact, the last significant legislation about what public professionals need to know was the Government Employee Training Act, from 1958, well before the internet was invented.

In the United States, public sector awareness of how to use data or human-centered design is very low. Out of 400-plus public servants surveyed in 2020, less than 25 percent received training in these more tech-enabled ways of working, though 70 percent said they wanted such training…(More)”.