AI Chatbot Credited With Preventing Suicide. Should It Be?


Article by Samantha Cole: “A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1006 Replika users who were 18 years or older and students, and who’d been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.” …(More)”.

A New National Purpose: Harnessing Data for Health


Report by the Tony Blair Institute: “We are at a pivotal moment where the convergence of large health and biomedical data sets, artificial intelligence and advances in biotechnology is set to revolutionise health care, drive economic growth and improve the lives of citizens. And the UK has strengths in all three areas. The immense potential of the UK’s health-data assets, from the NHS to biobanks and genomics initiatives, can unlock new diagnostics and treatments, deliver better and more personalised care, prevent disease and ultimately help people live longer, healthier lives.

However, realising this potential is not without its challenges. The complex and fragmented nature of the current health-data landscape, coupled with legitimate concerns around privacy and public trust, has made for slow progress. The UK has had a tendency to provide short-term funding across multiple initiatives, which has led to an array of individual projects – many of which have struggled to achieve long-term sustainability and deliver tangible benefits to patients.

To overcome these challenges, it will be necessary to be bold and imaginative. We must look for ways to leverage the unique strengths of the NHS, such as its nationwide reach and cradle-to-grave data coverage, to create a health-data ecosystem that is much more than the sum of its many parts. This will require us to think differently about how we collect, manage and utilise health data, and to create new partnerships and models of collaboration that break down traditional silos and barriers. It will mean treating data as a key health resource and managing it accordingly.

One model to do this is the proposed sovereign National Data Trust (NDT) – an endeavour to streamline access to and curation of the UK’s valuable health-data assets…(More)”.

How the war on drunk driving was won


Blog by Nick Cowen: “…Viewed from the 1960s it might have seemed like ending drunk driving would be impossible. Even in the 1980s, the movement seemed unlikely to succeed and many researchers questioned whether it constituted a social problem at all.

Yet things did change: in 1980, 1,450 fatalities were attributed to drunk driving accidents in the UK. In 2020, there were 220. Road deaths in general declined much more slowly, from around 6,000 in 1980 to 1,500 in 2020. Drunk driving fatalities dropped overall and as a percentage of all road deaths.

The same thing happened in the United States, though not to quite the same extent. In 1980, there were around 28,000 drunk driving deaths there, while in 2020, there were 11,654. Despite this progress, drunk driving remains a substantial public threat, comparable in scale to homicide (of which in 2020 there were 594 in Britain and 21,570 in America).

Of course, many things have happened in the last 40 years that contributed to this reduction. Vehicles are better designed to prioritize life preservation in the event of a collision. Emergency hospital care has improved so that people are more likely to survive serious injuries from car accidents. But, above all, driving while drunk has become stigmatized.

This stigma didn’t come from nowhere. Governments across the Western world, along with many civil society organizations, engaged in hard-hitting education campaigns about the risks of drunk driving. And they didn’t just talk. Tens of thousands of people faced criminal sanctions, and many were even put in jail.

Two underappreciated ideas stick out from this experience. First, deterrence works: incentives matter to offenders much more than many scholars found initially plausible. Second, the long-run impact that successful criminal justice interventions have is not primarily in rehabilitation, incapacitation, or even deterrence, but in altering the social norms around acceptable behavior…(More)”.

On the Meaning of Community Consent in a Biorepository Context


Article by Astha Kapoor, Samuel Moore, and Megan Doerr: “Biorepositories, vital for medical research, collect and store human biological samples and associated data for future use. However, our reliance solely on the individual consent of data contributors for biorepository data governance is becoming inadequate. Big data analysis focuses on large-scale behaviors and patterns, shifting focus from singular data points to identifying data “journeys” relevant to a collective. The individual becomes a small part of the analysis, with the harms and benefits emanating from the data occurring at an aggregated level.

Community refers to a particular qualitative aspect of a group of people that is not well captured by quantitative measures in biorepositories. This is not an excuse to dodge the question of how to account for communities in a biorepository context; rather, it shows that a framework is needed for defining different types of community that may be approached from a biorepository perspective. 

Engaging with communities in biorepository governance presents several challenges. Moving away from a purely individualized understanding of governance towards a more collectivizing approach necessitates an appreciation of the messiness of group identity, its ephemerality, and the conflicts entailed therein. So while community implies a certain degree of homogeneity (i.e., that all members of a community share something in common), it is important to understand that people can simultaneously consider themselves a member of a community while disagreeing with many of its members, the values the community holds, or the positions for which it advocates. The complex nature of community participation therefore requires proper treatment for it to be useful in a biorepository governance context…(More)”.

Building a trauma-informed algorithmic assessment toolkit


Report by Suvradip Maitra, Lyndal Sleep, Suzanna Fay, Paul Henman: “Artificial intelligence (AI) and automated processes hold considerable promise to enhance human wellbeing by fully automating services or co-producing them with human service providers. Concurrently, if not well considered, automation can also generate harms at scale and speed. To address this challenge, much discussion to date has focused on principles of ethical AI and accountable algorithms, with a groundswell of early work seeking to translate these into practical frameworks and processes to ensure such principles are enacted. AI risk-assessment frameworks to detect and evaluate possible harms are one dominant approach, as is a growing body of AI audit frameworks, alongside emerging governmental and organisational regulatory settings and associated professionals.

The research outlined in this report took a different approach. Building on work on trauma-informed practice in the social services, the researchers identified key principles and a practical framework that frame AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, particularly of those who have experienced trauma. This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation; design; development; piloting; deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI…(More)”.

Predicting hotspots of unsheltered homelessness using geospatial administrative data and volunteered geographic information


Paper by Jessie Chien, Benjamin F. Henwood, Patricia St. Clair, Stephanie Kwack, and Randall Kuhn: “Unsheltered homelessness is an increasingly prevalent phenomenon in major cities that is associated with adverse health and mortality outcomes. This creates a need for spatial estimates of population denominators for resource allocation and epidemiological studies. Gaps in the timeliness, coverage, and spatial specificity of official Point-in-Time Counts of unsheltered homelessness suggest a role for geospatial data from alternative sources to provide interim, neighborhood-level estimates of counts and trends. We use citizen-generated data from homeless-related 311 requests, provider-based administrative data from homeless street outreach cases, and expert reports of unsheltered counts to predict counts and emerging hotspots of unsheltered homelessness in census tracts across the City of Los Angeles for 2019 and 2020. Our study shows that alternative data sources can contribute timely insights into the state of unsheltered homelessness throughout the year and inform the delivery of interventions to this vulnerable population…(More)”.
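
To make the approach concrete, the sketch below shows one simple way such tract-level signals could be combined: it pools homeless-related 311 requests and street-outreach cases, counts events per census tract for 2019 and 2020, and flags tracts with large year-over-year growth. This is a minimal illustrative sketch, not the authors' model; the column names (`tract_id`, `year`) and the count-and-growth thresholds are assumptions.

```python
import pandas as pd

# Minimal illustrative sketch, not the paper's model: the column names
# ("tract_id", "year") and the count/growth thresholds are assumptions.

def tract_counts(events: pd.DataFrame, year: int) -> pd.Series:
    """Count events (311 requests or outreach cases) per census tract for one year."""
    return (events[events["year"] == year]
            .groupby("tract_id")
            .size()
            .rename(f"count_{year}"))

def emerging_hotspots(requests_311: pd.DataFrame,
                      outreach: pd.DataFrame,
                      min_count: int = 20,
                      min_growth: float = 0.5) -> pd.DataFrame:
    """Flag tracts whose 2020 event count is both sizeable and sharply above 2019."""
    events = pd.concat([requests_311, outreach], ignore_index=True)
    counts = pd.concat([tract_counts(events, 2019),
                        tract_counts(events, 2020)], axis=1).fillna(0)
    growth = ((counts["count_2020"] - counts["count_2019"])
              / counts["count_2019"].clip(lower=1))
    counts["emerging_hotspot"] = ((counts["count_2020"] >= min_count)
                                  & (growth >= min_growth))
    return counts.sort_values("count_2020", ascending=False)
```

With one row per recorded event in each input DataFrame, `emerging_hotspots(requests_311_df, outreach_df)` returns a per-tract table; it is a crude stand-in for the paper's more formal count and hotspot models, meant only to illustrate the kind of aggregation involved.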

Crowded Out: The True Costs of Crowdfunding Healthcare


Book by Nora Kenworthy: “Over the past decade, charitable crowdfunding has exploded in popularity across the globe. Sites such as GoFundMe, which now boasts a “global community of over 100 million” users, have transformed the ways we seek and offer help. When faced with crises—especially medical ones—Americans are turning to online platforms that promise to connect them to the charity of the crowd. What does this new phenomenon reveal about the changing ways we seek and provide healthcare? In Crowded Out, Nora Kenworthy examines how charitable crowdfunding so quickly overtook public life, where it is taking us, and who gets left behind by this new platformed economy.

Although crowdfunding has become ubiquitous in our lives, it is often misunderstood: rather than a friendly free market “powered by the kindness” of strangers, crowdfunding is powerfully reinforcing inequalities and changing the way Americans think about and access healthcare. Drawing on extensive research and rich storytelling, Crowded Out demonstrates how crowdfunding for health is fueled by—and further reinforces—financial and moral “toxicities” in market-based healthcare systems. It offers a unique and distressing look beneath the surface of some of the most popular charitable platforms and helps to foster thoughtful discussions of how we can better respond to healthcare crises both small and large…(More)”.

Behavioural Economics and Policy for Pandemics


Book edited by Joan Costa-Font, Matteo M. Galizzi: “Behavioural economics and behavioural public policy have been fundamental parts of governmental responses to the Covid-19 pandemic. This was not only the case at the beginning of the pandemic as governments pondered how to get people to follow restrictions, but also during delivery of the vaccination programme. Behavioural Economics and Policy for Pandemics brings together a world-class line-up of experts to examine the successes and failures of behavioural economics and policy in relation to the Covid-19 pandemic. It documents how people changed their behaviours and use of health care and discusses what we can learn in terms of addressing future pandemics. Featuring high-profile behavioural economists such as George Loewenstein, this book uniquely uncovers behavioural regularities that emerge in the different waves of COVID-19 and documents how pandemics change our lives.

  • Provides a selection of studies featuring behavioural regularities during COVID-19
  • Unique in that it brings together work from health economics and behavioural science in a way that neither journals nor other books do
  • Offers the first book on the behavioural economics of pandemics
  • Brings together the work of behavioural scientists and economists examining health behaviours and the effects of COVID-19 on health and health care…(More)”.

AI-Powered World Health Chatbot Is Flubbing Some Answers


Article by Jessica Nix: “The World Health Organization is wading into the world of AI to provide basic health information through a human-like avatar. But while the bot responds sympathetically to users’ facial expressions, it doesn’t always know what it’s talking about.

SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker that’s available to talk 24/7 in eight different languages to explain topics like mental health, tobacco use and healthy eating. It’s part of the WHO’s campaign to find technology that can both educate people and fill staffing gaps with the world facing a health-care worker shortage.

WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate.” Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health.

SARAH doesn’t have a diagnostic feature like WebMD or Google. In fact, the bot is programmed to not talk about anything outside of the WHO’s purview, including questions on specific drugs. So SARAH often sends people to a WHO website or says that users should “consult with your health-care provider.”

“It lacks depth,” Ramin Javan, a radiologist and researcher at George Washington University, said. “But I think it’s because they just don’t want to overstep their boundaries and this is just the first step.”…(More)”.
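
The scope restriction described above, declining questions outside the WHO's purview and steering users to a provider or the WHO website, can be pictured with a toy guardrail. The sketch below is hypothetical; the WHO has not published SARAH's internals, and the topic lists, referral wording and keyword matching here are illustrative assumptions, not the actual implementation.

```python
# Hypothetical guardrail sketch; SARAH's real implementation is not public.
# Topic lists, referral wording and the keyword check are illustrative assumptions.

IN_SCOPE_TOPICS = {"mental health", "tobacco", "healthy eating", "physical activity"}
OUT_OF_SCOPE_TERMS = {"dosage", "prescription", "drug interaction", "diagnosis"}

REFERRAL = ("I can only share general WHO health information. For questions about "
            "specific drugs or a diagnosis, please consult with your health-care "
            "provider or visit https://www.who.int.")

def answer(query: str, generate_reply) -> str:
    """Generate a reply only for clearly in-scope queries; otherwise refer the user."""
    q = query.lower()
    if any(term in q for term in OUT_OF_SCOPE_TERMS):
        return REFERRAL
    if any(topic in q for topic in IN_SCOPE_TOPICS):
        return generate_reply(query)  # e.g. a call into the underlying language model
    return REFERRAL
```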

The Global State of Social Connections


Gallup: “Social needs are universal, and the degree to which they are fulfilled — or not — impacts the health, well-being and resilience of people everywhere. With increasing global interest in understanding how social connections support or hinder health, policymakers worldwide may benefit from reliable data on the current state of social connectedness. Despite the critical role of social connectedness for communities and the people who live in them, little is known about the frequency or form of social connection in many — if not most — parts of the world.

Meta and Gallup have collaborated on two research studies to help fill this gap. In 2022, the Meta-Gallup State of Social Connections report revealed important variations in people’s sense of connectedness and loneliness across the seven countries studied. This report builds on that research by presenting data on connections and loneliness among people from 142 countries…(More)”.