Paper by Sara Mesquita, Lília Perfeito, Daniela Paolotti, and Joana Gonçalves-Sá: “Epidemiology and Public Health have increasingly relied on structured and unstructured data, collected inside and outside of typical health systems, to study, identify, and mitigate diseases at the population level. Focusing on infectious disease, we review how Digital Epidemiology (DE) was at the beginning of 2020 and how it was changed by the COVID-19 pandemic, in both nature and breadth. We argue that DE will become a progressively useful tool as long as its potential is recognized and its risks are minimized. Therefore, we expand on the current views and present a new definition of DE that, by highlighting the statistical nature of the datasets, helps in identifying possible biases. We offer some recommendations to reduce inequity and threats to privacy and argue in favour of complex multidisciplinary approaches to tackling infectious diseases…(More)”
Can AI solve medical mysteries? It’s worth finding out
Article by Bina Venkataraman: “Since finding a primary care doctor these days takes longer than finding a decent used car, it’s little wonder that people turn to Google to probe what ails them. Be skeptical of anyone who claims to be above it. Though I was raised by scientists and routinely read medical journals out of curiosity, in recent months I’ve gone online to investigate causes of a lingering cough, ask how to get rid of wrist pain and look for ways to treat a bad jellyfish sting. (No, you don’t ask someone to urinate on it.)
Dabbling in self-diagnosis is becoming more robust now that people can go to chatbots powered by large language models scouring mountains of medical literature to yield answers in plain language — in multiple languages. What might an elevated inflammation marker in a blood test combined with pain in your left heel mean? The AI chatbots have some ideas. And researchers are finding that, when fed the right information, they’re often not wrong. Recently, one frustrated mother, whose son had seen 17 doctors for chronic pain, put his medical information into ChatGPT, which accurately suggested tethered cord syndrome — which then led a Michigan neurosurgeon to confirm an underlying diagnosis of spina bifida that could be helped by an operation.
The promise of this trend is that patients might be able to get to the bottom of mysterious ailments and undiagnosed illnesses by generating possible causes for their doctors to consider. The peril is that people may come to rely too much on these tools, trusting them more than medical professionals, and that our AI friends will fabricate medical evidence that misleads people about, say, the safety of vaccines or the benefits of bogus treatments. A question looming over the future of medicine is how to get the best of what artificial intelligence can offer us without the worst.
It’s in the diagnosis of rare diseases — which afflict an estimated 30 million Americans and hundreds of millions of people worldwide — that AI could almost certainly make things better. “Doctors are very good at dealing with the common things,” says Isaac Kohane, chair of the department of biomedical informatics at Harvard Medical School. “But there are literally thousands of diseases that most clinicians will never have seen or even heard of.”…(More)”.
Internet use does not appear to harm mental health, study finds
Tim Bradshaw at the Financial Times: “A study of more than 2mn people’s internet use found no “smoking gun” for widespread harm to mental health from online activities such as browsing social media and gaming, despite widely claimed concerns that mobile apps can cause depression and anxiety.
Researchers at the Oxford Internet Institute, who described their study as the largest of its kind, said they found no evidence to support “popular ideas that certain groups are more at risk” from the technology.
However, Andrew Przybylski, professor at the institute — part of the University of Oxford — said that the data necessary to establish a causal connection was “absent” without more co-operation from tech companies. If apps do harm mental health, only the companies that build them have the user data that could prove it, he said.
“The best data we have available suggests that there is not a global link between these factors,” said Przybylski, who carried out the study with Matti Vuorre, a professor at Tilburg University. Because the “stakes are so high” if online activity really did lead to mental health problems, any regulation aimed at addressing it should be based on much more “conclusive” evidence, he added.
“Global Well-Being and Mental Health in the Internet Age” was published in the journal Clinical Psychological Science on Tuesday.
In their paper, Przybylski and Vuorre studied data on psychological wellbeing from 2.4mn people aged 15 to 89 in 168 countries between 2005 and 2022, which they contrasted with industry data about growth in internet subscriptions over that time; they also tracked associations between mental health and internet adoption in 202 countries from 2000-19.
“Our results do not provide evidence supporting the view that the internet and technologies enabled by it, such as smartphones with internet access, are actively promoting or harming either wellbeing or mental health globally,” they concluded. While there was “some evidence” of greater associations between mental health problems and technology among younger people, these “appeared small in magnitude”, they added.
The report contrasts with a growing body of research in recent years that has connected the beginning of the smartphone era, around 2010, with growing rates of anxiety and depression, especially among teenage girls. Studies have suggested that reducing time on social media can benefit mental health, while those who spend the longest online are at greater risk of harm…(More)”.
Toward Equitable Innovation in Health and Medicine: A Framework
Report by The National Academies: “Advances in biomedical science, data science, engineering, and technology are leading to high-pace innovation with potential to transform health and medicine. These innovations simultaneously raise important ethical and social issues, including how to fairly distribute their benefits and risks. The National Academies of Sciences, Engineering, and Medicine, in collaboration with the National Academy of Medicine, established the Committee on Creating a Framework for Emerging Science, Technology, and Innovation in Health and Medicine to provide leadership and engage broad communities in developing a framework for aligning the development and use of transformative technologies with ethical and equitable principles. The committee’s resulting report describes a governance framework for decisions throughout the innovation life cycle to advance equitable innovation and support an ecosystem that is more responsive to the needs of a broader range of individuals and is better able to recognize and address inequities as they arise…(More)”.
Data Governance and Privacy Challenges in the Digital Healthcare Revolution
Paper by Nargiz Kazimova: “The onset of the COVID-19 pandemic has catalyzed an imperative for digital transformation in the healthcare sector. This study investigates the accelerated shift towards a digitally-enhanced healthcare delivery system, advocating for the widespread adoption of telemedicine and the relaxation of regulatory barriers. The paper also scrutinizes the burgeoning use of electronic health records, wearable devices, artificial intelligence, and machine learning, and how these technologies offer promising avenues for improving patient care and medical outcomes. Despite the advancements, the rapid digital integration raises significant privacy and security concerns. The stigma associated with certain illnesses and the potential for discrimination present serious challenges that digital healthcare innovations can exacerbate.
This research underscores the criticality of stringent data governance to safeguard personal health information in the face of growing digitalization. The analysis begins with an exploration of the role of data governance in optimizing healthcare outcomes and preserving privacy, followed by an assessment of the breadth and depth of health data proliferation. The paper subsequently navigates the complex legal and ethical terrain, contrasting the HIPAA and GDPR frameworks to underline the current regulatory challenges.
A comprehensive set of strategic recommendations is provided for reinforcing data governance and enhancing privacy protection in healthcare. The author advises on updating legal provisions to match the dynamic healthcare environment, widening the scope of privacy laws, and improving the transparency of data-sharing practices. The establishment of ethical guidelines for the collection and use of health data is also recommended, focusing on explicit consent, decision-making transparency, harm accountability, maintenance of data anonymity, and the mitigation of biases in datasets.
Moreover, the study advocates for stronger transparency in data sharing with clear communication on data use, rigorous internal and external audit mechanisms, and informed consent processes. The conclusion calls for increased collaboration between healthcare providers, patients, administrative staff, ethicists, regulators, and technology companies to create governance models that reconcile patient rights with the expansive use of health data. The paper culminates in a call to action for a balanced approach to privacy and innovation in the data-driven era of healthcare…(More)”.
Private UK health data donated for medical research shared with insurance companies
Article by Shanti Das: “Sensitive health information donated for medical research by half a million UK citizens has been shared with insurance companies despite a pledge that it would not be.
An Observer investigation has found that UK Biobank opened up its vast biomedical database to insurance sector firms several times between 2020 and 2023. The data was provided to insurance consultancy and tech firms for projects to create digital tools that help insurers predict a person’s risk of getting a chronic disease. The findings have raised concerns among geneticists, data privacy experts and campaigners over vetting and ethical checks at Biobank.
Set up in 2006 to help researchers investigating diseases, the database contains millions of blood, saliva and urine samples, collected regularly from about 500,000 adult volunteers – along with medical records, scans, wearable device data and lifestyle information.
Approved researchers around the world can pay £3,000 to £9,000 to access records ranging from medical history and lifestyle information to whole genome sequencing data. The resulting research has yielded major medical discoveries and led to Biobank being considered a “jewel in the crown” of British science.
Biobank said it strictly guarded access to its data, only allowing access by bona fide researchers for health-related projects in the public interest. It said this included researchers of all stripes, whether employed by academic, charitable or commercial organisations – including insurance companies – and that “information about data sharing was clearly set out to participants at the point of recruitment and the initial assessment”.
But evidence gathered by the Observer suggests Biobank did not explicitly tell participants it would share data with insurance companies – and made several public commitments not to do so.
When the project was announced, in 2002, Biobank promised that data would not be given to insurance companies after concerns were raised that it could be used in a discriminatory way, such as by the exclusion of people with a particular genetic makeup from insurance.
In an FAQ section on the Biobank website, participants were told: “Insurance companies will not be allowed access to any individual results nor will they be allowed access to anonymised data.” The statement remained online until February 2006, during which time the Biobank project was subject to public scrutiny and discussed in parliament.
The promise was also reiterated in several public statements by backers of Biobank, who said safeguards would be built in to ensure that “no insurance company or police force or employer will have access”.
This weekend, Biobank said the pledge – made repeatedly over four years – no longer applied. It said the commitment had been made before recruitment formally began in 2007 and that when Biobank volunteers enrolled they were given revised information.
This included leaflets and consent forms that contained a provision that anonymised Biobank data could be shared with private firms for “health-related” research, but did not explicitly mention insurance firms or correct the previous assurances…(More)”
What Is Public Trust in the Health System? Insights into Health Data Use
Open Access Book by Felix Gille: “This book explores the concept of public trust in health systems.
In the context of recent events, including public response to interventions to tackle the COVID-19 pandemic, vaccination uptake and the use of health data and digital health, this important book uses empirical evidence to address why public trust is vital to a well-functioning health system.
In doing so, it provides a comprehensive contemporary explanation of public trust, how it affects health systems and how it can be nurtured and maintained as an integral component of health system governance…(More)”.
Automating Empathy
Open Access Book by Andrew McStay: “We live in a world where artificial intelligence and the intensive use of personal data have become normalized. Companies across the world are developing and launching technologies to infer and interact with emotions, mental states, and human conditions. However, the methods and means of mediating information about people and their emotional states are incomplete and problematic.
Automating Empathy offers a critical exploration of technologies that sense intimate dimensions of human life and the modern ethical questions raised by attempts to perform and simulate empathy. It traces the ascendance of empathic technologies from their origins in physiognomy and pathognomy to the modern day and explores technologies in nations with non-Western ethical histories and approaches to emotion, such as Japan. The book examines applications of empathic technologies across sectors such as education, policing, and transportation, and considers key questions of everyday use such as the integration of human-state sensing in mixed reality, the use of neurotechnologies, and the moral limits of using data gleaned through automated empathy. Ultimately, Automating Empathy outlines the key principles necessary to usher in a future where automated empathy can serve and do good…(More)”
How to share data — not just equally, but equitably
Editorial in Nature: “Two decades ago, scientists asked more than 150,000 people living in Mexico City to provide medical data for research. Each participant gave time, blood and details of their medical history. For the researchers, who were based at the National Autonomous University of Mexico in Mexico City and the University of Oxford, UK, this was an opportunity to study a Latin American population for clues about factors contributing to disease and health. For the participants, it was a chance to contribute to science so that future generations might one day benefit from access to improved health care. Ultimately, the Mexico City Prospective Study was an exercise in trust — scientists were trusted with some of people’s most private information because they promised to use it responsibly.
Over the years, the researchers have repaid the communities through studies investigating the effects of tobacco and other risk factors on participants’ health. They have used the data to learn about the impact of diabetes on mortality rates, and they have found that rare forms of a gene called GPR75 lower the risk of obesity. And on 11 October, researchers added to the body of knowledge on the population’s ancestry.
But this project also has broader relevance — it can be seen as a model of trust and of how the power structures of science can be changed to benefit the communities closest to it.
Mexico’s population is genetically wealthy. With a complex history of migration and mixing of several populations, the country’s diverse genetic resources are valuable to the study of the genetic roots of diseases. Most genetic databases are stocked with data from people with European ancestry. If genomics is to genuinely benefit the global community — and especially under-represented groups — appropriately diverse data sets are needed. These will improve the accuracy of genetic tests, such as those for disease risk, and will make it easier to unearth potential drug targets by finding new genetic links to medical conditions…(More)”.
What Big Tech Knows About Your Body
Article by Yael Grauer: “If you were seeking online therapy from 2017 to 2021—and a lot of people were—chances are good that you found your way to BetterHelp, which today describes itself as the world’s largest online-therapy purveyor, with more than 2 million users. Once you were there, after a few clicks, you would have completed a form—an intake questionnaire, not unlike the paper one you’d fill out at any therapist’s office: Are you new to therapy? Are you taking any medications? Having problems with intimacy? Experiencing overwhelming sadness? Thinking of hurting yourself? BetterHelp would have asked you if you were religious, if you were LGBTQ, if you were a teenager. These questions were just meant to match you with the best counselor for your needs, small text would have assured you. Your information would remain private.
Except BetterHelp isn’t exactly a therapist’s office, and your information may not have been completely private. In fact, according to a complaint brought by federal regulators, for years, BetterHelp was sharing user data—including email addresses, IP addresses, and questionnaire answers—with third parties, including Facebook and Snapchat, for the purposes of targeting ads for its services. It was also, according to the Federal Trade Commission, poorly regulating what those third parties did with users’ data once they got them. In July, the company finalized a settlement with the FTC and agreed to refund $7.8 million to consumers whose privacy, regulators claimed, had been compromised. (In a statement, BetterHelp admitted no wrongdoing and described the alleged sharing of user information as an “industry-standard practice.”)
We leave digital traces about our health everywhere we go: by completing forms like BetterHelp’s. By requesting a prescription refill online. By clicking on a link. By asking a search engine about dosages or directions to a clinic or pain in chest dying. By shopping, online or off. By participating in consumer genetic testing. By stepping on a smart scale or using a smart thermometer. By joining a Facebook group or a Discord server for people with a certain medical condition. By using internet-connected exercise equipment. By using an app or a service to count your steps or track your menstrual cycle or log your workouts. Even demographic and financial data unrelated to health can be aggregated and analyzed to reveal or infer sensitive information about people’s physical or mental-health conditions…(More)”.