Privacy during pandemics: Attitudes to public use of personal data


Paper by Eleonora Freddi and Ole Christian Wasenden: “In this paper we investigate people’s attitudes to privacy and sharing of personal data when used to help society combat a contagious disease, such as COVID-19. Through a two-wave survey, we investigate the role of personal characteristics, and the effect of information, in shaping privacy attitudes. By conducting the survey in Norway and Sweden, which adopted very different strategies to handle the COVID-19 pandemic, we analyze potential differences in privacy attitudes due to policy changes. We find that privacy concern is negatively correlated with allowing public use of personal data. Trust in the entity collecting data and collectivist preferences are positively correlated with this type of data usage. Providing more information about the public benefit of sharing personal data makes respondents more positive about the use of their data, while providing additional information about the costs associated with data sharing does not change attitudes. The analysis suggests that stating a clear purpose and benefit for the data collection makes respondents more positive about sharing. Despite very different policy approaches, we do not find any major differences in privacy attitudes between Norway and Sweden. Findings are also similar between the two survey waves, suggesting a minor role for contextual changes…(More)”

Uniting the UK’s Health Data: A Huge Opportunity for Society


The Sudlow Review (UK): “…Surveys show that people in the UK overwhelmingly support the use of their health data with appropriate safeguards to improve lives. One of the review’s recommendations calls for continued engagement with patients, the public, and healthcare professionals to drive forward developments in health data research.

The review also features several examples of harnessing health data for public benefit in the UK, such as the national response to the COVID-19 pandemic. But successes like these are few and far between due to complex systems and governance. The review reveals that:

  • Access to datasets is difficult or slow, often taking months or even years.
  • Data is accessible for analysis and research related to COVID-19, but not to tackle other health conditions, such as other infectious diseases, cancer, heart disease, stroke, diabetes and dementia.
  • More complex types of health data generally don’t have national data systems (for example, most lab testing data and radiology imaging).
  • Barriers like these can delay or prevent hundreds of studies, holding back progress that could improve lives…

The Sudlow Review’s recommendations provide a pathway to establishing a secure and trusted health data system for the UK:

  1. Major national public bodies with responsibility for or interest in health data should agree a coordinated joint strategy to recognise England’s health data for what they are: critical national infrastructure.
  2. Key government health, care and research bodies should establish a national health data service in England with accountable senior leadership.
  3. The Department of Health and Social Care should oversee and commission ongoing, coordinated, engagement with patients, public, health professionals, policymakers and politicians.
  4. The health and social care departments in the four UK nations should set a UK-wide approach to streamline data access processes and foster proportionate, trustworthy data governance.
  5. National health data organisations and statistical authorities in the four UK nations should develop a UK-wide system for standards and accreditation of secure data environments (SDEs) holding data from the health and care system…(More)”.

Lifecycles, pipelines, and value chains: toward a focus on events in responsible artificial intelligence for health


Paper by Joseph Donia et al: “Process-oriented approaches to the responsible development, implementation, and oversight of artificial intelligence (AI) systems have proliferated in recent years. Variously referred to as lifecycles, pipelines, or value chains, these approaches demonstrate a common focus on systematically mapping key activities and normative considerations throughout the development and use of AI systems. At the same time, these approaches risk focusing on proximal activities of development and use at the expense of a focus on the events and value conflicts that shape how key decisions are made in practice. In this article we report on the results of an ‘embedded’ ethics research study focused on SPOTT, a ‘Smart Physiotherapy Tracking Technology’ employing AI and undergoing development and commercialization at an academic health sciences centre. Through interviews and focus groups with the development and commercialization team, patients, and policy and ethics experts, we suggest that a more expansive design and development lifecycle shaped by key events offers a more robust approach to normative analysis of digital health technologies, especially where those technologies’ actual uses are underspecified or in flux. We introduce five of these key events, outlining their implications for responsible design and governance of AI for health, and present a set of critical questions intended for others doing applied ethics and policy work. We briefly conclude with a reflection on the value of this approach for engaging with health AI ecosystems more broadly…(More)”.

What AI Can Do for Your Country


Article by Jylana L. Sheats: “…Although most discussions of artificial intelligence focus on its impacts on business and research, AI is also poised to transform government in the United States and beyond. AI-guided disaster response is just one piece of the picture. The U.S. Department of Health and Human Services has an experimental AI program to diagnose COVID-19 and flu cases by analyzing the sound of patients coughing into their smartphones. The Department of Justice uses AI algorithms to help prioritize which tips in the FBI’s Threat Intake Processing System to act on first. Other proposals, still at the concept stage, aim to extend the applications of AI to improve the efficiency and effectiveness of nearly every aspect of public services.

These early applications illustrate the potential for AI to make government operations more effective and responsive. They also highlight the looming challenges. The federal government will have to recruit, train, and retain skilled workers capable of managing the new technology, competing with the private sector for top talent. The government also faces the daunting task of ensuring the ethical and equitable use of AI. Relying on algorithms to direct disaster relief or to flag high-priority crimes raises immediate concerns: What if biases built into the AI overlook some of the groups that most need assistance, or unfairly target certain populations? As AI becomes embedded into more government operations, the opportunities for misuse and unintended consequences will only expand…(More)”.

Use of large language models as a scalable approach to understanding public health discourse


Paper by Laura Espinosa and Marcel Salathé: “Online public health discourse is becoming more and more important in shaping public health dynamics. Large Language Models (LLMs) offer a scalable solution for analysing the vast amounts of unstructured text found on online platforms. Here, we explore the effectiveness of LLMs, including GPT models and open-source alternatives, for extracting public stances towards vaccination from social media posts. Using an expert-annotated dataset of social media posts related to vaccination, we applied various LLMs and a rule-based sentiment analysis tool to classify the stance towards vaccination. We assessed the accuracy of these methods through comparisons with expert annotations and annotations obtained through crowdsourcing. Our results demonstrate that few-shot prompting of best-in-class LLMs is the best-performing method, and that all alternatives carry significant risks of substantial misclassification. The study highlights the potential of LLMs as a scalable tool for public health professionals to quickly gauge public opinion on health policies and interventions, offering an efficient alternative to traditional data analysis methods. With the continuous advancement in LLM development, the integration of these models into public health surveillance systems could substantially improve our ability to monitor and respond to changing public health attitudes…(More)”.
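The few-shot prompting approach the paper evaluates can be sketched as follows. This is a minimal illustration, not the study's actual materials: the example posts, labels, and prompt wording are assumptions, and the assembled prompt would be sent to whichever chat-completion LLM API is being compared.

```python
# Illustrative few-shot stance-classification prompt builder.
# The examples and wording below are hypothetical, not the paper's dataset.

FEW_SHOT_EXAMPLES = [
    ("Got my booster today -- grateful for modern medicine!", "positive"),
    ("No one should be pressured into taking this shot.", "negative"),
    ("The clinic offers vaccinations on weekdays from 9am.", "neutral"),
]

def build_stance_prompt(post: str) -> str:
    """Assemble a few-shot prompt asking an LLM to label vaccination stance."""
    lines = [
        "Classify the stance of each post towards vaccination as "
        "positive, negative, or neutral.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Post: {text}", f"Stance: {label}", ""]
    # The unlabeled post goes last; the model completes the final "Stance:".
    lines += [f"Post: {post}", "Stance:"]
    return "\n".join(lines)
```

The returned string would be passed as the prompt to an LLM, with the model's one-word completion taken as the predicted stance and compared against the expert annotations.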

Asserting the public interest in health data: On the ethics of data governance for biobanks and insurers


Paper by Kathryne Metcalf and Jathan Sadowski: “Recent reporting has revealed that the UK Biobank (UKB)—a large, publicly funded research database containing highly sensitive health records of over half a million participants—has shared its data with private insurance companies seeking to develop actuarial AI systems for analyzing risk and predicting health. While news reports have characterized this as a significant breach of public trust, the UKB contends that insurance research is “in the public interest,” and that all research participants are adequately protected from the possibility of insurance discrimination via data de-identification. Here, we contest both of these claims. Insurers use population data to identify novel categories of risk, which become fodder in the production of black-boxed actuarial algorithms. The deployment of these algorithms, as we argue, has the potential to increase inequality in health and decrease access to insurance. Importantly, these types of harms are not limited just to UKB participants: instead, they are likely to proliferate unevenly across various populations within global insurance markets via practices of profiling and sorting based on the synthesis of multiple data sources, alongside advances in data analysis capabilities, over space/time. This necessitates a significantly expanded understanding of the publics who must be involved in biobank governance and data-sharing decisions involving insurers…(More)”.

As AI-powered health care expands, experts warn of biases


Article by Marta Biino: “Google’s DeepMind artificial intelligence research laboratory and German pharma company BioNTech are both building AI-powered lab assistants to help scientists conduct experiments and perform tasks, the Financial Times reported.

It’s the latest example of how developments in artificial intelligence are revolutionizing a number of fields, including medicine. While AI has long been used in radiology for image analysis, or in oncology to classify skin lesions, for example, its applications are growing as the technology continues to advance.

OpenAI’s GPT models have outperformed humans in making cancer diagnoses based on MRI reports and beat PhD-holders in standardized science tests, to name just two examples.

However, as AI’s use in health care expands, some fear the notoriously biased technology could carry negative repercussions for patients…(More)”.

Harnessing the feed: social media for mental health information and support 


Report by ReachOut: “…highlights how a social media ban could cut young people off from vital mental health support, including finding that 73 per cent of young people in Australia turn to social media when it comes to support for their mental health.

Based on research with over 2,000 young people, the report found a range of benefits for young people seeking mental health support via social media (predominantly TikTok, YouTube and Instagram). 66 per cent of young people surveyed reported increased awareness about their mental health because of relevant content they accessed via social media, 47 per cent said they had looked for information about how to get professional mental health support on social media, and 40 per cent said they sought professional support after viewing mental health information on social media.

Importantly, half of young people with a probable mental health condition said that they were searching for mental health information or support on social media because they don’t have access to professional support. 

However, young people also highlighted a range of concerns about social media via the research. 38 per cent were deeply concerned about harmful mental health content they had come across on platforms, and 43 per cent of the young people who sought support online were deeply concerned about the addictive nature of social media.

The report highlights young people’s calls for social media to be safer. They want: an end to addictive features like infinite scroll, more control over the content they see, better labelling of mental health information from credible sources, better education and more mental health information provided across platforms…(More)”.

Harnessing digital footprint data for population health: a discussion on collaboration, challenges and opportunities in the UK


Paper by Romana Burgess et al: “Digital footprint data are inspiring a new era in population health and well-being research. Linking these novel data with other datasets is critical for future research wishing to use these data for the public good. To succeed, collaboration among industry, academics and policy-makers is vital. Therefore, we discuss the benefits and obstacles for these stakeholder groups in using digital footprint data for research in the UK. We advocate for policy-makers’ inclusion in research efforts, stress the exceptional potential of digital footprint research to impact policy-making and explore the role of industry as data providers, with a focus on shared value, commercial sensitivity, resource requirements and streamlined processes. We underscore the importance of multidisciplinary approaches, consumer trust and ethical considerations in navigating methodological challenges and further call for increased public engagement to enhance societal acceptability. Finally, we discuss how to overcome methodological challenges, such as reproducibility and sharing of learnings, in future collaborations. By adopting a multiperspective approach to outlining the challenges of working with digital footprint data, our contribution helps to ensure that future research can navigate these challenges effectively while remaining reproducible, ethical and impactful…(More)”

Climate and health data website launched


Article by Susan Cosier: “A new website of data resources, tools, and training materials that can aid researchers in studying the consequences of climate change on the health of communities nationwide is now available. At the end of July, NIEHS launched the Climate and Health Outcomes Research Data Systems (CHORDS) website, which includes a catalog of environmental and health outcomes data from various government and nongovernmental agencies.

The website provides a few resources of interest, including a catalog of data resources to aid researchers in finding relevant data for their specific research projects; an online training toolkit that provides tutorials and walk-throughs of downloading, integrating, and visualizing health and environmental data; a listing of publications of note on wildfire and health research; and links to existing resources, such as the NIEHS climate change and health glossary and literature portal.

The catalog includes a listing of dozens of data resources provided by different federal and state environmental and health sources. Users can sort the listing based on environmental and health measures of interest — such as specific air pollutants or chemicals — from data providers including NASA and the U.S. Environmental Protection Agency, with many more to come…(More)”.