Call it data liberation day: Patients can now access all their health records digitally  


Article by Casey Ross: “The American Revolution had July 4. The Allies had D-Day. And now U.S. patients, held down for decades by information hoarders, can rally around a new turning point, October 6, 2022 — the day they got their health data back.

Under federal rules taking effect Thursday, health care organizations must give patients unfettered access to their full health records in digital format. No more long delays. No more fax machines. No more exorbitant charges for printed pages.

Just the data, please — now…The new federal rules — passed under the 21st Century Cures Act — are designed to shift the balance of power to ensure that patients can not only get their data, but also choose who else to share it with. It is the jumping-off point for a patient-mediated data economy that lets consumers in health care benefit from the fluidity they’ve had for decades in banking: they can move their information easily and electronically, and link their accounts to new services and software applications.

“To think that we actually have greater transparency about our personal finances than about our own health is quite an indictment,” said Isaac Kohane, a professor of biomedical informatics at Harvard Medical School. “This will go some distance toward reversing that.”

Even with the rules now in place, health data experts said change will not be fast or easy. Providers and other data holders — who have dug in their heels at every step — can still withhold information under certain exceptions. And many questions remain about protocols for sharing digital records, how to verify access rights, and even what it means to give patients all their data. Does that extend to every measurement in the ICU? Every log entry? Every email? And how will it all get standardized?…(More)”

New WHO policy requires sharing of all research data


Press release: “Science and public health can benefit tremendously from sharing and reuse of health data. Sharing data allows us to have the fullest possible understanding of health challenges, to develop new solutions, and to make decisions using the best available evidence.

The Research for Health department has helped spearhead the launch of a new policy from the Science Division which covers all research undertaken by or with support from WHO. The goal is to make sure that all research data is shared equitably, ethically and efficiently. Through this policy, WHO indicates its commitment to transparency in order to reach the goal of one billion more people enjoying better health and well-being.

The WHO policy is accompanied by practical guidance to enable researchers to develop and implement a data management and sharing plan, before the research has even started. The guide provides advice on the technical, ethical and legal considerations to ensure that data, even patient data, can be shared for secondary analysis without compromising personal privacy.  Data sharing is now a requirement for research funding awarded by WHO and TDR. 

“We have seen the problems caused by the lack of data sharing on COVID-19,” said Dr. Soumya Swaminathan, WHO Chief Scientist. “When data related to research activities are shared ethically, equitably and efficiently, there are major gains for science and public health.”

The policy to share data from all research funded or conducted by WHO, and practical guidance to do so, can be found here…(More)”.

Using real-time indicators for economic decision-making in government: Lessons from the Covid-19 crisis in the UK


Paper by David Rosenfeld: “When the UK went into lockdown in mid-March 2020, government was faced with the dual challenge of managing the impact of closing down large parts of the economy and responding effectively to the pandemic. Policy-makers needed to make rapid decisions regarding, on the one hand, the extent of restrictions on movement and economic activity to limit the spread of the virus, and on the other, the amount of support that would be provided to individuals and businesses affected by the crisis. Traditional official statistics, such as gross domestic product (GDP) or unemployment, which are released monthly and with a lag, could not be relied upon to monitor the situation and guide policy decisions.

In response, teams of data scientists and statisticians pivoted to develop alternative indicators, leading to an unprecedented amount of innovation in how statistics and data were used in government. This ranged from monitoring sewage water for signs of Covid-19 infection to the Office for National Statistics (ONS) developing a new range of ‘faster indicators’ of economic activity using online job vacancies and data on debit and credit card expenditure from the Clearing House Automated Payment System (CHAPS).

The ONS received generally positive reviews for its performance during the crisis (The Economist, 2022), in contrast to the 2008 financial crisis when policy-makers did not realise the extent of the recession until subsequent revisions to GDP estimates were made. Partly in response to this, the Independent Review of UK Economic Statistics (HM Treasury, 2016) recommended improvements to the use of administrative data and alternative indicators as well as to data science capability to exploit both the extra granularity and the timeliness of new data sources.

This paper reviews the elements that contributed to successes in using real-time data during the pandemic as well as the challenges faced during this period, with a view to distilling some lessons for future use in government. Section 2 provides an overview of real-time indicators (RTIs) and how they were used in the UK during the Covid-19 crisis. The next sections analyse the factors that underpinned the successes (or lack thereof) in using such indicators: section 3 addresses skills, section 4 infrastructure, and section 5 legal frameworks and processes. Section 6 concludes with a summary of the main lessons for governments that hope to make greater use of RTIs…(More)”.

‘Very Harmful’ Lack of Data Blunts U.S. Response to Outbreaks


Article by Sharon LaFraniere: “After a middle-aged woman tested positive for Covid-19 in January at her workplace in Fairbanks, public health workers sought answers to questions vital to understanding how the virus was spreading in Alaska’s rugged interior.

The woman, they learned, had underlying conditions and had not been vaccinated. She had been hospitalized but had recovered. Alaska and many other states have routinely collected that kind of information about people who test positive for the virus. Part of the goal is to paint a detailed picture of how one of the worst scourges in American history evolves and continues to kill hundreds of people daily, despite determined efforts to stop it.

But most of the information about the Fairbanks woman — and tens of millions more infected Americans — remains effectively lost to state and federal epidemiologists. Decades of underinvestment in public health information systems have crippled efforts to understand the pandemic, stranding crucial data in incompatible data systems so outmoded that information often must be repeatedly typed in by hand. The data failure, a salient lesson of a pandemic that has killed more than one million Americans, will be expensive and time-consuming to fix.

The precise cost in needless illness and death cannot be quantified. The nation’s comparatively low vaccination rate is clearly a major factor in why the United States has recorded the highest Covid death rate among large, wealthy nations. But federal experts are certain that the lack of comprehensive, timely data has also exacted a heavy toll.

“It has been very harmful to our response,” said Dr. Ashish K. Jha, who leads the White House effort to control the pandemic. “It’s made it much harder to respond quickly.”

Details of the Fairbanks woman’s case were scattered among multiple state databases, none of which connect easily to the others, much less to the Centers for Disease Control and Prevention, the federal agency in charge of tracking the virus. Nine months after she fell ill, her information was largely useless to epidemiologists because it was impossible to synthesize most of it with data on the roughly 300,000 other Alaskans and the 95 million-plus other Americans who have gotten Covid…(More)”.

A rough guide to being a public entrepreneur in practice


Report by the RSA: “Our health and social care systems have been working to meet people’s needs for over 70 years. Yet the approach to change is often incremental rather than radical or transformational. This means we sometimes ‘muddle through’ with the resources we have. Given the pace of change and long-term trends and challenges on the horizon this approach is no longer sufficient.

There are, however, people, processes and practices that are demonstrating a new kind of public entrepreneurship – responding fast, taking risks and experimenting to meet challenges head on. We’ve seen incredible responses to the pandemic and whilst it’s hit society hard, it’s also accelerated changes that might have otherwise taken decades to implement.

We need to harness the collective potential of creative people more systematically, working across the system to build resilience and support transformational change efforts. Staff commitment and energy are fundamental to spotting the challenges and the opportunities for change, taking action to meet not only the needs of today, but those of tomorrow. Together NHS Lothian and the RSA designed and ran a programme in an attempt to do just that.

This rough guide presents a summary of the insights gained by 12 members of NHS Lothian’s staff who came together to explore how they can support each other to challenge the status quo and find new ways of addressing challenges they face in their work…(More)”.

Uncovering the genetic basis of mental illness requires data and tools that aren’t just based on white people


Article by Hailiang Huang: “Mental illness is a growing public health problem. In 2019, an estimated 1 in 8 people around the world were affected by mental disorders like depression, schizophrenia or bipolar disorder. While scientists have long known that many of these disorders run in families, their genetic basis isn’t entirely clear. One reason why is that the majority of existing genetic data used in research is overwhelmingly from white people.

In 2003, the Human Genome Project generated the first “reference genome” of human DNA from a combination of samples donated by upstate New Yorkers, all of whom were of European ancestry. Researchers across many biomedical fields still use this reference genome in their work. But it doesn’t provide a complete picture of human genetics. Someone with a different genetic ancestry will have a number of variations in their DNA that aren’t captured by the reference sequence.

When most of the world’s ancestries are not represented in genomic data sets, studies won’t be able to provide a true representation of how diseases manifest across all of humanity. Despite this, ancestral diversity in genetic analyses hasn’t improved in the two decades since the Human Genome Project announced its first results. As of June 2021, over 80% of genetic studies have been conducted on people of European descent. Less than 2% have included people of African descent, even though these individuals have the most genetic variation of all human populations.

To uncover the genetic factors driving mental illness, Sinéad Chapman and our colleagues at the Broad Institute of MIT and Harvard have partnered with collaborators around the world to launch Stanley Global, an initiative that seeks to collect a more diverse range of genetic samples from beyond the U.S. and Northern Europe, and train the next generation of researchers around the world. Not only does the genetic data lack diversity, but so do the tools and techniques scientists use to sequence and analyze human genomes. So we are implementing a new sequencing technology that addresses the inadequacies of previous approaches that don’t account for the genetic diversity of global populations…(More)”.

Digital Privacy for Reproductive Choice in the Post-Roe Era


Paper by Aziz Z. Huq and Rebecca Wexler: “The overruling of Roe v. Wade unleashed a torrent of regulatory and punitive activity restricting lawful reproductive options. The turn to the expansive criminal law and new schemes of civil liability creates new, and quite different, concerns from the pre-Roe landscape a half-century ago. Reproductive choice, and its nemesis, rests on information. For pregnant people, deciding on a choice of medical care entails a search for advice and services. Information is at a premium for them. Meanwhile, efforts to regulate abortion begin with clinic closings, but quickly will extend to civil actions and criminal indictments of patients, providers, and those who facilitate abortions. Like the pregnant themselves, criminal and civil enforcers depend on information. And in the contemporary context, the informational landscape, and hence access to counseling and services such as medication abortion, is largely digital. In an era when most people use search engines or social media to access information, the digital architecture and data retention policies of those platforms will determine not only whether the pregnant can access medically accurate advice but also whether the mere act of doing so places them in legal peril.

This Article offers the first comprehensive accounting of abortion-related digital privacy after the end of Roe. It demonstrates first that digital privacy for pregnant persons in the United States has suddenly become a tremendously fraught and complex question. It then maps the treacherous social, legal and economic terrain upon which firms, individuals, and states will make privacy-related decisions. Building on this political economy, we develop a moral and economic argument to the effect that digital firms should maximize digital privacy for pregnant persons within the scope of the law, and should actively resist restrictionist states’ efforts to instrumentalize them into their war on reproductive choice. We then lay out precise, tangible steps that firms should take to enact this active resistance, explaining in particular a range of powerful yet legal options for firms to refuse cooperation with restrictionist criminal and civil investigations. Finally, we present an original, concrete and immediately actionable proposal for federal and state legislative intervention: a statutory evidentiary privilege to shield abortion-relevant data from restrictionist warrants, subpoenas, court orders, and judicial proceedings…(More)”.

One Data Point Can Beat Big Data


Essay by Gerd Gigerenzer: “…In my research group at the Max Planck Institute for Human Development, we’ve studied simple algorithms (heuristics) that perform well under volatile conditions. One way to derive these rules is to rely on psychological AI: to investigate how the human brain deals with situations of disruption and change. Back in 1838, for instance, Thomas Brown formulated the Law of Recency, which states that recent experiences come to mind faster than those in the distant past and are often the sole information that guides human decisions. Contemporary research indicates that people do not automatically rely on what they recently experienced, but only do so in unstable situations where the distant past is not a reliable guide for the future. In this spirit, my colleagues and I developed and tested the following “brain algorithm”:

Recency heuristic for predicting the flu: Predict that this week’s proportion of flu-related doctor visits will equal those of the most recent data, from one week ago.

Unlike Google’s secret Flu Trends algorithm, this rule is transparent and can be easily applied by everyone. Its logic can be understood. It relies on a single data point only, which can be looked up on the website of the Centers for Disease Control and Prevention. And it dispenses with combing through 50 million search terms and trial-and-error testing of millions of algorithms. But how well does it actually predict the flu?

Three fellow researchers and I tested the recency rule using the same eight years of data on which the Google Flu Trends algorithm was tested, that is, weekly observations between March 2007 and August 2015. During that time, the proportion of flu-related visits among all doctor visits ranged between one percent and eight percent, with an average of 1.8 percent per week (Figure 1). This means that if every week you were to make the simple but false prediction that there are zero flu-related doctor visits, you would have a mean absolute error of 1.8 percentage points over four years. Google Flu Trends predicted much better than that, with a mean error of 0.38 percentage points (Figure 2). The recency heuristic had a mean error of only 0.20 percentage points, which is even better. If we exclude the period where the swine flu happened, that is before the first update of Google Flu Trends, the result remains essentially the same (0.38 and 0.19, respectively)….(More)”.
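The recency heuristic and its mean-absolute-error evaluation are simple enough to sketch in a few lines of code. The weekly series below is synthetic and for illustration only, not the actual CDC flu-visit data the essay evaluates:

```python
# A minimal sketch of the recency heuristic described above, scored with
# mean absolute error (MAE). The weekly percentages are made up for
# illustration -- not the real CDC flu-surveillance series.

def recency_forecast(series):
    """Predict each week's value as the previous week's observation."""
    # Forecasts for weeks 1..n-1 are simply the values at weeks 0..n-2.
    return series[:-1]

def mean_absolute_error(actual, predicted):
    """Average absolute gap between observed and predicted values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

# Hypothetical weekly percentages of flu-related doctor visits.
weekly_pct = [1.2, 1.5, 2.1, 3.0, 2.6, 1.9, 1.4, 1.1]

forecasts = recency_forecast(weekly_pct)  # predictions for weeks 2 onward
actuals = weekly_pct[1:]                  # observed values for weeks 2 onward

mae = mean_absolute_error(actuals, forecasts)
print(f"Recency-heuristic MAE: {mae:.2f} percentage points")
# → Recency-heuristic MAE: 0.53 percentage points
```

The same scoring function applied to the constant zero forecast reproduces the essay's baseline comparison: its MAE is just the average level of the series, which is why the 1.8-percent average doubles as the error of the "always zero" prediction.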

Hosting an Online World Café to Develop an Understanding of Digital Health Promoting Settings from a Citizen’s Perspective—Methodological Potentials and Challenges


Paper by Joanna Albrecht: “Brown and Isaacs’ World Café is a participatory research method to make connections to the ideas of others. During the SARS-CoV-2 pandemic and the corresponding contact restrictions, only digital hosting of World Cafés was possible. This article aims to present and reflect on the potentials and challenges of hosting online World Cafés and to derive recommendations for other researchers. Via Zoom and Conceptboard, three online World Cafés were conducted in August 2021. In the World Cafés, the main focus was on the increasing digitization in settings in the context of health promotion and prevention from the perspective of setting members of educational institutions, leisure clubs, and communities. Between 9 and 13 participants took part in each of the three World Cafés. Hosting comprises the phases of design and preparation, realisation, and evaluation. Generally, hosting an online World Café is a suitable method for participatory engagement, but particular challenges have to be overcome. Overall, café hosts must create an equal participation environment by ensuring the availability of digital devices and stable internet access. The event schedule must react flexibly to technical disruptions and varying participation numbers. Further, compensatory measures such as support in the form of technical training must be implemented before the event. Finally, due to the higher complexity of digitalisation, roles of participants and staff need to be distributed and coordinated…(More)”.

Artificial intelligence was supposed to transform health care. It hasn’t.


Article by Ben Leonard and Ruth Reader: “Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients.

Investors see health care’s future as inextricably linked with artificial intelligence. That’s obvious from the cash pouring into AI-enabled digital health startups, including more than $3 billion in the first half of 2022 alone and nearly $10 billion in 2021, according to a Rock Health investment analysis commissioned by POLITICO.

And no wonder, considering the bold predictions technologists have made. At a conference in 2016, Geoffrey Hinton, British cognitive psychologist and “godfather” of AI, said radiologists would soon go the way of typesetters and bank tellers: “People should stop training radiologists now. It’s just completely obvious that, within five years, deep learning is going to do better.”

But more than five years since Hinton’s forecast, radiologists are still training to read image scans. Instead of replacing doctors, health system administrators now see AI as a tool clinicians will use to improve everything from their diagnoses to billing practices. AI hasn’t lived up to the hype, medical experts said, because health systems’ infrastructure isn’t ready for it yet. And the government is just beginning to grapple with its regulatory role.

“Companies come in promising the world and often don’t deliver,” said Bob Wachter, head of the department of medicine at the University of California, San Francisco. “When I look for examples of … true AI and machine learning that’s really making a difference, they’re pretty few and far between. It’s pretty underwhelming.”

Administrators say algorithms — the software that processes data — from outside companies don’t always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it’s slow going. Research based on job postings shows health care trailing every industry except construction in adopting AI…(More)”.