The Open-Source Movement Comes to Medical Datasets


Blog by Edmund L. Andrews: “In a move to democratize research on artificial intelligence and medicine, Stanford’s Center for Artificial Intelligence in Medicine and Imaging (AIMI) is dramatically expanding what is already the world’s largest free repository of AI-ready annotated medical imaging datasets.

Artificial intelligence has become an increasingly pervasive tool for interpreting medical images, from detecting tumors in mammograms and brain scans to analyzing ultrasound videos of a person’s pumping heart.

Many AI-powered devices now rival the accuracy of human doctors. Beyond simply spotting a likely tumor or bone fracture, some systems predict the course of a patient’s illness and make recommendations.

But AI tools have to be trained on expensive datasets of images that have been meticulously annotated by human experts. Because those datasets can cost millions of dollars to acquire or create, much of the research is being funded by big corporations that don’t necessarily share their data with the public.

“What drives this technology, whether you’re a surgeon or an obstetrician, is data,” says Matthew Lungren, co-director of AIMI and an assistant professor of radiology at Stanford. “We want to double down on the idea that medical data is a public good, and that it should be open to the talents of researchers anywhere in the world.”

Launched two years ago, AIMI has already acquired annotated datasets for more than 1 million images, many of them from the Stanford University Medical Center. Researchers can download those datasets at no cost and use them to train AI models that recommend certain kinds of action.

Now, AIMI has teamed up with Microsoft’s AI for Health program to launch a new platform that will be more automated, accessible, and visible. It will be capable of hosting and organizing scores of additional images from institutions around the world. Part of the idea is to create an open and global repository. The platform will also provide a hub for sharing research, making it easier to refine different models and identify differences between population groups. The platform can even offer cloud-based computing power so researchers don’t have to worry about building resource-intensive clinical machine-learning infrastructure locally….(More)”.

The Illusion of Inclusion — The “All of Us” Research Program and Indigenous Peoples’ DNA


Article by Keolu Fox: “Raw data, including digital sequence information derived from human genomes, have in recent years emerged as a top global commodity. This shift is so new that experts are still evaluating what such information is worth in a global market. In 2018, the direct-to-consumer genetic-testing company 23andMe sold access to its database containing digital sequence information from approximately 5 million people to GlaxoSmithKline for $300 million. Earlier this year, 23andMe partnered with Almirall, a Spanish drug company that is using the information to develop a new anti-inflammatory drug for autoimmune disorders. This move marks the first time that 23andMe has signed a deal to license a drug for development.

Eighty-eight percent of people included in large-scale studies of human genetic variation are of European ancestry, as are the majority of participants in clinical trials. Corporations such as Geisinger Health System, Regeneron Pharmaceuticals, AncestryDNA, and 23andMe have already mined genomic databases for the strongest genotype–phenotype associations. For the field to advance, a new approach is needed. There are many potential ways to improve existing databases, including “deep phenotyping,” which involves collecting precise measurements from blood panels, questionnaires, cognitive surveys, and other tests administered to research participants. But this approach is costly and physiologically and mentally burdensome for participants. Another approach is to expand existing biobanks by adding genetic information from populations whose genomes have not yet been sequenced — information that may offer opportunities for discovering globally rare but locally common population-specific variants, which could be useful for identifying new potential drug targets.

Many Indigenous populations have been geographically isolated for tens of thousands of years. Over time, these populations have developed adaptations to their environments that have left specific variant signatures in their genomes. As a result, the genomes of Indigenous peoples are a treasure trove of unexplored variation. Some of this variation will inevitably be identified by programs like the National Institutes of Health (NIH) “All of Us” research program. NIH leaders have committed to the idea that at least 50% of this program’s participants should be members of underrepresented minority populations, including U.S. Indigenous communities (Native Americans, Alaskan Natives, and Native Hawaiians), a decision that explicitly connects diversity with the program’s goal of promoting equal enjoyment of the future benefits of precision medicine.

But there are reasons to believe that this promise may be an illusion….(More)”.

Co-design and Ethical Artificial Intelligence for Health: Myths and Misconceptions


Paper by Joseph Donia and Jay Shaw: “Applications of artificial intelligence / machine learning (AI/ML) are dynamic and rapidly growing, and although multi-purpose, are particularly consequential in health care. One strategy for anticipating and addressing ethical challenges related to AI/ML for health care is co-design – or involvement of end users in design. Co-design, however, has a diverse intellectual and practical history and has been conceptualized in many different ways. Moreover, the unique features of AI/ML introduce challenges to co-design that are often underappreciated. This review summarizes the research literature on involvement in health care and design, and informed by critical data studies, examines the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health. We suggest that AI/ML technologies have amplified existing challenges related to co-design, and created entirely new challenges. We outline five co-design ‘myths and misconceptions’ related to AI/ML for health that form the basis for future research and practice. We conclude by suggesting that the normative strength of a co-design approach to AI/ML for health can be considered at three levels: technological, health care system, and societal. We also suggest research directions for a ‘new era’ of co-design capable of addressing these challenges….(More)”.

Remove obstacles to sharing health data with researchers outside of the European Union


Heidi Beate Bentzen et al in Nature: “International sharing of pseudonymized personal data among researchers is key to the advancement of health research and is an essential prerequisite for studies of rare diseases or subgroups of common diseases to obtain adequate statistical power.

Pseudonymized personal data are data on which identifiers such as names are replaced by codes. Research institutions keep the ‘code key’ that can link an individual person to the data securely and separately from the research data and thereby protect privacy while preserving the usefulness of data for research. Pseudonymized data are still considered personal data under the General Data Protection Regulation (GDPR) 2016/679 of the European Union (EU) and, therefore, international transfers of such data need to comply with GDPR requirements. Although the GDPR does not apply to transfers of anonymized data, the threshold for anonymity under the GDPR is very high; hence, rendering data anonymous to the level required for exemption from the GDPR can diminish the usefulness of the data for research and is often not even possible.

The GDPR requires that transfers of personal data to international organizations or countries outside the European Economic Area (EEA)—which comprises the EU Member States plus Iceland, Liechtenstein and Norway—be adequately protected. Over the past two years, it has become apparent that challenges emerge for the sharing of data with public-sector researchers in a majority of countries outside of the EEA, as only a few decisions stating that a country offers an adequate level of data protection have so far been issued by the European Commission. This is a problem, for example, with researchers at federal research institutions in the United States. Transfers to international organizations such as the World Health Organization are similarly affected. Because these obstacles ultimately affect patients as beneficiaries of research, solutions are urgently needed. The European scientific academies have recently published a report explaining the consequences of stalled data transfers and pushing for responsible solutions…(More)”.
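The coding scheme described in the excerpt above — replacing direct identifiers with codes while a separately held "code key" preserves the link back to individuals — can be illustrated with a minimal sketch. The field names, record layout, and use of random tokens here are illustrative assumptions, not taken from any real research system:

```python
import secrets

def pseudonymize(records, id_field="name"):
    """Replace a direct identifier with a random code.

    Returns the pseudonymized records plus the code key (code -> identifier),
    which the research institution would store securely and separately from
    the research data. Field names here are hypothetical.
    """
    code_key = {}
    pseudonymized = []
    for record in records:
        code = secrets.token_hex(8)        # unguessable pseudonym
        code_key[code] = record[id_field]  # kept apart from research data
        cleaned = dict(record)             # copy; leave the input unchanged
        cleaned[id_field] = code
        pseudonymized.append(cleaned)
    return pseudonymized, code_key
```

Note that, as the excerpt stresses, data coded this way remain personal data under the GDPR: whoever holds `code_key` can re-identify any record, so the data are pseudonymized rather than anonymized.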

More Than Nudges Are Needed to End the Pandemic


Richard Thaler in the New York Times: “…In the case of Covid vaccinations, society cannot afford to wait decades. Although vaccines are readily available and free for everyone over age 12 in the United States, there are many holdouts. About 40 percent of the adult population has not been fully vaccinated, and about a third has not yet gotten even one dose. It is time to get serious.

Of course, information campaigns must continue to stress the safety and efficacy of the vaccines, but it is important to target the messages at the most hesitant groups. It would help if the F.D.A. gave the vaccines its full approval rather than the current emergency use designation. Full approval for the Pfizer drug may come as soon as Labor Day, but the process for the other vaccines is much further behind.

One way to increase vaccine uptake would be to offer monetary incentives. For example, President Biden has recently advocated paying people $100 to get their shots.

Although this policy is well intended, I believe it is a mistake for a state or a country to offer to pay individuals to get vaccinated. First of all, the amount might be taken to be an indicator of how much — or little — the government thinks getting a jab is worth. Surely the value to society of increased vaccinations is well beyond $100 per person.

Second, it seems increasingly likely that one or more booster shots may be necessary for some populations in the United States to deal with the Delta variant of the coronavirus — and, perhaps, other variants as well. If that happens, we don’t want some people to procrastinate, hoping to get paid. Government-sponsored booster shots are already beginning in Israel and are at various stages of planning in several European countries.

An alternative model is being offered by the National Football League, which has stopped short of requiring players to be vaccinated but is offering plenty of incentives. Unvaccinated players have to be tested every day, must be masked and at a distance from teammates on flights, and must stay in their room until game day. Vaccinated players who test positive and are asymptomatic can return to duty after two negative tests 24 hours apart. But unvaccinated players must undergo a 10-day isolation period.

These incentives followed a long effort to educate the players about the benefits to themselves, their families and fellow players. It is hard to say which aspect of the N.F.L. plan is doing the work, but over 90 percent of the league’s players have received at least one jab. The fact that a team could lose a game because an unvaccinated player can’t play creates a powerful group dynamic…(More)”.

Safeguarding Public Values in Cooperation with Big Tech Companies: The Case of the Austrian Contact Tracing App Stopp Corona


Paper by Valerie Eveline: “In April 2020, at the beginning of the COVID-19 pandemic, the Austrian Red Cross announced a cooperation with Google and Apple’s Exposure Notification Framework to develop the so-called Stopp Corona app – a contact tracing app which would support health personnel in monitoring the spread of the virus to prevent new infections (European Commission, 2020a). The involvement of Google and Apple in combating a public health emergency fueled controversy over serving profit-driven private interests at the expense of public values. Concerns have been raised about the dominant position of US-based big tech companies in political decisions concerning public values. This research investigates how public values are safeguarded in cooperation with big tech companies in the case of the Austrian contact tracing app Stopp Corona. Contact tracing apps manifest a bigger trend in the literature, signifying the power dynamics of big tech companies, governments, and civil society in relation to public values. The theoretical foundation of this research is formed by prevailing concepts from Media and Communication Studies (MCS) and Science and Technology Studies (STS) about power dynamics, such as the expansion of digital platforms and infrastructures, the political economy of big tech companies, dependencies, and digital platform and infrastructure governance.

The cooperative responsibility framework guides the empirical investigation in four main steps. The first steps identify the key public values at stake and the main stakeholders. The analysis then turns to public deliberation on advancing those values, and to how the outcomes of that deliberation are translated into practice….(More)”.

Medical crowdfunding has become essential in India, but it’s leaving many behind


Article by Akanksha Singh: “In May, as India grappled with a second wave of the coronavirus pandemic, Mahan and Nishan Sekhon found themselves stretched thin. Their mother had contracted black fungus, a potentially lethal disease. The treatment, at a cost of $1,300 per day, had exhausted their insurance plan and burned through their savings. As a last resort, they turned to Ketto, a crowdfunding platform. 

They shared the campaign within their social networks in mid-June, and within a month the brothers had secured $59,000 of their $76,000 goal. “I even got a call from an [Indian man] in Belgium,” Mahan Sekhon told Rest of World. “His Spanish restaurant manager told him [about the fundraiser].”

This is how Ketto is supposed to work. In a country where out-of-pocket expenses account for nearly 63% of total health expenditures, crowdfunding fills a void in medical needs for thousands of Indians. During the Covid-19 crisis, in which more than 4 million people are estimated to have died and 10 million people have lost their jobs, Ketto saw a fourfold increase in registered fundraisers, hosting nearly 12,500 Covid-19 relief campaigns and raising $40 million, according to the company.

However, for many people in India, crowdfunding medical care is either impractical or impossible. To access the platforms, users need official documentation and formal bank accounts, which are far from universal. In 2018, the World Bank’s Identification for Development initiative estimated that 162 million Indians lack registration, including people from the trans community, homeless people, sex workers, indigenous peoples, and those from oppressed caste and class backgrounds. Even when they can get on the platforms, they are regularly targeted with hate speech and discrimination.

It means they are, effectively, cut off from services they need, or are forced to rely on the empathy of intermediaries. “People from marginalized communities in India often do not possess identity documents,” lawyer and activist Lara Jesani told Rest of World. “There are sections of people who systematically face the problem of documentation,” she said.

Ketto, an Indian online crowdfunding platform, says it has hosted over 200,000 medical fundraisers. (https://www.ketto.org/)

Ketto was founded in 2012 as an online marketplace that allows people to raise funds for everything from starting a business to helping nonprofits. The company began to focus on healthcare three years ago, Varun Sheth, the company’s co-founder, told Rest of World. “We realized that [medical fundraising] was where the platform was most effectively used,” he said. The company promotes campaigns through targeted advertising on Facebook and YouTube, helping them to reach a wide audience, including Indian citizens overseas. “We constantly got feedback that people outside India, especially, want to support more causes in India,” Sheth said.

Since its launch, Ketto said it has hosted over 200,000 medical fundraisers and raised over $148 million. The platform recently raised its largest ever medical appeal, $460,000 for Mithra, an infant with spinal muscular atrophy….(More)”.

The Patient, Data Protection and Changing Healthcare Models


Book by Griet Verhenneman on The Impact of e-Health on Informed Consent, Anonymisation and Purpose Limitation: “Healthcare is changing. It is moving to a paperless environment and becoming a team-based, interdisciplinary and patient-centred profession. Modern healthcare models reflect our data-driven economy, and adopt value-driven strategies, evidence-based medicine, new technology, decision support and automated decision-making. Amidst these changes are the patients, and their right to data protection, privacy and autonomy. The question arises of how to match phenomena that characterise the predominant ethos in modern healthcare systems, such as e-health and personalised medicine, to patient autonomy and data protection laws. That matching exercise is essential. The successful adoption of ICT in healthcare depends, at least partly, on how the public’s concerns about data protection and confidentiality are addressed.

Three backbone principles of European data protection law are considered to be bottlenecks for the implementation of modern healthcare systems: informed consent, anonymisation and purpose limitation. This book assesses the adequacy of these principles and considers them in the context of technological and societal evolutions. A must-read for every professional active in the field of data protection law, health law, policy development or IT-driven innovation…(More)”.

Hundreds of AI tools have been built to catch covid. None of them helped.


Article by Will Douglas Heaven: “When covid-19 struck Europe in March 2020, hospitals were plunged into a health crisis that was still badly understood. “Doctors really didn’t have a clue how to manage these patients,” says Laure Wynants, an epidemiologist at Maastricht University in the Netherlands, who studies predictive tools.

But there was data coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.”

It never happened—but not for lack of effort. Research teams around the world stepped up to help. The AI community, in particular, rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster, bringing much-needed support to the front lines—in theory.

In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.

That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

Not fit for clinical use

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.

“It’s shocking,” says Wynants. “I went into it with some worries, but this exceeded my fears.”

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computed tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use…..(More)”.

Human behaviour: what scientists have learned about it from the pandemic


Stephen Reicher at The Conversation: “During the pandemic, a lot of assumptions were made about how people behave. Many of those assumptions were wrong, and they led to disastrous policies.

Several governments worried that their pandemic restrictions would quickly lead to “behavioural fatigue” so that people would stop adhering to restrictions. In the UK, the prime minister’s former chief adviser Dominic Cummings recently admitted that this was the reason for not locking down the country sooner.

Meanwhile, former health secretary Matt Hancock revealed that the government’s failure to provide financial and other forms of support for people to self-isolate was down to their fear that the system “might be gamed”. He warned that people who tested positive may then falsely claim that they had been in contact with all their friends, so they could all get a payment.

These examples show just how deeply some governments distrust their citizens. As if the virus was not enough, the public was portrayed as an additional part of the problem. But is this an accurate view of human behaviour?

The distrust is based on two forms of reductionism – describing something complex in terms of its fundamental constituents. The first is limiting psychology to the characteristics – and more specifically the limitations – of individual minds. In this view the human psyche is inherently flawed, beset by biases that distort information. It is seen as incapable of dealing with complexity, probability and uncertainty – and tending to panic in a crisis.

This view is attractive to those in power. By emphasising the inability of people to govern themselves, it justifies the need for a government to look after them. Many governments subscribe to this view, having established so-called nudge units – behavioural science teams tasked with subtly manipulating people to make the “right” decisions, without them realising why, from eating less sugar to filing their taxes on time. But it is becoming increasingly clear that this approach is limited. As the pandemic has shown, it is particularly flawed when it comes to behaviour in a crisis.

In recent years, research has shown that the notion of people panicking in a crisis is something of a myth. People generally respond to crises in a measured and orderly way – they look after each other.

The key factor behind this behaviour is the emergence of a sense of shared identity. This extension of the self to include others helps us care for those around us and expect support from them. Resilience cannot be reduced to the qualities of individual people. It tends to be something that emerges in groups.

Another type of reductionism that governments adopt is “psychologism” – when you reduce the explanation of people’s behaviour to just psychology…(More)”.