How AI could save lives without spilling medical secrets
Will Knight at MIT Technology Review: “The potential for artificial intelligence to transform health care is huge, but there’s a big catch.

AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease. That means imagery, genomic information, or electronic health records—all potentially very sensitive information.

That’s why researchers are working on ways to let AI learn from large amounts of medical data while making it very hard for that data to leak.

One promising approach is now getting its first big test at Stanford Medical School in California. Patients there can choose to contribute their medical data to an AI system that can be trained to diagnose eye disease without ever actually accessing their personal details.

Participants submit ophthalmology test results and health record data through an app. The information is used to train a machine-learning model to identify signs of eye disease in the images. But the data is protected by technology developed by Oasis Labs, a startup spun out of UC Berkeley, which guarantees that the information cannot be leaked or misused. The startup was granted permission by regulators to start the trial last week.
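To make the training step above concrete, here is a minimal, hypothetical sketch of the kind of model the article describes: a small image classifier that learns to flag signs of eye disease. The architecture, shapes, and data are illustrative assumptions only, not Stanford's or Oasis Labs' actual pipeline; in the trial, any real training would happen on the protected data inside Oasis's system, never on raw exported records.

```python
# Hypothetical sketch: a tiny convolutional classifier for eye-disease signs.
# Random tensors stand in for the protected ophthalmology images.
import torch
import torch.nn as nn


class EyeDiseaseClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 224x224 input -> two 2x pools -> 56x56 feature maps.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


# Stand-in batch of 224x224 RGB images with binary disease labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model = EyeDiseaseClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```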

The sensitivity of private patient data is a looming problem. AI algorithms trained on data from different hospitals could potentially diagnose illness, prevent disease, and extend lives. But in many countries medical records cannot easily be shared and fed to these algorithms for legal reasons. Research on using AI to spot disease in medical images or data usually involves relatively small data sets, which greatly limits the technology’s promise….

Oasis stores the private patient data on a secure chip, designed in collaboration with other researchers at Berkeley. The data remains within the Oasis cloud; outsiders are able to run algorithms on the data, and receive the results, without its ever leaving the system. A smart contract, software that runs on top of a blockchain, is triggered when a request to access the data is received. This software logs how the data was used and also checks to make sure the machine-learning computation was carried out correctly….(More)”.
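As a rough illustration of that access pattern, the sketch below keeps records inside a guarded object, logs every request, checks that the submitted computation is pre-approved, and returns only the result. The class, method names, and policy check are assumptions for illustration, not Oasis Labs' actual smart-contract or enclave API, which attests hardware and runs on a blockchain rather than in plain Python.

```python
# Hypothetical sketch of "compute comes to the data": the raw records never
# leave the ProtectedDataset, every request is logged, and only approved
# computations may run.
import hashlib
from typing import Any


class ProtectedDataset:
    def __init__(self, records: list, approved_code_hashes: set[str]):
        self._records = records                # never exposed directly
        self._approved = approved_code_hashes  # stand-in for contract policy
        self.audit_log: list[dict] = []        # stand-in for the on-chain log

    def run(self, requester: str, source_code: str) -> Any:
        code_hash = hashlib.sha256(source_code.encode()).hexdigest()
        entry = {"requester": requester, "code_hash": code_hash}
        # "Smart contract" check: only pre-approved computations may run.
        if code_hash not in self._approved:
            entry["allowed"] = False
            self.audit_log.append(entry)
            raise PermissionError("computation not approved")
        entry["allowed"] = True
        self.audit_log.append(entry)
        # The computation runs next to the data; only its result is returned.
        fn = eval(source_code)  # a real system would attest code in an enclave
        return fn(self._records)


# Usage: an approved analysis returns an aggregate, never the raw records.
code = "lambda records: sum(r['has_disease'] for r in records) / len(records)"
dataset = ProtectedDataset(
    records=[{"has_disease": 1}, {"has_disease": 0}],
    approved_code_hashes={hashlib.sha256(code.encode()).hexdigest()},
)
print(dataset.run("research_lab", code))  # 0.5, an aggregate result only
print(dataset.audit_log)                  # record of who ran what
```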