New approach can advance medical AI algorithms, reduce risks to patient privacy

Within the field of artificial intelligence (AI), deep learning is an area of rapid and high-impact innovation in the healthcare industry.

Being able to successfully train computers to perform medical tasks has extraordinary potential to improve patient care, increase access, and reduce costs. Researchers at UCLA have already developed AI systems that have proven valuable in helping physicians detect cancers more accurately.

However, a major challenge in developing high-quality AI algorithms is the tension between data availability and patient privacy. Sharing medical data, even de-identified medical data, can pose risks to patients' privacy, and protecting patient privacy is one of the central ethical obligations of the medical profession.

A recent study, led by Dr. Corey Arnold, director of the Computational Diagnostics Lab, associate professor in the departments of radiology and pathology & laboratory medicine at the David Geffen School of Medicine at UCLA, and member of the UCLA Jonsson Comprehensive Cancer Center, found that a new deep learning training architecture, called federated learning, allows institutions to collaboratively train AI algorithms without directly sharing data.

Here, Dr. Arnold and Karthik Sarma, the paper's first author and an MD/PhD student in the UCLA-Caltech Medical Scientist Training Program, discuss the significance of the new development and how researchers can continue to accelerate the pace of innovation in medical AI while reducing risks to patient privacy.

First, what is deep learning?

ARNOLD: Deep learning is a technique for developing AI algorithms that can help deliver higher-quality care faster and at lower cost. Deep learning works through a process known as "training," in which previously acquired and labeled data is fed through a deep learning AI model, enabling the model to "learn" from the data it is observing. Once the AI model has finished learning, it can be used to make predictions on new data that is fed into it.

For example, in our study, we trained a deep learning AI algorithm to locate and delineate the prostate within magnetic resonance images (MRIs). We did this by having a clinician annotate the location of the prostate on a set of MRIs. We then fed these images, along with the annotations, into a deep learning model to train it to delineate prostates. We then tested the model by feeding it new MRIs it had not previously seen, asking it to locate the prostate, and comparing its predictions with reference annotations created by our clinicians.
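As a rough illustration of that workflow, a supervised training loop of this kind might look like the sketch below in PyTorch. The tiny network and the random tensors standing in for MRIs and clinician annotations are hypothetical placeholders, not the study's actual model or data pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the study's data: MRI slices and
# clinician-drawn prostate masks (1 = prostate, 0 = background).
images = torch.randn(8, 1, 64, 64)                  # batch of single-channel "MRI" slices
masks = torch.randint(0, 2, (8, 1, 64, 64)).float()

# A toy segmentation network; the actual study used a far larger architecture.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Training": feed labeled data through the model and adjust its weights.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()

# "Prediction": apply the trained model to an MRI it has never seen.
new_image = torch.randn(1, 1, 64, 64)
predicted_mask = torch.sigmoid(model(new_image)) > 0.5
```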

What were some of the main findings from the study?

SARMA: In this study, we looked at an alternative model training approach called federated learning. Instead of sharing data across institutions, federated learning distributes the training itself across institutions, with the resulting trained models periodically synchronized so that learned knowledge is shared without sharing the underlying data.

We found that the federated learning approach allowed us to train AI algorithms that learned from patient data located at each of the study's participating institutions without requiring that data to be shared.

ARNOLD: We also found that federated learning not only produced an AI model that worked better on data from the participating institutions, but also one that generalized better to data from institutions that did not participate in the original training.

How does the federated learning model work?

SARMA: Deep learning models are "trained" by feeding data through the network so that it learns how to classify that data. The resulting "learning" is captured in trained "model weights": large arrays of numeric values that represent the learned understanding of the network.
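To make "model weights" concrete, the short sketch below uses PyTorch with a deliberately tiny stand-in network (hypothetical, not the study's model) just to show that a network's learned state is nothing more than named arrays of numbers.

```python
import torch.nn as nn

# A deliberately tiny network: one layer mapping 4 inputs to 2 outputs.
model = nn.Linear(4, 2)

# Everything the network has "learned" lives in these numeric arrays.
for name, array in model.state_dict().items():
    print(name, tuple(array.shape))
# Prints:
#   weight (2, 4)   -- an array of 8 learned numbers
#   bias (2,)       -- an array of 2 learned numbers
```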

Federated learning works by deploying deep learning models to each participating institution in the "federation." Then, those models are trained at each institution by being exposed to local data. Periodically during this training process, the model weights are sent to a central "federated server," where they are synchronized together and then re-sent to each institution.

This synchronization process (known as "aggregation") combines the knowledge learned at each site into a single set of model weights before re-distributing them. Thus, over time, as the models are trained at each institution and then aggregated together, each of the individual deep learning networks receives the benefit of knowledge learned at each institution within the federation.

Once training is complete, a single aggregated deep network is produced, and as our paper demonstrates, that final network receives the benefit of knowledge learned at each institution, without requiring the data to be shared between institutions directly.
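As a rough illustration of this round-trip, here is a minimal sketch in PyTorch, using simple unweighted weight averaging as the aggregation step. The three "institutions," their random data, and the tiny model are hypothetical placeholders; the study's actual federation and aggregation scheme were considerably more involved.

```python
import copy
import torch
import torch.nn as nn

def train_locally(model, data, targets, steps=5):
    """One institution trains its copy of the model on its local data only."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(data), targets)
        loss.backward()
        optimizer.step()
    return model.state_dict()  # only the weights leave the institution

def aggregate(weight_sets):
    """Federated server: average each weight array across institutions."""
    combined = copy.deepcopy(weight_sets[0])
    for key in combined:
        combined[key] = torch.stack([w[key] for w in weight_sets]).mean(dim=0)
    return combined

# Hypothetical private datasets at three participating institutions.
institutions = [(torch.randn(16, 4), torch.randn(16, 1)) for _ in range(3)]
global_model = nn.Linear(4, 1)

for federated_round in range(10):
    local_weights = []
    for data, targets in institutions:
        site_model = copy.deepcopy(global_model)       # deploy current model
        local_weights.append(train_locally(site_model, data, targets))
    global_model.load_state_dict(aggregate(local_weights))  # synchronize
```

Averaging works here because every site trains a copy of the same architecture, so their weight arrays line up entry by entry; real deployments typically weight each site's contribution and add further safeguards.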

What is so remarkable about these findings?

ARNOLD: Because successful medical AI algorithm development requires exposure to a large quantity of data that is representative of patients across the globe, it was traditionally believed that the only way to succeed was to acquire data originating from a wide variety of healthcare providers and transfer it to your local institution, a barrier considered insurmountable for all but the largest AI developers because of the cost and the legal and ethical complexity of acquiring patient data.

However, our findings demonstrate that institutions can instead team up into AI federations and collaboratively develop innovative and valuable medical AI models that perform just as well as those developed from massive, siloed datasets, with less risk to privacy. This could significantly accelerate the pace of innovation in medical AI, allowing life-saving tools to reach patients sooner.

Journal reference:

Sarma, K. V., et al. (2021). Federated learning improves site performance in multicenter deep learning without data sharing. Journal of the American Medical Informatics Association. doi.org/10.1093/jamia/ocaa341
