Data protection is a pressing issue in an increasingly connected digital world, and as medicine adopts and develops more digital tools for research and development, new questions arise about the safety of patient data.
A special report published in Radiology argues that clinical data should be made available for research, development, and other secondary purposes, such as the development of artificial intelligence algorithms.
Artificial intelligence has the potential to significantly accelerate medical imaging analysis, but to learn the conditions it is meant to identify, the technology must be exposed to huge amounts of data from medical examinations and images, such as mammograms and CT scans. This raises important questions about the ethical framework that will safeguard patient data when it is shared.
Dr. David B. Larson, MD, MBA, from the Stanford University School of Medicine in Stanford, California, led the study, and explained that “clinical data should be made available to researchers and developers after it has been aggregated and all patient identifiers have been removed,” but that “all who interact with such data should be held to high ethical standards, including protecting patient privacy and not selling clinical data.”
Previous debate over the sharing of clinical data has focused on ownership: either the patient owns their medical data, or the medical institution in which the data was generated does. Dr. Larson and his colleagues propose a third option that assigns no ownership to the data at all when it is used for secondary purposes.
Dr. Larson and the research team at Stanford University developed a framework specifically for the sharing and use of clinical data in AI technology development. Larson acknowledges that access to digital clinical data and processing tools can “dramatically accelerate our ability to gain understanding and develop new applications that can benefit patients and populations,” but notes that questions around the ethical use of such data “often preclude the sharing of that information.”
He continues:
“Medical data, which are simply recorded observations, are acquired for the purposes of providing patient care. When that care is provided, that purpose is fulfilled, so we need to find another way to think about how these recorded observations should be used for other purposes.
“We believe that patients, provider organizations, and algorithm developers all have ethical obligations to help ensure that these observations are used to benefit future patients, recognizing that protecting patient privacy is paramount.”
Larson’s framework would support the release of de-identified and aggregated clinical data for research and development, but those using the data would have to identify themselves and adhere to strict ethical practices. The framework would not require patient consent, meaning patients would not always be able to choose whether their data is shared for AI development, but their privacy would have to be protected.
The article states that when data is used in this way, it is not the data itself but the “underlying physical properties, phenomena, and behaviors that they represent” that are of primary interest.
The authors believe that it is in patients’ interest for researchers to be able to examine their clinical data to gain deeper insight into anatomy, physiology, and disease progression, but only if the researchers cannot identify any patients while doing so.
Under Larson’s framework, clinical providers may not sell data, but corporate organizations could profit from AI algorithms built on clinical data, as long as the profit comes from the technology or activities developed from the data rather than from the sale of the data itself. Provider organizations would be able to share clinical data with partners that provide financial support for their research, provided that support funds the research itself and not access to the data.
Larson said, “We strongly emphasize that protection of patient privacy is paramount. The data must be de-identified. In fact, those who receive the data must not make any attempts to re-identify patients through identifying technology.”
Patient privacy would be protected by removing all identifying information from the data. If identifying features remained visible in imaging scans, anyone using those images would need to notify the sharing organization and discard the data. This, as Larson stated, would “extend the ethical obligations of provider organizations to all who interact with the data.”
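To make the de-identification step concrete, below is a minimal sketch in Python of stripping direct identifiers from a simple record before release. The record structure, field names, and the `deidentify` helper are hypothetical, invented for illustration; they are not drawn from the published framework, and real imaging de-identification (for example, scrubbing DICOM metadata and burned-in pixel annotations) is considerably more involved.

```python
# Minimal sketch of record de-identification.
# Field names are hypothetical and chosen only for illustration;
# they are not taken from the published framework.

# Direct identifiers that must be removed before data can be shared.
DIRECT_IDENTIFIERS = {"patient_name", "patient_id", "date_of_birth", "address", "phone"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {key: value for key, value in record.items() if key not in DIRECT_IDENTIFIERS}

if __name__ == "__main__":
    record = {
        "patient_name": "Jane Doe",
        "patient_id": "12345",
        "modality": "CT",
        "body_part": "chest",
        "findings": "no acute abnormality",
    }
    print(deidentify(record))
    # {'modality': 'CT', 'body_part': 'chest', 'findings': 'no acute abnormality'}
```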
“We hope this framework will contribute to more productive dialogue, both in the field of medicine and computer science, as well as with policymakers, as we work to thoughtfully translate ethical considerations into regulatory and legal requirements.”
Dr. David B. Larson, Stanford University School of Medicine
The framework developed by Larson and his colleagues will be put into the public domain to allow other organizations and individuals to consider its potential as they work to answer some of the pressing questions around patient privacy and data protection in clinical AI technology and data sharing.
Source:
Researchers unveil framework for sharing clinical data in AI era. EurekAlert!. Available from: https://www.eurekalert.org/emb_releases/2020-03/rson-ruf031720.php
Journal references:
Larson, D.B. et al. (2020). Ethics of Using and Sharing Clinical Imaging Data for Artificial Intelligence: A Proposed Framework. Radiology. DOI: https://doi.org/10.1148/radiol.2020192536
Langlotz, C.P. et al. (2019). A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology. DOI: https://doi.org/10.1148/radiol.2019190613