Most non-criminal lawsuits turn on who will pay for misconduct, and each party typically believes itself to be blameless. With the rising use of artificial intelligence (AI) in medicine, more people are asking the same question: if physicians use medical AI systems for diagnosis and treatment and a mistake ultimately harms the patient, who should be held liable?
This article focuses on the legal implications of using AI in medical diagnosis and treatment recommendations.
The rise in AI use in medicine
AI technology, including machine learning (ML) and deep learning (DL) models, has been extensively applied in hospitals and clinics worldwide for varied applications, including stroke detection, diabetic retinopathy screening, and predicting hospital admissions.1
Several surveys have shown that this technology has significantly benefited the healthcare system by facilitating smarter and quicker solutions for both doctors and patients.2
By quickly and efficiently analyzing large datasets, AI tools accelerate disease diagnosis and enable monitoring of treatment response. For early cancer detection and diagnosis, radiologists use AI-based algorithms to identify patterns in radiological images that are imperceptible to the human eye.3
For example, AI algorithms have been designed to analyze computed tomography (CT) images and magnetic resonance imaging (MRI) data to screen patients for lung and prostate cancer, respectively.4
DL-based strategies have been used in the early detection of breast cancer through the interpretation of two-dimensional and three-dimensional mammography images.5 Multiple studies have shown that AI improves overall accuracy when used as an adjunct tool by radiologists interpreting mammograms.
At present, however, many commercially available algorithms do not perform efficiently, largely because comprehensive data on their clinical effectiveness are lacking.6
Scientists have also used AI to automate the characterization of intratumoral heterogeneity, which helps predict disease progression and treatment efficacy. DL algorithms have been applied to CT, MRI, and positron emission tomography (PET) scan images.
Radiomic evaluation of tumor morphology has led to a more precise monitoring of the treatment response of solid tumors.
AI healthcare tools, such as IBM Watson Health, Google DeepMind Health, Eyenuk, IBEX Medical Analytics, Aidoc, and Butterfly iQ, are among the most popular platforms used by doctors, radiologists, psychologists, and other healthcare professionals for disease diagnosis and treatment planning across various diseases.
Who is responsible? A long-running debate
If an AI error leads to an adverse outcome, physicians could shift liability for faulty AI performance onto developers, while the developer might counter that medical treatment decisions are ultimately made by doctors.
In the era of continually increasing AI use in the healthcare sector, it is important to understand who should take responsibility, i.e., the AI developer, the healthcare provider, or any other stakeholder, when an AI-based diagnosis or treatment plan harms a patient.
At present, there is no clear line of responsibility among healthcare providers, AI system developers, and the regulators overseeing them when faulty judgments harm patients. Comprehensive policies are therefore required to assign responsibility and protect patients, and more clarity is needed on whether liability extends across the AI supply chain.
Legal considerations regarding AI use in healthcare
Although the application of AI in medical diagnoses and treatment has been immensely beneficial, this technology is also associated with valid legal concerns regarding accountability, privacy, and regulatory compliance.7
For instance, AI tools rely on access to patients’ health data, which has triggered the question about data privacy protections and transparency in how the data is used. To protect sensitive health information from disclosure, regulations like the Health Insurance Portability and Accountability Act (HIPAA) were established in 1996.8
Opaque AI systems can perpetuate biases embedded in imbalanced training data, causing a tool to exacerbate existing disparities. AI systems trained on data from specific patient demographics can generate unfair or discriminatory treatment recommendations, limiting their generalizability.
The inner workings of most AI systems remain unexplained "black boxes," which reduces accountability for AI-guided decisions. Greater transparency is needed: AI developers must disclose a device's mechanism, limitations, and clinical validation.9
Physicians are free to use AI, but many opt not to, despite recognizing the benefits, for fear that errors by AI tools could expose them to claims of practicing medicine below the standard of care.
Most healthcare AI tools fit poorly within existing US Food and Drug Administration (FDA) regulations, because the current framework focuses on static medical devices rather than adaptive software algorithms. New regulations must therefore be formulated to specifically address AI in medicine, promoting innovation while improving efficacy and ensuring patient safety.
Future considerations to resolve the issue regarding liabilities
Regulatory approaches to medical AI differ across countries, reflecting factors such as risk tolerance and the desire to spur innovation. Ongoing international collaboration on healthcare AI governance will play a crucial role in overcoming this hurdle and balancing innovation with public well-being.10
Precise regulations, accountability mechanisms, and technical standards are urgently required to support the use of AI in medicine.
Scientists and policymakers believe that continual examination of data bias, transparency, and privacy will be crucial to improving the accuracy and use of AI tools in the medical sector.
AI systems must provide the reasoning behind a diagnosis so that clinicians can assess whether the key features were considered. Furthermore, regulatory bodies must establish mechanisms to assess the real-world performance of AI systems and detect errors as they arise.
References
- Kang J, et al. Artificial intelligence across oncology specialties: current applications and emerging tools. BMJ Oncology. 2024;3:e000134. doi: 10.1136/bmjonc-2023-000134.
- Junaid SB, et al. Recent Advancements in Emerging Technologies for Healthcare Management Systems: A Survey. Healthcare (Basel). 2022;10(10):1940. doi: 10.3390/healthcare10101940.
- Kolla L, Parikh RB. Uses and limitations of artificial intelligence for oncology. Cancer. 2024;130(12):2101-2107. doi: 10.1002/cncr.35307.
- Elmore JG, Lee CI. Artificial Intelligence in Medical Imaging-Learning From Past Mistakes in Mammography. JAMA Health Forum. 2022;3(2):e215207. doi: 10.1001/jamahealthforum.2021.5207.
- Wang L. Mammography with deep learning for breast cancer detection. Front Oncol. 2024;14:1281922. doi: 10.3389/fonc.2024.1281922.
- Khan B, et al. Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector. Biomed Mater Devices. 2023;1-8. doi: 10.1007/s44174-023-00063-2.
- Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon. 2024;10(4):e26297. doi: 10.1016/j.heliyon.2024.e26297.
- Public Health Law. Health Insurance Portability and Accountability Act of 1996 (HIPAA). 2024. Available at: https://www.cdc.gov/phlp/php/resources/health-insurance-portability-and-accountability-act-of-1996-hipaa.html
- Fehr J, et al. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health. 2024;6:1267290. doi: 10.3389/fdgth.2024.1267290.
- Morley J, et al. Governing Data and Artificial Intelligence for Health Care: Developing an International Understanding. JMIR Form Res. 2022;6(1):e31623. doi: 10.2196/31623.