Who Takes the Blame When AI Makes a Medical Mistake?

The rise in AI use in medicine
Who is responsible? A long-running argument
Legal considerations regarding AI use in healthcare
Future considerations for resolving liability


Most civil lawsuits turn on who will pay for misconduct, with each party typically believing itself to be blameless. With the rising use of artificial intelligence (AI) in medicine, more people are asking the same question: if physicians use medical AI systems for diagnosis and treatment and a mistake ultimately harms the patient, who should be held liable?

This article focuses on the legal implications of using AI in medical diagnosis and treatment recommendations.


The rise in AI use in medicine

AI technology, including machine learning (ML) and deep learning (DL) models, has been extensively applied in hospitals and clinics worldwide for varied applications, including stroke detection, diabetic retinopathy screening, and predicting hospital admissions.1

Several surveys have shown that this technology has significantly benefited the healthcare system by facilitating smarter and quicker solutions for both doctors and patients.2

By quickly and efficiently analyzing large datasets, AI tools accelerate disease diagnosis and enable monitoring of treatment response. For early cancer detection and diagnosis, radiologists use AI-based algorithms to identify patterns in radiological images that are imperceptible to the human eye.3

For example, AI algorithms have been designed to analyze computed tomography (CT) images and magnetic resonance imaging (MRI) data to screen patients for lung and prostate cancer, respectively.4

DL-based strategies have been used in the early detection of breast cancer through the interpretation of two-dimensional and three-dimensional mammography images.5 Multiple studies have shown that AI improves overall accuracy when used as an adjunct tool by radiologists interpreting mammograms.

At present, however, many commercially available algorithms do not perform reliably, owing to a lack of comprehensive data on their clinical effectiveness.6

Scientists have used AI to facilitate the automated characterization of intratumoral heterogeneity, which helps predict disease progression and treatment efficacy. DL algorithms have been used to assess CT, MRI, and positron emission tomography (PET) scan images.

Radiomic evaluation of tumor morphology has led to a more precise monitoring of the treatment response of solid tumors.

AI healthcare tools, such as IBM Watson Health, Google DeepMind Health, Eyenuk, IBEX Medical Analytics, Aidoc, and Butterfly iQ, are among the most popular platforms used by doctors, radiologists, psychologists, and other healthcare professionals for disease diagnosis and treatment planning.


Who is responsible? A long-running argument

If an AI error leads to patient harm, physicians could shift liability for faulty AI performance onto developers, while developers might counter that medical treatment decisions are ultimately made by doctors.

In the era of continually increasing AI use in the healthcare sector, it is important to understand who should take responsibility, i.e., the AI developer, the healthcare provider, or any other stakeholder, when an AI-based diagnosis or treatment plan harms a patient.

At present, there is no clear line of responsibility among healthcare providers, AI system developers, and the regulators overseeing them when faulty judgments harm patients.

Therefore, comprehensive policies are required to assign responsibility and protect patients. More clarity is also needed on how liability is distributed across the AI supply chain.


Legal considerations regarding AI use in healthcare

Although the application of AI in medical diagnoses and treatment has been immensely beneficial, this technology is also associated with valid legal concerns regarding accountability, privacy, and regulatory compliance.7

For instance, AI tools rely on access to patients’ health data, which has triggered the question about data privacy protections and transparency in how the data is used. To protect sensitive health information from disclosure, regulations like the Health Insurance Portability and Accountability Act (HIPAA) were established in 1996.8

Opaque AI systems can perpetuate biases introduced by imbalances in their training data. If a system is trained predominantly on data from a narrow set of patient demographics, its generalizability suffers, and it can generate unfair or discriminatory treatment recommendations for underrepresented groups.
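The mechanism behind this kind of bias can be illustrated with a small, purely hypothetical sketch: a toy diagnostic threshold "trained" on data dominated by one patient group performs well on that group but poorly on an underrepresented group whose disease presents at different biomarker levels. All group names, biomarker values, and proportions below are invented for illustration only.

```python
import random
random.seed(0)

# Hypothetical toy model: disease status is inferred from a single
# biomarker value; the two groups have different (assumed) baselines.
def make_patients(group, n, healthy_mean, sick_mean):
    patients = []
    for _ in range(n):
        sick = random.random() < 0.5
        mean = sick_mean if sick else healthy_mean
        patients.append((group, random.gauss(mean, 1.0), sick))
    return patients

# Imbalanced training data: 95% group A, only 5% group B.
train = make_patients("A", 950, 0.0, 4.0) + make_patients("B", 50, 2.0, 6.0)

# "Training": place the decision threshold midway between the
# average healthy and average sick biomarker values in the train set.
healthy = [x for _, x, s in train if not s]
sick = [x for _, x, s in train if s]
threshold = (sum(healthy) / len(healthy) + sum(sick) / len(sick)) / 2

def accuracy(patients):
    # A patient is flagged sick when the biomarker exceeds the threshold.
    return sum((x > threshold) == s for _, x, s in patients) / len(patients)

test_a = make_patients("A", 1000, 0.0, 4.0)
test_b = make_patients("B", 1000, 2.0, 6.0)
print(f"Accuracy on group A: {accuracy(test_a):.2f}")
print(f"Accuracy on group B: {accuracy(test_b):.2f}")
```

Because group B contributes almost nothing to the learned threshold, the model systematically misclassifies healthy group B patients, even though nothing in the code mentions group membership at decision time, which is why auditing performance per subgroup matters.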

In most AI systems, the inner workings remain unexplained "black boxes," which reduces accountability for AI-guided decisions. Greater transparency is needed: AI developers must disclose how a device works, along with its limitations and clinical validation.9

Physicians are free to use AI, but many opt not to, despite understanding its benefits, for fear that errors by AI tools could expose them to claims of practicing medicine below the standard of care.

Most healthcare AI tools fit poorly within existing US Food and Drug Administration (FDA) regulations, because the current framework focuses on fixed medical devices rather than adaptive software algorithms.

Therefore, new regulations must be formulated that specifically address AI in medicine, promoting innovation while improving efficacy and ensuring user safety.


Future considerations for resolving liability

Regulatory positions on AI tools in medicine differ across countries, shaped by factors such as risk tolerance and the desire to spur innovation. Ongoing international collaboration on healthcare AI governance will play a crucial role in balancing innovation with public well-being.10

Precise regulations, accountability mechanisms, and technical standards are urgently required to support the use of AI in medicine.

Scientists and policymakers believe that continual examination of data bias, transparency, and privacy will be crucial to improving the accuracy and use of AI tools in the medical sector.

AI systems must provide the reasoning behind a diagnosis, helping clinicians assess whether the key features were considered. Furthermore, regulatory bodies must establish mechanisms to monitor the real-world performance of AI systems and detect errors.

References

  1. Kang J, et al. Artificial intelligence across oncology specialties: current applications and emerging tools. BMJ Oncology. 2024;3:e000134. doi: 10.1136/bmjonc-2023-000134.
  2. Junaid SB, et al. Recent Advancements in Emerging Technologies for Healthcare Management Systems: A Survey. Healthcare (Basel). 2022;10(10):1940. doi: 10.3390/healthcare10101940.
  3. Kolla L, Parikh RB. Uses and limitations of artificial intelligence for oncology. Cancer. 2024;130(12):2101-2107. doi: 10.1002/cncr.35307.
  4. Elmore JG, Lee CI. Artificial Intelligence in Medical Imaging-Learning From Past Mistakes in Mammography. JAMA Health Forum. 2022;3(2):e215207. doi: 10.1001/jamahealthforum.2021.5207.
  5. Wang L. Mammography with deep learning for breast cancer detection. Front Oncol. 2024;14:1281922. doi: 10.3389/fonc.2024.1281922.
  6. Khan B, et al. Drawbacks of Artificial Intelligence and Their Potential Solutions in the Healthcare Sector. Biomed Mater Devices. 2023;1-8. doi: 10.1007/s44174-023-00063-2.
  7. Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon. 2024;10(4):e26297. doi: 10.1016/j.heliyon.2024.e26297.
  8. Public Health Law. Health Insurance Portability and Accountability Act of 1996 (HIPAA). 2024. Available at: https://www.cdc.gov/phlp/php/resources/health-insurance-portability-and-accountability-act-of-1996-hipaa.html
  9. Fehr J, et al. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health. 2024;6:1267290. doi: 10.3389/fdgth.2024.1267290.
  10. Morley J, et al. Governing Data and Artificial Intelligence for Health Care: Developing an International Understanding. JMIR Form Res. 2022;6(1):e31623. doi: 10.2196/31623.

Last Updated: Apr 16, 2025

Written by

Dr. Priyom Bose

Priyom holds a Ph.D. in Plant Biology and Biotechnology from the University of Madras, India. She is an active researcher and an experienced science writer. Priyom has also co-authored several original research articles that have been published in reputed peer-reviewed journals. She is also an avid reader and an amateur photographer.
