AI in healthcare: A double-edged sword? Study reveals impact on diagnostic accuracy

A recent JAMA study investigated whether systematically biased artificial intelligence (AI) affected clinicians’ diagnostic accuracy and whether image-based AI model explanations could mitigate the harm of model errors.

Study: Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study. Image Credit: PeopleImages.com - Yuri A / Shutterstock.com

Background

The capacity of AI to identify anomalies or diseases in clinical images has been discussed in previous studies. These studies revealed that AI-based tools can detect diabetic retinopathy from fundus images, pneumonia from chest radiographs, and skin cancer from histopathology images.

Compared to clinicians’ diagnoses without AI, integrating AI models into clinical decision-making could result in more accurate diagnoses. However, systematically biased AI models that consistently misdiagnose certain patients can harm them. For example, one previous study reported that an AI model consistently underdiagnosed heart disease in female patients.

Ideally, clinicians should follow AI predictions when they are correct and disregard them when they are not. Over-reliance on biased AI models can compromise a clinician’s diagnosis, so it is important to understand whether applying AI to support diagnostic decisions is safe.

Incorporating AI explanations that interpret model predictions could help clinicians understand a model’s logic before acting on it in clinical decisions. Suitable explanations for AI predictions could also help clinicians recognize systematically biased models. For example, image-based explanations provided by models to support their decisions have helped clinicians assess the diagnostic accuracy of AI models.

About the study

The current study assessed whether AI explanations increase clinicians’ diagnostic accuracy and mitigate the harm caused by systematically biased models. To this end, a randomized, web-based clinical vignette survey was conducted.

Hospitalist physicians, physician assistants, and nurse practitioners involved in the care of patients with acute respiratory failure were recruited. A total of 45 clinical vignettes were created based on patients hospitalized with acute respiratory failure in 2017.

Each patient’s records were assessed to capture their past and present medical history, and at least four pulmonary physicians independently reviewed each patient’s medical records.

All participants completed the vignettes in the same order: two vignettes without AI predictions, six with AI predictions, and one with a clinical consultation. For the six AI vignettes, participants were randomized to receive AI predictions with or without AI explanations.

After each vignette, participants rated, on a scale of zero to 100, the likelihood that heart failure, pneumonia, or chronic obstructive pulmonary disease (COPD) contributed to the patient’s respiratory failure. These responses were collected to assess the association between AI model scores and participant responses.

Study findings

Of 1,024 participants, 457 completed at least one vignette and were included in the primary analysis, of whom 418 completed all nine vignettes. Importantly, participant demographics did not significantly differ across randomized groups.

Most participants specialized in hospital medicine. The mean participant age was 34 years, and around 58% of the cohort was female. Only 13% of the cohort reported positive experiences with clinical decision support tools, and the majority were unaware of the systematic biases of AI models.

The study findings indicate that clinicians’ diagnostic accuracy improved when AI models provided accurate predictions and decreased in the presence of systematically biased AI predictions. AI explanations did not significantly mitigate the harmful effects of biased models on clinicians’ diagnostic accuracy, estimated at about 81%. Nevertheless, a combination of AI models and clinicians could be used effectively for complex diagnostic tasks.

The explanations could not mitigate the errors because the biased AI model relied entirely on features unrelated to the clinical condition. However, some studies have suggested that state-of-the-art explanations could improve user-AI interactions. The findings also point to a greater possibility of users being deceived by incompetent AI models, mainly because many failed to understand even the models’ simple explanations.

Because of their limited AI literacy, clinicians may have failed to comprehend and weigh the models’ explanations. In addition, these models must be trained to provide better image-based explanations.

Conclusions

Although accurate AI predictions and explanations enhanced diagnostic accuracy, systematically biased predictions reduced it. Thus, it is essential to validate AI models before incorporating them into clinical settings.

Journal reference:
  • Jabbour, S., Fouhey, D., Shepard, S., et al. (2023) Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study. JAMA 330(23): 2275–2284. doi:10.1001/jama.2023.22295
Written by

Dr. Priyom Bose

Priyom holds a Ph.D. in Plant Biology and Biotechnology from the University of Madras, India. She is an active researcher and an experienced science writer. Priyom has also co-authored several original research articles that have been published in reputed peer-reviewed journals. She is also an avid reader and an amateur photographer.

Citations

  • APA

    Bose, Priyom. (2023, December 19). AI in healthcare: A double-edged sword? Study reveals impact on diagnostic accuracy. News-Medical. Retrieved on December 18, 2024 from https://www.news-medical.net/news/20231219/AI-in-healthcare-A-double-edged-sword-Study-reveals-impact-on-diagnostic-accuracy.aspx.

