AI outperforms doctors in diagnostics but falls short as a clinical assistant

New study reveals that large language models outperform physicians in diagnostic accuracy but require strategic integration to enhance clinical decision-making without replacing human expertise.

Study: Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. Image Credit: Shutterstock AI / Shutterstock.com

In a recent study published in JAMA Network Open, researchers investigated whether large language models (LLMs) could enhance physicians' diagnostic reasoning compared with conventional diagnostic resources. The LLM alone was found to outperform both groups of physicians, including those who used the LLM as a diagnostic aid.

How can artificial intelligence improve clinical diagnoses?

Diagnostic errors, which can arise from both systemic and cognitive issues, may cause significant harm to patients. Improving diagnostic accuracy therefore requires methods that address the cognitive challenges inherent in clinical reasoning. However, common approaches such as reflective practice, educational programs, and decision-support tools have not effectively improved diagnostic accuracy.

Recent advances in artificial intelligence, especially LLMs, offer promising support by simulating human-like reasoning and responses. LLMs can also handle complex medical cases and assist in clinical decision-making, while interacting empathetically with the user.

The current use of LLMs in healthcare largely supplements, rather than replaces, human expertise. Given that healthcare professionals receive limited training on, and support for, integrating LLMs into clinical workflows, it is crucial to understand how the use of these tools in clinical settings affects patient care.

About the study

In the present study, researchers used a randomized, single-blind design to assess the diagnostic reasoning of physicians given access to either an LLM or conventional resources. Physicians working in family, emergency, or internal medicine were recruited, with study sessions conducted either in person or remotely.

Physicians were given one hour to work through six moderately complex clinical cases presented in a survey tool. Participants in the intervention group had access to the LLM (GPT-4 accessed through ChatGPT Plus), whereas participants in the control group used only conventional resources.

Clinical cases included detailed patient histories, examination findings, and test results. Case review and selection followed strict criteria and involved four physicians, with the selected cases spanning a wide range of medical conditions while excluding overly simple and extremely rare presentations.

Structured reflection was included as a conventional assessment tool. Participants were required to list their top differential diagnoses, explain the case factors supporting and opposing each, and choose the most likely diagnosis while proposing further treatment steps. Responses were graded for the accuracy of the final diagnosis as well as the quality of diagnostic reasoning.

The standalone diagnostic performance of the LLM was evaluated using standardized prompts, each repeated three times for consistency. Responses were then scored by assigning points for correct reasoning and diagnostic plausibility.
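
The study's exact prompt wording and scoring rubric are not reproduced in this article. As a minimal sketch of the general procedure, assuming the OpenAI Python client and a hypothetical case_summary variable, a standardized diagnostic prompt could be submitted to the model three times to check response consistency:

```python
# Illustrative sketch only, not the study's actual code: submit one
# standardized diagnostic prompt to GPT-4 three times for consistency.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_summary = "..."  # patient history, examination findings, and test results (placeholder)

STANDARD_PROMPT = (
    "You are assisting with a diagnostic exercise. Based on the case below, "
    "list the top three differential diagnoses, the findings supporting and "
    "opposing each, the most likely diagnosis, and the next evaluation steps.\n\n"
    f"Case: {case_summary}"
)

responses = []
for run in range(3):  # repeated three times for consistency
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": STANDARD_PROMPT}],
    )
    responses.append(reply.choices[0].message.content)
```

Each of the three responses would then be graded against the case's answer key for diagnostic plausibility and reasoning quality.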

Statistical analyses using mixed-effects models were performed to account for within-participant variability across cases, while linear and logistic models were applied to the time spent per case and to diagnostic performance.
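
To illustrate this type of analysis, the sketch below fits a mixed-effects model with a random intercept per participant, alongside simple logistic and linear models; the file name and column names (arm, score, correct, time_seconds, participant_id) are hypothetical and not taken from the study.

```python
# Illustrative sketch of the analysis approach described above.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data set: one row per participant-case pair.
df = pd.read_csv("diagnostic_scores.csv")

# Fixed effect of study arm (LLM vs. conventional resources),
# random intercept per participant to handle repeated cases.
mixed = smf.mixedlm("score ~ arm", data=df, groups=df["participant_id"]).fit()
print(mixed.summary())

# Complementary models: logistic regression on whether the final diagnosis
# was correct (0/1), and a linear model for time spent per case.
logit = smf.logit("correct ~ arm", data=df).fit()
linear = smf.ols("time_seconds ~ arm", data=df).fit()
```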

Study findings

The use of the LLM by physicians did not significantly improve diagnostic reasoning on these challenging cases compared with the use of conventional resources. However, the LLM alone performed significantly better than the physicians in diagnosing the cases.

These findings were consistent across physician experience levels, which suggests that simply providing access to an LLM is unlikely to enhance diagnostic reasoning on its own.

No significant differences were observed in case-solving evaluations between the groups. However, further studies using larger sample sizes are needed to determine whether LLM use improves efficiency.

The standalone performance of the LLM was better than that of both human groups, consistent with results published in studies of other LLM technologies. The LLM's superior unassisted performance is attributed to its sensitivity to prompt formulation, which underscores the importance of prompting strategies in maximizing the utility of LLMs.

Conclusions

LLMs show considerable promise for supporting efficient diagnostic reasoning. However, despite the successful diagnoses provided by the LLM in the current study, these results should not be interpreted to mean that LLMs can provide diagnoses without clinician oversight.

As AI research progresses and nears clinical integration, it will become even more important to reliably measure diagnostic performance using the most realistic and clinically relevant evaluation methods and metrics.

Integrating LLMs into clinical practice will require effective strategies for structured prompt design and for training physicians to use detailed prompts, both of which could optimize physician-LLM collaboration in diagnosis. Nevertheless, using LLMs to enhance diagnostic reasoning means treating these tools as complements to, rather than replacements for, physician expertise in clinical decision-making.
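
As a purely illustrative example of what structured prompt design might look like in practice (the wording below is hypothetical and not drawn from the study), a detailed diagnostic prompt could be assembled from explicit, clinician-defined sections:

```python
# Hypothetical example of a structured diagnostic prompt template;
# the section headings and wording are illustrative, not the study's.
def build_structured_prompt(case_text: str) -> str:
    """Assemble a detailed, structured prompt for a diagnostic query."""
    return (
        "Act as a diagnostic reasoning aid. Do not provide treatment advice.\n"
        "For the case below, respond in four sections:\n"
        "1. Top three differential diagnoses\n"
        "2. Findings supporting and opposing each diagnosis\n"
        "3. Single most likely diagnosis\n"
        "4. Recommended next diagnostic steps\n\n"
        f"Case description:\n{case_text}"
    )
```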

Journal reference:
  • Goh, E., Gallo, R., Hom, J., et al. (2024). Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. JAMA Network Open 7(10): e2440969. doi:10.1001/jamanetworkopen.2024.40969.

Written by

Dr. Chinta Sidharthan

Chinta Sidharthan is a writer based in Bangalore, India. Her academic background is in evolutionary biology and genetics, and she has extensive experience in scientific research, teaching, science writing, and herpetology. Chinta holds a Ph.D. in evolutionary biology from the Indian Institute of Science and is passionate about science education, writing, animals, wildlife, and conservation. For her doctoral research, she explored the origins and diversification of blindsnakes in India, as a part of which she did extensive fieldwork in the jungles of southern India. She has received the Canadian Governor General’s bronze medal and Bangalore University gold medal for academic excellence and published her research in high-impact journals.

