ChatGPT shows promise in supporting doctors in emergency medicine

The artificial intelligence chatbot ChatGPT performed as well as a trained doctor in suggesting likely diagnoses for patients being assessed in emergency medicine departments, in a pilot study to be presented at the European Emergency Medicine Congress, which starts on Saturday.

Researchers say a lot more work is needed, but their findings suggest the technology could one day support doctors working in emergency medicine, potentially leading to shorter waiting times for patients.

The study was conducted by Dr Hidde ten Berg, from the department of emergency medicine, and Dr Steef Kurstjens, from the department of clinical chemistry and hematology, both at Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands.

Dr ten Berg told the Congress: "Like a lot of people, we have been trying out ChatGPT and we were intrigued to see how well it worked for examining some complex diagnostic cases. So, we set up a study to assess how well the chatbot worked compared to doctors with a collection of emergency medicine cases from daily practice."

The research, which is also published this month in the Annals of Emergency Medicine, included anonymized details on 30 patients who were treated at Jeroen Bosch Hospital's emergency department in 2022. The researchers entered physicians' notes on patients' signs, symptoms and physical examinations into two versions of ChatGPT (the free 3.5 version and the subscriber 4.0 version). They also provided the chatbot with the results of lab tests, such as blood and urine analysis. For each case, they compared the shortlist of likely diagnoses generated by the chatbot to the shortlist made by emergency medicine doctors and to the patient's correct diagnosis.

They found a large overlap (around 60%) between the shortlists generated by ChatGPT and the doctors. Doctors had the correct diagnosis within their top five likely diagnoses in 87% of the cases, compared to 97% for ChatGPT version 3.5 and 87% for version 4.0.

"We found that ChatGPT performed well in generating a list of likely diagnoses and suggesting the most likely option. We also found a lot of overlap with the doctors' lists of likely diagnoses. Simply put, this indicates that ChatGPT was able to suggest medical diagnoses much like a human doctor would.

"For example, we included a case of a patient presenting with joint pain that was alleviated with painkillers, but the redness, joint pain and swelling always recurred. In the preceding days, the patient had had a fever and a sore throat, and a few times there was discoloration of the fingertips. Based on the physical exam and additional tests, the doctors thought the most likely diagnosis was rheumatic fever, but ChatGPT was correct with its most likely diagnosis of vasculitis.

"It's vital to remember that ChatGPT is not a medical device and there are concerns over privacy when using ChatGPT with medical data. However, there is potential here for saving time and reducing waiting times in the emergency department. The benefit of using artificial intelligence could be in supporting doctors with less experience, or it could help in spotting rare diseases."

Dr Hidde ten Berg

Professor Youri Yordanov from the St Antoine Hospital emergency department (APHP Paris), France, is Chair of the EUSEM 2023 abstract committee and was not involved in the research. He said: "We are a long way from using ChatGPT in the clinic, but it's vital that we explore new technology and consider how it could be used to help doctors and their patients. People who need to go to the emergency department want to be seen as quickly as possible and to have their problem correctly diagnosed and treated. I look forward to more research in this area and hope that it might ultimately support the work of busy health professionals."

Journal reference:

ten Berg, H., et al. (2023). ChatGPT and Generating a Differential Diagnosis Early in an Emergency Department Presentation. Annals of Emergency Medicine. https://doi.org/10.1016/j.annemergmed.2023.08.003


