A study published in JAMA Internal Medicine indicates that responses generated by an artificial intelligence chatbot to patients' questions were rated higher than physicians' responses in terms of both quality and empathy.
Study: Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum.
Background
The use of virtual healthcare increased significantly during the coronavirus disease 2019 (COVID-19) pandemic due to social restrictions. This has led to a 1.6-fold increase in electronic patient messages, with a concomitant rise in workload and stress among healthcare professionals. Together, these factors increase the risk that patients' messages go unanswered or receive unsatisfactory replies.
Current strategies to reduce this burden include limiting electronic message notifications, billing for responses, or delegating messages to less-trained support staff. However, these strategies restrict patients' access to quality healthcare support. Healthcare systems are therefore considering artificial intelligence (AI) assistants to reduce the workload of healthcare professionals.
In the current study, scientists explored the ability of an AI chatbot assistant (ChatGPT) to provide high-quality, empathetic responses to patients' healthcare questions. Specifically, they compared chatbot responses with physicians' responses to questions patients posted on a public social media platform.
ChatGPT is a new-generation AI assistant driven by a large language model. The chatbot is widely recognized for its ability to write near-human-quality text on a wide range of topics.
Important observations
The researchers used a public database of questions posted to a public social media platform to randomly select 195 exchanges, each consisting of a unique patient question and a unique physician answer.
About 94% of the selected exchanges comprised a single patient question and a single physician response; the remainder contained two separate physician responses to one patient question. Comparing the two sets of answers, physician responses were, on average, significantly shorter than chatbot responses.
A team of licensed healthcare professionals evaluated each exchange in triplicate, yielding 585 evaluations (195 × 3). The evaluators preferred the chatbot response over the physician response in 78% of these evaluations.
Chatbot responses were rated as significantly higher in quality than physician responses. The evaluators scored each response on a five-point Likert scale: very poor, poor, acceptable, good, or very good. On average, chatbot responses were rated better than good, while physician responses were rated as merely acceptable.
The proportion of responses rated below acceptable quality was 10 times higher for physicians, while the proportion rated good or very good was about 3 times higher for the chatbot.
The evaluators also rated chatbot responses as significantly more empathetic than physician responses, with physician responses scoring 41% lower on empathy. The proportion of responses rated less than slightly empathetic was 5 times higher for physicians, while the proportion rated empathetic or very empathetic was about 9 times higher for the chatbot.
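The prevalence ratios reported above are simply the proportion of one group's responses falling in a rating band divided by the other group's proportion. As a rough illustration only, the Python sketch below uses made-up ratings (not the study's data) to show how such ratios are computed from Likert scores:

```python
# Illustrative sketch with hypothetical ratings; NOT the study's data.
# Likert quality bands: 1 = very poor, 2 = poor, 3 = acceptable,
# 4 = good, 5 = very good.

physician_ratings = [2, 3, 3, 2, 4, 3, 1, 3, 2, 3]  # hypothetical
chatbot_ratings = [4, 5, 4, 4, 5, 3, 4, 5, 4, 2]    # hypothetical

def proportion_in_band(ratings, band):
    """Fraction of ratings that fall inside the given set of Likert levels."""
    return sum(r in band for r in ratings) / len(ratings)

# Proportion rated below "acceptable" (levels 1-2) in each group
phys_low = proportion_in_band(physician_ratings, {1, 2})
bot_low = proportion_in_band(chatbot_ratings, {1, 2})

# Proportion rated "good" or "very good" (levels 4-5) in each group
phys_high = proportion_in_band(physician_ratings, {4, 5})
bot_high = proportion_in_band(chatbot_ratings, {4, 5})

# Prevalence ratios of the kind reported in the article
print(f"Low-quality prevalence ratio (physician/chatbot): {phys_low / bot_low:.1f}")
print(f"High-quality prevalence ratio (chatbot/physician): {bot_high / phys_high:.1f}")
```

With these toy numbers the sketch prints ratios of 4.0 and 8.0; the study's actual figures (10-fold and roughly 3-fold for quality, 5-fold and roughly 9-fold for empathy) were derived the same way from the evaluators' real ratings.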
Study significance
The study finds that AI chatbot-generated responses to patients' healthcare messages are better than physician-provided responses in terms of both quality and empathy. Based on these findings, the scientists suggest that AI chatbot assistants could be adopted in clinical settings for electronic messaging, provided that chatbot-drafted messages are reviewed and edited by physicians to improve accuracy and catch false or fabricated information.
High-quality, empathetic chatbot responses might help answer patients' healthcare queries quickly, reducing unnecessary clinic visits and preserving resources for patients who need them most. Such responses might also improve patient outcomes by increasing treatment adherence and reducing missed appointments.
As the scientists note, the study evaluated the quality of chatbot-generated responses on their own; it did not assess how an AI assistant might enhance clinicians' own responses to patient questions.
Journal reference:
Ayers, J. W., et al. (2023). Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Internal Medicine.