Public skepticism of AI-generated medical advice

A recent Nature Medicine study investigated the public perception of artificial intelligence (AI)-based tools designed to provide digital medical advice.

Study: Influence of believed AI involvement on the perception of digital medical advice. Image Credit: MUNGKHOOD STUDIO / Shutterstock.com

The role of AI in medicine

To date, several AI-based systems have been developed for medical purposes. For example, AI-based tools enable the analysis of medical images, such as X-rays and magnetic resonance imaging (MRI) scans, and the prediction of drug interactions.

Recently developed AI-based large language models (LLMs) have been used to generate medical advice. For example, ChatGPT, a popular LLM application developed by OpenAI, provides medical information without the involvement of a professional physician. ChatGPT, particularly the GPT-4 version, has demonstrated high accuracy in diagnosing disease.

One previous study revealed that clinicians who assessed LLM-generated responses to medical queries considered these answers to be of high quality. In fact, the AI-generated responses were judged to be more empathic than answers provided by human physicians. Importantly, none of the clinicians in this study were aware of the authorship of the responses they evaluated.

Despite this high-quality output, significant reservations about the use of AI-based applications persist among various stakeholders. Thus, it is imperative to assess how the general public perceives AI-generated healthcare advice.

About the study

The current study used two experiments to explore how the public reacts to LLM-generated medical advice in a controlled experimental setting. Whereas the ‘study one’ cohort comprised 1,050 participants of various nationalities, the ‘study two’ cohort included 1,230 participants from the United Kingdom. Both cohorts were used to assess how participants perceived identical medical advice labeled as coming from a ‘human physician,’ an ‘AI,’ or a ‘human physician + AI.’

The ‘human physician + AI’ label, which denoted advice generated by a human physician in collaboration with AI, was included based on the assumption that AI will support, rather than replace, human competencies in the future. Study two additionally evaluated each participant’s willingness to follow the provided medical advice, as well as their desire to try AI tools that offer medical advice.

Study findings

The study findings indicate that the general public perceives physicians as a more authentic source of medical information than AI-based tools. ‘Human physician’ advice was perceived as significantly more empathic than advice labeled ‘AI’ or ‘human physician + AI.’ Similarly, ‘human physician’ advice was rated as significantly more reliable than ‘AI’ and ‘human physician + AI’ advice.

Mixed-effect regression analyses indicated that comprehensibility ratings were not affected by the source label. In general, the study participants were significantly less willing to follow medical advice they believed to be generated by AI tools.
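The article does not reproduce the authors' analysis, so the following Python snippet is a purely illustrative sketch of how a mixed-effects regression of this kind could be set up with statsmodels: a fixed effect for the believed source label and a random intercept per participant. The simulated data, variable names (participant, label, empathy), and effect sizes are all hypothetical assumptions, not values from the study.

```python
# Hypothetical sketch of a label-effect analysis with a mixed-effects model.
# NOT the authors' code: data, names, and effect sizes are invented here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
labels = ["human", "ai", "human_plus_ai"]

# Simulate raters who each score several identical advice texts that differ
# only in the believed source label (assumed label manipulation).
rows = []
for pid in range(300):
    baseline = rng.normal(0.0, 0.5)  # per-participant random intercept
    for _ in range(6):
        label = str(rng.choice(labels))
        # Assumed effect: AI-involved labels lower the empathy rating.
        shift = {"human": 0.0, "ai": -0.6, "human_plus_ai": -0.4}[label]
        rows.append({
            "participant": pid,
            "label": label,
            "empathy": 5.0 + shift + baseline + rng.normal(0.0, 1.0),
        })
df = pd.DataFrame(rows)

# Fixed effect of label (reference level: 'human'), random intercept per rater.
model = smf.mixedlm("empathy ~ C(label, Treatment('human'))",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```

In a setup like this, near-zero label coefficients for a comprehensibility outcome, alongside negative coefficients for empathy and reliability outcomes, would correspond to the pattern of findings the article describes.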

Consistent with previous reports, the current study highlighted the importance of the mutual demonstration of care and respect achieved through patient-physician interactions. The aversion to AI-based tools for medical information could be attributed to the perception of these tools as ‘dehumanizing,’ which is reflected in the lower empathy scores for AI-labeled advice. Another reason for resistance to AI-generated medical advice could be ‘uniqueness neglect,’ in which patients perceive that AI may fail to consider an individual’s unique characteristics.

Conclusions

Medical advice attributed to ‘human physicians’ was perceived as more empathic and reliable than ‘AI’ and ‘human physician + AI’ advice, whereas comprehensibility ratings did not differ across labels.

The current study had certain limitations. For example, all study participants were asked to adopt the perspective of other individuals, which meant they could not formulate their own inquiries.

Furthermore, the assessed dialogs had only one question and an associated response. Therefore, this study's experimental setting failed to capture the extensive interactions that typically occur in face-to-face doctor-patient consultations. Future research should consider more interactive and less controlled environments.

Consistent with previous research, the study findings suggest a bias against medical advice labeled as AI-generated, irrespective of whether a human physician was said to be involved. This raises notable concerns, especially considering rapid advancements in the use of AI in healthcare and the potential for human-AI collaboration.

To mitigate public concerns, a larger group of stakeholders will need to be engaged, including insurance providers and physicians. How the involvement of AI in delivering medical advice is framed is also crucial, as a recent study has shown that trust in medical advice is higher when patients are convinced that human physicians remain unequivocally in the decision-making role.

Journal reference:
  • Reis, M., Reis, F., & Kunde, W. (2024). Influence of believed AI involvement on the perception of digital medical advice. Nature Medicine. doi:10.1038/s41591-024-03180-7

Written by

Dr. Priyom Bose

Priyom holds a Ph.D. in Plant Biology and Biotechnology from the University of Madras, India. She is an active researcher and an experienced science writer. Priyom has also co-authored several original research articles that have been published in reputed peer-reviewed journals. She is also an avid reader and an amateur photographer.

