UK doctors turn to AI chatbots for diagnoses and documentation, but are they putting patient privacy at risk?

Despite privacy concerns and risks of errors, a new survey shows that many UK doctors are embracing AI chatbots to streamline their work and assist with clinical tasks like documentation and diagnoses.

Short Report: Generative artificial intelligence in primary care: an online survey of UK general practitioners. Image Credit: BOY ANTHONY / Shutterstock

A recent study published in the journal BMJ Health and Care Informatics measured the use of generative artificial intelligence (AI) by general practitioners (GPs) in the United Kingdom (UK).

Following the introduction of the Chat Generative Pretrained Transformer (ChatGPT) in late 2022, interest in chatbots powered by large language models (LLMs) has substantially increased, with a growing focus on their clinical potential. These new-generation chatbots are trained on extensive data and generate responses by functioning, in effect, as autocompletion devices.

Further, unlike internet search engines, these chatbots can rapidly summarize and generate text, remember previous prompts, and mimic conversational interactions. Preliminary evidence suggests these tools can help clinicians write empathic documentation, produce more detailed notes than dictation or typing, and support differential diagnosis.

Nevertheless, these tools have limitations: they may provide erroneous information, and their outputs risk perpetuating or worsening gender, disability, and racial inequities in healthcare. There are also concerns about patient privacy, as it remains unclear how generative AI companies use the data they collect. Despite this, data on clinician experiences and practices remain limited, and few studies have investigated doctors' adoption and opinions of these tools in clinical practice.

The study and findings

In the present study, researchers measured GPs’ use of AI chatbots in the UK. GPs registered with a clinician marketing service were surveyed. The survey was pretested and piloted with six GPs. This study was launched as an omnibus survey with predetermined sample sizes. Sampling was stratified by regional location. Participants were asked to answer all closed-ended items. The survey collected demographic information, showing that 45% of respondents were GP partners/principals, 34% were salaried GPs, and 18% were locum GPs.

Participant data were fully anonymized and stored as numerical codes. Shopping vouchers were offered after survey completion. The survey was conducted between February 2 and 22, 2024. The current study reported responses to survey item Q1, which asked participants whether they had ever used Bing AI, ChatGPT, Google's Bard, or other generative AI tools in any aspect of clinical practice.

Overall, 1,006 participants completed the survey; 53% were male, and 54% were aged 46 years or older. The majority of respondents were GP partners or salaried GPs. In total, 205 respondents reported using generative AI in clinical practice. On February 8, 2024, after 200 responses had been collected, a follow-up question was added in response to the high percentage of AI users; it asked those who responded affirmatively to specify what they were using AI tools for.

Of those using generative AI, about 29% reported using the tools for documentation after appointments, and 28% used them to suggest a differential diagnosis. Further, 20% used AI tools to build patient timelines or summarize previous documentation, 25% used them to explore treatment options, and around 8% used AI to write letters.

Conclusions

Overall, one in five surveyed UK doctors reported using AI chatbots to help with tasks in clinical practice, with ChatGPT the most commonly used tool. Further, more than a quarter of generative AI users reported using these tools to help with differential diagnosis or post-appointment documentation. These findings suggest that GPs may benefit from such tools, particularly for clinical reasoning and administrative tasks.

However, these tools have limitations, as they can generate errors and biases. They may also undermine patient privacy, because it is unclear how generative AI companies use the information they collect. The medical community will need to address regulatory uncertainties and establish clear workplace policies to ensure the safe adoption of these tools in healthcare settings. In sum, despite the lack of guidance on generative AI and unclear workplace policies, GPs in the UK report using these tools to assist with their work.

Journal reference:
  • Blease CR, Locher C, Gaab J, Hägglund M, Mandl KD. Generative artificial intelligence in primary care: an online survey of UK general practitioners. BMJ Health & Care Informatics, 2024, DOI: 10.1136/bmjhci-2024-101102, https://informatics.bmj.com/content/31/1/e101102

Written by

Tarun Sai Lomte

Tarun is a writer based in Hyderabad, India. He has a Master’s degree in Biotechnology from the University of Hyderabad and is enthusiastic about scientific research. He enjoys reading research papers and literature reviews and is passionate about writing.

