Navigating the minefield of AI in healthcare: Balancing innovation with accuracy

In a recent 'Fast Facts' article published in the journal BMJ, researchers discuss recent advances in generative artificial intelligence (AI), the technology's growing importance in the world today, and the potential dangers that must be addressed before large language models (LLMs) such as ChatGPT can be trusted as reliable sources of factual information.

BMJ Fast Facts: Quality and safety of artificial intelligence generated health information. Image Credit: Le Panda / Shutterstock

What is generative AI? 

'Generative artificial intelligence (AI)' refers to a subset of AI models that create context-dependent content (text, images, audio, and video). These models form the basis of the natural language systems powering AI assistants (Google Assistant, Amazon Alexa, and Siri) and productivity applications such as ChatGPT and Grammarly AI. The technology represents one of the fastest-growing sectors in digital computation and has the potential to substantially advance many aspects of society, including healthcare and medical research.

Unfortunately, advancements in generative AI, especially large language models (LLMs) like ChatGPT, have far outpaced ethical and safety checks, introducing the potential for severe consequences, whether accidental or deliberately malicious. Research estimates that more than 70% of people use the internet as their primary source of health and medical information, with more individuals turning to LLMs such as Gemini, ChatGPT, and Copilot with their queries each day. The present article focuses on three vulnerable aspects of AI, namely AI errors, health disinformation, and privacy concerns, and highlights the efforts of emerging disciplines, including AI Safety and Ethical AI, to address these vulnerabilities.

AI errors

Errors in data processing are a common challenge across all AI technologies. As input datasets become more extensive and model outputs (text, audio, pictures, or video) become more sophisticated, erroneous or misleading information becomes increasingly challenging to detect.

"The phenomenon of "AI hallucination" has gained prominence with the widespread use of AI chatbots (e.g., ChatGPT) powered by LLMs. In the health information context, AI hallucinations are particularly concerning because individuals may receive incorrect or misleading health information from LLMs that are presented as fact."

For lay readers unable to discern factual from inaccurate information, these errors can become very costly very quickly, especially in cases of erroneous medical information. Even trained medical professionals are not immune, given the growing amount of research that uses LLMs and generative AI for data analysis.

Thankfully, numerous technological strategies aimed at mitigating AI errors are under development. The most promising involves building generative AI models that "ground" themselves in information derived from credible and authoritative sources. Another approach is incorporating 'uncertainty' into the model's output: alongside each result, the model reports its degree of confidence in the validity of the information presented, allowing the user to consult credible information repositories in instances of high uncertainty. Some generative AI models already include citations in their results, encouraging users to educate themselves further before accepting the model's output at face value.
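
To make these two ideas concrete, the Python sketch below pairs a toy retrieval step against a vetted repository with a self-reported confidence score. It is a minimal illustration, not an implementation from the BMJ article; the repository contents, function names, and confidence values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str          # the generated health information
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    sources: list      # citations to the authoritative documents used

# Hypothetical stand-in for a vetted repository (e.g., clinical guidelines);
# a real system would retrieve from authoritative documents, not a dict.
TRUSTED_SNIPPETS = {
    "paracetamol max dose": (
        "Adults should not exceed 4 g of paracetamol in 24 hours.",
        "example-guideline-2024",
    ),
}

def answer_with_grounding(query: str) -> GroundedAnswer:
    """Ground the answer in a trusted snippet when one exists; otherwise
    abstain and report low confidence so the user seeks a better source."""
    hit = TRUSTED_SNIPPETS.get(query.lower())
    if hit:
        text, source = hit
        return GroundedAnswer(text, confidence=0.9, sources=[source])
    return GroundedAnswer(
        "Unverified against trusted sources; please consult a professional.",
        confidence=0.2,
        sources=[],
    )

ans = answer_with_grounding("Paracetamol max dose")
print(f"{ans.text} (confidence {ans.confidence:.0%}, sources {ans.sources})")
```

The design point is that the confidence score travels with the answer, so a downstream interface can visibly flag low-confidence responses instead of presenting every output with equal authority.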

Health disinformation

Disinformation is distinct from AI hallucination: hallucinations are accidental and inadvertent, whereas disinformation is deliberate and malicious. While the practice of disinformation is as old as human society itself, generative AI presents an unprecedented platform for generating 'diverse, high-quality, targeted disinformation at scale' at almost no financial cost to the malicious actor.

"One option for preventing AI-generated health disinformation involves fine-tuning models to align with human values and preferences, including avoiding known harmful or disinformation responses from being generated. An alternative is to build a specialized model (separate from the generative AI model) to detect inappropriate or harmful requests and responses."

While both of the above techniques are viable in the fight against disinformation, they are experimental and model-side. To prevent inaccurate data from reaching the model for processing in the first place, initiatives such as digital watermarking, designed to validate authentic data and flag AI-generated content, are currently in the works. Equally importantly, AI vigilance agencies would need to be established before AI can be trusted as a robust information delivery system.
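
As a rough illustration of the "specialized model" idea quoted above, the sketch below wraps a generative call with screening of both the request and the response. The keyword filter is a deliberately crude stand-in for a trained classifier, and every name in it (screen_request, BLOCKED_TOPICS, guarded_generate) is hypothetical rather than drawn from the article.

```python
# Crude stand-in for a trained disinformation classifier.
BLOCKED_TOPICS = {"fake cure", "vaccine hoax", "miracle treatment"}

def screen_request(text: str) -> bool:
    """Return True if the text appears to involve health disinformation."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    # Placeholder for a call to an actual generative model.
    return f"[model output for: {prompt}]"

def guarded_generate(prompt: str) -> str:
    if screen_request(prompt):
        return "Request refused: it appears to solicit health disinformation."
    response = generate(prompt)
    # The response is screened again, because a benign-looking prompt
    # can still yield a harmful completion.
    if screen_request(response):
        return "Response withheld after post-generation screening."
    return response

print(guarded_generate("Write an ad for a miracle treatment for cancer"))
```

Keeping the guard separate from the generative model means it can be updated or audited independently, which is precisely why the excerpt presents it as an alternative to fine-tuning the generator itself.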

Privacy and bias

Data used to train generative AI models, especially medical data, must be screened to ensure that no identifiable information is included, thereby respecting the privacy of users and of the patients whose data the models were trained upon. For crowdsourced data, AI models usually include privacy terms and conditions; users must abide by these terms and avoid providing information that could be traced back to an individual.
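
A minimal sketch of such pre-training screening, assuming a simple regex pass over free text, is shown below. Real de-identification of medical records must handle far more (names, dates, addresses, rare conditions), so the patterns here are illustrative assumptions, not a compliant pipeline.

```python
import re

# Strip obvious direct identifiers before text enters a training corpus.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d(?:[\s-]?\d){8,13}"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed type label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Patient reachable at jane.doe@example.com or +91 98765 43210."
print(redact(record))
# -> Patient reachable at [EMAIL] or [PHONE].
```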

Bias is the inherent risk that an AI model will skew its outputs to reflect its training material. Most AI models are trained on extensive datasets, usually obtained from the internet.

"Despite efforts by developers to mitigate biases, it remains challenging to fully identify and understand the biases of accessible LLMs owing to a lack of transparency about the training data and process. Ultimately, strategies aimed at minimizing these risks include exercising greater discretion in the selection of training data, thorough auditing of generative AI outputs, and taking corrective steps to minimize biases identified."

Conclusions

Generative AI models, the most popular of which include ChatGPT, Microsoft Copilot, Gemini, and Sora, represent some of the most significant productivity enhancements of the modern age. Unfortunately, advancements in these fields have far outpaced credibility checks, resulting in the potential for errors, disinformation, and bias, which could lead to severe consequences, especially in healthcare. The present article summarizes some of the dangers of generative AI in its current form and highlights techniques under development to mitigate these dangers.

Journal reference:
  • Sorich, M. J., Menz, B. D., & Hopkins, A. M. (2024). Quality and safety of artificial intelligence generated health information. BMJ, 384, q596. DOI: 10.1136/bmj.q596, https://www.bmj.com/content/384/bmj.q596

Written by

Hugo Francisco de Souza

Hugo Francisco de Souza is a scientific writer based in Bangalore, Karnataka, India. His academic passions lie in biogeography, evolutionary biology, and herpetology. He is currently pursuing his Ph.D. from the Centre for Ecological Sciences, Indian Institute of Science, where he studies the origins, dispersal, and speciation of wetland-associated snakes. Hugo has received, amongst others, the DST-INSPIRE fellowship for his doctoral research and the Gold Medal from Pondicherry University for academic excellence during his Masters. His research has been published in high-impact peer-reviewed journals, including PLOS Neglected Tropical Diseases and Systematic Biology. When not working or writing, Hugo can be found consuming copious amounts of anime and manga, composing and making music with his bass guitar, shredding trails on his MTB, playing video games (he prefers the term ‘gaming’), or tinkering with all things tech.

