Researchers call for ethical guidance on use of AI in healthcare

In a recent systematic review published in npj Digital Medicine, researchers investigated the ethical implications of deploying Large Language Models (LLMs) in healthcare.

Their conclusions indicate that while LLMs offer significant advantages such as enhanced data analysis and decision support, persistent ethical concerns regarding fairness, bias, transparency, and privacy underscore the necessity for defined ethical guidelines and human oversight in their application.

Study: The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs). Image Credit: Summit Art Creations/Shutterstock.com

Background

LLMs have sparked widespread interest due to their advanced artificial intelligence (AI) capabilities, demonstrated prominently since OpenAI released ChatGPT in 2022.

This technology has rapidly expanded into various sectors, including medicine and healthcare, showing promise for clinical decision-making, diagnosis, and patient communication tasks.

However, alongside their potential benefits, concerns have emerged regarding their ethical implications. Previous research has highlighted risks such as the dissemination of inaccurate medical information, privacy breaches from handling sensitive patient data, and the perpetuation of biases based on gender, culture, or race.

Despite these concerns, there is a noticeable gap in comprehensive studies systematically addressing the ethical challenges of integrating LLMs into healthcare. Existing literature focuses on specific instances rather than providing a holistic overview.

Methods

Addressing existing gaps in this field is crucial as healthcare environments demand rigorous ethical standards and regulations.

In this systematic review, researchers mapped the ethical landscape surrounding the role of LLMs in healthcare to identify potential benefits and harms to inform future discussions, policies, and guidelines seeking to govern ethical LLM use.

The researchers designed a review protocol covering practical applications and ethical considerations, which was registered with the International Prospective Register of Systematic Reviews (PROSPERO). Ethical approval was not required.

They searched relevant publication databases and preprint servers to gather data, including preprints because of their prevalence in technology-related fields and the possibility that relevant work had not yet been indexed in databases.

Inclusion criteria were based on intervention, application setting, and outcomes, with no restrictions on publication type, though works dealing solely with medical education or academic writing were excluded.

After initial screening of titles and abstracts, data were extracted and coded using a structured form. Quality appraisal was descriptive, relying on procedural criteria to distinguish peer-reviewed material, and findings were critically examined for validity and comprehensiveness during reporting.

Findings

The study analyzed 53 articles to explore LLMs' ethical implications and applications in healthcare. Four main themes emerged from the research: clinical applications, patient support applications, support of health professionals, and public health perspectives.

In clinical applications, LLMs show potential for assisting in initial patient diagnosis and triage, using predictive analysis to identify health risks and recommend treatments.

However, concerns arise regarding their accuracy and the potential for biases in their decision-making processes. These biases could lead to incorrect diagnoses or treatment recommendations, highlighting the need for careful oversight by healthcare professionals.

Patient support applications focus on LLMs aiding individuals in accessing medical information, managing symptoms, and navigating healthcare systems.

While LLMs can improve health literacy and communication across language barriers, data privacy and the reliability of medical advice generated by these models remain significant ethical considerations.

To support health professionals, LLMs have been proposed for automating administrative tasks, summarizing patient interactions, and facilitating medical research.

While this automation could enhance efficiency, there are concerns about the impact on professional skills, the integrity of research outputs, and the potential for biases in automated data analysis.

From a public health perspective, LLMs offer opportunities to monitor disease outbreaks, improve health information access, and enhance public health communication.

However, the study highlights risks such as spreading misinformation and the concentration of AI power among a few companies, potentially exacerbating health disparities and undermining public health efforts.

Overall, while LLMs present promising advancements in healthcare, their ethical deployment requires careful consideration of biases, privacy concerns, and the need for human oversight to mitigate potential harms and ensure equitable access and patient safety.

Conclusions

The researchers found that LLMs such as ChatGPT are widely explored in healthcare for their potential to enhance efficiency and patient care by rapidly analyzing large datasets and providing personalized information.

However, ethical concerns persist, including biases, transparency issues, and the generation of misleading information, termed "hallucinations," which can have severe consequences in clinical settings.

The study aligns with broader research on AI ethics, emphasizing the complexities and risks of deploying AI in healthcare.

Strengths of this study include a comprehensive literature review and structured categorization of LLM applications and ethical issues.

Limitations include the developing nature of ethical examination in this field, reliance on preprint sources, and a predominance of perspectives from North America and Europe.

Future research should focus on defining robust ethical guidelines, enhancing algorithm transparency, and ensuring equitable deployment of LLMs in global healthcare contexts.

Journal reference:

The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs). npj Digital Medicine.

Written by

Priyanjana Pramanik

Priyanjana Pramanik is a writer based in Kolkata, India, with an academic background in wildlife biology and economics. She has experience in teaching, science writing, and mangrove ecology. Priyanjana holds Master's degrees in Wildlife Biology and Conservation (National Centre of Biological Sciences, 2022) and Economics (Tufts University, 2018). In between master's degrees, she was a researcher in the field of public health policy, focusing on improving maternal and child health outcomes in South Asia. She is passionate about science communication and enabling biodiversity to thrive alongside people. The fieldwork for her second master's was in the mangrove forests of Eastern India, where she studied the complex relationships between humans, mangrove fauna, and seedling growth.

