From potential to practice: A blueprint for responsible AI in healthcare

A groundbreaking framework bridges the gap between AI's potential and real-world healthcare applications, spotlighting its role in transforming patient care while maintaining safety and equity.

Study: Establishing responsible use of AI guidelines: a comprehensive case study for healthcare institutions. Image Credit: Collagery / Shutterstock

A recent study published in the journal npj Digital Medicine presented comprehensive guidelines for the responsible integration of artificial intelligence (AI) into healthcare.

AI integration into medicine has advanced markedly over time. Deep learning models have demonstrated remarkable capabilities in detecting malignant breast lesions, lung nodules, and diabetic retinopathy, among other conditions. Further, these models show promise in improving clinical decision-making, facilitating patient triage, and providing therapeutic recommendations.

In addition, large language models (LLMs) have expanded AI's potential in healthcare. However, like other healthcare technologies, LLMs warrant scrutiny, safety monitoring, and validation. AI also presents new challenges, such as variability in performance across clinical settings, evolving disease patterns, and demographic shifts. Additionally, issues related to patient privacy, training protocols, usability, and workflow adaptation remain significant considerations.

LLMs face heightened scrutiny because they can generate irrelevant or inaccurate content, omit crucial details, and fabricate nonexistent information. Regulatory bodies have begun adapting to the rapid pace of AI development, and several leading entities have started establishing high-level guidelines. Despite these efforts, a "tangible gap" persists in ensuring their consistent implementation across diverse healthcare settings.

About the Study

In the present study, researchers at Harvard Medical School and the Mass General Brigham AI Governance Committee developed comprehensive guidelines for integrating AI into healthcare effectively and responsibly. They formed a cross-functional team of 18 experts from various domains, including informatics, research, legal, data analytics, equity, privacy, safety, patient experience, and quality. To identify critical themes, the team performed an extensive search of the peer-reviewed and gray literature on topics such as AI governance and implementation.

The researchers focused on nine principles: fairness, robustness, equity, safety, privacy, explainability, transparency, benefit, and accountability. Additionally, three focus groups, each with four to seven expert members, were established to refine the guidelines: (1) robustness and safety, (2) fairness and privacy, and (3) transparency, accountability, and benefit.

Next, the team focused on developing and executing a structured framework to facilitate the application of AI guidelines within a healthcare setting. They selected generative AI and its application in ambient documentation systems as a representative case study. This choice reflected the unique challenges of monitoring generative AI technologies, such as ensuring patient privacy and mitigating AI hallucinations.

A pilot study was first conducted with select individuals from different departments. Privacy and security were central: only strictly de-identified data was shared with the vendor to enable continuous updates and improvements, and the team worked with the vendor to establish de-identification standards, data-retention policies, and controls restricting use of the data solely to improving model performance.
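
The article does not describe the de-identification pipeline itself, but the idea can be illustrated. The Python sketch below is a minimal, hypothetical example of scrubbing obvious identifiers from a note before export to a vendor; the field names and patterns are assumptions for illustration, and a real clinical pipeline would follow HIPAA Safe Harbor or expert-determination standards with far more exhaustive coverage.

```python
import re

# Hypothetical identifier patterns for illustration only; a production
# de-identification pipeline would be far more comprehensive.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def deidentify_note(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

def export_for_vendor(note: str) -> str:
    # Only de-identified text ever leaves the institution; under the
    # agreement described above, the vendor may use it solely to
    # improve model performance.
    return deidentify_note(note)

print(export_for_vendor("Pt seen 03/14/2024, MRN: 1234567, callback 617-555-0142."))
```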

Subsequently, the team implemented a shadow deployment phase in which AI systems were operated in parallel with existing workflows without affecting patient care. After shadow deployment, key performance metrics, such as fairness across demographics, usability, and workflow integration, were rigorously evaluated.
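
Shadow deployment is a general engineering pattern: the new system runs on live inputs, its outputs are logged for offline evaluation, and only the existing workflow's output reaches the user. The Python sketch below illustrates the pattern; the Encounter structure and function names are illustrative assumptions, not the study's actual implementation.

```python
from dataclasses import dataclass
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

@dataclass
class Encounter:
    encounter_id: str
    transcript: str

def existing_workflow(enc: Encounter) -> str:
    # Stand-in for the clinician's current manual documentation process.
    return f"Manual note for {enc.encounter_id}"

def ai_draft_note(enc: Encounter) -> str:
    # Stand-in for the ambient documentation model (hypothetical).
    return f"AI draft from {len(enc.transcript)} chars of transcript"

def handle_encounter(enc: Encounter) -> str:
    note = existing_workflow(enc)          # the care path is unchanged
    try:
        shadow_note = ai_draft_note(enc)   # the AI runs in parallel
        # The shadow output is stored for later comparison against the
        # manual note; it is never shown to the clinician or charted.
        log.info("shadow output for %s logged for offline evaluation",
                 enc.encounter_id)
    except Exception:
        log.exception("shadow path failed; patient care unaffected")
    return note

print(handle_encounter(Encounter("E001", "...transcript...")))
```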

Findings

The researchers identified several components critical to the responsible implementation of AI in healthcare. Diverse, demographically representative training datasets should be mandated to reduce bias, and outcomes should be evaluated through an equity lens. Regular equity evaluations should include model reengineering to ensure fair benefits across patient populations.
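
One simple way to operationalize an equity lens is to stratify a quality metric by demographic group and flag groups that fall materially below the best-performing one. The Python sketch below illustrates this with hypothetical review scores and an arbitrary tolerance; the study does not specify its actual metrics or thresholds.

```python
from collections import defaultdict

# Hypothetical per-encounter evaluation records: each pairs a demographic
# group label with a quality score assigned during note review.
records = [
    {"group": "group_a", "score": 0.92},
    {"group": "group_a", "score": 0.88},
    {"group": "group_b", "score": 0.79},
    {"group": "group_b", "score": 0.81},
]

def mean_score_by_group(rows):
    """Average the quality score within each demographic group."""
    totals = defaultdict(lambda: [0.0, 0])
    for row in rows:
        totals[row["group"]][0] += row["score"]
        totals[row["group"]][1] += 1
    return {g: s / n for g, (s, n) in totals.items()}

def flag_disparities(by_group, tolerance=0.05):
    """Flag groups more than `tolerance` below the best-performing group."""
    best = max(by_group.values())
    return {g: m for g, m in by_group.items() if best - m > tolerance}

by_group = mean_score_by_group(records)
print(by_group)
print("needs review:", flag_disparities(by_group))
```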

Transparent communication of an AI system's Food and Drug Administration (FDA) status is equally critical: specifying whether FDA approval is required and detailing the system's current status could help ensure compliance and build trust. A risk-based approach should be adopted to monitor AI systems, such that applications posing higher risk to care outcomes receive more robust monitoring than those with no or minimal risk.
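
A risk-based monitoring regime can be expressed as a simple tiering of audit cadence by application risk. The Python sketch below is illustrative only; the tiers and intervals are hypothetical placeholders, and actual values would be set by a governance committee like the one described in the study.

```python
from enum import Enum

class Risk(Enum):
    MINIMAL = "minimal"
    MODERATE = "moderate"
    HIGH = "high"

# Hypothetical monitoring cadences (days between audits) per risk tier.
AUDIT_INTERVAL_DAYS = {
    Risk.MINIMAL: 180,
    Risk.MODERATE: 30,
    Risk.HIGH: 7,
}

def monitoring_plan(application: str, risk: Risk) -> str:
    """Map an application's risk tier to its audit cadence."""
    interval = AUDIT_INTERVAL_DAYS[risk]
    return f"{application}: audit every {interval} days ({risk.value} risk)"

print(monitoring_plan("ambient documentation", Risk.MODERATE))
print(monitoring_plan("triage recommendations", Risk.HIGH))
```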

The preliminary pilot phase enabled comprehensive functionality assessments and feedback collection, which were crucial for identifying issues early in the implementation. During shadow deployment, most users of the AI system came from the emergency medicine and internal medicine departments.

Feedback revealed both strengths and areas for improvement. Most criticisms focused on documenting physical examinations, while the system received praise for its accuracy when working with interpreters or patients with strong accents.

Conclusions

In sum, the study illustrated a methodology for incorporating AI into healthcare. This multidisciplinary approach provided a blueprint for non-profit organizations, healthcare institutions, and government bodies aiming to implement and monitor AI responsibly. The case study highlighted challenges such as balancing ethical considerations with clinical utility and underscored the importance of ongoing collaboration with vendors to refine AI systems.

Future work will focus on expanding testing to include broader demographic and clinical case diversity while automating performance monitoring. These efforts aim to ensure that AI systems remain adaptable and equitable across various healthcare environments.

The study demonstrates the importance of continuous evaluation, monitoring, and adaptation of AI systems to ensure efficacy and relevance in challenging clinical settings.

Journal reference:
  • Saenz, A. D., Centi, A., Ting, D., You, J. G., Landman, A., & Mishuris, R. G. (2024). Establishing responsible use of AI guidelines: A comprehensive case study for healthcare institutions. npj Digital Medicine, 7(1), 1–6. DOI: 10.1038/s41746-024-01300-8, https://www.nature.com/articles/s41746-024-01300-8
Written by

Tarun Sai Lomte

Tarun is a writer based in Hyderabad, India. He has a Master’s degree in Biotechnology from the University of Hyderabad and is enthusiastic about scientific research. He enjoys reading research papers and literature reviews and is passionate about writing.
