FDA strengthens AI regulation to ensure patient safety and innovation in healthcare

As AI technology evolves rapidly, the FDA is tackling the challenge of balancing innovation with patient safety, shaping regulations that ensure AI tools remain effective throughout their entire lifecycle.

Special Communication: FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine. Image Credit: Sansoen Saengsakaorat / Shutterstock

A Special Communication published in the Journal of the American Medical Association (JAMA) examined the regulation of artificial intelligence (AI) in healthcare by the United States (U.S.) Food and Drug Administration (FDA). It also explored AI's potential in clinical research, medical product development, and patient care, and highlighted the key areas that must be addressed as regulations are adapted to AI's distinctive challenges in biomedicine and healthcare.

Background

Advances in AI hold immense potential to transform biomedicine and healthcare. Expectations for AI often exceed those for previous medical technologies such as telemedicine, digital health tools, and electronic health records. While many of those technologies were breakthroughs, the capabilities of AI tools in data analysis, diagnostics, and personalized care are revolutionary.

However, the use of AI in medicine and healthcare also raises significant concerns about oversight and regulation. The U.S. FDA has long been developing regulations for incorporating AI into medical product development and healthcare, but the dynamic nature of AI presents unique regulatory challenges, particularly in the areas of effectiveness, safety, postmarket performance, and accountability. Furthermore, the rapid pace at which AI technology evolves requires regulatory frameworks that can be adapted quickly.

FDA regulations for AI in medicine

According to the review, FDA regulation of AI-enabled medical products began in 1995 with the approval of PAPNET, an AI-based tool that pathologists could use in cervical cancer diagnosis. Although PAPNET was not widely adopted because of its high cost, the FDA has since authorized close to 1,000 AI-enabled medical devices and products, with applications largely in radiology and cardiology.

AI is also being widely used in drug development, including drug discovery, clinical trials, and dosage optimization. Furthermore, while AI-based applications have become more common in the field of oncology, there is growing interest in applying AI to mental health, where digital technologies have the potential for significant impact.

The number of regulatory submissions involving AI in drug development received by the FDA has increased tenfold in a single year. Given the wide range of applications and the complexity of AI, the FDA has adapted its regulatory framework to be risk-based while remaining mindful of how AI evolves in real-world clinical settings.

A five-point action plan introduced by the FDA in 2021 for regulating machine learning- and AI-based medical devices aims to foster innovation while ensuring the effectiveness and safety of these products. The plan is also consistent with Congressional guidance encouraging the FDA to create regulations flexible enough to allow developers to update AI products without seeking repeated approvals from the agency.

However, the article underscores that these regulations must account for the need to manage AI products throughout their entire life cycle, particularly through continuous monitoring of their performance after deployment in clinical settings.

The FDA's medical product centers have also identified four areas of focus for AI development: enhancing public health safety, supporting regulatory innovation, promoting best practices and harmonized standards, and advancing research on the evaluation of AI performance.

Key concepts for FDA regulation of AI

The FDA aims to ground the regulation of AI-enabled medical products in U.S. law as well as global standards. Collaboration with bodies such as the International Medical Device Regulators Forum allows the FDA to promote harmonized AI standards across the globe, including managing AI's role in drug development and modernizing clinical trials through international cooperation.

With the rapid evolution of AI technology, one of the major challenges for the FDA is effectively processing large volumes of AI submissions while ensuring that innovation is not hindered and safety is not compromised. Moreover, continuous postmarket surveillance of AI systems is crucial to ensure they function as intended over time, especially in diverse and evolving clinical environments. This requires a flexible, science-based regulatory framework, such as the Software Precertification Pilot, which allows continuous assessment of AI products.

The risk-based approach to regulating AI-enabled medical devices also allows flexibility across a wide range of AI models. For example, simple AI models used for administrative functions face lighter regulation, while complex AI models, such as those embedded in cardiac defibrillators, are subject to stricter requirements.

Another example provided by the reviewers was the Sepsis ImmunoScore, an AI-based tool for detecting sepsis, which was classified as a Class II device requiring special controls to address the risks of potential bias or algorithm failure.

The review emphasizes that specialized regulatory tools are needed to evaluate the growing number of AI models, including generative AI and large language models. This is particularly important because of the risks posed by unpredictable outputs, such as incorrect diagnoses, which will need thorough assessment both before and after deployment in clinical workflows.

Conclusions

To summarize, the review indicated that flexible regulatory approaches, coordinated efforts across industry, international organizations, and governments, and rigorous oversight by the FDA are vital for keeping pace with the rapid development of AI in medicine and for ensuring the efficacy and safety of AI tools.

The authors argue that rigorous postmarket monitoring across the entire life cycle of AI tools is essential to ensure they continue to perform safely and effectively in clinical practice. They also contend that the integration of AI into healthcare should be guided by patient health outcomes rather than financial optimization, and they caution that balancing innovation with patient care must remain a priority so that AI is not driven primarily by financial incentives.

Journal reference:
Warraich, H. J., Tazbaz, T., & Califf, R. M. (2024). FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine. JAMA.

Written by

Dr. Chinta Sidharthan

Chinta Sidharthan is a writer based in Bangalore, India. Her academic background is in evolutionary biology and genetics, and she has extensive experience in scientific research, teaching, science writing, and herpetology. Chinta holds a Ph.D. in evolutionary biology from the Indian Institute of Science and is passionate about science education, writing, animals, wildlife, and conservation. For her doctoral research, she explored the origins and diversification of blindsnakes in India, as a part of which she did extensive fieldwork in the jungles of southern India. She has received the Canadian Governor General’s bronze medal and Bangalore University gold medal for academic excellence and published her research in high-impact journals.
