Authors take in-depth look at risks associated with medical AI/ML systems

Artificial intelligence and machine learning (AI/ML) are increasingly transforming the healthcare sector. From spotting malignant tumours to reading CT scans and mammograms, AI/ML-based technology can be faster and more accurate than traditional devices - sometimes even than the best doctors. But along with the benefits come new risks and regulatory challenges.

In their article "Algorithms on regulatory lockdown in medicine", recently published in Science, Boris Babic, INSEAD Assistant Professor of Decision Sciences; Theodoros Evgeniou, INSEAD Professor of Decision Sciences and Technology Management; Sara Gerke, Research Fellow at Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics; and I. Glenn Cohen, Professor at Harvard Law School and Faculty Director at the Petrie-Flom Center, look at the new challenges facing regulators as they navigate the unfamiliar pathways of AI/ML.

They consider the questions: What new risks do we face as AI/ML devices are developed and implemented? How should they be managed? What factors do regulators need to focus on to ensure maximum value at minimal risk?

Until now, regulatory bodies like the U.S. Food and Drug Administration (FDA) have approved medical AI/ML-based software with "locked algorithms" - that is, algorithms that provide the same result each time and do not change with use. However, a key strength of most AI/ML technology is its ability to evolve as the model learns from new data. These "adaptive algorithms", made possible by AI/ML, create what is in essence a learning healthcare system, in which the boundaries between research and practice are porous.
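
To make the distinction concrete, the sketch below contrasts the two regimes on synthetic data. It is purely illustrative and not from the paper: the toy "patient" features, the drift pattern, and the choice of scikit-learn's SGDClassifier (whose partial_fit method supports incremental updates) are all assumptions made here.

```python
# Illustrative sketch only: a "locked" model frozen at approval time
# versus an "adaptive" one that keeps learning from post-approval data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n=500, drift=0.0):
    # Synthetic two-feature "patient" data; `drift` moves the true
    # decision boundary, mimicking a shift in the deployed population.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > drift).astype(int)
    return X, y

X0, y0 = make_batch()

# Both models are identical at approval time...
locked = SGDClassifier(random_state=0).fit(X0, y0)
adaptive = SGDClassifier(random_state=0).fit(X0, y0)

for month in range(1, 4):
    X_new, y_new = make_batch(drift=0.3 * month)  # gradual drift
    print(f"month {month}: "
          f"locked={locked.score(X_new, y_new):.2f} "
          f"adaptive={adaptive.score(X_new, y_new):.2f}")
    # ...but only the adaptive model updates as new data arrives.
    adaptive.partial_fit(X_new, y_new)
```

The point of the sketch: as the population drifts, the locked model's accuracy decays while the adaptive one tracks the change - exactly the ability regulators must decide whether, and how, to authorise.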

Given the significant value of this adaptive approach, a fundamental question for regulators today is whether authorisation should be limited to the version of the technology that was submitted and evaluated as safe and effective, or whether they should permit the marketing of an algorithm whose greater value lies in its ability to learn and adapt to new conditions.

The authors take an in-depth look at the risks associated with this update problem, considering the specific areas that require focus and the ways in which the challenges could be addressed.

The key to strong regulation, they say, is to prioritise continuous risk monitoring.

"To manage the risks, regulators should focus particularly on continuous monitoring and risk assessment, and less on planning for future algorithm changes," say the authors.

As regulators move forward, the authors recommend that they develop new processes to continuously monitor, identify, and manage the associated risks. They suggest key elements that could help, and that may in the future themselves be automated using AI/ML - possibly even having AI/ML systems monitor each other.
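
As one illustration of what such a process might look like in code - a sketch of our own, not a mechanism proposed in the paper - the hypothetical monitor below tracks a deployed model's rolling accuracy against its approval-time baseline and raises an alert when performance degrades beyond a set tolerance:

```python
from collections import deque

class PerformanceMonitor:
    """Hypothetical continuous monitor for a deployed AI/ML device.

    Tracks a rolling window of prediction outcomes and flags when
    accuracy drops materially below the approval-time baseline.
    """

    def __init__(self, baseline_accuracy, window_size=500, tolerance=0.05):
        self.baseline = baseline_accuracy          # accuracy at approval time
        self.outcomes = deque(maxlen=window_size)  # recent hits/misses
        self.tolerance = tolerance                 # allowed drop before alerting

    def record(self, prediction, ground_truth):
        # Log whether the deployed model's prediction was correct.
        self.outcomes.append(int(prediction == ground_truth))

    def check(self):
        # Withhold judgement until the window is full.
        if len(self.outcomes) < self.outcomes.maxlen:
            return None
        current = sum(self.outcomes) / len(self.outcomes)
        if current < self.baseline - self.tolerance:
            return (f"ALERT: rolling accuracy {current:.2f} is below "
                    f"baseline {self.baseline:.2f} - review required")
        return None
```

A real deployment would monitor far more than headline accuracy - calibration, input-data distributions, and performance across patient subgroups - and, as the authors note, the monitoring layer could in the future itself be an AI/ML system watching another.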

While the paper draws largely on the FDA's experience in regulating biomedical technology, the lessons and examples have broad relevance as other countries shape their own regulatory architecture. They are also relevant for any business that develops products and services with embedded AI/ML, from automotive to insurance, financial services, energy, and increasingly many other sectors. Executives in all organisations have much to learn about managing new AI/ML risks from how regulators think about them today.

"Our goal is to emphasise the risks that can arise from unanticipated changes in how medical AI/ML systems react or adapt to their environments," say the authors, warning that, "Subtle, often unrecognised parametric updates or new types of data can cause large and costly mistakes."

Journal reference:

Babic, B. et al. (2019). Algorithms on regulatory lockdown in medicine. Science. https://doi.org/10.1126/science.aay9547
