In an editorial written in her capacity as director of the National Cancer Institute, Monica M. Bertagnolli assesses the promise of artificial intelligence and machine learning (AI/ML) for studying and improving health.
AI/ML offer powerful new tools for analyzing highly complex datasets, and researchers across biomedicine are taking advantage of them. However, Dr. Bertagnolli argues, human judgment is still required. Humans must select and develop the right computational models and ensure that the data used to train machine learning models are relevant, complete, high quality, and sufficiently abundant. Many machine learning insights also emerge from a "black box," with no transparency into the logic underlying the predictions, which can impede acceptance of AI/ML-informed methods in clinical practice. "Explainable AI" can crack open that box, giving researchers more access to the causal links the methods are capturing.

AI/ML-informed methods must also meet patient needs in the real world, so interdisciplinary collaborations should include those engaged in clinical care. Researchers must likewise watch for bias: unrecognized confounders such as race and socioeconomic status can produce results that discriminate against some patient groups. AI/ML is an exciting new tool, but one that demands increased responsibility. Ultimately, AI is only as smart, and as responsible, as the humans who wield it.
In the same issue, Victor J. Dzau, President of the National Academy of Medicine, shares his perspective on the same topic.
Journal reference:
Bertagnolli, M. M. (2023). Advancing health through artificial intelligence/machine learning: The critical importance of multidisciplinary collaboration. PNAS Nexus. https://doi.org/10.1093/pnasnexus/pgad356