New applications of artificial intelligence (AI) in health care settings have shown early success in improving survival and outcomes in traffic accident victims transported by ambulance and in predicting survival after liver transplantation, according to two research studies presented at the virtual American College of Surgeons Clinical Congress 2020.
Both studies evaluated how AI can crunch massive amounts of data to support decision-making by surgeons and other care providers at the point of care.
In one study, researchers at the University of Minnesota applied a previously published AI approach known as natural language processing (NLP)1 to categorize treatment needs and medical interventions for 22,529 motor vehicle crash patients whom emergency medical service (EMS) personnel transported to ACS-verified Level I trauma centers in Minnesota.
According to a 2016 study by the National Academies of Sciences, Engineering, and Medicine, 20 percent of medical injury deaths are potentially preventable,2 representing a quality gap the researchers sought to address.
"Reviewing the performance of EMS teams to profile potentially preventable deaths can enable quality improvement efforts to reduce these deaths. Currently, this process for performance review is manual, time-consuming, and expensive. AI allows possible automation of this process."
Christopher James Tignanelli, MD, FACS, Study Senior Author
NLP is an AI technique that extracts key data from spoken or written text that providers--EMS personnel in this study--enter into the electronic record as a key component of their report. Dr. Tignanelli is an assistant professor of surgery, division of acute care surgery, at the University of Minnesota Medical School, and affiliate faculty at the Institute for Health Informatics at the University of Minnesota.
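The article does not detail how the pipeline works internally, but a minimal, hypothetical Python sketch of the core idea (scanning free-text EMS notes for mentions of key interventions) could look like the following; the patterns and category names are invented for illustration:

```python
import re

# Hypothetical keyword patterns; the study's actual NLP model and
# intervention categories are not described in this article.
INTERVENTION_PATTERNS = {
    "airway": re.compile(r"intubat\w*|cricothyrotomy|king airway", re.IGNORECASE),
    "io_access": re.compile(r"intraosseous|io access|io line", re.IGNORECASE),
    "splinting": re.compile(r"splint\w*", re.IGNORECASE),
}

def extract_interventions(note: str) -> set:
    """Return the intervention categories mentioned in one EMS note."""
    return {name for name, pattern in INTERVENTION_PATTERNS.items()
            if pattern.search(note)}

note = "Pt unresponsive on scene; intubated, IO access established in left tibia."
print(extract_interventions(note))  # e.g. {'airway', 'io_access'}
```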
In this study, two trauma surgeons independently and manually reviewed a random selection of 1 percent of patient records and determined treatment needs and medical interventions. To evaluate the accuracy of the AI system, the manual determinations were compared with the NLP determinations. "Overall the algorithm performed with very high accuracy," Dr. Tignanelli said.
Typically, after EMS personnel enter their notes into the electronic health record, oversight personnel comb through them and determine whether the patient received appropriate care, usually a week or so afterward.
"That's quite a labor-intensive process," said presenting author Jacob Swann, MD, a burn and trauma fellow at Regions Hospital in St. Paul, Minn. "The goal of this project and what it validated was to automate a lot of those notes."
The NLP approach ran those notes through an algorithm to separate notes documenting consequential medical interventions from less consequential ones. "That can streamline the manual review process," Dr. Swann said. "It's not performed at the accuracy level that would allow you to take the physician out of it and say that AI can determine with complete accuracy if the standard of care was given or not, but it does perform well."
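The specific algorithm is not described in the article. One common approach to this kind of note triage is a supervised text classifier trained on manually labeled examples; the sketch below uses scikit-learn with invented notes and labels purely to illustrate the pattern:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative labeled notes: 1 = consequential intervention documented, 0 = not.
notes = [
    "intubated on scene, IO access in left tibia",
    "patient ambulatory, refused transport",
    "needle decompression performed for tension pneumothorax",
    "minor abrasion cleaned and dressed",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(notes, labels)

print(classifier.predict(["splint applied, transported without other intervention"]))
```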
The AI pipeline Dr. Swann and colleagues studied determined that only about one quarter of patients who needed an airway intervention (242 of 936) actually got one before they arrived at the hospital, and that about two-thirds (110 of 170) of those who did not have adequate intravenous access and needed access into the bone, known as intraosseous (IO) access, during advanced cardiac life support did receive IO access.
"Being able to identify systemic errors allows you to improve the entire health system," Dr. Swann said. "Having the ability to look at large aggregate data and go through 330,000 charts over several minutes with an AI-reading algorithm, to identify specific areas for potential improvement--whether it's getting intravenous access in our patients or having problems with splinting long bone fractures--allows you separate the signal from the noise and then figure out where the problem lies."
The "holy grail," noted Dr. Swann, is to have an AI system that can listen and observe EMS personnel during en route care and assist with complex decision making by recommending care options in real time.
For the second AI study, researchers at Baylor College of Medicine, Houston, tested four different machine-learning models for predicting survival after liver transplantation. The two models that showed high accuracy for predicting survival are known as the Random Forest and AdaBoost models.
Lead author Rowland Pettit, an MD-PhD candidate at Baylor, explained that Random Forest (RF) is an ensemble learning method that combines the outputs of multiple decision trees and predicts an outcome by a "majority wins" approach.
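As a concrete illustration of that "majority wins" idea, here is a brief scikit-learn sketch on synthetic data; none of the study's actual features, settings, or results are reflected here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data; the study used 324 real disease characteristics.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Each tree predicts independently; the forest aggregates the trees
# (scikit-learn averages tree probabilities, which behaves like a majority vote).
tree_votes = [int(tree.predict(X[:1])[0]) for tree in forest.estimators_]
print("trees voting class 1:", sum(tree_votes), "of", len(tree_votes))
print("forest prediction:", forest.predict(X[:1])[0])
```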
The models took into account a total of 324 disease characteristics to determine survivability. The strongest factors were acuity of illness and the recipient's disease course, Mr. Pettit said.
The study included all 109,742 adult patients in the United Network for Organ Sharing (UNOS) database who underwent a single liver transplant since the database's inception in 1984.
The RF model's accuracy, reported as the area under the receiver operating characteristic curve (AUC), was 80 percent for predicting survival at one month, 79 percent at three months, 75 percent at one year, and 73 percent at three and five years. None of the other models exceeded 70 percent.
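AUC measures how well a model's predicted risks rank patients who survived above those who did not, with 1.0 being a perfect ranking and 0.5 no better than chance. A minimal sketch of computing a horizon-specific AUC with scikit-learn, using invented labels and predictions:

```python
from sklearn.metrics import roc_auc_score

# Invented example: 1 = patient survived to the one-year horizon, 0 = did not.
survived_1yr = [1, 1, 0, 1, 0, 1, 0, 1]
# The model's predicted probability of one-year survival for the same patients.
predicted_1yr = [0.92, 0.81, 0.35, 0.67, 0.72, 0.88, 0.21, 0.74]

# One survivor (0.67) is ranked below one non-survivor (0.72), so AUC is ~0.93.
print(roc_auc_score(survived_1yr, predicted_1yr))
```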
"The most readily accessible application of these models would be for regulation, providing immediate feedback to clinicians about their outcomes for the past year and how they and their centers performed compared to others," Mr. Pettit said.
"Being able to accurately predict whether a patient should have survived or not is crucial to then accurately providing feedback."
This type of AI model also has the potential to integrate with electronic medical record systems and physician workflows to provide benchmarks, he added.
"It would be very easy with an integrated model to run predictions for every patient on a liver transplant waiting list in real time and determine the probability of each patient living at one, three or five years," he said. "This step is not to make the decision for the clinician, but to add a further clinician-assistance decision-making tool to give them quantitative data for use in organ allocation decisions."
Senior author of the Baylor AI study is Abbas A. Rana, MD, FACS, assistant professor of surgery, division of abdominal transplantation, Baylor College of Medicine. Study coauthors are Stuart Corr, PhD, and Jim Havelka, MBA, of Baylor College of Medicine.
Coauthors of the Minnesota AI study on traffic accident victims are Greg M. Silverman; Elizabeth A. Lindemann; Lori Boland, MPH; Jon C. Gibson, MD; Charles J. Lick, MD; Benjamin C. Knoll; Serguei Pakhomov, PhD; and Genevieve B. Melton, MD, PhD, all of the University of Minnesota.
Mr. Pettit and Drs. Rana and Swann have no disclosures related to this research. Dr. Tignanelli disclosed having a patent pending for an AI model. His coauthors have no disclosures related to this research.