Artificial intelligence algorithm can identify patients at high risk of intentional self-harm

According to the American Foundation for Suicide Prevention, suicide is the 10th leading cause of death in the U.S., with over 1.4 million suicide attempts recorded in 2018.

Although effective treatments are available for those at risk, clinicians do not have a reliable way of predicting which patients are likely to make a suicide attempt.

Researchers at the Medical University of South Carolina and University of South Florida report in JMIR Medical Informatics that they have taken important steps toward addressing the problem by creating an artificial intelligence algorithm that can automatically identify patients at high risk of intentional self-harm, based on the information in the clinical notes in the electronic health record.

The study was led by Jihad Obeid, M.D., co-director of the MUSC Biomedical Informatics Center, and Brian Bunnell, Ph.D., formerly at MUSC and currently an assistant professor in the Department of Psychiatry and Behavioral Neurosciences at the University of South Florida.

The team used complex artificial neural networks, a form of artificial intelligence also known as deep learning, to analyze unstructured, textual data in the electronic health record. Deep learning methods use successive layers of artificial neural networks to extract progressively higher-level information from raw input data.
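
To make the approach concrete, the sketch below shows one way such a text classifier could be structured in PyTorch: an embedding layer, a convolutional layer that picks up phrase-level patterns, and an output layer that produces a risk score. The architecture, layer sizes and names here are illustrative assumptions, not the specific models evaluated in the study.

```python
# Illustrative sketch only: a minimal deep text classifier of the general kind
# described above. Architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn

class NoteClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, num_filters=100, kernel_size=5):
        super().__init__()
        # Layer 1: map each word in a clinical note to a dense vector
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Layer 2: detect local, phrase-level patterns in the note text
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size)
        # Layer 3: combine the extracted features into a single risk score
        self.classifier = nn.Linear(num_filters, 1)

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))                     # phrase-level feature maps
        x = x.max(dim=2).values                          # pool over the whole note
        return torch.sigmoid(self.classifier(x))         # probability of the self-harm label
```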

The team showed that these models, once trained, could identify patients at risk of intentional self-harm.

"This kind of work is important because it leverages the latest technologies to address an important problem like suicide and identifies patients at risk so that they can be referred to appropriate management."

Jihad Obeid, MD, Co-Director of the MUSC Biomedical Informatics Center, and Brian Bunnell, Assistant Professor, Department of Psychiatry and Behavioral Neurosciences, University of South Florida

Thus far, researchers have primarily relied on structured data in the electronic health record for the identification and prediction of patients at risk. Structured data refers to tabulated information that has been entered into designated fields in the electronic health record as part of clinical care.

For example, when physicians diagnose patients and assign International Classification of Disease (ICD) codes, they are creating structured data. This sort of tabulated, structured data is easy for computer programs to analyze.

However, 80% to 90% of the pertinent information in the electronic health record is trapped in text format. In other words, the clinical notes, progress reports, plan-of-care notes and other narrative texts in the electronic health record represent an enormous untapped resource for research.

Obeid's study is unique because it uses deep neural networks to "read" clinical notes in the electronic health record and identify and predict patients at risk for self-harm.

After regulatory ethics review and approval of the proposed research by the Institutional Review Board at MUSC, Obeid began by identifying patient records associated with ICD codes indicative of intentional self-harm in MUSC's research data warehouse.
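
A minimal sketch of this cohort-selection step is shown below, assuming a tabular extract of diagnosis codes from the warehouse; the file name, column names and code prefixes are placeholders rather than the actual list used in the study.

```python
import pandas as pd

# Placeholder ICD-10 prefixes for intentional self-harm; the study's actual code list is not shown here
SELF_HARM_CODE_PREFIXES = ["X71", "X72", "X78"]

# Hypothetical extract of diagnosis codes from the research data warehouse
diagnoses = pd.read_csv("diagnoses.csv")   # columns assumed: patient_id, icd10_code

# Flag rows whose diagnosis code begins with one of the self-harm prefixes
is_self_harm = diagnoses["icd10_code"].str[:3].isin(SELF_HARM_CODE_PREFIXES)
cohort_ids = diagnoses.loc[is_self_harm, "patient_id"].unique()
```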

That warehouse, which was created with support from the South Carolina Clinical & Translational Research Institute, provides MUSC researchers access to patient electronic health record data, provided they have obtained the necessary permissions.

To simulate a real-world scenario, Obeid and his team divided the clinical records into two time periods: records from 2012 through 2017 were used to train the models, and records from 2018 through 2019 were used to test the trained models. First, they looked at the clinical notes taken during the hospital visit in which the ICD code was assigned.
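
In code, a temporal split of this kind might look like the following sketch, assuming a hypothetical notes table with a date column; all names are illustrative.

```python
import pandas as pd

# Hypothetical notes table; file and column names are illustrative
notes = pd.read_csv("clinical_notes.csv", parse_dates=["note_date"])

# Earlier years train the models, later years test them, mimicking prospective use
train_notes = notes[(notes["note_date"] >= "2012-01-01") & (notes["note_date"] <= "2017-12-31")]
test_notes  = notes[(notes["note_date"] >= "2018-01-01") & (notes["note_date"] <= "2019-12-31")]
```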

Using that as the training data set, the models "learned" which patterns of language in the clinical notes of the patients' electronic medical records were associated with the assignment of an ICD code of intentional self-harm. Once the models were trained, they could identify those patients based solely on their analysis of text in the clinical notes, with an accuracy of 98.5%. Experts manually reviewed a subset of records to confirm the model's accuracy.

Next, the team tested whether the most accurate of the models could use clinical notes in the electronic health record to predict future self-harm.

To this end, Obeid's team identified the records of patients who had presented with intentional self-harm and trained the model on the clinical notes written from six months to one month before the intentional self-harm hospital visit.

They then tested whether the trained models could correctly predict if these patients would later present with intentional self-harm.
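
A sketch of how such a look-back window could be assembled is shown below, assuming hypothetical note and visit tables keyed by a patient identifier; the six-month-to-one-month window mirrors the description above, but all file and column names are placeholders.

```python
import pandas as pd

# Hypothetical tables; file and column names are illustrative
notes  = pd.read_csv("clinical_notes.csv", parse_dates=["note_date"])     # patient_id, note_date, note_text
events = pd.read_csv("self_harm_visits.csv", parse_dates=["visit_date"])  # patient_id, visit_date

# Keep only notes written from six months to one month before the index visit
merged = notes.merge(events, on="patient_id")
lead_time = merged["visit_date"] - merged["note_date"]
window_notes = merged[(lead_time >= pd.Timedelta(days=30)) & (lead_time <= pd.Timedelta(days=180))]
```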

Predicting future self-harm based solely on clinical notes proved to be more challenging than identifying current at-risk patients due to the extra "noise" that is introduced when vast amounts of patient history are included in the model.

Historical clinical notes tend to be varied and not always relevant. For example, if a patient was seen for depression or other mental health issues six months prior to his or her hospital visit for intentional self-harm, then the clinical notes were likely to include relevant information.

However, if the patient came in for a condition unrelated to mental health, then the notes were less likely to include relevant information.

Although including irrelevant information introduces considerable noise into the analysis, all of it must be fed into the models for every patient if they are to predict outcomes. As a result, the models were less accurate at predicting which patients would later present with intentional self-harm than at simply classifying current patients for suicide risk.

Nonetheless, the predictive accuracy of this model was very competitive with that previously reported for models that relied on structured data, reaching an accuracy of almost 80% with relatively high sensitivity and precision.
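
For reference, the sketch below shows how accuracy, sensitivity and precision are typically computed with scikit-learn; the labels and predictions are made-up toy values, not the study's data.

```python
from sklearn.metrics import accuracy_score, recall_score, precision_score

# Toy labels and predictions for illustration only (1 = intentional self-harm)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]

print("accuracy:   ", accuracy_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))    # true-positive rate
print("precision:  ", precision_score(y_true, y_pred))
```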

Obeid's team has shown the feasibility of using deep learning models to identify patients at risk of intentional self-harm based on clinical notes alone. The study also showed that models can be used to predict, with fairly good fidelity, which patients will present in the future for intentional self-harm based on clinical notes in their electronic health record.

These early results are promising and could have a large impact at the clinical level. If deep learning models can be used to predict which patients are at high risk for suicide based on clinical notes, then clinicians can refer high-risk patients early for appropriate treatment.

Using these models to classify patients as at risk for self-harm could also facilitate enrollment into clinical studies and trials of potential new treatments relevant to suicide.

In future studies, Obeid aims to evaluate changes in the predictive time window for his models, for example, looking at records one year before a patient's presentation for intentional self-harm instead of six months.

The team also intends to examine other outcomes such as suicide or suicidal ideation. And while the models work well at MUSC, Obeid must now show that they can be generalized to other institutions.

"Can the models be trained in one location and transferred to another location and still work?" asked Obeid. "If the answer is yes, then this saves critical resources because other institutions will not have to perform expensive and time-consuming manual chart reviews to confirm that the models are getting it right during the training periods."

Journal reference:

Obeid, J. S., et al. (2020). Identifying and Predicting Intentional Self-Harm in Electronic Health Record Clinical Notes: Deep Learning Approach. JMIR Medical Informatics. https://doi.org/10.2196/17784
