AI-based model can use ECG images to diagnose multiple heart rhythm and conduction disorders

Researchers at the Yale Cardiovascular Data Science (CarDS) Lab have developed an artificial intelligence (AI)-based clinical diagnosis model that can use electrocardiogram (ECG) images, regardless of format or layout, to diagnose multiple heart rhythm and conduction disorders.

The team, led by Dr. Rohan Khera, assistant professor in cardiovascular medicine, developed a novel multilabel automated diagnosis model that works from ECG images. ECG Dx©, the latest tool from the CarDS Lab, is designed to make AI-based ECG interpretation accessible in remote settings. The researchers hope the new technology provides an improved method of diagnosing key cardiac disorders. The findings were published in Nature Communications on March 24.

The first author of the study is Veer Sangha, a computer science major at Yale College. "Our study suggests that image and signal models performed comparably for clinical labels on multiple datasets," said Sangha. "Our approach could expand the applications of artificial intelligence to clinical care targeting increasingly complex challenges."

As mobile technology improves, patients increasingly have access to ECG images, which raises new questions about how to incorporate these devices into patient care. Under Khera's mentorship, Sangha's research at the CarDS Lab analyzes multi-modal inputs from electronic health records to design potential solutions.

The model was developed using data from more than 2 million ECGs from more than 1.5 million patients who received care in Brazil between 2010 and 2017. One in six patients was diagnosed with a rhythm disorder. The tool was independently validated against multiple international data sources and showed high accuracy for clinical diagnosis from ECGs.

Machine learning (ML) approaches, specifically those that use deep learning, have transformed automated diagnostic decision-making. For ECGs, they have led to tools that allow clinicians to find hidden or complex patterns. However, existing deep learning tools use signal-based models, which, according to Khera, have not been optimized for remote health care settings. Image-based models may therefore improve automated diagnosis from ECGs.
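To make the multilabel, image-based idea concrete, the sketch below shows how a generic image classifier can be set up so that each rhythm or conduction label receives its own independent probability. This is an illustrative example only, using an assumed EfficientNet backbone and a hypothetical set of six labels; it is not the published ECG Dx architecture or training code.

```python
# Illustrative sketch only: a generic multilabel image classifier of the kind
# described above, NOT the published ECG Dx model or its training code.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 6  # hypothetical number of rhythm/conduction labels


class ECGImageClassifier(nn.Module):
    """Convolutional backbone with one output logit per clinical label."""

    def __init__(self, num_labels: int = NUM_LABELS):
        super().__init__()
        self.backbone = models.efficientnet_b0(weights=None)  # any CNN backbone works
        in_features = self.backbone.classifier[1].in_features
        # Replace the single-class head with one logit per clinical label.
        self.backbone.classifier[1] = nn.Linear(in_features, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logits; apply sigmoid at inference


model = ECGImageClassifier()
images = torch.randn(4, 3, 224, 224)                     # a batch of ECG images
targets = torch.randint(0, 2, (4, NUM_LABELS)).float()   # co-occurring labels

# Binary cross-entropy per label lets several diagnoses be positive at once,
# unlike a softmax classifier that forces a single diagnosis per ECG.
loss = nn.BCEWithLogitsLoss()(model(images), targets)
probabilities = torch.sigmoid(model(images))             # independent probability per label
```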

"There are a number of clinical and technical challenges when using AI-based applications.

"Current AI tools rely on raw electrocardiographic signals instead of stored images, which are far more common because ECGs are often printed and scanned as images. Also, many AI-based diagnostic tools are designed for individual clinical disorders and may therefore have limited utility in a clinical setting where multiple ECG abnormalities co-occur. A key advance is that the technology is designed to be smart: it is not dependent on specific ECG layouts and can adapt to existing variations and new layouts. In that respect, it can perform like expert human readers, identifying multiple clinical diagnoses across different formats of printed ECGs that vary across hospitals and countries."

Dr. Rohan Khera, assistant professor in cardiovascular medicine
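One way to picture the layout independence Khera describes is to render the same raw ECG recording into differently formatted plot images, so that a model trained on images encounters many printed layouts of the same signal. The sketch below is a hedged illustration of that idea using synthetic data and matplotlib; it is not the lab's published preprocessing pipeline, and the layouts and lead names shown are assumptions for demonstration.

```python
# Illustrative sketch: rendering one ECG recording into different page layouts.
# This mimics the idea of exposing an image model to varied printed formats;
# it is an assumption for illustration, not the published ECG Dx preprocessing.
import numpy as np
import matplotlib.pyplot as plt

LEAD_NAMES = ["I", "II", "III", "aVR", "aVL", "aVF",
              "V1", "V2", "V3", "V4", "V5", "V6"]


def render_ecg_image(signals: np.ndarray, rows: int, cols: int, path: str) -> None:
    """Plot a 12-lead ECG (leads x samples) in a rows x cols grid and save it."""
    fig, axes = plt.subplots(rows, cols, figsize=(cols * 3, rows * 1.5))
    for ax, lead, name in zip(np.ravel(axes), signals, LEAD_NAMES):
        ax.plot(lead, linewidth=0.7, color="black")
        ax.set_title(name, fontsize=6, loc="left")
        ax.axis("off")
    fig.savefig(path, dpi=100)
    plt.close(fig)


# Synthetic stand-in for a 12-lead, 10-second recording sampled at 500 Hz.
ecg = np.random.randn(12, 5000) * 0.1

# The same recording rendered in two common print layouts (3x4 and 6x2).
render_ecg_image(ecg, rows=3, cols=4, path="ecg_3x4.png")
render_ecg_image(ecg, rows=6, cols=2, path="ecg_6x2.png")
```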
