Use of artificial intelligence to diagnose lagophthalmos

In a recent study published in Scientific Reports, researchers used a convolutional neural network (CNN) to automate lagophthalmos diagnosis.

Study: Diagnosing lagophthalmos using artificial intelligence. Image Credit: Joyseulay/Shutterstock.com

Background

Lagophthalmos is characterized by an inadequate or aberrant eyelid closure, which increases the risk of corneal ulcers and blindness.

It is a frequent sign of many diseases and occurs in three forms: cicatricial (CL), paralytic (PL), and nocturnal (NL) lagophthalmos. Complete eyelid closure is required to maintain a stable tear film and a hydrated ocular surface.

However, in individuals with CL, PL, or NL, the tear fluid does not sufficiently wet the eyes, and the resulting dryness can cause keratitis and keratopathy, leading to corneal ulcers, reduced vision, or blindness. Early detection and tailored therapy are critical to avoiding these complications.

Algorithm-based tools with automated diagnostics offer several advantages, including a reduced reliance on expert knowledge and the ability to substantiate or refute suspected clinical diagnoses in ambiguous patient cases.

About the study

In the present study, researchers demonstrated a new approach that uses still-image processing with a CNN to identify visual patterns and, ultimately, diagnose lagophthalmos.

The team studied 30 lagophthalmos patients treated at Regensburg University Hospital in Germany between June 2019 and May 2021, along with 10 disease-free adults who served as a control group. The training dataset included 826 photographs.

The validation and testing datasets each contained 91 patient photographs. After 17 minutes of training, the mean training and validation losses were 0.3 and 0.4, and the final losses were 0.3 and 0.2, respectively.

The researchers obtained a testing accuracy of 93% with a loss of 0.2. The study included patients aged 18 years or older with signs of lagophthalmos who agreed to therapy and study participation. Patients who could not provide informed consent because they could not speak German or were illiterate were excluded.

The researchers used Python 3.7 and standard machine learning and data science modules to train and assess the CNN on 1,008 patient photographs.
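For readers interested in how such a pipeline might be set up, the minimal sketch below shows one way the photographs could be organised and loaded. The use of TensorFlow/Keras, the directory layout, the image size, and the batch size are illustrative assumptions; the study only states that Python 3.7 and standard machine-learning libraries were used.

```python
# Minimal data-loading sketch (assumptions: TensorFlow/Keras, folder layout,
# image size, batch size). The study reports 826 training, 91 validation,
# and 91 testing photographs (1,008 in total).
import tensorflow as tf

IMG_SIZE = (128, 128)   # assumed input resolution
BATCH_SIZE = 32         # assumed batch size

def load_split(directory):
    """Load a binary-labelled image folder (lagophthalmos vs. control)."""
    return tf.keras.utils.image_dataset_from_directory(
        directory,
        labels="inferred",
        label_mode="binary",
        image_size=IMG_SIZE,
        batch_size=BATCH_SIZE,
    )

train_ds = load_split("data/train")        # 826 photographs
val_ds = load_split("data/validation")     # 91 photographs
test_ds = load_split("data/test")          # 91 photographs
```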

The network was designed as a lightweight CNN with three convolutional layers to limit the number of parameters, and a dropout layer was included as a regularization strategy to prevent overfitting.

They trained the CNN for 64 epochs; additional epochs were avoided to prevent overfitting. The model with the best validation accuracy during training was then assessed on the testing set.

The model used rectified linear units (ReLUs) in each of the three convolutional layers, a max-pooling layer after each convolutional layer, a flatten layer, and two dense layers that map the distinct image features to a specific output.
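A minimal sketch of such a lightweight architecture is shown below: three convolutional layers with ReLU activations, max-pooling after each convolution, a dropout layer for regularization, a flatten layer, and two dense layers. The filter counts, kernel sizes, dropout rate, and dense-layer width are illustrative assumptions rather than values reported by the authors.

```python
# Illustrative lightweight three-convolution CNN (hyperparameters assumed).
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(128, 128, 3)):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),               # normalise pixel values
        layers.Conv2D(16, 3, activation="relu"),   # conv layer 1 (ReLU)
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),   # conv layer 2 (ReLU)
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # conv layer 3 (ReLU)
        layers.MaxPooling2D(),
        layers.Dropout(0.5),                       # regularisation against overfitting
        layers.Flatten(),
        layers.Dense(64, activation="relu"),       # dense layer 1
        layers.Dense(1, activation="sigmoid"),     # dense layer 2: lagophthalmos vs. control
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```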

The model weights were never trained on testing data, and no synthetically generated training images were used, in order to retain the complexity of genuine patient photographs and to prevent overfitting.
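The sketch below illustrates a training step consistent with these constraints, reusing the loading and model-building sketches above: 64 epochs, no synthetic or augmented images, no exposure of the weights to the testing set, and retention of the weights with the best validation accuracy. The callback configuration and file name are assumptions for illustration.

```python
# Illustrative training loop (uses build_model(), train_ds, val_ds, test_ds
# from the earlier sketches; checkpointing details are assumed).
from tensorflow import keras

model = build_model()

checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.h5",
    monitor="val_accuracy",   # keep the epoch with the highest validation accuracy
    save_best_only=True,
)

history = model.fit(
    train_ds,                 # only training images update the weights
    validation_data=val_ds,   # validation set guides model selection
    epochs=64,                # no additional epochs, to avoid overfitting
    callbacks=[checkpoint],
)

# The held-out testing set is used only once, after training.
best_model = keras.models.load_model("best_model.h5")
test_loss, test_accuracy = best_model.evaluate(test_ds)
```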

Results

The model performed well in terms of training, validation, and testing accuracy over the 64 epochs, with mean and final accuracies of 86% and 91% on the training dataset and 88% and 98% on the validation dataset, respectively.

Mean losses of 0.3 and 0.4 and final losses of 0.3 and 0.2 were observed during training and validation, respectively. The validation precision and recall were 1.0 and 0.9, yielding an F1 score of 0.97. The model's specificity on the validation dataset was 1.0, with an area under the receiver operating characteristic curve (AUROC) of 0.998.
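For context, the sketch below shows one way these evaluation metrics (precision, recall, F1, specificity, and AUROC) could be computed from a trained model's predictions using scikit-learn; the 0.5 decision threshold and the use of scikit-learn are assumptions, not details reported in the study.

```python
# Illustrative metric computation with scikit-learn (threshold assumed at 0.5).
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, confusion_matrix)

def evaluate(model, dataset):
    """Collect predictions over a tf.data dataset and compute the reported metrics."""
    y_true, y_prob = [], []
    for images, labels in dataset:
        y_true.extend(labels.numpy().ravel())
        y_prob.extend(model.predict(images, verbose=0).ravel())
    y_true = np.array(y_true).astype(int)
    y_prob = np.array(y_prob)
    y_pred = (y_prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "specificity": tn / (tn + fp),
        "auroc": roc_auc_score(y_true, y_prob),
    }
```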

When classifying the testing set, the final model architecture achieved 93% accuracy with a loss of 0.20. The AUROC for model testing was 0.96, with a specificity of 0.98, a recall of 0.8, and a precision of 0.96.

The researchers trained the model for 17 minutes, during which the accuracy metrics consistently increased while the corresponding losses decreased, indicating that the model's ability to classify examples from the training and testing datasets improved over time.

The validation dataset accuracy peaked at epoch 42, whereas training accuracy peaked at epoch 56, indicating that the model was still learning and refining with each epoch.

Notably, the model demonstrated robust diagnostic performance even with half-open eyelids, indicating its capacity to identify and classify the essential features despite differences in how the input data are presented.

Training dataset accuracy remained somewhat lower than validation dataset accuracy during most epochs, suggesting that the CNN generalized effectively to unseen data. An exception occurred at epoch 39, when the training dataset accuracy reached 83%.

Conclusions

Overall, the study findings demonstrate a novel application of artificial intelligence, specifically a CNN, for rapid and accurate diagnosis of lagophthalmos.

The CNN-based strategy combines measures against overfitting, short training times, and high accuracy, with the potential to improve medical efficiency and patient care. The validation dataset accuracy (98%) exceeded the training dataset accuracy (91%).

The modest depth of three convolutional layers contributed to the model's generalizability. The model predicted correctly in the majority of cases, but some outputs were erroneous, indicating that further improvements are required.

Across the 64 epochs, training and validation dataset accuracy tracked each other closely, both reaching 87%. The model performed slightly worse on the testing dataset, with a higher loss value of 0.2.


Written by

Pooja Toshniwal Paharia

Pooja Toshniwal Paharia is an oral and maxillofacial physician and radiologist based in Pune, India. Her academic background is in Oral Medicine and Radiology. She has extensive experience in research and evidence-based clinical-radiological diagnosis and management of oral lesions and conditions and associated maxillofacial disorders.
