MONET: New AI tool enhances medical imaging with deep learning and text analysis

In a recent study published in Nature Medicine, researchers developed the medical concept retriever (MONET), a foundation model that connects medical images to text and scores images for the presence of clinically meaningful concepts, supporting critical tasks across medical artificial intelligence (AI) development and deployment.


Background

Building reliable image-based medical artificial intelligence systems requires scrutinizing the data and the neural network models at every stage of development, from training through post-deployment monitoring.

Medical datasets richly annotated with semantically meaningful concepts could help demystify these 'black-box' technologies.

Understanding clinically significant concepts such as darker pigmentation, atypical pigment networks, and multiple colors is medically valuable; however, obtaining concept labels is labor-intensive, and most medical datasets provide only diagnostic annotations.

About the study

In the current study, researchers created MONET, an AI model that can annotate medical images with medically relevant concepts. They designed the model to identify a wide range of human-understandable concepts across two image modalities in dermatology: dermoscopic and clinical images.

The researchers gathered 105,550 dermatology image-text pairs from PubMed articles and medical textbooks in a broad-scale medical literature database and used these paired images and natural-language descriptions to train MONET.

MONET assigns each image a score for every concept, indicating the extent to which the image expresses that concept.
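
Conceptually, this scoring resembles the zero-shot similarity scoring used by CLIP-style image-text models: embed the image, embed a text prompt describing the concept, and take their cosine similarity. The sketch below is illustrative only; it uses the open-source open_clip package with generic OpenAI CLIP weights as a stand-in, and the prompt template and file path are hypothetical. MONET's actual checkpoint, prompts, and score calibration are defined in the original study and its code release.

```python
# Illustrative sketch of concept scoring with a CLIP-style image-text model.
# NOTE: generic OpenAI CLIP weights via open_clip are used as a stand-in;
# MONET's actual weights, prompt templates, and calibration may differ.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

concepts = ["blue-whitish veil", "atypical pigment network", "erythema"]
prompts = [f"a dermoscopic image showing {c}" for c in concepts]  # hypothetical prompt template

image = preprocess(Image.open("lesion.jpg")).unsqueeze(0)  # hypothetical file path
with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(tokenizer(prompts))
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(0)  # one similarity score per concept

for concept, score in zip(concepts, scores.tolist()):
    print(f"{concept}: {score:.3f}")
```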

MONET is based on contrastive learning, an AI approach that allows plain-language descriptions to be linked directly to images.

This approach avoids manual labeling, allowing image-text pairs to be used at a considerably larger scale than is possible with supervised learning. After training MONET, the researchers evaluated its effectiveness in concept annotation and other transparency-related use cases.
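
At a high level, contrastive image-text training pushes matched image-caption pairs together and mismatched pairs apart in a shared embedding space. The snippet below is a minimal, generic InfoNCE-style loss over a batch of paired embeddings, not the study's actual training code; the encoders and batch construction are placeholders.

```python
# Minimal, generic contrastive (InfoNCE-style) loss over a batch of
# paired image/text embeddings -- a sketch, not MONET's training code.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Normalize embeddings so the dot product is cosine similarity.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature      # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))         # matched pair i <-> i on the diagonal
    # Symmetric cross-entropy: image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy batch: in practice these embeddings come from image and text encoders.
batch, dim = 8, 512
loss = contrastive_loss(torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```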

The researchers tested MONET's concept annotation capabilities by retrieving the highest-scoring images for each concept from dermoscopic and clinical image sets.

They compared MONET's performance against supervised learning baselines, namely ResNet-50 models trained with ground-truth concept labels, and against OpenAI's Contrastive Language-Image Pretraining (CLIP) model.
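
The supervised baseline described here amounts to fine-tuning a ResNet-50 to predict concept labels directly from pixels, which requires exactly the ground-truth concept annotations MONET is designed to avoid. A minimal setup sketch under that assumption (the concept count is a placeholder and the dataset and training loop are omitted):

```python
# Sketch of a supervised concept-labeling baseline: ResNet-50 fine-tuned for
# multi-label concept prediction. Dataset loading and the full training loop
# are omitted; the concept count is a placeholder.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

num_concepts = 48  # placeholder, e.g. the size of a SkinCon-style concept vocabulary
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, num_concepts)  # multi-label concept head

criterion = nn.BCEWithLogitsLoss()  # one sigmoid output per concept
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative step with random tensors standing in for a real batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, num_concepts)).float()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(loss.item())
```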

The researchers also used MONET to automate data audits and tested its efficacy in concept differential analysis.

They applied MONET to the International Skin Imaging Collaboration (ISIC) dataset, the largest publicly available dermoscopic image collection, with over 70,000 images routinely used to train dermatological AI models.
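
Concept differential analysis, as described here, boils down to comparing concept-score distributions between two groups of images and ranking the concepts that differ most. A minimal sketch, assuming concept scores have already been computed; the score matrices and concept names below are synthetic placeholders.

```python
# Sketch of concept differential analysis: rank concepts by how much their
# mean MONET-style score differs between two image groups.
# Scores and concept names are synthetic placeholders.
import numpy as np

concepts = ["erythema", "blue-whitish veil", "atrophy", "hyperpigmentation"]
rng = np.random.default_rng(0)
scores_group_a = rng.random((500, len(concepts)))  # e.g. one data source or label
scores_group_b = rng.random((300, len(concepts)))  # e.g. another source or label

diff = scores_group_a.mean(axis=0) - scores_group_b.mean(axis=0)
ranking = np.argsort(-np.abs(diff))  # concepts with the largest mean difference first
for idx in ranking:
    print(f"{concepts[idx]}: mean score difference {diff[idx]:+.3f}")
```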

The researchers developed model auditing with MONET (MA-MONET), which uses MONET to automatically detect semantically meaningful medical concepts associated with model errors.

They evaluated MA-MONET in real-world settings by training convolutional neural network (CNN) models on data from several universities and using automated concept annotation to audit their errors.
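
The general recipe described for this auditing step, clustering a model's misclassified images and then summarizing each cluster by its highest-scoring concepts, can be sketched as follows. This is an assumption-laden illustration: the clustering method, features, and concept-ranking details of the actual MA-MONET procedure are specified in the study, not here, and all arrays below are synthetic placeholders.

```python
# Illustrative sketch of MA-MONET-style error auditing: cluster misclassified
# images, then describe each cluster by its top-scoring concepts.
# Embeddings, concept scores, and error flags are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_images, emb_dim = 1000, 64
concepts = ["blue-whitish veil", "blue", "black", "gray", "flat-topped", "erythema", "red"]

embeddings = rng.normal(size=(n_images, emb_dim))       # image embeddings
concept_scores = rng.random((n_images, len(concepts)))  # MONET-style concept scores
misclassified = rng.random(n_images) < 0.15             # classifier error flags

err_emb = embeddings[misclassified]
err_scores = concept_scores[misclassified]

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(err_emb)
for c in range(3):
    mean_scores = err_scores[clusters == c].mean(axis=0)
    top = np.argsort(-mean_scores)[:3]  # concepts that best describe this error cluster
    print(f"error cluster {c}: " + ", ".join(concepts[i] for i in top))
```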

They compared the MONET+CBM (concept bottleneck model) automatic concept-scoring approach against human labeling, which applies only to images with SkinCon concept labels.

The researchers also investigated the effect of concept selection on MONET+CBM performance, specifically the use of task-relevant concepts in the bottleneck layer. Further, they evaluated the impact of including the concept 'red' in the bottleneck on MONET+CBM performance in inter-institutional transfer scenarios.
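
A concept bottleneck model of the kind described here first maps each image to a small vector of concept scores and then fits a simple, interpretable classifier on those scores. The following is a minimal sketch with synthetic scores and a logistic-regression head; the study's actual concept list, classifier, and training details differ.

```python
# Minimal concept-bottleneck sketch: an interpretable linear classifier
# trained on MONET-style concept scores rather than raw pixels.
# Concept scores and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

concepts = ["atypical pigment network", "blue-whitish veil", "multiple colors",
            "erythema", "regression structure", "red", "atrophy",
            "hyperpigmentation", "darker pigmentation", "ulceration"]  # 10 illustrative concepts
rng = np.random.default_rng(0)
X = rng.random((2000, len(concepts)))  # per-image concept scores
y = (X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=2000) > 1.6).astype(int)  # toy malignancy label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
cbm_head = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("AUROC:", roc_auc_score(y_te, cbm_head.predict_proba(X_te)[:, 1]))
# Coefficients give a per-concept contribution, which is what makes the model auditable.
for concept, coef in zip(concepts, cbm_head.coef_[0]):
    print(f"{concept}: {coef:+.2f}")
```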

Results

MONET is a flexible medical AI model that accurately annotates concepts across dermatological images, as confirmed by board-certified dermatologists.

Its concept annotation capability enables meaningful trustworthiness assessments across the medical AI pipeline, as demonstrated by model audits, data audits, and the development of interpretable models.

MONET successfully retrieved appropriate dermoscopic and clinical images for a range of dermatological concepts, outperforming the baseline CLIP model in both modalities while remaining comparable to supervised learning models on clinical images.

MONET's automated annotation capability supports concept differential analysis, identifying the traits that distinguish any two arbitrary groups of images and describing them in human-readable language.

The researchers found that MONET recognizes differentially expressed concepts in clinical and dermoscopic datasets and can assist with large-scale dataset auditing.

Applying MA-MONET revealed concepts linked with high error rates, such as a cluster of images characterized by blue-whitish veil, blue, black, gray, and flat-topped.

The cluster with the highest error rate was characterized by erythema, regression structure, red, atrophy, and hyperpigmentation. Dermatologists chose ten target-related concepts for the MONET+CBM and CLIP+CBM bottleneck layers, allowing for flexible labeling options.

MONET+CBM surpassed all baselines in mean area under the receiver-operating characteristic curve (AUROC) for predicting malignancy and melanoma in clinical images, while supervised black-box models remained consistently strong performers on the malignancy and melanoma prediction tasks.

Conclusion

The study showed that image-text models can increase AI transparency and trustworthiness in medicine. MONET, a medical concept annotation model, can improve the transparency and trustworthiness of dermatological AI by enabling large-scale concept annotation.

With it, AI model developers can improve data collection, processing, and model optimization procedures, resulting in more dependable medical AI models.

MONET can also support the clinical deployment and monitoring of medical imaging AI systems by enabling thorough auditing and fairness analyses, for example through the annotation of skin-tone descriptors.


Written by

Pooja Toshniwal Paharia

Pooja Toshniwal Paharia is an oral and maxillofacial physician and radiologist based in Pune, India. Her academic background is in Oral Medicine and Radiology. She has extensive experience in research and evidence-based clinical-radiological diagnosis and management of oral lesions and conditions and associated maxillofacial disorders.
