New AI-driven tool converts routine histology images into detailed, multi-marker virtual stainings, promising faster, more accurate diagnostics and improved patient outcomes.
In a recent study published in Nature Machine Intelligence, researchers developed VirtualMultiplexer, a virtual multiplexed-staining tool based on generative artificial intelligence (AI) that translates hematoxylin and eosin (H&E) images into immunohistochemistry (IHC) images for several antibody markers, one marker at a time.
Background
Tissues are spatially structured ecosystems composed of various cells and non-cellular substances. H&E is an important staining technique used in histopathology workflows to examine disease-associated tissue morphology. In cancer, H&E reveals aberrant cell proliferation, lymphovascular invasion, and immune cell infiltration.
Understanding tumor spatial heterogeneity is critical to cancer biology. Current approaches, however, rely on time-consuming and tissue-intensive staining of sequential sections, which yields misaligned images. Virtually staining tissue images with AI is a promising, cost-effective, and easily accessible alternative.
About the study
In the present study, researchers created the VirtualMultiplexer tool to provide virtually multiplexed immunohistochemistry images for various antibody markers based on an input H&E-stained image. The antibody markers include androgen receptor (AR), homeobox protein Nkx-3.1 (NKX3.1), cluster of differentiation 44 (CD44), CD146, p53, and erythroblast transformation-specific-related gene (ERG).
The team trained VirtualMultiplexer on unpaired real H&E-stained (source) and IHC-stained (target) images. The model divided each image into patches and fed them into generator networks, which learned to transfer the target staining patterns onto the underlying tissue morphology. The generated IHC patches were then stitched together to form virtual IHC images.
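To make this patch-wise workflow concrete, the minimal sketch below tiles an H&E image, runs each tile through a placeholder generator, and stitches the outputs back together. The Generator class, the virtual_ihc function, and the patch size are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of patch-wise H&E -> IHC translation and stitching.
# The Generator class and patch size are illustrative placeholders.
import numpy as np

PATCH = 256  # assumed patch size in pixels


class Generator:
    """Placeholder for a trained H&E-to-IHC generator network."""

    def __call__(self, patch: np.ndarray) -> np.ndarray:
        # A real model would return a translated IHC patch here.
        return patch


def virtual_ihc(he_image: np.ndarray, gen: Generator) -> np.ndarray:
    """Tile the H&E image, translate each patch, and stitch the results."""
    h, w, _ = he_image.shape
    out = np.zeros_like(he_image)
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            out[y:y + PATCH, x:x + PATCH] = gen(he_image[y:y + PATCH, x:x + PATCH])
    return out
```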
VirtualMultiplexer's architecture mimics how a human expert reviews tissue at the single-cell, cell-neighborhood, and whole-image levels. In addition to the adversarial and multilayer contrastive losses inherited from contrastive unpaired translation (CUT), it uses a neighborhood loss to ensure that generated IHC patches cannot be distinguished from real ones, a global consistency loss to enforce content and style consistency between real and virtual IHC images, and a local consistency loss to preserve cell-level representations and staining patterns.
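As a rough illustration of how such terms could be combined into a single training objective, the snippet below sums the five losses with placeholder weights; the function name and weighting scheme are assumptions, not values from the paper.

```python
# Hypothetical combination of the loss terms described above into one
# training objective; the weights are placeholders, not published values.
def total_loss(adversarial, contrastive, neighborhood, global_consistency,
               local_consistency, weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of adversarial, contrastive (CUT), neighborhood,
    global-consistency, and local-consistency terms."""
    terms = (adversarial, contrastive, neighborhood,
             global_consistency, local_consistency)
    return sum(w * t for w, t in zip(weights, terms))
```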
The researchers trained the AI tool on a prostate cancer tissue microarray (TMA). The TMA included unpaired images stained with H&E and with IHC antibodies for six clinically significant membrane, cytoplasmic, and nuclear markers. A separate one-to-one VirtualMultiplexer model was trained for each IHC marker.
To ensure staining reliability, the approach combines three separate loss functions in a multiscale scheme. The researchers assessed the generated images using quantitative fidelity metrics, expert pathology evaluation, and visual Turing tests before determining their clinical relevance by predicting clinical outcomes. They compared VirtualMultiplexer with four state-of-the-art unpaired stain-to-stain (S2S) translation algorithms, using the Fréchet inception distance (FID) to evaluate the quality of the AI-generated images.
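For context, FID measures how far the feature distribution of generated images lies from that of real images, with lower values indicating better fidelity. The sketch below shows a generic computation from pre-extracted feature vectors (e.g., Inception embeddings) and is not tied to the study's evaluation code.

```python
# Generic FID computation from pre-extracted image feature vectors.
import numpy as np
from scipy import linalg


def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussian fits of two feature sets."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```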
The researchers encoded real H&E, real IHC, and virtual IHC images as tissue-graph representations and then used a graph transformer (GT) to map these representations to downstream class labels.
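The sketch below illustrates one plausible way to build such a tissue graph from segmented cell centroids and per-cell features; the k-nearest-neighbor construction and the tissue_graph helper are hypothetical, intended only to convey the idea rather than reproduce the authors' pipeline.

```python
# Simplified tissue-graph construction: cells become nodes carrying feature
# vectors, and edges connect each cell to its k nearest spatial neighbors.
import numpy as np
from scipy.spatial import cKDTree


def tissue_graph(centroids: np.ndarray, features: np.ndarray, k: int = 5):
    """Return node features and an edge list for a k-nearest-neighbor graph."""
    tree = cKDTree(centroids)
    _, idx = tree.query(centroids, k=k + 1)  # first neighbor is the cell itself
    edges = np.array([(i, j) for i, row in enumerate(idx) for j in row[1:]])
    return features, edges

# A graph transformer (e.g., from a graph neural network library) would then
# consume these node features and edges to predict downstream class labels.
```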
The researchers used the resulting stainings on the European Multicenter Prostate Cancer Clinical and Translational Research (EMPaCT) dataset to predict patient survival and disease progression. They tested the model's ability to generalize using the Prostate Cancer Grade Assessment (PANDA) and SICAP datasets, which include H&E-stained biopsies with associated Gleason scores. They also applied the EMPaCT-pretrained VirtualMultiplexer to a pancreatic ductal adenocarcinoma (PDAC) TMA cohort, generating virtual IHC images for CD44, CD146, and p53, as well as to colorectal and breast cancer H&E-stained whole-slide images (WSIs) from The Cancer Genome Atlas.
Results
VirtualMultiplexer identified physiologically meaningful staining patterns at different tissue scales without requiring sequential tissue sections, image registration, or extensive expert annotation. It rapidly, robustly, and accurately generated virtually multiplexed imaging datasets whose staining quality was indistinguishable from that of real images. The researchers effectively transferred the model across tissue scales and patient cohorts, indicating its ability to generalize between tissue types.
VirtualMultiplexer generated virtual IHC images that retained the tissue morphology and staining patterns of the original H&E image. The model achieved the lowest FID across all antibody markers (mean, 29), consistently below CycleGAN, AI-FFPE, CUT, and CUT with kernelized instance normalization (KIN). In the visual Turing test, assessors distinguished real from virtual images with only 52% sensitivity and 54% specificity across antibody markers. On average, 71% of virtual images were judged to have acceptable staining quality, compared with 78% of real images.
VirtualMultiplexer has limitations, such as an elevated background signal and more pronounced tiling artifacts near core boundaries. It also does not adequately stain CD146+ vascular structures or glandular NKX3.1+ cells invading the peri-glandular stroma. Despite these difficulties, the generated images enabled training of early-fusion GT models, which improved clinical endpoint prediction both in the out-of-distribution prostate cancer cohorts and for tumor, node, metastasis (TNM) staging in the PDAC cohort.
Conclusion
The study showed that VirtualMultiplexer improves clinical prediction in histopathology workflows and cancer biology by generating high-quality, realistic multiplexed IHC images. The findings highlight the clinical potential of AI-assisted multiplexed tumor imaging. VirtualMultiplexer is also suited to data inpainting, sample imputation, and experimental design ahead of histopathological studies. Future studies should evaluate the approach in real-world settings.