The Importance of Temporal Integration in DCE-MRI for Improved Breast Cancer Diagnosis

Accepted version
Peer-reviewed

Abstract

Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is a sequence of MRI scans acquired after the administration of a contrast agent. In breast cancer, it is used to capture dynamic changes in tissue enhancement over time, which have been shown to differ between benign and malignant lesions. Specifically, DCE-MRI is a temporal sequence of MRI scans composed of one pre-contrast and six post-contrast instants. This study leverages machine learning techniques to enhance breast cancer classification using the full sequence of DCE-MRI data, addressing the common oversight of underutilizing its temporal dimension. We analyze an in-house dataset, integrating radiomic features from each time instant through (i) features concatenated from all instants of the sequence and a random forest model (multi-instant Random Forest), and (ii) a graph neural network (GNN) that extracts informative embeddings, with nodes corresponding to the seven time instants of the DCE-MRI sequence. Our findings indicate that incorporating temporal information significantly improves classification performance, both in terms of accuracy and, in particular, Positive Predictive Value (PPV), which is crucial for reducing false positives in clinical decisions. Despite the complexity of GNNs, their performance gains are marginal compared to the simpler multi-instant RF, suggesting that shallower models may be equally effective on smaller datasets. Explainable AI methods further reveal that the pre-contrast and third post-contrast instants are the most informative for classification, offering new insights for radiology physicians.
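The multi-instant Random Forest approach described in the abstract can be sketched as follows. This is a minimal illustration using synthetic data and scikit-learn; the number of lesions and the radiomic feature count per instant are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical dataset: 60 lesions, 7 time instants
# (1 pre-contrast + 6 post-contrast), 20 radiomic features per instant.
n_lesions, n_instants, n_features = 60, 7, 20
X = rng.normal(size=(n_lesions, n_instants, n_features))
y = rng.integers(0, 2, size=n_lesions)  # 0 = benign, 1 = malignant

# Multi-instant representation: concatenate the per-instant feature
# vectors into one vector per lesion, preserving temporal information.
X_multi = X.reshape(n_lesions, n_instants * n_features)

# Fit a random forest on the concatenated features.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_multi, y)
print(X_multi.shape)  # concatenated feature matrix, (60, 140)
```

With real data, the per-instant radiomic features would be extracted from the segmented lesion at each acquisition, and feature importances from the fitted forest could be aggregated per instant to support the explainability analysis mentioned above.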

Description

Keywords

Journal Title

Conference Name

WIRN 2024 The Italian Workshop on Neural Networks

Journal ISSN

Volume Title

Publisher

Publisher DOI

Publisher URL

Rights and licensing

Except where otherwise noted, this item's license is described as All Rights Reserved
Sponsorship
This research was co-funded by the Italian Complementary National Plan PNC-I.1 "Research initiatives for innovative technologies and pathways in the health and welfare sector" D.D. 931 of 06/06/2022, "DARE - DigitAl lifelong pRevEntion" initiative, code PNC0000002, CUP: B53C22006460001; and funded by the European Union (Next Generation EU), Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) 2022, Prot. 2022ENK9LS, project "EXEGETE: Explainable Generative Deep Learning Methods for Medical Image and Signal Processing", CUP: B53D23013040006.