TY - GEN
T1 - Performance evaluation of deep learning and transformer models using multimodal data for breast cancer classification
AU - Hussain, Sadam
AU - Ali, Mansoor
AU - Naseem, Usman
AU - Bosques Palomo, Beatriz Alejandra
AU - Monsivais Molina, Mario Alexis
AU - Garza Abdala, Jorge Alberto
AU - Avendano Avalos, Daly Betzabeth
AU - Cardona-Huerta, Servando
AU - Gulliver, T. Aaron
AU - Tamez-Peña, José Gerardo
PY - 2025
Y1 - 2025
N2 - Rising breast cancer (BC) incidence and mortality are major global concerns for women. Deep learning (DL) has demonstrated superior diagnostic performance in BC classification compared to human expert readers. However, the predominant use of unimodal (digital mammography) features may limit the performance of current diagnostic models. To address this, we collected a novel multimodal dataset comprising both imaging and textual data. This study proposes a multimodal DL architecture for BC classification that uses images (mammograms; four views) and textual data (radiological reports) from our new in-house dataset. Various augmentation techniques were applied to enlarge the training data for both modalities. We explored the performance of eleven state-of-the-art (SOTA) DL architectures (VGG16, VGG19, ResNet34, ResNet50, MobileNet-v3, EffNet-b0, EffNet-b1, EffNet-b2, EffNet-b3, EffNet-b7, and Vision Transformer (ViT)) as imaging feature extractors. For textual feature extraction, we used either artificial neural networks (ANNs) or long short-term memory (LSTM) networks. The imaging and textual features were then combined by late fusion and fed into an ANN classifier for BC classification. We evaluated different feature extractor and classifier arrangements. The VGG19+ANN combination achieved the highest accuracy, 0.951. It also achieved the best precision, 0.95, surpassing the other CNN- and LSTM/ANN-based combinations. VGG16+LSTM achieved the best sensitivity, 0.903. VGG19+LSTM achieved the highest F1 score, 0.931. VGG16+LSTM achieved the best area under the curve (AUC) of 0.937, with VGG19+LSTM closely following at 0.929.
KW - Breast Cancer
KW - Feature Fusion
KW - Multi-modal Classification
KW - Deep Learning
UR - http://www.scopus.com/inward/record.url?scp=85206971717&partnerID=8YFLogxK
DO - 10.1007/978-3-031-73376-5_6
M3 - Conference proceeding contribution
SN - 9783031733758
T3 - Lecture Notes in Computer Science
SP - 59
EP - 69
BT - Cancer prevention, detection, and intervention
A2 - Ali, Sharib
A2 - van der Sommen, Fons
A2 - Papież, Bartłomiej Władysław
A2 - Ghatwary, Noha
A2 - Jin, Yueming
A2 - Kolenbrander, Iris
PB - Springer, Springer Nature
CY - Cham
T2 - 3rd International Workshop on Cancer Prevention, Detection and Intervention, CaPTion 2024, held in conjunction with the 27th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2024
Y2 - 6 October 2024
ER -
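Note: the abstract describes a late-fusion arrangement: a pretrained CNN (or ViT) imaging branch, an LSTM or ANN branch encoding the radiology report, and an ANN classifier over the concatenated features. The PyTorch sketch below illustrates that arrangement under stated assumptions; it is not the authors' implementation. The vocabulary size, embedding and hidden dimensions, class count, single-view image input (the paper uses four mammogram views), and the class name LateFusionBCClassifier are all illustrative placeholders.

# Minimal late-fusion sketch: VGG19 imaging features + LSTM report features,
# concatenated and classified by a small ANN. Hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class LateFusionBCClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, lstm_hidden=256,
                 fused_dim=512, num_classes=2):
        super().__init__()
        # Imaging branch: pretrained VGG19 convolutional backbone, global-pooled.
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.cnn = vgg.features
        self.pool = nn.AdaptiveAvgPool2d(1)            # -> (B, 512, 1, 1)
        # Textual branch: token embedding + LSTM over the report.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True)
        # Late fusion: concatenate branch features, classify with an ANN.
        self.classifier = nn.Sequential(
            nn.Linear(512 + lstm_hidden, fused_dim),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(fused_dim, num_classes),
        )

    def forward(self, images, token_ids):
        img_feat = self.pool(self.cnn(images)).flatten(1)   # (B, 512)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        txt_feat = h_n[-1]                                  # (B, lstm_hidden)
        fused = torch.cat([img_feat, txt_feat], dim=1)      # late fusion
        return self.classifier(fused)

# Smoke test with dummy inputs: a batch of 2 images and 64-token reports.
model = LateFusionBCClassifier()
logits = model(torch.randn(2, 3, 224, 224),
               torch.randint(1, 10_000, (2, 64)))
print(logits.shape)  # torch.Size([2, 2])

Swapping the imaging backbone (ResNet, EfficientNet, ViT) or replacing the LSTM branch with a plain feed-forward ANN over bag-of-words features would reproduce the other extractor/classifier arrangements the abstract compares.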