Performance evaluation of deep learning and transformer models using multimodal data for breast cancer classification

Sadam Hussain*, Mansoor Ali, Usman Naseem, Beatriz Alejandra Bosques Palomo, Mario Alexis Monsivais Molina, Jorge Alberto Garza Abdala, Daly Betzabeth Avendano Avalos, Servando Cardona-Huerta, T. Aaron Gulliver, Jose Gerardo Tamez Pena

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

Rising breast cancer (BC) incidence and mortality are major global concerns for women. Deep learning (DL) has demonstrated superior diagnostic performance in BC classification compared to human expert readers. However, the predominant use of unimodal (digital mammography) features may limit the performance of current diagnostic models. To address this, we collected a novel multimodal dataset comprising both imaging and textual data. This study proposes a multimodal DL architecture for BC classification, utilizing images (mammograms; four views) and textual data (radiological reports) from our new in-house dataset. Various augmentation techniques were applied to increase the size of the training data for both imaging and textual modalities. We explored the performance of eleven state-of-the-art (SOTA) DL architectures (VGG16, VGG19, ResNet34, ResNet50, MobileNet-v3, EffNet-b0, EffNet-b1, EffNet-b2, EffNet-b3, EffNet-b7, and Vision Transformer (ViT)) as imaging feature extractors. For textual feature extraction, we utilized either artificial neural networks (ANNs) or long short-term memory (LSTM) networks. The combined imaging and textual features were then fed into an ANN classifier for BC classification using the late fusion technique. We evaluated different feature extractor and classifier arrangements. The VGG19 and ANN combination achieved the highest accuracy of 0.951, and the same combination also achieved the best precision of 0.95, surpassing the other CNN- and LSTM-based architectures. The best sensitivity score of 0.903 was achieved by VGG16+LSTM, and the highest F1 score of 0.931 by VGG19+LSTM. The best area under the curve (AUC) of 0.937 was achieved by the VGG16+LSTM combination, with the runner-up configuration close behind at an AUC of 0.929.
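The late-fusion scheme described in the abstract (modality-specific feature extractors whose outputs are concatenated and passed to an ANN classifier) can be sketched as follows. This is a minimal illustrative sketch only: the feature dimensions, function names, and randomly initialised weights are assumptions for demonstration, not the paper's actual backbones or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions -- the paper does not state the exact sizes.
IMG_DIM, TXT_DIM, HID, N_CLASSES = 512, 128, 64, 2

def extract_image_features(mammogram_views):
    """Stand-in for a CNN/ViT imaging backbone (e.g. VGG19): one vector per case."""
    return rng.standard_normal(IMG_DIM)

def extract_text_features(report):
    """Stand-in for the ANN/LSTM branch over the radiology report."""
    return rng.standard_normal(TXT_DIM)

def late_fusion_classify(img_feat, txt_feat, W1, b1, W2, b2):
    """Concatenate modality features (late fusion), then apply a small ANN."""
    fused = np.concatenate([img_feat, txt_feat])   # fusion happens at feature level
    h = np.maximum(0.0, fused @ W1 + b1)           # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                             # softmax over BC classes

# Randomly initialised weights for illustration only (an untrained classifier).
W1 = rng.standard_normal((IMG_DIM + TXT_DIM, HID)) * 0.01
b1 = np.zeros(HID)
W2 = rng.standard_normal((HID, N_CLASSES)) * 0.01
b2 = np.zeros(N_CLASSES)

probs = late_fusion_classify(
    extract_image_features(["CC-L", "CC-R", "MLO-L", "MLO-R"]),  # four views
    extract_text_features("Radiology report text"),
    W1, b1, W2, b2,
)
print(probs.shape)  # (2,)
```

In a trained version of this setup, the two `extract_*` functions would be replaced by the pretrained backbones and the ANN weights learned end-to-end; the key design point of late fusion is that each modality is encoded independently before the classifier sees the joint representation.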

Original language: English
Title of host publication: Cancer prevention, detection, and intervention
Subtitle of host publication: Third MICCAI Workshop, CaPTion 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 6, 2024, proceedings
Editors: Sharib Ali, Fons van der Sommen, Bartłomiej Władysław Papież, Noha Ghatwary, Yueming Jin, Iris Kolenbrander
Place of publication: Cham
Publisher: Springer, Springer Nature
Pages: 59-69
Number of pages: 11
ISBN (Electronic): 9783031733765
ISBN (Print): 9783031733758
DOIs
Publication status: Published - 2025
Event: 3rd International Workshop on Cancer Prevention, detection and intervenTion, CaPTion 2024, held in conjunction with the 27th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2024 - Marrakesh, Morocco
Duration: 6 Oct 2024 – 6 Oct 2024

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 15199
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 3rd International Workshop on Cancer Prevention, detection and intervenTion, CaPTion 2024, held in conjunction with the 27th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2024
Country/Territory: Morocco
City: Marrakesh
Period: 6/10/24 – 6/10/24

Keywords

  • Breast Cancer
  • Feature Fusion
  • Multi-modal Classification
  • Deep Learning