2D medical image segmentation via learning multi-scale contextual dependencies

Shuchao Pang, Anan Du, Zhenmei Yu, Mehmet A. Orgun*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Automatic medical image segmentation plays an important role as a diagnostic aid in the identification and treatment of diseases in clinical settings. Recently proposed methods based on Convolutional Neural Networks (CNNs) have demonstrated their potential in image processing tasks, including some medical image analysis tasks. These methods can learn various feature representations with numerous weight-shared convolutional kernels; however, the missed diagnosis rate of regions of interest (ROIs) remains high in medical image segmentation. Two crucial factors behind this shortcoming, which have been overlooked, are the small ROIs in medical images and the limited contextual information captured by existing network models. In order to reduce the missed diagnosis rate of ROIs in medical images, we propose a new segmentation framework which enhances the representative capability of small ROIs (particularly in deep layers) and explicitly learns global contextual dependencies in multi-scale feature spaces. In particular, the local features and their global dependencies from each feature space are adaptively aggregated along both the spatial and the channel dimensions. Moreover, visual comparisons of the features learned by our framework further improve the interpretability of neural networks. Experimental results show that, in comparison with popular medical image segmentation and general image segmentation methods, our proposed framework achieves state-of-the-art performance on the liver tumor segmentation task with 91.18% Sensitivity, the COVID-19 lung infection segmentation task with 75.73% Sensitivity and the retinal vessel detection task with 82.68% Sensitivity. Furthermore, (parts of) the proposed framework can be integrated into most recently proposed fully convolutional network (FCN)-based models to improve their effectiveness in medical image segmentation tasks.
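
The adaptive aggregation of local features with their global dependencies along both the spatial and the channel dimensions, as described above, can be illustrated with a small self-attention-style block. The following PyTorch sketch is only an assumption about how such an aggregation might look; the module name SpatialChannelAggregation, the reduction factor and the residual fusion weights are hypothetical and do not reflect the authors' actual implementation.

```python
# Minimal sketch of spatial + channel aggregation (illustrative assumption,
# not the paper's actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialChannelAggregation(nn.Module):
    """Fuses local features with their global dependencies along
    both the spatial and the channel dimensions (hypothetical block)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 projections used for the spatial (position) attention.
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable scaling factors for the residual fusion.
        self.gamma_s = nn.Parameter(torch.zeros(1))
        self.gamma_c = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w

        # Spatial attention: every position attends to all positions.
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)            # (b, n, c')
        k = self.key(x).view(b, -1, n)                                # (b, c', n)
        attn_s = F.softmax(torch.bmm(q, k), dim=-1)                   # (b, n, n)
        v = self.value(x).view(b, c, n)                               # (b, c, n)
        out_s = torch.bmm(v, attn_s.permute(0, 2, 1)).view(b, c, h, w)

        # Channel attention: every channel attends to all channels.
        f = x.view(b, c, n)                                           # (b, c, n)
        attn_c = F.softmax(torch.bmm(f, f.permute(0, 2, 1)), dim=-1)  # (b, c, c)
        out_c = torch.bmm(attn_c, f).view(b, c, h, w)

        # Residual fusion of both aggregated views with the input features.
        return x + self.gamma_s * out_s + self.gamma_c * out_c


# Example: apply the block to one scale of a multi-scale feature pyramid.
feats = torch.randn(2, 64, 32, 32)           # (batch, channels, height, width)
out = SpatialChannelAggregation(64)(feats)
print(out.shape)                              # torch.Size([2, 64, 32, 32])
```

Under the multi-scale design described in the abstract, a block of this kind would presumably be applied to the feature maps at several scales, so that each feature space captures its own global contextual dependencies before the features are fused for segmentation.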

Original language: English
Pages (from-to): 40-53
Number of pages: 14
Journal: Methods
Volume: 202
Early online date: 23 May 2021
DOIs
Publication status: Published - Jun 2022

Keywords

  • Medical image segmentation
  • Contextual dependency
  • Hepatic tumors
  • COVID-19 lung infection
  • Retinal vessel
  • Visualization
