The bag of visual words model has been widely used in content-based image retrieval. However, when applied to the medical domain, it has several potential limitations: ordinary feature descriptors may fail to capture the subtle characteristics of medical images; there is a semantic gap between low-level features and medical concepts; and the emergence of multi-modal data poses challenges to current retrieval frameworks, calling for new ways to combine and analyze such data. To address these issues, we propose a bag of semantic words model for medical content-based retrieval. We build high-level semantic features from low-level visual features through a three-step pipeline. First, we extract a set of low-level features pertaining to disease symptoms from the medical images. Second, we translate these low-level features into symptom severity degrees by symptom quantization. Finally, we build the high-level semantic words by learning the patterns of the symptoms. The proposed model was evaluated on 331 multi-modal neuroimaging datasets from the ADNI database. Preliminary results show that the bag of semantic words model can extract semantic information from medical images and outperforms state-of-the-art medical content-based retrieval methods.
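The three-step pipeline in the abstract (low-level features → symptom quantization → semantic-word histogram) can be illustrated with a minimal sketch. All feature values, bin edges, and the severity vocabulary below are hypothetical placeholders, not the paper's actual descriptors or quantization scheme.

```python
from collections import Counter

# Step 1: hypothetical low-level features extracted per image region,
# e.g. measurements related to a disease symptom such as atrophy.
low_level_features = [2.1, 3.4, 1.2, 2.8, 0.9, 3.1]

# Step 2: symptom quantization -- map each raw measurement to a
# discrete severity degree (bin edges here are illustrative only).
def quantize(value, edges=(1.5, 2.5, 3.0)):
    """Return a severity degree in 0..len(edges) for a raw feature value."""
    degree = 0
    for edge in edges:
        if value >= edge:
            degree += 1
    return degree

severity_degrees = [quantize(v) for v in low_level_features]

# Step 3: build the bag-of-semantic-words representation -- a histogram
# over severity degrees that can be compared across images for retrieval.
bag = Counter(severity_degrees)
histogram = [bag.get(d, 0) for d in range(len((1.5, 2.5, 3.0)) + 1)]
print(histogram)  # counts per severity degree 0..3
```

In a retrieval setting, such histograms would be compared with a standard similarity measure (e.g. histogram intersection or cosine similarity) to rank database images against a query.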
Title of host publication: International Workshop on Medical Content-Based Retrieval for Clinical Decision Support 2013
Number of pages: 8
Publication status: Published - 26 Sep 2013
Event: MICCAI Workshop on Medical Content-based Retrieval for Clinical Decision Support - Nagoya, Japan
Duration: 27 Sep 2013 → …