TY - JOUR
T1 - Improving medical visual representation learning with pathological-level cross-modal alignment and correlation exploration
AU - Wang, Jun
AU - Zhu, Lixing
AU - Yu, Xiaohan
AU - Bhalerao, Abhir
AU - He, Yulan
PY - 2025/10/23
AB - Learning medical visual representations from image-report pairs through joint learning has garnered increasing research attention due to its potential for transferring acquired knowledge to various downstream medical tasks. Previous works have predominantly focused on instance-wise or token-wise cross-modal alignment, often neglecting the importance of pathological-level consistency. This paper presents PLACE, a novel framework that promotes Pathological-Level Alignment and enriches fine-grained details via Correlation Exploration, without requiring additional human annotations. Specifically, we propose a novel pathological-level cross-modal alignment (PCMA) approach to maximize the consistency of pathology observations from both images and reports. To facilitate this, a Visual Pathology Observation Extractor is introduced to extract visual pathological observation representations from localized tokens. The PCMA module operates independently of any external disease annotations, enhancing the generalizability and robustness of our method. Furthermore, we design a proxy task that requires the model to identify correlations among image patches, thereby enriching the fine-grained details crucial for various downstream tasks. Experimental results demonstrate that our proposed framework achieves new state-of-the-art performance on multiple downstream tasks, including classification, image-to-text retrieval, semantic segmentation, object detection, and report generation.
KW - Medical cross-modal learning
KW - Medical image-text joint training
KW - Medical visual representation learning
UR - http://www.scopus.com/inward/record.url?scp=105019975987&partnerID=8YFLogxK
DO - 10.1109/JBHI.2025.3624382
M3 - Article
C2 - 41129431
AN - SCOPUS:105019975987
SN - 2168-2194
JF - IEEE Journal of Biomedical and Health Informatics
ER -