TY - GEN
T1 - Structural attention
T2 - International Conference on Medical Image Computing and Computer-Assisted Intervention (27th : 2024)
AU - Phan, Vu Minh Hieu
AU - Xie, Yutong
AU - Zhang, Bowen
AU - Qi, Yuankai
AU - Liao, Zhibin
AU - Perperidis, Antonios
AU - Phung, Son Lam
AU - Verjans, Johan W.
AU - To, Minh-Son
PY - 2024
Y1 - 2024
N2 - Unpaired medical image synthesis aims to provide complementary information for accurate clinical diagnosis and to address the challenges of obtaining aligned multi-modal medical scans. Transformer-based models excel in image translation tasks thanks to their ability to capture long-range dependencies. Although effective in supervised training, their performance falters in unpaired image synthesis, particularly in synthesizing structural details. This paper empirically demonstrates that, lacking strong inductive biases, Transformers can converge to non-optimal solutions in the absence of paired data. To address this, we introduce the UNet Structured Transformer (UNest), a novel architecture incorporating structural inductive biases for unpaired medical image synthesis. We leverage the foundational Segment-Anything Model to precisely extract the foreground structure and perform structural attention within the main anatomy. This guides the model to learn key anatomical regions, thus improving structural synthesis despite the lack of supervision in unpaired training. Evaluated on two public datasets spanning three modalities, i.e., MR, CT, and PET, UNest improves on recent methods by up to 19.30% across six medical image synthesis tasks. Our code is released at https://github.com/HieuPhan33/MICCAI2024-UNest.
AB - Unpaired medical image synthesis aims to provide complementary information for accurate clinical diagnosis and to address the challenges of obtaining aligned multi-modal medical scans. Transformer-based models excel in image translation tasks thanks to their ability to capture long-range dependencies. Although effective in supervised training, their performance falters in unpaired image synthesis, particularly in synthesizing structural details. This paper empirically demonstrates that, lacking strong inductive biases, Transformers can converge to non-optimal solutions in the absence of paired data. To address this, we introduce the UNet Structured Transformer (UNest), a novel architecture incorporating structural inductive biases for unpaired medical image synthesis. We leverage the foundational Segment-Anything Model to precisely extract the foreground structure and perform structural attention within the main anatomy. This guides the model to learn key anatomical regions, thus improving structural synthesis despite the lack of supervision in unpaired training. Evaluated on two public datasets spanning three modalities, i.e., MR, CT, and PET, UNest improves on recent methods by up to 19.30% across six medical image synthesis tasks. Our code is released at https://github.com/HieuPhan33/MICCAI2024-UNest.
UR - http://www.scopus.com/inward/record.url?scp=85210286120&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-72104-5_66
DO - 10.1007/978-3-031-72104-5_66
M3 - Conference proceeding contribution
AN - SCOPUS:85210286120
SN - 9783031721038
T3 - Lecture Notes in Computer Science
SP - 690
EP - 700
BT - Medical Image Computing and Computer Assisted Intervention - MICCAI 2024
A2 - Linguraru, Marius George
A2 - Dou, Qi
A2 - Feragen, Aasa
A2 - Giannarou, Stamatia
A2 - Glocker, Ben
A2 - Lekadir, Karim
A2 - Schnabel, Julia A.
PB - Springer Nature
CY - Cham
Y2 - 6 October 2024 through 10 October 2024
ER -