TY - GEN
T1 - Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion
AU - Chen, Cheng
AU - Dou, Qi
AU - Jin, Yueming
AU - Chen, Hao
AU - Qin, Jing
AU - Heng, Pheng Ann
N1 - Funding Information:
Acknowledgments. This work was supported in part by the National Basic Research Program of China (973 Program) under Grant 2015CB351706, the National Natural Science Foundation of China under Project No. U1613219, the Research Grants Council of the Hong Kong Special Administrative Region under Project No. CUHK14225616, and the Hong Kong Innovation and Technology Commission under Project No. ITS/319/17.
Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - Accurate medical image segmentation commonly requires effective learning of the complementary information from multimodal data. However, in clinical practice, we often encounter the problem of missing imaging modalities. We tackle this challenge and propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities. Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code, which is unique to each modality, and a modality-invariant content code, which absorbs multimodal information for the segmentation task. With enhanced modality invariance, the disentangled content codes from all modalities are fused into a shared representation that gains robustness to missing data. The fusion is achieved via a learning-based strategy that gates the contribution of different modalities at different locations. We validate our method on the important yet challenging task of multimodal brain tumor segmentation using the BRATS challenge dataset. While achieving performance competitive with state-of-the-art approaches when all modalities are available, our method shows outstanding robustness under various missing-modality situations, exceeding the state-of-the-art method by a substantial margin in average Dice for whole tumor segmentation.
AB - Accurate medical image segmentation commonly requires effective learning of the complementary information from multimodal data. However, in clinical practice, we often encounter the problem of missing imaging modalities. We tackle this challenge and propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities. Our network uses feature disentanglement to decompose the input modalities into a modality-specific appearance code, which is unique to each modality, and a modality-invariant content code, which absorbs multimodal information for the segmentation task. With enhanced modality invariance, the disentangled content codes from all modalities are fused into a shared representation that gains robustness to missing data. The fusion is achieved via a learning-based strategy that gates the contribution of different modalities at different locations. We validate our method on the important yet challenging task of multimodal brain tumor segmentation using the BRATS challenge dataset. While achieving performance competitive with state-of-the-art approaches when all modalities are available, our method shows outstanding robustness under various missing-modality situations, exceeding the state-of-the-art method by a substantial margin in average Dice for whole tumor segmentation.
UR - http://www.scopus.com/inward/record.url?scp=85075692071&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-32248-9_50
DO - 10.1007/978-3-030-32248-9_50
M3 - Conference article published in proceeding or book
AN - SCOPUS:85075692071
SN - 9783030322472
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 447
EP - 456
BT - Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 – 22nd International Conference, Proceedings
A2 - Shen, Dinggang
A2 - Yap, Pew-Thian
A2 - Liu, Tianming
A2 - Peters, Terry M.
A2 - Khan, Ali
A2 - Staib, Lawrence H.
A2 - Essert, Caroline
A2 - Zhou, Sean
PB - Springer Science and Business Media Deutschland GmbH
T2 - 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Y2 - 13 October 2019 through 17 October 2019
ER -