TY - CHAP
T1 - Automatic fetal ultrasound standard plane detection using knowledge transferred recurrent neural networks
AU - Chen, Hao
AU - Dou, Qi
AU - Ni, Dong
AU - Cheng, Jie Zhi
AU - Qin, Jing
AU - Li, Shengli
AU - Heng, Pheng Ann
PY - 2015/1/1
Y1 - 2015/1/1
N2 - Accurate acquisition of fetal ultrasound (US) standard planes is one of the most crucial steps in obstetric diagnosis. The conventional approach to standard plane acquisition requires thorough knowledge of fetal anatomy and intensive manual labor, so automatic approaches are in high demand in clinical practice. However, automatically detecting standard planes containing key anatomical structures in US videos remains challenging due to the high intra-class variation of standard planes. Unlike previous studies that developed a separate method for each anatomical standard plane, we present a general framework to detect standard planes from US videos automatically. Instead of relying on hand-crafted visual features, our framework explores spatio-temporal feature learning with a novel knowledge transferred recurrent neural network (T-RNN), which combines a deep hierarchical visual feature extractor with a temporal sequence learning model. To extract visual features effectively, we propose a joint learning framework with knowledge transfer across multiple tasks to address the problem of limited training data. Extensive experiments on different US standard planes with hundreds of videos corroborate that our method achieves promising results and outperforms state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=84947424557&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-24553-9_62
DO - 10.1007/978-3-319-24553-9_62
M3 - Chapter in an edited book (as author)
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 507
EP - 514
BT - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ER -