TY - GEN
T1 - A dense semantic mapping system based on CRF-RNN network
AU - Cheng, Jiyu
AU - Sun, Yuxiang
AU - Meng, Max Q.H.
N1 - Funding Information:
Jiyu Cheng, Yuxiang Sun and Max Q.-H. Meng are with the Robotics and Perception Laboratory, Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China. email: {jycheng, yxsun, qhmeng}@ee.cuhk.edu.hk. This project is partially supported by the Shenzhen Science and Technology Program No. JCYJ20170413161616163 and RGC GRF grants CUHK 415512, CUHK 415613 and CUHK 14205914, CRF grant CUHK6/CRF/13G, ITC ITF grant ITS/236/15 and CUHK VC discretional fund #4930765, awarded to Prof. Max Q.-H. Meng.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/8/30
Y1 - 2017/8/30
N2 - The geometric structure and appearance information of an environment are the main outputs of Visual Simultaneous Localization and Mapping (Visual SLAM) systems. They serve as fundamental knowledge for robotic applications in unknown environments. Nowadays, more and more robotic applications require semantic information in visual maps to achieve better performance. However, most current Visual SLAM systems are not equipped with semantic annotation capability. To address this problem, we develop in this paper a novel system that builds 3-D visual maps annotated with semantic information. We employ the CRF-RNN algorithm for semantic segmentation and integrate it with ORB-SLAM to achieve semantic mapping. To obtain real-scale 3-D visual maps, we use RGB-D data as the input to our system. We test our semantic mapping system on our self-generated RGB-D dataset. The experimental results demonstrate that our system is able to reliably annotate semantic information in the resulting 3-D point-cloud maps.
AB - The geometric structure and appearance information of an environment are the main outputs of Visual Simultaneous Localization and Mapping (Visual SLAM) systems. They serve as fundamental knowledge for robotic applications in unknown environments. Nowadays, more and more robotic applications require semantic information in visual maps to achieve better performance. However, most current Visual SLAM systems are not equipped with semantic annotation capability. To address this problem, we develop in this paper a novel system that builds 3-D visual maps annotated with semantic information. We employ the CRF-RNN algorithm for semantic segmentation and integrate it with ORB-SLAM to achieve semantic mapping. To obtain real-scale 3-D visual maps, we use RGB-D data as the input to our system. We test our semantic mapping system on our self-generated RGB-D dataset. The experimental results demonstrate that our system is able to reliably annotate semantic information in the resulting 3-D point-cloud maps.
UR - http://www.scopus.com/inward/record.url?scp=85031698457&partnerID=8YFLogxK
U2 - 10.1109/ICAR.2017.8023671
DO - 10.1109/ICAR.2017.8023671
M3 - Conference article published in proceedings or book
AN - SCOPUS:85031698457
T3 - 2017 18th International Conference on Advanced Robotics, ICAR 2017
SP - 589
EP - 594
BT - 2017 18th International Conference on Advanced Robotics, ICAR 2017
PB - IEEE
T2 - 18th International Conference on Advanced Robotics, ICAR 2017
Y2 - 10 July 2017 through 12 July 2017
ER -