IDA-Net: Intensity-distribution aware networks for semantic segmentation of 3D MLS point clouds in indoor corridor environments

Zhipeng Luo, Pengxin Chen, Wenzhong Shi (Corresponding Author), Jonathan Li

Research output: Journal article (academic research, peer-reviewed)

Abstract

Semantic segmentation of 3D mobile laser scanning (MLS) point clouds is a foundational task for scene understanding in many fields. Most existing segmentation methods simply stack common point attributes, such as coordinates and intensity, and ignore their heterogeneity. This paper presents IDA-Net, an intensity-distribution aware network that separately mines the uniqueness and discrepancy of these two modalities for point cloud segmentation in indoor corridor environments. IDA-Net consists of two key components. First, an intensity-distribution aware (IDA) descriptor is proposed to mine the intensity distribution pattern; it outputs a multi-channel mask for each point that represents the intensity distribution information. Second, a two-stage embedding network is designed to fuse the coordinate and intensity information efficiently; it includes a guiding operation in the training stage and a refining operation in the testing stage. IDA-Net was evaluated on two indoor corridor areas. Experimental results show that the proposed method significantly improves segmentation performance: with a KPConv backbone, IDA-Net achieves mIoU scores of 90.58% and 88.94% on the two testing areas, respectively, demonstrating the effectiveness of the designed IDA descriptor and two-stage embedding network.
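
The abstract describes the IDA descriptor only at a high level. The sketch below illustrates one plausible reading, assuming the "multi-channel mask" is a normalized histogram of intensities over each point's local neighborhood; the function name, neighborhood size k, and bin count n_bins are illustrative assumptions, not the paper's exact formulation.

    # Minimal sketch of an intensity-distribution aware (IDA) style descriptor.
    # Assumption: the per-point multi-channel mask is a normalized histogram of
    # intensities within the point's k-nearest neighborhood. The paper's actual
    # descriptor may differ.
    import numpy as np
    from scipy.spatial import cKDTree

    def ida_descriptor(xyz, intensity, k=16, n_bins=8):
        """Return an (N, n_bins) intensity-distribution mask per point.

        xyz       : (N, 3) point coordinates
        intensity : (N,)   per-point intensity, assumed normalized to [0, 1]
        """
        tree = cKDTree(xyz)
        _, nbr_idx = tree.query(xyz, k=k)      # (N, k) neighbor indices
        nbr_int = intensity[nbr_idx]           # (N, k) neighbor intensities

        # Bin each point's neighborhood intensities into n_bins channels.
        bin_idx = np.clip((nbr_int * n_bins).astype(int), 0, n_bins - 1)
        mask = np.zeros((xyz.shape[0], n_bins), dtype=np.float32)
        rows = np.arange(xyz.shape[0])[:, None]
        np.add.at(mask, (rows, bin_idx), 1.0)
        return mask / k                        # normalize to a distribution

Such a mask could then be concatenated with, or used to gate, the coordinate features inside a backbone such as KPConv, which matches the abstract's description of fusing the two modalities separately rather than naively stacking them.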
Original language: English
Article number: 102904
Journal: International Journal of Applied Earth Observation and Geoinformation
Publication status: Published - 1 Aug 2022
