Deep Multimodal Fusion Network for Semantic Segmentation Using Remote Sensing Image and LiDAR Data

Yangjie Sun, Zhongliang Fu, Chuanxia Sun, Yinglei Hu, Shengyuan Zhang

Research output: Journal article publication › Journal article › Academic research › peer-review

42 Citations (Scopus)


Extracting semantic information from very-high-resolution (VHR) aerial images is a prominent topic in Earth observation research. A growing variety of sensor platforms in remote sensing can provide complementary or enhanced multimodal information, such as optical images, light detection and ranging (LiDAR) point clouds, infrared images, or inertial measurement unit (IMU) data. However, current deep networks for LiDAR and VHR images have not fully exploited the potential of multimodal data: stacked multimodal fusion networks ignore both the structural differences between modalities and the handcrafted statistical characteristics within each modality. For multimodal remote sensing data and its carefully designed handcrafted features, we designed a novel deep multimodal fusion network (MFNet) that can jointly use VHR aerial images, LiDAR data, and the corresponding intramodal features, such as LiDAR-derived features [slope and normalized digital surface model (NDSM)] and imagery-derived features [infrared–red–green (IRRG), normalized difference vegetation index (NDVI), and difference of Gaussians (DoG)]. Technically, we introduce an attention mechanism and multimodal learning to adaptively fuse intermodal and intramodal features. Specifically, we designed a multimodal fusion mechanism, pyramid dilation blocks, and a multilevel feature fusion module. Through these modules, our network achieves adaptive fusion of multimodal features, enlarges the receptive field, and strengthens global-to-local contextual fusion. Moreover, we used a multiscale supervision training scheme to optimize the network. Extensive experiments and ablation studies on the ISPRS semantic labeling dataset and the IEEE GRSS DFC Zeebrugge dataset demonstrate the effectiveness of the proposed MFNet.
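The intramodal handcrafted features named in the abstract are standard remote sensing quantities. As a minimal illustration (not the authors' implementation), NDVI, NDSM, and slope can be computed from raster bands and elevation models as follows; band ranges, cell size, and the finite-difference slope formulation are assumptions:

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized difference vegetation index from NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def ndsm(dsm, dtm):
    """Normalized DSM: surface height above the bare-earth terrain model."""
    return dsm - dtm

def slope(dsm, cell_size=1.0):
    """Slope in degrees from a DSM, via finite differences of elevation."""
    dz_dy, dz_dx = np.gradient(dsm, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```

In practice these layers would be stacked with the IRRG bands to form the multimodal input described in the abstract.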
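The abstract's "adaptive fusion of multimodal features" via attention can be sketched, in heavily simplified form, as a per-channel gate that softmax-normalizes across the image and LiDAR streams. This is a hypothetical numpy sketch, not the paper's module: the real network would learn the gate with convolutional layers rather than use raw global average pooling.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(feat_img, feat_lidar):
    """Adaptively fuse two (C, H, W) feature maps with a channel gate.

    The gate is derived from global average pooling of both modalities and
    softmax-normalized across modalities, so each channel draws more from
    whichever stream carries the stronger response.
    """
    g_img = feat_img.mean(axis=(1, 2))      # (C,)
    g_lidar = feat_lidar.mean(axis=(1, 2))  # (C,)
    gate = softmax(np.stack([g_img, g_lidar], axis=1), axis=1)  # (C, 2)
    w_img = gate[:, 0][:, None, None]
    w_lidar = gate[:, 1][:, None, None]
    return w_img * feat_img + w_lidar * feat_lidar
```

Because the per-channel weights sum to one, fusing two identical feature maps returns the map unchanged; with differing maps, the output interpolates between modalities channel by channel.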

Original language: English
Journal: IEEE Transactions on Geoscience and Remote Sensing
Publication status: Published - Sept 2021

Keywords

  • Feature extraction
  • Image segmentation
  • Laser radar
  • Semantics
  • Sensors
  • Task analysis

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • General Earth and Planetary Sciences


