Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images

Chiman Kwan, Xiaolin Zhu, Feng Gao, Bryan Chou, Daniel Perez, Jiang Li, Yuzhong Shen, Krzysztof Koperski, Giovanni Marchisio

Research output: Journal article › Academic research › peer-review

24 Citations (Scopus)

Abstract

Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, the revisit time for the same area may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily. However, the resolution of Planet images is 3.125 m. It would be ideal to fuse images from these two satellites to generate images with both high spatial resolution (2 m) and high temporal resolution (1 or 2 days) for applications such as damage assessment and border monitoring that require quick decisions. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images. These approaches are known as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), and they have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrated that the three aforementioned approaches have comparable performance and can all generate high-quality prediction images.
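The core idea shared by these fusion methods is to learn a relationship between a coarse image (Planet) and a fine image (Worldview) at a time when both are available, then use that relationship to predict a fine image at a later date when only the coarse image exists. A minimal sketch of this idea, loosely in the spirit of the Hybrid Color Mapping approach named in the abstract, is shown below using a global linear band-to-band mapping fitted by least squares. The array shapes, band counts, synthetic data, and the exact form of the mapping are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Sketch (with assumed shapes/band counts) of a linear mapping from
# Planet bands to Worldview bands, fitted on a co-registered pair at
# time t1 and applied to a later Planet image at time t2.

rng = np.random.default_rng(0)

h, w = 32, 32                     # toy image size (pixels)
planet_bands, wv_bands = 4, 8     # assumed band counts

# Synthetic co-registered image pair at t1, flattened to (pixels, bands).
planet_t1 = rng.random((h * w, planet_bands))
ones = np.ones((h * w, 1))
true_map = rng.random((planet_bands + 1, wv_bands))  # hidden linear relation
wv_t1 = np.hstack([planet_t1, ones]) @ true_map      # simulated WV at t1

# Fit the mapping T (with a bias column) by least squares:
# WV_t1 ≈ [Planet_t1, 1] @ T
A = np.hstack([planet_t1, ones])
T, *_ = np.linalg.lstsq(A, wv_t1, rcond=None)

# Apply T to the Planet image at t2 to predict a WV-like image at t2.
planet_t2 = rng.random((h * w, planet_bands))
wv_t2_pred = np.hstack([planet_t2, ones]) @ T

print(wv_t2_pred.shape)  # (1024, 8)
```

In practice the published methods are considerably richer: STARFM and FSDAF weight spectrally similar neighboring pixels and model temporal change, rather than applying a single global transform as in this toy example.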

Original language: English
Article number: 1051
Journal: Sensors (Switzerland)
Volume: 18
Issue number: 4
DOIs
Publication status: Published - 1 Apr 2018

Keywords

  • Forward prediction
  • Image fusion
  • Pansharpening
  • Planet
  • Spatiotemporal
  • Worldview

ASJC Scopus subject areas

  • Analytical Chemistry
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering
