A hybrid color mapping approach to fusing MODIS and Landsat images for forward prediction

Chiman Kwan, Bence Budavari, Feng Gao, Xiaolin Zhu

Research output: Journal article (academic research, peer-reviewed)



We present a simple and efficient approach to fusing MODIS and Landsat images. It is well known that MODIS images have high temporal resolution but low spatial resolution, whereas Landsat images are just the opposite. As in earlier work, our goal is to fuse MODIS and Landsat images to yield images with both high spatial and high temporal resolution. Our approach consists of two steps. First, a mapping is established between two MODIS images, one acquired at an earlier time, t1, and the other at the time of prediction, tp. Second, this mapping is applied to a known Landsat image at t1 to generate a predicted Landsat image at tp. As with the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SpatioTemporal Image-Fusion Model (STI-FM), and the Flexible Spatiotemporal DAta Fusion (FSDAF) approach, only one pair of MODIS and Landsat images is needed for prediction. Using seven performance metrics, experiments involving actual Landsat and MODIS images demonstrated that the proposed approach achieves fusion performance comparable to or better than that of STARFM, STI-FM, and FSDAF.
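The two steps described above can be sketched as follows. This is a minimal illustration, not the paper's exact hybrid color mapping: it assumes a single global affine (linear plus offset) mapping per band, fitted between the two MODIS images by least squares and then applied pixel-wise to the Landsat image at t1. The function name and array layout (height x width x bands) are assumptions for the sketch; the published method may operate on local patches and use a more elaborate mapping.

```python
import numpy as np

def predict_landsat(modis_t1, modis_tp, landsat_t1):
    """Sketch of the two-step forward prediction.

    Step 1: fit an affine band mapping M from the MODIS image at t1
            to the MODIS image at tp via least squares.
    Step 2: apply M to the (higher-resolution) Landsat image at t1
            to predict the Landsat image at tp.

    All inputs are float arrays shaped (height, width, bands); the two
    MODIS images share one grid, the Landsat image may be larger.
    """
    bands = modis_t1.shape[-1]

    # Step 1: solve X @ M ~= Y for M, with a column of ones for the offset.
    X = modis_t1.reshape(-1, bands)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    Y = modis_tp.reshape(-1, bands)
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # Step 2: apply the same affine mapping to every Landsat pixel at t1.
    L = landsat_t1.reshape(-1, bands)
    L = np.hstack([L, np.ones((L.shape[0], 1))])
    return (L @ M).reshape(landsat_t1.shape)
```

Because the mapping is learned only from the two MODIS images, the single known Landsat/MODIS pair at t1 suffices for prediction, matching the one-pair requirement the abstract notes for STARFM, STI-FM, and FSDAF.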
Original language: English
Article number: 520
Journal: Remote Sensing
Issue number: 4
Publication status: Published - 1 Apr 2018


Keywords

  • Data fusion
  • Hybrid color mapping
  • Landsat
  • Remote sensing
  • Super-resolution

ASJC Scopus subject areas

  • Earth and Planetary Sciences (all)
