Abstract
Video composition aims to clone a patch from a source video into a target scene so as to create a seamless and harmonious sequence of blended frames. Previous work on video composition usually suffers from artifacts around the blending region and from a loss of spatial-temporal consistency when the illumination intensity varies between the source and target videos. We propose an illumination-guided video composition method based on a unified spatial and temporal optimization framework. Our method produces globally consistent composition results while maintaining temporal coherence. We first compute a spatial-temporal blending boundary iteratively. For each frame, the gradient fields of the target and source frames are mixed adaptively based on gradient magnitudes and inter-frame color differences. Temporal consistency is further enforced by optimizing luminance gradients across all composited frames. Moreover, we extend mean-value cloning by smoothing discrepancies between the source and target frames, and we suppress color distribution overflow exponentially to reduce falsely blended pixels. Extensive experiments demonstrate the effectiveness and high-quality results of our illumination-guided composition.
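The adaptive gradient mixing described in the abstract can be illustrated with a short sketch. The paper's exact weighting is not reproduced here; the rule below (keep the source gradient where it is stronger and temporally stable) and the names `mixed_gradient`, `prev_src`, and the threshold `tau` are hypothetical stand-ins for the method's selection based on gradients and inter-frame color differences.

```python
import numpy as np

def mixed_gradient(src, tgt, prev_src=None, tau=0.1):
    """Hypothetical per-frame mixed-gradient selection.

    src, tgt : 2-D float arrays (one channel of the source patch and
               the target region it is pasted into).
    prev_src : the same source region in the previous frame, used as a
               crude inter-frame color-difference term.
    tau      : threshold on the temporal difference (assumed value).
    """
    def grad(f):
        # forward differences in x and y, padded to keep the shape
        gx = np.diff(f, axis=1, append=f[:, -1:])
        gy = np.diff(f, axis=0, append=f[-1:, :])
        return gx, gy

    sx, sy = grad(src)
    tx, ty = grad(tgt)

    s_mag = np.hypot(sx, sy)   # source gradient magnitude
    t_mag = np.hypot(tx, ty)   # target gradient magnitude

    # inter-frame color difference of the source (zero for the first frame)
    dt = np.abs(src - prev_src) if prev_src is not None else np.zeros_like(src)

    # keep the source gradient where it is stronger and temporally stable;
    # fall back to the target gradient elsewhere
    use_src = (s_mag > t_mag) & (dt < tau)
    gx = np.where(use_src, sx, tx)
    gy = np.where(use_src, sy, ty)
    return gx, gy
```

In a full pipeline, a selection of this kind would run per frame and per channel inside the iteratively refined blending boundary, and the chosen gradients would serve as the guidance field of a gradient-domain (Poisson-style) solve that produces the composited frame.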
| Original language | English |
| --- | --- |
| Pages (from-to) | 5077-5090 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 28 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - Oct 2019 |
Keywords
- illumination aware
- image cloning
- gradient fields
- video composition
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Computer Graphics and Computer-Aided Design