Abstract
Texture smoothing aims to smooth out textures in images while retaining the prominent structures. This paper presents a saliency-aware approach to the problem with two key contributions. First, we design a deep saliency network with guided non-local blocks (GNLBs) for learning long-range pixel dependencies, taking the saliency map predicted at a former layer as the guidance image to help suppress non-salient regions in the shallow layers. The GNLB computes the saliency response at a position as a weighted sum of the features at all positions, and enables us to produce results that outperform those of existing deep saliency models. Second, we formulate a joint optimization framework that takes saliency information into account while iteratively separating textures from structures: on the texture layer, we smooth out structures with the help of the saliency information and migrate structures from the texture layer to the structure layer, while on the structure layer, we adopt another deep model to detect edges and use simultaneous sparse coding to push textures back to the texture layer. We tested our method on a rich variety of images and compared it with several state-of-the-art methods. Both visual and quantitative comparisons show that our method better preserves structures while removing the texture components.
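The GNLB described above follows the general non-local-block idea of weighting features at all positions; a minimal PyTorch-style sketch of such a guided block is given below. The class and layer names, and the specific way the predicted saliency map modulates the features, are illustrative assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedNonLocalBlock(nn.Module):
    """Sketch of a guided non-local block: a saliency map predicted at a former
    layer modulates the shallow-layer features before the all-pairs weighting.
    The guidance scheme (elementwise multiplication) is an assumed placeholder."""

    def __init__(self, channels, inter_channels=None):
        super().__init__()
        inter_channels = inter_channels or max(channels // 2, 1)
        self.theta = nn.Conv2d(channels, inter_channels, kernel_size=1)  # query
        self.phi = nn.Conv2d(channels, inter_channels, kernel_size=1)    # key
        self.g = nn.Conv2d(channels, inter_channels, kernel_size=1)      # value
        self.out = nn.Conv2d(inter_channels, channels, kernel_size=1)

    def forward(self, x, guidance):
        # x: (B, C, H, W) shallow-layer features; guidance: (B, 1, H, W) saliency map
        b, c, h, w = x.shape
        guided = x * guidance                       # suppress non-salient regions (assumed scheme)
        q = self.theta(guided).view(b, -1, h * w)   # (B, C', N)
        k = self.phi(guided).view(b, -1, h * w)     # (B, C', N)
        v = self.g(x).view(b, -1, h * w)            # (B, C', N)
        attn = F.softmax(torch.matmul(q.transpose(1, 2), k), dim=-1)   # (B, N, N) pairwise weights
        y = torch.matmul(v, attn.transpose(1, 2)).view(b, -1, h, w)    # weighted sum over all positions
        return x + self.out(y)                      # residual connection
```

The weighted sum over all positions is what gives the block its long-range dependency modelling; the guidance input is the only departure from a standard non-local block.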
Original language | English |
---|---|
Article number | 8585158 |
Pages (from-to) | 2471-2484 |
Number of pages | 14 |
Journal | IEEE Transactions on Visualization and Computer Graphics |
Volume | 26 |
Issue number | 7 |
DOIs | |
Publication status | Published - 1 Jul 2020 |
Keywords
- deep learning
- guided non-local block
- saliency detection
- texture smoothing
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Computer Graphics and Computer-Aided Design