TY - JOUR
T1 - Globally and locally semantic colorization via exemplar-based broad-GAN
AU - Li, Haoxuan
AU - Sheng, Bin
AU - Li, Ping
AU - Ali, Riaz
AU - Chen, C. L. Philip
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 61872241 and Grant 61572316, in part by the National Key Research and Development Program of China under Grant 2019YFB1703600, in part by the Science and Technology Commission of Shanghai Municipality under Grant 18410750700 and Grant 17411952600, and in part by The Hong Kong Polytechnic University under Grant P0030419, Grant P0030929, and Grant P0035358.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/10
Y1 - 2021/10
N2 - Given a target grayscale image and a reference color image, exemplar-based image colorization aims to generate a visually natural-looking color image by transferring meaningful color information from the reference image to the target image. It remains a challenging problem due to the differences in semantic content between the target image and the reference image. In this paper, we present a novel globally and locally semantic colorization method called exemplar-based conditional broad-GAN, a broad generative adversarial network (GAN) framework, to address this limitation. Our colorization framework is composed of two sub-networks: the match sub-net and the colorization sub-net. In the match sub-net, we reconstruct the target image with a dictionary-based sparse representation, where the dictionary consists of features extracted from the reference image. To enforce global-semantic and local-structure self-similarity constraints, a global-local affinity energy is employed to constrain the sparse representation for matching consistency. Then, the matching information from the match sub-net is fed into the colorization sub-net as perceptual information for the conditional broad-GAN to facilitate personalized results. Finally, inspired by the observation that a broad learning system is able to extract semantic features efficiently, we further introduce a broad learning system into the conditional GAN and propose a novel loss, which substantially improves the training stability and the semantic similarity between the target image and the ground truth. Extensive experiments show that our colorization approach outperforms state-of-the-art methods, both perceptually and semantically.
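N1 - Illustrative sketch: the match sub-net described in the abstract rests on dictionary-based sparse coding of target features over a reference-derived dictionary. The minimal Python sketch below shows that general idea only (sparse reconstruction of target luminance patches over reference patches via OMP, then color transfer with the same sparse weights). It is not the authors' implementation: the function name match_colors and all parameter values are illustrative assumptions, raw luminance patches stand in for learned semantic features, and the global-local affinity energy and the broad-GAN colorization sub-net are omitted.

import numpy as np
from sklearn.decomposition import SparseCoder
from sklearn.feature_extraction.image import extract_patches_2d

def match_colors(target_gray, ref_gray, ref_ab, patch=8, n_atoms=2000, n_nonzero=5):
    # Build the dictionary from the reference image: each atom is a luminance
    # patch, paired with the chrominance (Lab a/b) at the patch center.
    ref = np.dstack([ref_gray, ref_ab])                          # (h, w, 3): L, a, b
    P = extract_patches_2d(ref, (patch, patch), max_patches=n_atoms, random_state=0)
    D = P[..., 0].reshape(len(P), -1)                            # luminance atoms
    D = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-8)    # unit-norm atoms
    ab_atoms = P[:, patch // 2, patch // 2, 1:]                  # (n_atoms, 2)

    # Sparse-code every target patch over the reference dictionary (OMP).
    X = extract_patches_2d(target_gray, (patch, patch)).reshape(-1, patch * patch)
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(X)                                   # (n_patches, n_atoms)

    # Transfer color: average the atoms' chrominance, weighted by |code|.
    w = np.abs(codes)
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)
    ab = w @ ab_atoms                                            # (n_patches, 2)

    H, W = target_gray.shape
    return ab.reshape(H - patch + 1, W - patch + 1, 2)           # per-patch-center a/b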
KW - generative adversarial networks
KW - broad learning
KW - example-based
KW - Image colorization
KW - image manipulation
UR - http://www.scopus.com/inward/record.url?scp=85117198464&partnerID=8YFLogxK
U2 - 10.1109/TIP.2021.3117061
DO - 10.1109/TIP.2021.3117061
M3 - Journal article
C2 - 34633929
AN - SCOPUS:85117198464
SN - 1057-7149
VL - 30
SP - 8526
EP - 8539
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -