Color constancy is an important process in the camera pipeline that removes the color bias of captured images caused by scene illumination. Recently, significant improvements in color constancy accuracy have been achieved by using deep neural networks (DNNs). However, existing DNN-based color constancy methods learn distinct mappings for different cameras, which requires a costly data acquisition process for each camera device. In this paper, we present a pioneering effort to introduce multi-domain learning to the color constancy field. For different camera devices, we train a branch of networks that share the same feature extractor and illuminant estimator, and employ only a camera-specific channel re-weighting module to adapt to the camera-specific characteristics. Such a multi-domain learning strategy enables us to benefit from cross-device training data. The proposed multi-domain learning color constancy method achieves state-of-the-art performance on three commonly used benchmark datasets. Furthermore, we also validate the proposed method in a few-shot color constancy setting. Given a new, unseen device with a limited number of training samples, our method is capable of delivering accurate color constancy by merely learning the camera-specific parameters from the few-shot dataset. Our project page is publicly available at https://github.com/msxiaojin/MDLCC.
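The architecture described in the abstract (a shared feature extractor and illuminant estimator combined with a small camera-specific channel re-weighting vector) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, feature dimensions, and the use of global average pooling with a linear extractor are all assumptions introduced here for clarity.

```python
import random

class MultiDomainColorConstancy:
    """Sketch of the multi-domain idea: shared extractor/estimator
    weights, plus one small per-camera re-weighting vector.
    Names, shapes, and the linear/ReLU layers are illustrative
    assumptions, not the paper's actual network."""

    def __init__(self, n_channels=8, seed=0):
        rnd = random.Random(seed)
        # Shared parameters: trained jointly on data from all cameras.
        self.extractor = [[rnd.uniform(-0.1, 0.1) for _ in range(n_channels)]
                          for _ in range(3)]
        self.estimator = [[rnd.uniform(-0.1, 0.1) for _ in range(3)]
                          for _ in range(n_channels)]
        # Camera-specific re-weighting vectors: the only per-device
        # parameters, so a new device can be added few-shot.
        self.reweight = {}
        self.n_channels = n_channels

    def add_camera(self, camera_id):
        # Registering a new device only introduces n_channels scalars.
        self.reweight[camera_id] = [1.0] * self.n_channels

    def estimate_illuminant(self, pixels, camera_id):
        # pixels: list of (r, g, b) tuples; global average pooling
        # of ReLU features stands in for a real CNN backbone.
        pooled = [0.0] * self.n_channels
        for px in pixels:
            for c in range(self.n_channels):
                act = sum(px[k] * self.extractor[k][c] for k in range(3))
                pooled[c] += max(act, 0.0)
        # Re-weight feature channels for this specific camera.
        w = self.reweight[camera_id]
        pooled = [p / len(pixels) * w[c] for c, p in enumerate(pooled)]
        # Shared illuminant estimator maps features to an RGB estimate.
        illum = [sum(pooled[c] * self.estimator[c][k]
                     for c in range(self.n_channels)) for k in range(3)]
        norm = sum(v * v for v in illum) ** 0.5 + 1e-8
        return [v / norm for v in illum]
```

In a few-shot setting, the shared weights would stay frozen and only the `reweight` vector of the new camera would be fitted on its handful of samples; correcting an image then amounts to dividing each pixel by the estimated illuminant.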
|Number of pages||10|
|Publication status||Published - Apr 2020|
|Event||IEEE Conference on Computer Vision and Pattern Recognition 2020|
|Abbreviated title||IEEE CVPR 2020|
|Period||14 Jun 2020 → 19 Jun 2020|