Multi-Domain Learning for Accurate and Few-Shot Color Constancy

Jin Xiao, Shuhang Gu, Lei Zhang

Research output: Conference presentation (peer-reviewed)

Abstract

Color constancy is an important process in the camera pipeline that removes the color bias of a captured image caused by scene illumination. Recently, significant improvements in color constancy accuracy have been achieved by using deep neural networks (DNNs). However, existing DNN-based color constancy methods learn distinct mappings for different cameras, which requires a costly data acquisition process for each camera device. In this paper, we present pioneering work introducing multi-domain learning to the color constancy area. For different camera devices, we train a branch of networks that share the same feature extractor and illuminant estimator, and only employ a camera-specific channel re-weighting module to adapt to camera-specific characteristics. Such a multi-domain learning strategy enables us to benefit from cross-device training data. The proposed multi-domain learning color constancy method achieves state-of-the-art performance on three commonly used benchmark datasets. Furthermore, we also validate the proposed method in a few-shot color constancy setting. Given a new, unseen device with a limited number of training samples, our method is capable of delivering accurate color constancy by merely learning the camera-specific parameters from the few-shot dataset. Our project page is publicly available at https://github.com/msxiaojin/MDLCC.
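The architecture described above can be sketched in miniature: shared components (feature extractor and illuminant estimator) serve all cameras, while only a small per-channel re-weighting vector is camera-specific. A minimal toy sketch, assuming illustrative per-channel-mean "features" and hypothetical weight values (this is not the authors' actual MDLCC implementation):

```python
# Toy sketch of the multi-domain idea: shared extractor/estimator, plus a
# camera-specific channel re-weighting vector per device. All names and
# numeric values here are hypothetical, for illustration only.

def extract_features(image_rgb):
    """Shared feature extractor (toy): per-channel means over pixels."""
    n = len(image_rgb)
    return [sum(p[c] for p in image_rgb) / n for c in range(3)]

def estimate_illuminant(features):
    """Shared illuminant estimator (toy): L1-normalize the features."""
    s = sum(features)
    return [f / s for f in features]

# Camera-specific channel re-weighting vectors (hypothetical values).
# In the few-shot setting, only these few parameters would be learned
# for a new, unseen camera; everything else stays shared.
camera_weights = {
    "cameraA": [1.0, 1.0, 1.0],
    "cameraB": [1.2, 0.9, 1.1],
}

def predict_illuminant(image_rgb, camera):
    feats = extract_features(image_rgb)
    w = camera_weights[camera]
    # Camera-specific step: re-weight the shared features per channel.
    reweighted = [f * wi for f, wi in zip(feats, w)]
    return estimate_illuminant(reweighted)

image = [(0.8, 0.5, 0.3), (0.6, 0.5, 0.4)]  # two pixels, illustrative
est = predict_illuminant(image, "cameraB")
```

The design choice this illustrates: because the extractor and estimator are shared, training data from every camera improves them, and adapting to a new device touches only the tiny re-weighting vector.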
Original language: English
Pages: 3255-3264
Number of pages: 10
DOIs
Publication status: Published - Apr 2020
Event: IEEE Conference on Computer Vision and Pattern Recognition 2020
Duration: 14 Jun 2020 - 19 Jun 2020
http://cvpr2020.thecvf.com/

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition 2020
Abbreviated title: IEEE CVPR 2020
Period: 14/06/20 - 19/06/20
Internet address
