Practical cross-sensor color constancy using a dual-mapping strategy

Shuwei Yue, Minchen Wei

Research output: Conference article (academic research, peer-reviewed)


Abstract

Deep Neural Networks (DNNs) have been widely used for illumination estimation, but they are time-consuming to train and require sensor-specific data collection. Our proposed method uses a dual-mapping strategy and only requires a simple white point from a test sensor under a D65 condition. This allows us to derive a mapping matrix, enabling the reconstruction of image data and illuminants. In the second mapping phase, we transform the reconstructed image data into sparse features, which are then optimized with a lightweight multi-layer perceptron (MLP) model using the reconstructed illuminants as ground truths. This approach effectively reduces sensor discrepancies and delivers performance on par with leading cross-sensor methods. It requires only a small amount of memory (∼0.003 MB) and takes ∼1 hour to train on an RTX 3070 Ti GPU. More importantly, the method runs very fast, at ∼0.3 ms and ∼1 ms per image on a GPU and CPU respectively, and is not sensitive to the input image resolution. It therefore offers a practical solution to the challenge of data recollection faced by the industry.
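To make the dual-mapping idea in the abstract more concrete, the sketch below illustrates one possible reading of it, assuming a diagonal (von Kries-style) first mapping derived from the two sensors' D65 white points, a sparse chromaticity-histogram feature for the second mapping, and a small MLP regressor. The function names, layer sizes, and feature definition are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a dual-mapping cross-sensor pipeline (not the paper's code).
import numpy as np
import torch
import torch.nn as nn


def derive_mapping_matrix(white_train_d65, white_test_d65):
    """First mapping (assumed form): a diagonal matrix that takes
    training-sensor RGB responses into the test sensor's space, derived
    only from the two sensors' white points under D65."""
    return np.diag(np.asarray(white_test_d65, dtype=np.float64) /
                   np.asarray(white_train_d65, dtype=np.float64))


def reconstruct(image_rgb, illuminant_rgb, M):
    """Reconstruct training image data and ground-truth illuminants in the
    test-sensor space using the derived matrix M."""
    img = image_rgb.reshape(-1, 3) @ M.T
    illum = np.asarray(illuminant_rgb, dtype=np.float64) @ M.T
    return img.reshape(image_rgb.shape), illum / np.linalg.norm(illum)


def sparse_features(image_rgb, bins=8):
    """Second mapping input (assumed): a sparse chromaticity histogram that
    is independent of image resolution and cheap to compute."""
    rgb = image_rgb.reshape(-1, 3).astype(np.float64)
    rgb = rgb[rgb.sum(axis=1) > 1e-6]
    chrom = rgb[:, :2] / rgb.sum(axis=1, keepdims=True)
    hist, _, _ = np.histogram2d(chrom[:, 0], chrom[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    hist /= hist.sum() + 1e-9
    return hist.ravel().astype(np.float32)


class IlluminantMLP(nn.Module):
    """Lightweight MLP that regresses a unit-norm illuminant from the sparse
    features, trained against the reconstructed illuminants."""
    def __init__(self, in_dim=64, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)
```

Under these assumptions, the feature vector has a fixed, small size (64 values for an 8×8 histogram), which is consistent with the abstract's claims of a tiny model footprint and insensitivity to input image resolution.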

Original language: English
Pages (from-to): 96-101
Number of pages: 6
Journal: Final Program and Proceedings - IS and T/SID Color Imaging Conference
Volume: 31
Issue number: 1
DOIs
Publication status: Published - Nov 2023
Event: 31st Color and Imaging Conference - Color Science and Engineering Systems, Technologies, and Applications, CIC 2023 - Paris, France
Duration: 13 Nov 2023 - 17 Nov 2023

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electronic, Optical and Magnetic Materials
  • Atomic and Molecular Physics, and Optics
