Abstract
Deep neural networks (DNNs) are widely used for illumination estimation, but training them is time-consuming and requires sensor-specific data collection. Our proposed method uses a dual-mapping strategy and requires only a simple white point from a test sensor under a D65 condition. This allows us to derive a mapping matrix, enabling the reconstruction of image data and illuminants. In the second mapping phase, we transform the reconstructed image data into sparse features, which are then optimized with a lightweight multi-layer perceptron (MLP) model using the reconstructed illuminants as ground truths. This approach effectively reduces sensor discrepancies and delivers performance on par with leading cross-sensor methods, while requiring only a small amount of memory (∼0.003 MB) and ∼1 hour of training on an RTX 3070 Ti GPU. More importantly, the method runs very fast, taking ∼0.3 ms on a GPU and ∼1 ms on a CPU, and is insensitive to the input image resolution. It therefore offers a practical solution to the significant challenge of data recollection faced by the industry.
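The sketch below illustrates the dual-mapping idea described above under stated assumptions: a diagonal mapping matrix derived from the D65 white points of the training and test sensors, a hypothetical sparse-feature extractor, and a small MLP trained on the reconstructed illuminants. All names, shapes, and the specific feature choice are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the dual-mapping strategy (assumptions, not the paper's code).
import numpy as np
import torch
import torch.nn as nn

# --- First mapping: derive a matrix from D65 white points -------------------
# w_train: white point of the training sensor under D65 (from its dataset)
# w_test:  white point of the test sensor under D65 (the only required measurement)
w_train = np.array([0.45, 1.00, 0.55])        # hypothetical RGB white points
w_test = np.array([0.50, 1.00, 0.48])
M = np.diag(w_test / w_train)                 # assumed diagonal mapping matrix

def reconstruct(rgb):
    """Map training-sensor RGB values into the test sensor's space."""
    return rgb @ M.T

# Reconstruct image data and ground-truth illuminants for the test sensor.
train_images = np.random.rand(256, 64, 3)     # placeholder training images (pixels x RGB)
train_illums = np.random.rand(256, 3)         # placeholder ground-truth illuminants
recon_images = reconstruct(train_images.reshape(-1, 3)).reshape(256, 64, 3)
recon_illums = reconstruct(train_illums)

# --- Second mapping: sparse features -> lightweight MLP ---------------------
def sparse_features(img, k=8):
    """Hypothetical sparse feature: chromaticities of the k brightest pixels."""
    chrom = img / (img.sum(axis=-1, keepdims=True) + 1e-8)
    idx = np.argsort(img.sum(axis=-1))[-k:]
    return chrom[idx].reshape(-1)

X = torch.tensor(np.stack([sparse_features(im) for im in recon_images]),
                 dtype=torch.float32)
y = torch.tensor(recon_illums / recon_illums.sum(axis=1, keepdims=True),
                 dtype=torch.float32)

# Lightweight MLP trained with the reconstructed illuminants as targets.
mlp = nn.Sequential(nn.Linear(X.shape[1], 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(200):                          # short illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(X), y)
    loss.backward()
    opt.step()
```

Because the MLP only sees a short sparse-feature vector rather than the full image, inference cost is independent of input resolution, which is consistent with the runtime claim in the abstract.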
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 96-101 |
| Number of pages | 6 |
| Journal | Final Program and Proceedings - IS&T/SID Color Imaging Conference |
| Volume | 31 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Nov 2023 |
| Event | 31st Color and Imaging Conference: Color Science and Engineering Systems, Technologies, and Applications (CIC 2023), Paris, France, 13 Nov 2023 – 17 Nov 2023 |
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition
- Electronic, Optical and Magnetic Materials
- Atomic and Molecular Physics, and Optics