TY - GEN
T1 - Pay attention to devils: A photometric stereo network for better details
AU - Ju, Yakun
AU - Lam, Kin Man
AU - Chen, Yang
AU - Qi, Lin
AU - Dong, Junyu
N1 - Funding Information:
The work was supported by the National Key R&D Program of China under Grant (2018AAA0100602), the National Key Scientific Instrument and Equipment Development Projects of China (41927805), the National Natural Science Foundation of China (61501417, 61976123), and the Joint Funds of the National Natural Science Foundation of China-Shandong (U1706218). We thank Guanying Chen for code and help. We also thank Hiroaki Santo for his help in providing comparison results.
Publisher Copyright:
© 2020 Inst. Sci. inf., Univ. Defence in Belgrade. All rights reserved.
PY - 2021/1
Y1 - 2021/1
N2 - We present an attention-weighted loss in a photometric stereo neural network to improve 3D surface recovery accuracy in complex-structured areas, such as edges and crinkles, where existing learning-based methods often fail. Instead of using a uniform penalty for all pixels, our method employs an attention-weighted loss, learned in a self-supervised manner for each pixel, avoiding blurry reconstruction results in such difficult regions. The network first estimates a surface normal map and an adaptive attention map; the latter is then used to calculate a pixel-wise attention-weighted loss that focuses on complex regions. In these regions, the attention-weighted loss assigns higher weights to the detail-preserving gradient loss to produce clear surface reconstructions. Experiments on real datasets show that our approach significantly outperforms traditional photometric stereo algorithms and state-of-the-art learning-based methods.
AB - We present an attention-weighted loss in a photometric stereo neural network to improve 3D surface recovery accuracy in complex-structured areas, such as edges and crinkles, where existing learning-based methods often fail. Instead of using a uniform penalty for all pixels, our method employs an attention-weighted loss, learned in a self-supervised manner for each pixel, avoiding blurry reconstruction results in such difficult regions. The network first estimates a surface normal map and an adaptive attention map; the latter is then used to calculate a pixel-wise attention-weighted loss that focuses on complex regions. In these regions, the attention-weighted loss assigns higher weights to the detail-preserving gradient loss to produce clear surface reconstructions. Experiments on real datasets show that our approach significantly outperforms traditional photometric stereo algorithms and state-of-the-art learning-based methods.
UR - http://www.scopus.com/inward/record.url?scp=85097329991&partnerID=8YFLogxK
M3 - Conference article published in proceeding or book
AN - SCOPUS:85097329991
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 694
EP - 700
BT - Proceedings of the 29th International Joint Conference on Artificial Intelligence, IJCAI 2020
A2 - Bessiere, Christian
PB - International Joint Conferences on Artificial Intelligence
T2 - 29th International Joint Conference on Artificial Intelligence, IJCAI 2020
Y2 - 1 January 2021
ER -