TY - JOUR
T1 - Accurate light field depth estimation with superpixel regularization over partially occluded regions
AU - Chen, Jie
AU - Hou, Junhui
AU - Ni, Yun
AU - Chau, Lap-Pui
N1 - Funding Information:
This work was supported by the ST Engineering-NTU Corporate Lab through the NRF corporate lab@university scheme. The work of J. Hou was supported by the CityU Start-up Grant for New Faculty under Grant 7200537/CS.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/10
Y1 - 2018/10
N2 - Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex scenes with multiple occlusions. To address these challenging issues, we propose a very effective depth estimation framework which focuses on regularizing the initial label confidence map and edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. A series of shrinkage/reinforcement operations is then applied to the label confidence map and edge strength weights over the POBR. We show that after weight manipulations, even a low-complexity weighted least squares model can produce much better depth estimation than the state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.
AB - Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex scenes with multiple occlusions. To address these challenging issues, we propose a very effective depth estimation framework which focuses on regularizing the initial label confidence map and edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. A series of shrinkage/reinforcement operations is then applied to the label confidence map and edge strength weights over the POBR. We show that after weight manipulations, even a low-complexity weighted least squares model can produce much better depth estimation than the state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.
KW - Light field
KW - partially occluded border region
KW - superpixel
KW - weight manipulation
UR - http://www.scopus.com/inward/record.url?scp=85047607238&partnerID=8YFLogxK
U2 - 10.1109/TIP.2018.2839524
DO - 10.1109/TIP.2018.2839524
M3 - Journal article
C2 - 29969399
AN - SCOPUS:85047607238
SN - 1057-7149
VL - 27
SP - 4889
EP - 4900
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
IS - 10
ER -