TY - GEN
T1 - Deep Depth Fusion for Black, Transparent, Reflective and Texture-Less Objects
AU - Chai, Chun Yu
AU - Wu, Yu Po
AU - Tsao, Shiao Li
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Structured-light and stereo cameras, which are widely used to construct point clouds for robotic applications, have different limitations in estimating depth values. Structured-light cameras fail on black, transparent, and reflective objects, which interfere with the light path; stereo cameras fail on texture-less objects. In this work, we propose a depth fusion model that complements these two types of methods to generate high-quality point clouds for short-range robotic applications. The model first determines fusion weights from the two input depth images and then refines the fused depth using color features. We construct a dataset containing the aforementioned challenging objects and report the performance of our proposed model. The results reveal that our method reduces the average L1 distance of depth prediction by 75% and 52% compared with the original depth outputs of the structured-light camera and the stereo model, respectively. A noticeable improvement in the Iterative Closest Point (ICP) algorithm can be achieved by using the refined depth images produced by our method.
AB - Structured-light and stereo cameras, which are widely used to construct point clouds for robotic applications, have different limitations in estimating depth values. Structured-light cameras fail on black, transparent, and reflective objects, which interfere with the light path; stereo cameras fail on texture-less objects. In this work, we propose a depth fusion model that complements these two types of methods to generate high-quality point clouds for short-range robotic applications. The model first determines fusion weights from the two input depth images and then refines the fused depth using color features. We construct a dataset containing the aforementioned challenging objects and report the performance of our proposed model. The results reveal that our method reduces the average L1 distance of depth prediction by 75% and 52% compared with the original depth outputs of the structured-light camera and the stereo model, respectively. A noticeable improvement in the Iterative Closest Point (ICP) algorithm can be achieved by using the refined depth images produced by our method.
UR - http://www.scopus.com/inward/record.url?scp=85092710428&partnerID=8YFLogxK
U2 - 10.1109/ICRA40945.2020.9196894
DO - 10.1109/ICRA40945.2020.9196894
M3 - Conference contribution
AN - SCOPUS:85092710428
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 6766
EP - 6772
BT - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Y2 - 31 May 2020 through 31 August 2020
ER -