TY - JOUR
T1 - BiFuse++: Self-Supervised and Efficient Bi-Projection Fusion for 360° Depth Estimation
AU - Wang, Fu-En
AU - Yeh, Yu-Hsuan
AU - Tsai, Yi-Hsuan
AU - Chiu, Wei-Chen
AU - Sun, Min
PY - 2023/5/1
Y1 - 2023/5/1
N2 - Due to the rise of spherical cameras, monocular 360° depth estimation has become an important technique for many applications (e.g., autonomous systems). Consequently, state-of-the-art frameworks for monocular 360° depth estimation, such as the bi-projection fusion in BiFuse, have been proposed. Training such a framework requires a large number of panoramas along with corresponding depth ground truths captured by laser sensors, which greatly increases the cost of data collection. Moreover, since such a data collection procedure is time-consuming, scaling these methods to different scenes becomes a challenge. To this end, self-training a network for monocular depth estimation from 360° videos is one way to alleviate this issue. However, no existing framework incorporates bi-projection fusion into the self-training scheme, which severely limits self-supervised performance, since bi-projection fusion can leverage information from different projection types. In this paper, we propose BiFuse++ to explore the combination of bi-projection fusion and the self-training scenario. Specifically, we propose a new fusion module and a Contrast-Aware Photometric Loss to improve the performance of BiFuse and increase the stability of self-training on real-world videos. We conduct both supervised and self-supervised experiments on benchmark datasets and achieve state-of-the-art performance.
AB - Due to the rise of spherical cameras, monocular 360° depth estimation has become an important technique for many applications (e.g., autonomous systems). Consequently, state-of-the-art frameworks for monocular 360° depth estimation, such as the bi-projection fusion in BiFuse, have been proposed. Training such a framework requires a large number of panoramas along with corresponding depth ground truths captured by laser sensors, which greatly increases the cost of data collection. Moreover, since such a data collection procedure is time-consuming, scaling these methods to different scenes becomes a challenge. To this end, self-training a network for monocular depth estimation from 360° videos is one way to alleviate this issue. However, no existing framework incorporates bi-projection fusion into the self-training scheme, which severely limits self-supervised performance, since bi-projection fusion can leverage information from different projection types. In this paper, we propose BiFuse++ to explore the combination of bi-projection fusion and the self-training scenario. Specifically, we propose a new fusion module and a Contrast-Aware Photometric Loss to improve the performance of BiFuse and increase the stability of self-training on real-world videos. We conduct both supervised and self-supervised experiments on benchmark datasets and achieve state-of-the-art performance.
KW - 360°
KW - monocular depth estimation
KW - omnidirectional vision
UR - http://www.scopus.com/inward/record.url?scp=85152177401&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2022.3203516
DO - 10.1109/TPAMI.2022.3203516
M3 - Article
C2 - 36049011
AN - SCOPUS:85152177401
SN - 0162-8828
VL - 45
SP - 5448
EP - 5460
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 5
ER -