Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision

Ning-Hsu Wang, Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Yu-Lin Chang, Chia-Ping Chen, Kevin Jou

Research output: Conference contribution › peer-review

14 Citations (Scopus)

Abstract

Depth estimation is a long-standing and important task in computer vision. Most previous works estimate depth from input images under the assumption that the images are all-in-focus (AiF), which is rarely the case in real-world applications. A few other works instead take defocus blur into account and treat it as an additional cue for depth estimation. In this paper, we propose a method that estimates not only a depth map but also an AiF image from a set of images captured at different focus positions (known as a focal stack). We design a shared architecture to exploit the relationship between depth and AiF estimation. As a result, the proposed method can be trained either supervisedly with ground-truth depth, or unsupervisedly with AiF images as supervisory signals. We show in various experiments that our method outperforms state-of-the-art methods both quantitatively and qualitatively, and is also more efficient at inference time.
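The link between the two tasks can be illustrated with a minimal sketch: per-pixel weights over the focal stack yield both a depth map (the expected focus position) and an AiF image (the weighted blend of the stack), so an AiF reconstruction loss can supervise depth without ground truth. This is an illustrative toy in NumPy, not the authors' implementation; the names `focal_stack`, `focus_dists`, and `logits` are hypothetical, and in the actual method the logits would come from a learned network.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depth_and_aif(focal_stack, focus_dists, logits):
    """Turn per-pixel scores over a focal stack into a depth map and an AiF image.

    focal_stack: (N, H, W) images taken at N focus positions
    focus_dists: (N,) focus distance of each slice
    logits:      (N, H, W) per-pixel sharpness scores (from a network in practice)
    """
    w = softmax(logits, axis=0)                      # attention over focus positions
    depth = (w * focus_dists[:, None, None]).sum(0)  # expected focus position per pixel
    aif = (w * focal_stack).sum(0)                   # weighted blend approximates AiF
    return depth, aif

# Toy example: every pixel is sharpest at the third focus plane.
N, H, W = 4, 2, 2
focus_dists = np.array([0.5, 1.0, 1.5, 2.0])
rng = np.random.default_rng(0)
stack = rng.random((N, H, W))
logits = np.zeros((N, H, W))
logits[2] = 5.0                                      # plane at 1.5 dominates
depth, aif = depth_and_aif(stack, focus_dists, logits)
# depth lands near focus_dists[2] = 1.5 at every pixel
```

Because the same weights produce both outputs, a loss on `aif` against a ground-truth AiF image backpropagates into the depth estimate, which is the mechanism behind the unsupervised training mode described in the abstract.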

Original language: English
Title of host publication: Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 12601-12611
Number of pages: 11
ISBN (electronic): 9781665428125
DOIs
Publication status: Published - 2021
Event: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021 - Virtual, Online, Canada
Duration: 11 Oct 2021 → 17 Oct 2021

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
ISSN (print): 1550-5499

Conference

Conference: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
Country/Territory: Canada
City: Virtual, Online
Period: 11/10/21 → 17/10/21
