BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes

Chih Hai Su, Chih Yao Hu, Shr Ruei Tsai, Jie Ying Lee, Chin Yang Lin, Yu Lun Liu

Research output: Conference contribution, peer-reviewed

Abstract

While Neural Radiance Fields (NeRFs) have demonstrated exceptional quality, their protracted training duration remains a limitation. Generalizable and MVS-based NeRFs, although capable of mitigating training time, often incur tradeoffs in quality. This paper presents a novel approach called BoostMVSNeRFs to enhance the rendering quality of MVS-based NeRFs in large-scale scenes. We first identify limitations in MVS-based NeRF methods, such as restricted viewport coverage and artifacts due to limited input views. Then, we address these limitations by proposing a new method that selects and combines multiple cost volumes during volume rendering. Our method does not require training and can adapt to any MVS-based NeRF method in a feed-forward fashion to improve rendering quality. Furthermore, our approach is also end-to-end trainable, allowing fine-tuning on specific scenes. We demonstrate the effectiveness of our method through experiments on large-scale datasets, showing significant rendering quality improvements in large-scale scenes and unbounded outdoor scenarios.
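The core idea in the abstract, selecting and combining multiple cost volumes during volume rendering, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the coverage-based blending weights, and all array shapes are assumptions introduced here for clarity.

```python
import numpy as np

def render_ray_multi_cost_volume(colors, sigmas, valid_masks, deltas):
    """Blend per-sample predictions from several cost volumes, then
    alpha-composite along the ray (standard volume rendering).

    colors:      (V, S, 3) RGB predicted by each of V cost volumes at S samples
    sigmas:      (V, S)    density predicted by each cost volume
    valid_masks: (V, S)    1 where a sample lies inside that volume's frustum
    deltas:      (S,)      distances between adjacent ray samples
    """
    # Blend per-sample predictions, favoring cost volumes that actually
    # cover the sample (a simple proxy for a selection/combination scheme).
    w = valid_masks / np.clip(valid_masks.sum(axis=0, keepdims=True), 1e-8, None)
    color = (w[..., None] * colors).sum(axis=0)   # (S, 3)
    sigma = (w * sigmas).sum(axis=0)              # (S,)

    # Standard volume rendering: front-to-back alpha compositing.
    alpha = 1.0 - np.exp(-sigma * deltas)                        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)  # (3,) rendered RGB
```

Because the blending happens per ray sample, samples outside one cost volume's frustum can still draw on another volume, which is the intuition behind the claimed improvement in viewport coverage.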

Original language: English
Title of host publication: Proceedings - SIGGRAPH 2024 Conference Papers
Editors: Stephen N. Spencer
Publisher: Association for Computing Machinery, Inc
ISBN (electronic): 9798400705250
DOIs
Publication status: Published - 13 Jul 2024
Event: SIGGRAPH 2024 Conference Papers - Denver, United States
Duration: 28 Jul 2024 - 1 Aug 2024

Publication series

Name: Proceedings - SIGGRAPH 2024 Conference Papers

Conference

Conference: SIGGRAPH 2024 Conference Papers
Country/Territory: United States
City: Denver
Period: 28/07/24 - 1/08/24
