TimeNeRF: Building Generalizable Neural Radiance Fields across Time from Few-Shot Input Views

Hsiang Hui Hung, Huu Phu Do, Yung Hui Li, Ching Chun Huang*

*Corresponding author of this work

Research output: Conference contribution › Peer-reviewed

Abstract

We present TimeNeRF, a generalizable neural rendering approach for rendering novel views at arbitrary viewpoints and at arbitrary times, even with few input views. For real-world applications, it is expensive to collect multiple views and inefficient to re-optimize for unseen scenes. Moreover, as the digital realm, particularly the metaverse, strives for increasingly immersive experiences, the ability to model 3D environments that naturally transition between day and night becomes paramount. While current techniques based on Neural Radiance Fields (NeRF) have shown remarkable proficiency in synthesizing novel views, the exploration of NeRF's potential for temporal 3D scene modeling remains limited, with no dedicated datasets available for this purpose. To this end, our approach harnesses the strengths of multi-view stereo, neural radiance fields, and disentanglement strategies across diverse datasets. This equips our model with the capability for generalizability in a few-shot setting, allows us to construct an implicit content radiance field for scene representation, and further enables the building of neural radiance fields at any arbitrary time. Finally, we synthesize novel views of that time via volume rendering. Experiments show that TimeNeRF can render novel views in a few-shot setting without per-scene optimization. Most notably, it excels in creating realistic novel views that transition smoothly across different times, adeptly capturing intricate natural scene changes from dawn to dusk.
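The abstract's final synthesis step, producing novel views via volume rendering, follows the standard NeRF compositing quadrature. A minimal NumPy sketch of that quadrature (illustrative only; the function name and toy values are assumptions, not code from the paper):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray using the
    standard NeRF quadrature (an illustration, not TimeNeRF's implementation)."""
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of each ray segment
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # T_i: transmittance, the probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                        # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)  # expected pixel color

# Toy example: 4 samples along a single ray
sigmas = np.array([0.0, 0.5, 2.0, 5.0])          # densities at each sample
colors = np.array([[1, 0, 0], [0, 1, 0],
                   [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)                        # spacing between samples
pixel = volume_render(sigmas, colors, deltas)
```

In TimeNeRF this compositing is applied to the time-conditioned radiance field, so the same rays yield different colors as the queried time moves from dawn to dusk.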

Original language: English
Title of host publication: MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 253-262
Number of pages: 10
ISBN (Electronic): 9798400706868
DOIs
Publication status: Published - 28 Oct 2024
Event: 32nd ACM International Conference on Multimedia, MM 2024 - Melbourne, Australia
Duration: 28 Oct 2024 - 1 Nov 2024

Publication series

Name: MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia

Conference

Conference: 32nd ACM International Conference on Multimedia, MM 2024
Country/Territory: Australia
City: Melbourne
Period: 28/10/24 - 1/11/24
