Rendering complex scenes based on spatial subdivision, object-based depth mesh, and occlusion culling

Chih Chun Chen*, Bo Yin Lee, Jung-Hong Chuang, Wei Wen Feng, Ting Chiou

*Corresponding author of this work

Research output: Conference article, peer-reviewed

Abstract

In this paper, we combine geometry-based and image-based rendering techniques to develop a VR navigation system whose efficiency is relatively independent of scene complexity. The system has two stages. In the preprocessing stage, the x-y plane of a 3D scene is partitioned into equal-sized hexagonal navigation cells. For each navigation cell, every object outside the cell is associated with either a LOD mesh or an object-based depth mesh, depending on its self-occlusion error. An object whose error is larger than a user-specified threshold is associated with a LOD mesh of an appropriate resolution. An object whose error is smaller than the threshold is associated with a depth mesh that is reduced from its original mesh based on the silhouette and depth information of its image rendered from the cell center. All LOD meshes are then culled by a conservative back-face computation, and all LOD and depth meshes are culled by a conservative visibility computation, both aiming to remove polygons that are invisible from every point inside the cell. At the run-time stage, LOD meshes are rendered normally, while depth meshes are rendered by texture mapping with their cached images. Techniques for run-time back-face culling and occlusion culling can be easily incorporated. Our experimental results demonstrate fast frame rates for complex environments with an acceptable loss of quality.
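The per-object decision in the preprocessing stage can be sketched as a simple threshold test. The sketch below is illustrative only, assuming the abstract's description; the names (`choose_representation`, `SELF_OCCLUSION_THRESHOLD`) and the error scale are hypothetical, not from the paper.

```python
# Hypothetical sketch of the per-cell preprocessing rule described above:
# each object outside a navigation cell gets either a geometry-based LOD
# mesh or an image-based depth mesh, based on its self-occlusion error.
SELF_OCCLUSION_THRESHOLD = 0.05  # user-specified tolerance (assumed scale)

def choose_representation(self_occlusion_error,
                          threshold=SELF_OCCLUSION_THRESHOLD):
    """Pick a representation for one object, as seen from one cell."""
    if self_occlusion_error > threshold:
        # Large self-occlusion: an image-based stand-in would exhibit
        # disocclusion artifacts, so keep a LOD mesh of suitable resolution.
        return "lod_mesh"
    # Small self-occlusion: a depth mesh reduced from the image rendered
    # at the cell center is a faithful but much cheaper substitute.
    return "depth_mesh"
```

At run time the two branches map directly to the two rendering paths in the abstract: `"lod_mesh"` objects are rendered normally, `"depth_mesh"` objects by texture mapping with their cached images.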

Original language: English
Pages (from-to): 45-54
Number of pages: 10
Journal: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 4756
Publication status: Published - 2002
Event: Third International Conference on Virtual Reality and Its Application in Industry - Hangzhou, China
Duration: 9 Apr 2002 - 12 Apr 2002
