Model-based synthetic view generation from a monocular video sequence

Chun-Jen Tsai*, Peter Eisert, Bernd Girod, Aggelos K. Katsaggelos

*Corresponding author for this work

Research output: peer-reviewed

6 Citations (Scopus)

Abstract

In this paper, a model-based multi-view image generation system for video conferencing is presented. The system assumes that a 3-D model of the person in front of the camera is available. It extracts texture from the image sequence of the speaking person and maps it onto the static 3-D model during the video-conference session. Since only incrementally updated texture information is transmitted during the session, the bandwidth requirement is very small. The experimental results indicate that the proposed system is very promising for practical applications.
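The incremental texture update described in the abstract can be sketched as follows. This is a minimal illustration only: the block size, change threshold, and function names are assumptions for the sketch and are not taken from the paper.

```python
# Hypothetical sketch: transmit only texture blocks that changed since the
# last sent frame, so bandwidth stays low during the conference session.
# Block size and threshold are illustrative assumptions.

def changed_blocks(prev, curr, block=2, threshold=10):
    """Return (row, col, pixels) for each block of curr that differs
    from prev by more than threshold in any pixel."""
    updates = []
    h, w = len(curr), len(curr[0])
    for r in range(0, h, block):
        for c in range(0, w, block):
            diff = max(
                abs(curr[i][j] - prev[i][j])
                for i in range(r, min(r + block, h))
                for j in range(c, min(c + block, w))
            )
            if diff > threshold:
                pixels = [row[c:c + block] for row in curr[r:r + block]]
                updates.append((r, c, pixels))
    return updates


def apply_updates(texture, updates):
    """Patch the receiver-side texture map with the transmitted blocks."""
    for r, c, pixels in updates:
        for i, row in enumerate(pixels):
            texture[r + i][c:c + len(row)] = row
    return texture
```

At the receiver, `apply_updates` patches the static texture map in place, so only the small `updates` list ever crosses the network.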

Original language: English
Pages: 444-447
Number of pages: 4
DOIs
Publication status: Published - 1997
Event: Proceedings of the 1997 International Conference on Image Processing. Part 2 (of 3) - Santa Barbara, CA, USA
Duration: 26 Oct 1997 – 29 Oct 1997

Conference

Conference: Proceedings of the 1997 International Conference on Image Processing. Part 2 (of 3)
City: Santa Barbara, CA, USA
Period: 26/10/97 – 29/10/97

