Realistic 3D facial animation parameters from mirror-reflected multi-view video

I-Chen Lin*, Jeng Sheng Yeh, Ming Ouhyoung

*Corresponding author for this work

Research output: Paper › peer-review

11 Citations (Scopus)

Abstract

In this paper, a robust, accurate and inexpensive approach to estimating 3D facial motion from multi-view video is proposed, in which two mirrors placed near the cheeks reflect the side views of markers on the subject's face. Properties of mirrored images are exploited to significantly simplify the proposed tracking algorithm, while a Kalman filter is employed to reduce noise and to predict the positions of occluded markers. More than 50 markers on the face are tracked continuously at 30 frames per second. The estimated 3D facial motion data have been applied in practice to our facial animation system. In addition, the facial motion dataset can also be applied to the analysis of co-articulation effects and facial expressions, and to audio-visual hybrid recognition systems.
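The abstract mentions a Kalman filter that smooths marker trajectories and predicts the positions of occluded markers. The paper's actual filter design is not reproduced here; the following is a minimal constant-velocity sketch in Python/NumPy, where the per-marker state layout and the noise parameters Q and R are assumptions chosen only for illustration.

import numpy as np

# Minimal constant-velocity Kalman filter for one 3D marker (illustrative only;
# not the authors' implementation). State: [x, y, z, vx, vy, vz]; measurement: [x, y, z].
dt = 1.0 / 30.0                                # 30 frames per second, as in the paper

F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                     # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observe position only
Q = 1e-4 * np.eye(6)                           # process noise covariance (assumed)
R = 1e-2 * np.eye(3)                           # measurement noise covariance (assumed)

x = np.zeros(6)                                # initial state
P = np.eye(6)                                  # initial state covariance

def predict(x, P):
    """Propagate the state one frame ahead (also used when the marker is occluded)."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Fuse a measured 3D marker position z into the state."""
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Per frame: always predict, then update only if the marker was actually observed.
x, P = predict(x, P)
x, P = update(x, P, np.array([0.01, 0.02, 0.00]))   # example measurement

When a marker is lost (e.g. occluded from all mirror views), running only the predict step for a few frames yields the kind of position prediction the abstract describes, at the cost of growing uncertainty in P.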

Original language: English
Pages: 2-11
Number of pages: 10
DOIs
Publication status: Published - 1 Dec 2001
Event: 14th Conference on Computer Animation - Seoul, South Korea
Duration: 7 Nov 2001 → 8 Nov 2001

Conference

Conference: 14th Conference on Computer Animation
Country/Territory: South Korea
City: Seoul
Period: 7/11/01 → 8/11/01
