Extraction of 3D Facial Motion Parameters from Mirror-Reflected Multi-View Video for Audio-Visual Synthesis

I-Chen Lin, Jeng-Sheng Yeh, Ming Ouhyoung

Research output: Contribution to conference › Paper › Peer-review

Abstract

The goal of our project is to collect a dataset of 3D facial motion parameters for talking-head synthesis. However, capturing human facial motion is usually expensive in related research, since special devices such as optical or electronic trackers must be used. In this paper, we propose a robust, accurate, and inexpensive approach to estimating human facial motion from mirror-reflected videos. The approach exploits the geometric relationship between the original and mirrored images, and can be more robust than most general-purpose stereo-vision approaches for motion analysis of mirror-reflected videos. A preliminary dataset of facial motion parameters for MPEG-4 and French visemes, together with voice data, has been acquired; the estimated data have also been applied to our facial animation system.
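The geometric principle behind the approach — a planar mirror acts as a virtual second camera, so one physical camera plus a mirror yields a calibrated stereo pair — can be sketched as follows. This is an illustrative reconstruction only: the mirror-plane parameters, the reflection helpers, and the midpoint triangulation are assumptions made for the sketch, not the authors' actual estimation method.

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p about the plane {x : n.x = d}, n a unit normal."""
    return p - 2.0 * (n @ p - d) * n

def reflect_dir(v, n):
    """Reflect a direction vector about a plane with unit normal n."""
    return v - 2.0 * (n @ v) * n

def triangulate_midpoint(o1, r1, o2, r2):
    """Midpoint triangulation: closest point between rays o1+s*r1 and o2+t*r2."""
    b = o2 - o1
    d11, d12, d22 = r1 @ r1, r1 @ r2, r2 @ r2
    denom = d11 * d22 - d12 ** 2          # zero only for parallel rays
    s = (d22 * (r1 @ b) - d12 * (r2 @ b)) / denom
    t = (d12 * (r1 @ b) - d11 * (r2 @ b)) / denom
    return 0.5 * ((o1 + s * r1) + (o2 + t * r2))

# Hypothetical setup: camera at the origin, mirror on the plane x = 1.
n, d = np.array([1.0, 0.0, 0.0]), 1.0
P = np.array([0.2, 0.1, 2.0])             # unknown 3D feature (ground truth)

# The camera sees the feature directly, and also sees its mirror image.
ray_direct = P / np.linalg.norm(P)
P_mirror = reflect_point(P, n, d)
ray_mirror = P_mirror / np.linalg.norm(P_mirror)

# Reflecting the mirror-image ray about the plane turns it into a ray from
# a "virtual" camera (the real camera reflected about the mirror plane).
o_virtual = reflect_point(np.zeros(3), n, d)
r_virtual = reflect_dir(ray_mirror, n)

# Intersecting the direct ray with the virtual-camera ray recovers P.
X = triangulate_midpoint(np.zeros(3), ray_direct, o_virtual, r_virtual)
```

Because the virtual camera is an exact mirror of the real one, the "stereo baseline" is known from the mirror plane alone, which is one reason a mirror rig can be more constrained (and cheaper) than a general two-camera setup.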

Original language: English
Pages: 66-71
Number of pages: 6
State: Published - 2001
Event: 2001 International Conference on Auditory-Visual Speech Processing, AVSP 2001 - Aalborg, Denmark
Duration: 7 Sep 2001 - 9 Sep 2001

