Talking Head Generation Based on 3D Morphable Facial Model

Hsin Yu Shen, Wen Jiin Tsai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper presents a framework for one-shot talking-head video generation that takes a single image of a person and audio clips as input and synthesizes photo-realistic videos with natural head poses and lip motion synchronized to the driving audio. The main idea behind this framework is to use 3D Morphable Model (3DMM) parameters as an intermediate representation for generating the videos. We design an Expression Predictor and a Head Pose Predictor to predict facial-expression and head-pose parameters from audio, respectively, and adopt a 3DMM to extract identity and texture parameters from the reference image. With these parameters, facial images are rendered as auxiliary guidance for video generation. Compared to the widely used facial landmarks, 3DMM parameters are more powerful in representing facial details. Experimental results show that our method generates realistic talking-head videos and outperforms many state-of-the-art methods.
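The abstract names the pipeline components (audio-driven Expression and Head Pose Predictors, 3DMM identity/texture extraction, rendered faces guiding an image-to-image generator) without implementation detail. Purely as a rough illustration of that data flow, a minimal PyTorch sketch might look like the following; all module internals, feature sizes (80-dim audio features, 64-dim expression, 6-dim pose), and the GRU-based design are assumptions for exposition, not the authors' architecture.

```python
# Hedged sketch of the audio-to-3DMM stage described in the abstract.
# AUDIO_DIM/EXPR_DIM/POSE_DIM and the GRU design are illustrative assumptions.
import torch
import torch.nn as nn

AUDIO_DIM, EXPR_DIM, POSE_DIM = 80, 64, 6  # assumed mel features and 3DMM sizes


class ExpressionPredictor(nn.Module):
    """Maps a sequence of audio features to per-frame 3DMM expression parameters."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(AUDIO_DIM, 128, batch_first=True)
        self.head = nn.Linear(128, EXPR_DIM)

    def forward(self, audio):            # audio: (B, T, AUDIO_DIM)
        h, _ = self.rnn(audio)
        return self.head(h)              # (B, T, EXPR_DIM), one vector per frame


class HeadPosePredictor(nn.Module):
    """Maps the same audio features to per-frame head-pose parameters."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(AUDIO_DIM, 64, batch_first=True)
        self.head = nn.Linear(64, POSE_DIM)

    def forward(self, audio):
        h, _ = self.rnn(audio)
        return self.head(h)              # (B, T, POSE_DIM): rotation + translation


if __name__ == "__main__":
    audio = torch.randn(1, 25, AUDIO_DIM)       # 25 frames of audio features
    expr = ExpressionPredictor()(audio)          # -> (1, 25, 64)
    pose = HeadPosePredictor()(audio)            # -> (1, 25, 6)
    print(expr.shape, pose.shape)
```

In the full framework, these predicted parameters would be combined with the identity and texture parameters extracted from the reference image, rendered into coarse facial images by the 3DMM, and passed to an image-to-image translation generator to produce the final photo-realistic frames.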

Original language: English
Title of host publication: 2024 Picture Coding Symposium, PCS 2024 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350358483
DOIs
State: Published - 2024
Event: 2024 Picture Coding Symposium, PCS 2024 - Taichung, Taiwan
Duration: 12 Jun 2024 – 14 Jun 2024

Publication series

Name: 2024 Picture Coding Symposium, PCS 2024 - Proceedings

Conference

Conference: 2024 Picture Coding Symposium, PCS 2024
Country/Territory: Taiwan
City: Taichung
Period: 12/06/24 – 14/06/24

Keywords

  • 3DMM
  • deep learning
  • image-to-image translation
  • self-attention
  • talking-head generation
