Feature-point driven 3D expression editing

Chii-Yuan Chuang, I-Chen Lin, Yung-Sheng Lo, Chao-Chih Lin

Research output: Contribution to conference › Paper › peer-review


Producing a life-like 3D facial expression is usually a labor-intensive process. In the movie and game industries, motion capture and 3D scanning techniques, which acquire motion data from real persons, are used to speed up production. However, acquiring dynamic and subtle details on a face, such as wrinkles, is still difficult or expensive. In this paper, we propose a feature-point-driven approach to synthesize novel expressions with details. Our work can be divided into two main parts: acquisition of 3D facial details and expression synthesis. The 3D facial details are estimated from sample images by a shape-from-shading technique. By exploiting the relation between specific feature points and facial surfaces in prototype images, our system provides an intuitive editing tool that synthesizes the 3D geometry and the corresponding 2D textures or 3D detail normals of novel expressions. Beyond expression editing, the proposed method can also be extended to enhance existing motion capture data with facial details.
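The abstract's feature-point-driven synthesis can be illustrated with a minimal sketch. The sketch below is not the authors' code; it assumes a common formulation in which a handful of edited feature-point positions are expressed as a least-squares combination of the feature points of the prototype expressions, and the resulting weights are then applied to the full prototype meshes. All function and variable names here are hypothetical.

```python
# Hypothetical sketch of feature-point-driven expression blending:
# solve for prototype blend weights from feature points alone, then
# reuse those weights on the full-resolution prototype meshes.
import numpy as np

def blend_weights(target_feats, proto_feats):
    """Least-squares weights w such that sum_i w_i * proto_feats[i] ~ target_feats.

    target_feats: (F, 3) feature-point positions of the edited expression.
    proto_feats:  (P, F, 3) feature points of the P prototype expressions.
    """
    P = proto_feats.shape[0]
    A = proto_feats.reshape(P, -1).T           # (F*3, P) design matrix
    b = target_feats.reshape(-1)               # (F*3,) edited positions
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimize ||A w - b||
    return w

def blend_meshes(weights, proto_meshes):
    """Apply the feature-point weights to full prototype meshes (P, V, 3)."""
    return np.tensordot(weights, proto_meshes, axes=1)  # (V, 3) result
```

Because the weights are estimated from only a few feature points but applied to every vertex, dragging a small set of markers is enough to drive the whole face, which matches the intuitive-editing workflow the abstract describes.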

Original language: English
Number of pages: 6
State: Published - Mar 2007
Event: 2nd International Conference on Computer Graphics Theory and Applications, GRAPP 2007 - Barcelona, Spain
Duration: 8 Mar 2007 – 11 Mar 2007


Conference: 2nd International Conference on Computer Graphics Theory and Applications, GRAPP 2007


Keywords:
  • Facial animation
  • Facial expression
  • Graphical interfaces
  • Surface reconstruction


