Characteristic-Preserving Latent Space for Unpaired Cross-Domain Translation of 3D Point Clouds

Jia Wen Zheng, Jhen Yung Hsu, Chih Chia Li, I. Chen Lin*

*Corresponding author of this work

Research output: Article, peer-reviewed

2 citations (Scopus)

Abstract

This article addresses unpaired shape-to-shape transformation for 3D point clouds, for instance, turning a chair into its table counterpart. Recent work on 3D shape transfer or deformation relies heavily on paired inputs or specific correspondences. However, it is usually infeasible to assign precise correspondences or to prepare paired data from two domains. A few methods have begun to study unpaired learning, but the characteristics of a source model may not be preserved after transformation. To overcome the difficulty of unpaired learning for transformation, we propose alternately training an autoencoder and translators to construct a shape-aware latent space. This latent space, built with novel loss functions, enables our translators to transform 3D point clouds across domains while maintaining the consistency of shape characteristics. We also crafted a test dataset to objectively evaluate the performance of point-cloud translation. The experiments demonstrate that our framework can construct high-quality models and retain more shape characteristics during cross-domain translation than state-of-the-art methods. Moreover, we present shape-editing applications of the proposed latent space, including shape-style mixing and shape-type shifting, which do not require retraining a model.
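The alternating scheme described in the abstract can be illustrated with a deliberately simplified sketch: a linear autoencoder shared by two toy 2-D "domains" and an affine latent-space translator, updated in alternating phases. All names, dimensions, and the first-moment-matching translator loss below are illustrative assumptions, not the paper's actual networks or loss functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two shape domains (an assumption: the paper works on real
# 3-D point clouds such as chairs and tables; here each "shape" is one 2-D point).
A = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(256, 2))  # source domain
B = rng.normal(loc=[2.0, 1.0], scale=0.3, size=(256, 2))  # target domain

dim, latent_dim, lr = 2, 2, 0.05
W_enc = 0.1 * rng.normal(size=(dim, latent_dim))   # linear encoder
W_dec = 0.1 * rng.normal(size=(latent_dim, dim))   # linear decoder
W_tr = np.eye(latent_dim)                          # affine latent translator
b_tr = np.zeros(latent_dim)

for _ in range(5000):
    # Phase 1: autoencoder step -- mean-squared reconstruction on both domains.
    X = np.vstack([A, B])
    Z = X @ W_enc
    err = Z @ W_dec - X
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

    # Phase 2: translator step with the autoencoder frozen.  A crude
    # first-moment-matching loss stands in for the paper's losses; because
    # only the latent mean is moved, per-sample variation (the source
    # "characteristics") passes through the translation unchanged.
    zA = A @ W_enc
    zB_mean = (B @ W_enc).mean(axis=0)
    m_err = (zA @ W_tr + b_tr).mean(axis=0) - zB_mean
    W_tr -= lr * np.outer(zA.mean(axis=0), m_err)
    b_tr -= lr * m_err

# Translate domain-A samples into domain B through the shared latent space.
translated = (A @ W_enc @ W_tr + b_tr) @ W_dec
print(translated.mean(axis=0))  # expected to land near B's centre (2, 1)
```

The alternation matters: the autoencoder step keeps the latent space faithful to both domains, while the translator step only relocates codes within that space, so the translated samples inherit the spread of domain A rather than collapsing onto domain B.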

Original language: English
Pages (from-to): 5212-5226
Number of pages: 15
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 30
Issue number: 8
DOIs
Publication status: Published - 2024
