CSSNet: Image-based clothing style switch

Shao Pin Huang, Der Lor Way*, Zen-Chung Shih

*Corresponding author for this work

Research output: Paper › peer-review

Abstract

We propose a framework, CSSNet, to exchange upper-body clothing across people with different poses, body shapes, and garments. Our approach consists of three stages: (1) disentangling features such as clothing, body pose, and semantic segmentation from the source and target persons; (2) synthesizing realistic, high-resolution images of the target person in the new dressing style; and (3) transferring complex logos from the source clothing onto the target's garment. The proposed end-to-end neural network architecture generates images of a specific person wearing the target clothing. In addition, we propose a post-processing method to recover complex logos that are missing or blurred in the network outputs. Our results are more realistic and of higher quality than those of previous methods, and our method preserves clothing shape and texture simultaneously.
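The abstract outlines a three-stage pipeline (feature disentanglement, image synthesis, logo transfer). The sketch below illustrates how such a pipeline could be wired together in PyTorch; all module names, layer choices, and the mask-based logo paste are illustrative assumptions, not the authors' actual CSSNet implementation.

```python
# Hypothetical three-stage clothing-swap pipeline in the spirit of the abstract.
# Architecture details are assumptions for illustration only.
import torch
import torch.nn as nn


class FeatureDisentangler(nn.Module):
    """Stage 1 (assumed): encode clothing / pose / segmentation cues from an image."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, image):
        return self.encoder(image)


class StyleSynthesizer(nn.Module):
    """Stage 2 (assumed): decode fused source/target features into a dressed image."""
    def __init__(self, feat_ch=64, out_ch=3):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch * 2, feat_ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, source_feat, target_feat):
        return self.decoder(torch.cat([source_feat, target_feat], dim=1))


def transfer_logo(output, source, logo_mask):
    """Stage 3 (assumed): paste logo pixels from the source clothing back onto the
    synthesized output, standing in for the paper's post-processing step."""
    return output * (1 - logo_mask) + source * logo_mask


if __name__ == "__main__":
    disentangler = FeatureDisentangler()
    synthesizer = StyleSynthesizer()

    source_img = torch.randn(1, 3, 256, 256)  # person wearing the desired clothes
    target_img = torch.randn(1, 3, 256, 256)  # person to be re-dressed
    logo_mask = torch.zeros(1, 1, 256, 256)   # binary mask of the logo region

    src_feat = disentangler(source_img)
    tgt_feat = disentangler(target_img)
    swapped = synthesizer(src_feat, tgt_feat)
    result = transfer_logo(swapped, source_img, logo_mask)
    print(result.shape)  # torch.Size([1, 3, 256, 256])
```

In this sketch the two encoded feature maps are simply concatenated before decoding; the actual paper's fusion strategy and logo-recovery method are not described in this record.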

Original language: American English
Number of pages: 6
Publication status: Published - 1 Jun 2020
Event: International Workshop on Advanced Imaging Technology, IWAIT 2020 - Yogyakarta, Indonesia
Duration: 5 Jan 2020 → 7 Jan 2020

Conference

Conference: International Workshop on Advanced Imaging Technology, IWAIT 2020
Country/Territory: Indonesia
City: Yogyakarta
Period: 5/01/20 → 7/01/20
