CSSNet: Image-based clothing style switch

Shao Pin Huang, Der Lor Way*, Zen-Chung Shih


Research output: Paper, peer-reviewed


We propose CSSNet, a framework for exchanging upper-body clothing between people with different poses, body shapes, and clothing. Our approach consists of three stages: (1) disentangling features such as clothing, body pose, and semantic segmentation from the source and target persons; (2) synthesizing realistic, high-resolution images of the target person in the new dressing style; and (3) transferring complex logos from the source clothing to the target wearer. The proposed end-to-end neural network architecture generates an image of a specific person wearing the target clothing. In addition, we propose a post-processing method to recover complex logos that are missing or blurred in the network outputs. Our results are more realistic and of higher quality than those of previous methods, and our method preserves clothing shape and texture simultaneously.
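The three stages above can be sketched as a pipeline. This is a minimal, hypothetical illustration of the data flow only; all function and field names here are assumptions for exposition, and the paper's actual stages are learned neural network components, not string operations.

```python
# Hypothetical sketch of the CSSNet three-stage pipeline described in the abstract.
# Strings stand in for images and feature maps; real stages are neural networks.
from dataclasses import dataclass


@dataclass
class Features:
    """Disentangled per-person features (illustrative names)."""
    clothing: str       # upper-clothing appearance features
    pose: str           # body-pose representation
    segmentation: str   # semantic segmentation map


def disentangle(person_image: str) -> Features:
    """Stage 1: split a person image into clothing, pose, and segmentation."""
    return Features(clothing=f"cloth({person_image})",
                    pose=f"pose({person_image})",
                    segmentation=f"seg({person_image})")


def synthesize(target: Features, source_clothing: str) -> str:
    """Stage 2: render the target person wearing the source clothing."""
    return f"render({target.pose}, {target.segmentation}, {source_clothing})"


def restore_logo(image: str, source_clothing: str) -> str:
    """Stage 3: post-process to recover logos lost or blurred by the network."""
    return f"logo_transfer({image}, {source_clothing})"


def style_switch(source_image: str, target_image: str) -> str:
    """Full pipeline: put the source person's upper clothing on the target person."""
    source = disentangle(source_image)
    target = disentangle(target_image)
    swapped = synthesize(target, source_clothing=source.clothing)
    return restore_logo(swapped, source.clothing)
```

Note that only the clothing features come from the source person; pose and segmentation come from the target, which is what lets the method handle differing poses and body shapes.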

Original language: American English
Publication status: Published - 1 Jun 2020
Event: International Workshop on Advanced Imaging Technology, IWAIT 2020 - Yogyakarta, Indonesia
Duration: 5 Jan 2020 - 7 Jan 2020


Conference: International Workshop on Advanced Imaging Technology, IWAIT 2020

