TY - GEN
T1 - Domain adaptation meets disentangled representation learning and style transfer
AU - Tran, Vu Hoang
AU - Huang, Ching-Chun
PY - 2019/10
Y1 - 2019/10
N2 - In this paper, we address the challenges of unsupervised domain adaptation and propose a novel three-in-one framework in which three tasks, namely domain adaptation, disentangled representation learning, and style transfer, are considered simultaneously. First, the learned features are disentangled into common parts and specific parts. The common parts represent the transferable features, which are suitable for domain adaptation with less negative transfer. Conversely, the specific parts characterize the unique style of each individual domain. Based on this, we introduce the new concept of feature exchange across domains, which not only enhances the transferability of common features but is also useful for image style transfer. These designs allow us to introduce five types of training objectives to realize the three challenging tasks at the same time. The experimental results show that our architecture adapts well to both full transfer learning and partial transfer learning upon a well-learned disentangled representation. Besides, the trained network also demonstrates high potential for generating style-transferred images. © 2019 IEEE.
AB - In this paper, we address the challenges of unsupervised domain adaptation and propose a novel three-in-one framework in which three tasks, namely domain adaptation, disentangled representation learning, and style transfer, are considered simultaneously. First, the learned features are disentangled into common parts and specific parts. The common parts represent the transferable features, which are suitable for domain adaptation with less negative transfer. Conversely, the specific parts characterize the unique style of each individual domain. Based on this, we introduce the new concept of feature exchange across domains, which not only enhances the transferability of common features but is also useful for image style transfer. These designs allow us to introduce five types of training objectives to realize the three challenging tasks at the same time. The experimental results show that our architecture adapts well to both full transfer learning and partial transfer learning upon a well-learned disentangled representation. Besides, the trained network also demonstrates high potential for generating style-transferred images. © 2019 IEEE.
KW - Feature extraction
KW - Task analysis
KW - Semantics
KW - Single photon emission computed tomography
KW - Training
KW - Training data
KW - Generative adversarial networks
UR - http://www.scopus.com/inward/record.url?scp=85076766954&partnerID=8YFLogxK
U2 - 10.1109/SMC.2019.8914053
DO - 10.1109/SMC.2019.8914053
M3 - Conference contribution
AN - SCOPUS:85076766954
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 2998
EP - 3005
BT - 2019 IEEE International Conference on Systems, Man and Cybernetics, SMC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE International Conference on Systems, Man and Cybernetics, SMC 2019
Y2 - 6 October 2019 through 9 October 2019
ER -