Image-to-image Translation via Contour-consistency Networks

Hsiang Ying Wang, Hsin Chun Lin, Chih Hsien Hsia*, Natnuntnita Siriphockpirom, Hsien I. Lin, Yung Yao Chen

*Corresponding author of this work

Research output: Article, peer-reviewed

Abstract

This paper proposes a novel framework for image-to-image translation in which contour-consistency networks resolve the inconsistency between the contours of generated and original images. The study also addresses the lack of an adequate training set. At the generator, the original image is downsampled by an encoder to obtain the encoder feature map, which the attention module then converts into an attention feature map. Using the attention feature map, the decoder can determine where more conversion is required. The discriminator follows a similar mechanism: an encoder downsamples the image to obtain the encoder feature map, which is converted into an attention feature map; finally, a classifier labels the image as real or fake. Experimental results demonstrate the effectiveness of the proposed method.
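The generator pipeline described in the abstract (encoder feature map → attention feature map → attention-gated decoding) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the average-pooling encoder, sigmoid attention module, and nearest-neighbour decoder are all assumptions standing in for the paper's learned networks.

```python
import numpy as np

def encoder(image, factor=2):
    """Downsample by average pooling to obtain the encoder feature map."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def attention_module(feature_map):
    """Convert the encoder feature map into attention weights in (0, 1).

    A sigmoid over mean-centred features is an assumed stand-in for the
    paper's learned attention module.
    """
    centred = feature_map - feature_map.mean()
    return 1.0 / (1.0 + np.exp(-centred))

def decoder(feature_map, attention, factor=2):
    """Upsample the attention-weighted features (nearest neighbour)."""
    # The attention feature map tells the decoder where more conversion
    # is required; here it simply gates the features element-wise.
    gated = feature_map * attention
    return np.kron(gated, np.ones((factor, factor)))

image = np.random.rand(8, 8)
feat = encoder(image)          # encoder feature map, shape (4, 4)
attn = attention_module(feat)  # attention feature map, shape (4, 4)
out = decoder(feat, attn)      # translated output, shape (8, 8)
print(out.shape)  # (8, 8)
```

The discriminator described in the abstract reuses the same encoder-plus-attention structure, replacing the decoder with a real/fake classifier head.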

Original language: English
Pages (from - to): 515-522
Number of pages: 8
Journal: Sensors and Materials
Volume: 34
Issue number: 2
DOIs
Publication status: Published - 2022

