DualFormer: A unified bidirectional sequence-to-sequence learning

Jen-Tzung Chien, Wei Hsiang Chang

Research output: Conference article, peer-reviewed

9 citations (Scopus)

Abstract

This paper presents a new dual domain mapping based on unified bidirectional sequence-to-sequence (seq2seq) learning. Traditionally, dual learning in domain mapping was constructed with an intrinsic connection, where the conditional generative models in the two directions were mutually leveraged and combined. The additional feedback from the other generation direction was used to regularize sequential learning in the original direction of domain mapping, thereby improving domain matching between the source sequence and the target sequence. However, the reconstruction of knowledge in the two domains was ignored, and the dual information shared by the separate models in the two training directions was not sufficiently exploited. To cope with this weakness, this study proposes a closed-loop seq2seq learning scheme where domain mapping and domain knowledge are jointly learned. In particular, a new feature-level dual learning is incorporated to build a dualformer, in which feature integration and feature reconstruction are further performed to bridge the dual tasks. Experiments demonstrate the merit of the proposed dualformer for machine translation based on multi-objective seq2seq learning.
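The multi-objective training described in the abstract combines the two translation directions with feature-reconstruction terms in a single closed-loop objective. A minimal sketch of such a combined objective is shown below; the function name, argument names, and the reconstruction weight `lam` are illustrative assumptions, not taken from the paper.

```python
def dual_seq2seq_objective(loss_s2t, loss_t2s, rec_s, rec_t, lam=0.5):
    """Combine dual-direction seq2seq losses with reconstruction terms.

    loss_s2t / loss_t2s: translation losses for the two mapping
        directions (source-to-target and target-to-source).
    rec_s / rec_t: feature-reconstruction losses capturing domain
        knowledge in the source and target domains.
    lam: weight on the reconstruction (domain-knowledge) terms -- an
        illustrative hyperparameter, not specified in the abstract.
    """
    return loss_s2t + loss_t2s + lam * (rec_s + rec_t)


# Example: per-batch scalar losses combined into one training objective.
total = dual_seq2seq_objective(1.0, 2.0, 0.5, 0.5, lam=0.5)
```

In a real training loop, each argument would be a differentiable loss tensor from the respective decoder or reconstruction head, and the combined scalar would be backpropagated through all branches jointly.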

Original language: English
Pages (from-to): 7718-7722
Number of pages: 5
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2021-June
DOIs
Publication status: Published - 6 Jun 2021
Event: 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021 - Virtual, Toronto, Canada
Duration: 6 Jun 2021 - 11 Jun 2021
