ADVERSARIAL MASK TRANSFORMER FOR SEQUENTIAL LEARNING

Hou Lio, Shang En Li, Jen Tzung Chien

Research output: Conference contribution › Peer-reviewed

1 Citation (Scopus)

Abstract

The masked language model has been successfully developed to build a transformer for robust language understanding, and transformer-based language models have achieved excellent results in various downstream applications. However, a typical masked language model is trained by predicting randomly masked words and is used to transfer knowledge from a resource-rich pre-training task to low-resource downstream tasks. This study incorporates rich contextual embeddings from a pre-trained model and strengthens the attention layers for sequence-to-sequence learning. In particular, an adversarial mask mechanism is presented to address the shortcoming of random masking and accordingly enhance the robustness of word prediction for language understanding. The adversarial masked language model is trained via a minimax optimization over the word prediction loss: the worst-case mask is estimated to build an optimal and robust language model. Experiments on two machine translation tasks show the merits of the adversarial mask transformer.
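The minimax idea in the abstract can be illustrated with a minimal sketch: the inner maximization chooses the mask that maximizes the current word prediction loss (the "worst-case mask"), in contrast to the uniform random mask of a standard masked language model. The function names, the greedy top-k selection of high-loss positions, and the toy loss values below are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def random_mask(num_tokens, k, rng):
    # Baseline: mask k positions uniformly at random,
    # as in a standard masked language model.
    return rng.choice(num_tokens, size=k, replace=False)

def adversarial_mask(token_losses, k):
    # Worst-case mask (illustrative greedy approximation of the
    # inner maximization): pick the k positions whose current
    # per-token prediction loss is highest.
    return np.argsort(token_losses)[-k:]

# Toy per-token prediction losses from the current model state.
losses = np.array([0.1, 2.3, 0.4, 1.8, 0.2, 0.9])
mask = adversarial_mask(losses, k=2)
print(sorted(mask.tolist()))  # positions with the two largest losses
```

In an actual training loop, the outer minimization would then update the model parameters on the words selected by this worst-case mask, alternating the two steps.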

Original language: English
Title of host publication: 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4178-4182
Number of pages: 5
ISBN (Electronic): 9781665405409
DOIs
Publication status: Published - 2022
Event: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Virtual, Online, Singapore
Duration: 23 May 2022 → 27 May 2022

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2022-May
ISSN (Print): 1520-6149

Conference

Conference: 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
Country/Territory: Singapore
City: Virtual, Online
Period: 23/05/22 → 27/05/22
