Self-Supervised Adversarial Training for Contrastive Sentence Embedding

Jen-Tzung Chien*, Yuan-An Chen

*Corresponding author of this work

Research output: Conference article › peer-reviewed

5 Citations (Scopus)

Abstract

The defense against adversarial attacks was originally proposed for computer vision, and such adversarial training (AT) has recently been emerging in natural language understanding. In an AT process, adversarial perturbations are added to the input word embeddings as noisy data, so that the trained model becomes noise invariant and accordingly generalizes better. However, the performance of existing works has been bounded by the supervised or semi-supervised setting. Meanwhile, contrastive learning (CL) has achieved significant performance in self-supervised pre-training for language models. This paper presents a novel method that re-formulates CL to meet a self-supervised classification objective. Using this new formulation, a self-supervised AT method is proposed for training an efficient sentence encoder. Experiments show that the proposed CL improves on previous methods for finding unsupervised sentence embeddings. With the help of AT, this method further surpasses previous supervised methods.
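To make the described procedure concrete, below is a minimal PyTorch sketch of one plausible training step: a contrastive (InfoNCE) objective over in-batch pairs, with an FGSM-style adversarial perturbation applied to the input word embeddings. The encoder architecture, perturbation radius `epsilon`, and temperature `tau` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code) of contrastive sentence embedding
# with adversarial perturbations on input word embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEncoder(nn.Module):
    """Toy sentence encoder: embedding lookup + mean pooling + projection."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward_from_embeddings(self, emb):
        # emb: (batch, seq_len, dim) -> sentence vectors (batch, dim)
        return self.proj(emb.mean(dim=1))

    def forward(self, token_ids):
        return self.forward_from_embeddings(self.embed(token_ids))

def info_nce(z1, z2, tau=0.05):
    """InfoNCE loss: (z1[i], z2[i]) are positive pairs; all other
    in-batch combinations serve as negatives."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau            # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0))     # diagonal entries are positives
    return F.cross_entropy(logits, labels)

encoder = SimpleEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
epsilon = 1e-2                                # perturbation radius (assumed)
token_ids = torch.randint(0, 1000, (8, 16))   # dummy batch of token ids

# Step 1: gradient of the contrastive loss w.r.t. the input word embeddings.
emb = encoder.embed(token_ids).detach().requires_grad_(True)
z = encoder.forward_from_embeddings(emb)
info_nce(z, z.detach()).backward()

# Step 2: FGSM-style perturbation in the direction that increases the loss.
delta = epsilon * emb.grad.sign()
z_adv = encoder.forward_from_embeddings((emb + delta).detach())

# Step 3: train the encoder so clean and adversarial views agree,
# encouraging noise-invariant sentence embeddings.
opt.zero_grad()
loss = info_nce(encoder(token_ids), z_adv)
loss.backward()
opt.step()
print(f"contrastive AT loss: {loss.item():.4f}")
```

In practice, the perturbation would be computed against the paper's reformulated self-supervised classification objective rather than this generic InfoNCE loss, and multi-step (PGD-style) attacks are a common alternative to the single FGSM step shown here.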
