Contrastive Disentangled Learning for Memory-Augmented Transformer

Jen-Tzung Chien, Shang-En Li

Research output: Conference article › Peer-reviewed

3 Citations (Scopus)

Abstract

This paper develops a new memory-augmented sequential learning method based on a contrastive disentangled transformer. Conventionally, the transformer is insufficient to characterize long sequences, since the sequence length must be restricted to avoid an excessive memory requirement. A direct solution is to divide a long sequence into short segments, but this causes context fragmentation. In this paper, a contrastive disentangled memory is exploited to deal with the increasing computation cost as well as the excessive memory requirement incurred by long sequences. In particular, an informative selection over the disentangled memory slots is proposed for iterative updating in a large-span sequence representation. This paper maximizes the semantic diversity of the memory slots and captures the contextual semantics via contrastive learning. Experiments on language understanding show that the proposed method mitigates context fragmentation with reduced computation.
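To make the abstract's idea concrete, below is a minimal PyTorch sketch of one plausible reading of it: a set of memory slots is updated per segment through an informative (top-k) selection, and a contrastive-style objective pushes slot representations apart to maximize their semantic diversity. All names (DisentangledMemory, slot_diversity_loss, the blending rule, hyperparameters) are hypothetical illustrations, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledMemory(nn.Module):
    """Memory slots updated per segment via an informative (top-k) selection."""

    def __init__(self, num_slots: int, dim: int, top_k: int = 2):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.query = nn.Linear(dim, dim)
        self.top_k = top_k

    def forward(self, segment_hidden: torch.Tensor) -> torch.Tensor:
        # segment_hidden: (batch, seq_len, dim) hidden states of one short segment.
        summary = segment_hidden.mean(dim=1)               # (batch, dim)
        scores = self.query(summary) @ self.slots.t()      # (batch, num_slots)
        # Informative selection: only the top-k most relevant slots get updated.
        topk = scores.topk(self.top_k, dim=-1).indices
        mask = torch.zeros_like(scores).scatter_(1, topk, 1.0)
        gate = torch.softmax(scores, dim=-1) * mask        # gated update weights
        # Iteratively blend the selected slots toward the segment summary.
        update = gate.t() @ summary / (gate.sum(dim=0, keepdim=True).t() + 1e-6)
        selected = (mask.sum(dim=0, keepdim=True) > 0).float().t()  # (num_slots, 1)
        return self.slots + 0.5 * (update - self.slots) * selected  # (num_slots, dim)


def slot_diversity_loss(slots: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive-style loss: penalize similarity between different slots."""
    z = F.normalize(slots, dim=-1)
    sim = z @ z.t() / temperature                          # (num_slots, num_slots)
    sim = sim.masked_fill(torch.eye(z.size(0), dtype=torch.bool), float("-inf"))
    return torch.logsumexp(sim, dim=-1).mean()


if __name__ == "__main__":
    mem = DisentangledMemory(num_slots=8, dim=64)
    segment = torch.randn(4, 128, 64)                      # one segment of a long sequence
    slots = mem(segment)
    loss = slot_diversity_loss(slots)
    print(slots.shape, float(loss))
```

In this sketch the slots carry context across segments, which is how the fragmentation caused by segmenting a long sequence could be mitigated, while the diversity loss keeps the slots disentangled rather than collapsing onto the same content.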

Original language: English
Pages (from-to): 2958-2962
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2023-August
DOIs
Publication status: Published - 2023
Event: 24th International Speech Communication Association, Interspeech 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023
