TY - JOUR
T1 - Contrastive Disentangled Learning for Memory-Augmented Transformer
AU - Chien, Jen Tzung
AU - Li, Shang En
N1 - Publisher Copyright:
© 2023 International Speech Communication Association. All rights reserved.
PY - 2023
Y1 - 2023
N2 - This paper develops a new memory-augmented sequential learning method based on a contrastive disentangled transformer. Conventionally, the transformer is insufficient to characterize long sequences, since the sequence length is restricted to avoid an overlarge memory requirement. A direct solution to this issue is to divide a long sequence into short segments, but context fragmentation then occurs. In this paper, a contrastive disentangled memory is exploited to deal with the increasing computation cost as well as the overlarge memory requirement due to long sequences. In particular, an informative selection over the disentangled memory slots is proposed for iterative updating in a large-span sequence representation. This paper maximizes the semantic diversity of the memory slots and captures the contextual semantics via contrastive learning. Experiments on language understanding show that context fragmentation is mitigated by the proposed method with reduced computation.
AB - This paper develops a new memory-augmented sequential learning method based on a contrastive disentangled transformer. Conventionally, the transformer is insufficient to characterize long sequences, since the sequence length is restricted to avoid an overlarge memory requirement. A direct solution to this issue is to divide a long sequence into short segments, but context fragmentation then occurs. In this paper, a contrastive disentangled memory is exploited to deal with the increasing computation cost as well as the overlarge memory requirement due to long sequences. In particular, an informative selection over the disentangled memory slots is proposed for iterative updating in a large-span sequence representation. This paper maximizes the semantic diversity of the memory slots and captures the contextual semantics via contrastive learning. Experiments on language understanding show that context fragmentation is mitigated by the proposed method with reduced computation.
KW - contrastive learning
KW - disentangled memory
KW - language understanding
KW - Sequential learning
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85171532628&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2023-1652
DO - 10.21437/Interspeech.2023-1652
M3 - Conference article
AN - SCOPUS:85171532628
SN - 2308-457X
VL - 2023-August
SP - 2958
EP - 2962
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
T2 - 24th Annual Conference of the International Speech Communication Association, Interspeech 2023
Y2 - 20 August 2023 through 24 August 2023
ER -