TY - GEN
T1 - Data Efficient Incremental Learning via Attentive Knowledge Replay
AU - Lee, Yi Lun
AU - Chen, Dian Shan
AU - Lee, Chen Yu
AU - Tsai, Yi Hsuan
AU - Chiu, Wei Chen
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Class-incremental learning (CIL) tackles the problem of continuously optimizing a classification model to support a growing number of classes, where the data of novel classes arrive in streams. Recent works propose to use representative exemplars of learnt classes and replay their knowledge afterward under certain memory constraints. However, training on a fixed set of exemplars that is imbalanced relative to the new data leads to strong biases in the trained models. In this paper, we propose an attentive knowledge replay framework to refresh the knowledge of previously learnt classes during incremental learning, which generates virtual training samples by blending between pairs of data. In particular, we design an attention module that learns to predict adaptive blending weights in accordance with their relative importance to the overall objective, where the importance is derived from the change of the image features over incremental phases. Our strategy of attentive knowledge replay encourages the model to learn smoother decision boundaries and thus improves its generalization beyond memorizing the exemplars. We validate our design in a standard class-incremental learning setup and demonstrate its flexibility in various settings.
AB - Class-incremental learning (CIL) tackles the problem of continuously optimizing a classification model to support a growing number of classes, where the data of novel classes arrive in streams. Recent works propose to use representative exemplars of learnt classes and replay their knowledge afterward under certain memory constraints. However, training on a fixed set of exemplars that is imbalanced relative to the new data leads to strong biases in the trained models. In this paper, we propose an attentive knowledge replay framework to refresh the knowledge of previously learnt classes during incremental learning, which generates virtual training samples by blending between pairs of data. In particular, we design an attention module that learns to predict adaptive blending weights in accordance with their relative importance to the overall objective, where the importance is derived from the change of the image features over incremental phases. Our strategy of attentive knowledge replay encourages the model to learn smoother decision boundaries and thus improves its generalization beyond memorizing the exemplars. We validate our design in a standard class-incremental learning setup and demonstrate its flexibility in various settings.
UR - http://www.scopus.com/inward/record.url?scp=85187300327&partnerID=8YFLogxK
U2 - 10.1109/SMC53992.2023.10394002
DO - 10.1109/SMC53992.2023.10394002
M3 - Conference contribution
AN - SCOPUS:85187300327
T3 - Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics
SP - 2952
EP - 2959
BT - 2023 IEEE International Conference on Systems, Man, and Cybernetics
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2023
Y2 - 1 October 2023 through 4 October 2023
ER -