MAML IS A NOISY CONTRASTIVE LEARNER IN CLASSIFICATION

Chia-Hsiang Kao, Wei-Chen Chiu, Pin-Yu Chen

Research output: peer-reviewed

12 citations (Scopus)

Abstract

Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success in various learning problems. Yet, with its unique design of nested inner-loop and outer-loop updates, which govern task-specific and meta-model-centric learning respectively, the underlying learning objective of MAML remains implicit, impeding a more straightforward understanding of it. In this paper, we provide a new perspective on the working mechanism of MAML: we show that MAML is analogous to a meta-learner using a supervised contrastive objective in classification, in which query features are pulled towards the support features of the same class and pushed away from those of different classes. This contrastiveness is verified experimentally via an analysis based on cosine similarity. Moreover, we reveal that vanilla MAML suffers from an undesirable interference term originating from random initialization and cross-task interaction. We therefore propose a simple but effective technique, the zeroing trick, to alleviate this interference. Extensive experiments on both the mini-ImageNet and Omniglot datasets validate the consistent improvement brought by our method.
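As a rough illustration (not the paper's actual implementation), the following PyTorch sketch pairs MAML's nested inner/outer updates with the zeroing trick described in the abstract, and then probes the contrastive effect via cosine similarity. The toy dimensions, the synthetic 5-way 1-shot tasks, and the restriction of the inner loop to the linear head are assumptions made for brevity.

import torch

torch.manual_seed(0)

feat_dim, n_classes, inner_lr, outer_lr = 16, 5, 0.1, 1e-2

# Shared encoder (meta-learned); the linear head is task-specific.
encoder = torch.nn.Linear(32, feat_dim)
meta_opt = torch.optim.SGD(encoder.parameters(), lr=outer_lr)

def task_loss(x, y, w):
    # Linear head w on top of the shared encoder features.
    return torch.nn.functional.cross_entropy(encoder(x) @ w, y)

for step in range(200):
    meta_opt.zero_grad()
    for _ in range(4):  # tasks per meta-batch
        # Toy 5-way 1-shot task: queries are noisy copies of the supports.
        xs, ys = torch.randn(n_classes, 32), torch.arange(n_classes)
        xq, yq = xs + 0.1 * torch.randn_like(xs), ys
        # Zeroing trick: the head starts from zero for every task, removing
        # the interference from random initialization and cross-task interaction.
        w0 = torch.zeros(feat_dim, n_classes, requires_grad=True)
        # Inner loop: one task-specific gradient step on the head.
        g = torch.autograd.grad(task_loss(xs, ys, w0), w0, create_graph=True)[0]
        w1 = w0 - inner_lr * g
        # Outer loop: the query loss backpropagates through w1 into the encoder.
        task_loss(xq, yq, w1).backward()
    meta_opt.step()

# Contrastive effect: query features should end up more cosine-similar to
# same-class support features than to those of other classes.
with torch.no_grad():
    xs = torch.randn(n_classes, 32)
    xq = xs + 0.1 * torch.randn_like(xs)
    fs = torch.nn.functional.normalize(encoder(xs), dim=1)
    fq = torch.nn.functional.normalize(encoder(xq), dim=1)
    sim = fq @ fs.T  # rows: queries, cols: supports
    same = sim.diag().mean()
    cross = (sim.sum() - sim.diag().sum()) / (n_classes * (n_classes - 1))
    print(f"same-class cos-sim: {same:.3f}, cross-class: {cross:.3f}")

Note that because the head is re-zeroed for every task, the inner-loop gradient depends only on the support features, which is what makes the resulting update read as a supervised contrastive signal between query and support features.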

Original language: English
Publication status: Published - 2022
Event: 10th International Conference on Learning Representations, ICLR 2022 - Virtual, Online
Duration: 25 April 2022 – 29 April 2022


