Specific Expert Learning: Enriching Ensemble Diversity via Knowledge Distillation

Wei Cheng Kao, Hong Xia Xie, Chih Yang Lin, Wen Huang Cheng

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, ensemble methods have shown strong performance and gained popularity in visual tasks. However, the performance of an ensemble is limited by the lack of diversity among its models. To enrich ensemble diversity, we present a distillation approach, learning from experts (LFEs). It builds on a novel knowledge distillation (KD) method that we propose, specific expert learning (SEL), which reduces class selectivity and improves performance both on specific weaker classes and in overall accuracy. Through SEL, models can acquire different knowledge from distinct networks with different areas of expertise, yielding a highly diverse ensemble. Our experimental results demonstrate that, on CIFAR-10, SEL increases the accuracy of a single ResNet-32 by 0.91% and that of the ensemble by 1.13%; by comparison, the state-of-the-art DML improves accuracy by only 0.3% and 1.02% on the single ResNet-32 and the ensemble, respectively. Furthermore, the proposed architecture can also be applied to ensemble distillation (ED), which applies KD to the ensemble model. In conclusion, our experimental results show that the proposed SEL not only improves the accuracy of a single classifier but also boosts the diversity of the ensemble model.
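The abstract gives no implementation details, so as an illustration only, here is a minimal sketch of a standard knowledge distillation loss (Hinton et al.) in PyTorch, extended with a hypothetical per-class weight to suggest how a student could emphasize the classes on which a given expert teacher is strongest. The `class_weights` tensor, the temperature `T`, and the mixing factor `alpha` are assumptions made for illustration, not the paper's SEL formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      class_weights=None, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between the temperature-scaled
    # teacher and student distributions, computed per sample.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(log_p_student, p_teacher,
                  reduction="none").sum(dim=1) * (T * T)

    if class_weights is not None:
        # Hypothetical weighting: scale each sample's KD term by how much
        # this teacher "specializes" in the sample's ground-truth class.
        kd = kd * class_weights[labels]

    # Hard-target term: ordinary cross-entropy against the true labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd.mean() + (1.0 - alpha) * ce
```

Distilling each student from a different expert teacher (or with a different `class_weights` profile) would then, in the spirit of LFE, yield ensemble members with differing class-wise strengths.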

Original language: English
Journal: IEEE Transactions on Cybernetics
State: Accepted/In press - 2021

Keywords

  • Boosting
  • Deep learning
  • Diversity reception
  • ensemble diversity
  • knowledge distillation (KD)
  • Knowledge engineering
  • MIMICs
  • Predictive models
  • Task analysis
  • Visualization
