NeuralScale: Efficient scaling of neurons for resource-constrained deep neural networks

Eugene Lee, Chen-Yi Lee

Research output: Conference article, peer-reviewed

7 Citations (Scopus)

Abstract

Deciding the number of neurons during the design of a deep neural network to maximize performance is not intuitive. In this work, we attempt to search for the neuron (filter) configuration of a fixed network architecture that maximizes accuracy. Using iterative pruning methods as a proxy, we parametrize the change of the neuron (filter) number of each layer with respect to the change in parameters, allowing us to efficiently scale an architecture across arbitrary sizes. We also introduce architecture descent, which iteratively refines the parametrized function used for model scaling. The combination of both proposed methods is coined NeuralScale. To demonstrate the parameter efficiency of NeuralScale, we present empirical results on VGG11, MobileNetV2 and ResNet18 using CIFAR10, CIFAR100 and TinyImageNet as benchmark datasets. Our results show accuracy gains of 3.04%, 8.56% and 3.41% for VGG11 on CIFAR10, MobileNetV2 on CIFAR100 and ResNet18 on TinyImageNet, respectively, under a parameter-constrained setting (output neurons (filters) of the default configuration scaled by a factor of 0.25).
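
To make the scaling idea concrete, below is a minimal sketch of the parametrization as described in the abstract: per-layer filter counts observed at several iterative-pruning checkpoints are fit with a power law (linear in log-log space), and the fitted function is evaluated at a new parameter budget to produce a scaled configuration. All numeric values and the helper scale_architecture are hypothetical, and fitting filter counts directly against the total parameter count is a simplification for illustration, not the authors' exact procedure.

    import numpy as np

    # Hypothetical data: filter counts per layer recorded at several
    # checkpoints of an iterative pruning run, together with the total
    # parameter count remaining at each checkpoint.
    # filters[c][l] = number of filters in layer l at checkpoint c.
    total_params = np.array([1.2e6, 0.9e6, 0.6e6, 0.3e6])
    filters = np.array([
        [64, 128, 256, 512],
        [55, 110, 214, 400],
        [47,  90, 170, 290],
        [36,  66, 118, 170],
    ], dtype=float)

    # Fit a power law phi_l(tau) = exp(alpha_l) * tau^beta_l per layer;
    # it is linear in log-log space: log phi = alpha_l + beta_l * log tau.
    log_tau = np.log(total_params)
    alphas, betas = [], []
    for l in range(filters.shape[1]):
        beta, alpha = np.polyfit(log_tau, np.log(filters[:, l]), deg=1)
        alphas.append(alpha)
        betas.append(beta)

    def scale_architecture(target_params):
        """Predict a per-layer filter configuration for a parameter budget."""
        lt = np.log(target_params)
        return [max(1, int(round(np.exp(a + b * lt))))
                for a, b in zip(alphas, betas)]

    # Scale the network down to roughly 25% of its original parameters,
    # matching the parameter-constrained setting reported in the abstract.
    print(scale_architecture(0.25 * total_params[0]))

Architecture descent, as described in the abstract, would then re-run pruning starting from the scaled configuration and refit the parametrized function, iterating until the configuration stabilizes.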

Original language: English
Article number: 9156813
Pages (from-to): 1475-1484
Number of pages: 10
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOIs
Publication status: Published - 2020
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: 14 Jun 2020 - 19 Jun 2020
