Compressing DNN Parameters for Model Loading Time Reduction

Yang Ming Yeh, Jennifer Shueh Inn Hu, Yen Yu Lin, Yi Chang Lu

Research output: Conference contribution (peer-reviewed)

2 Citations (Scopus)

Abstract

Deep neural networks (DNNs) have been applied to a variety of computer vision tasks in recent years. However, DNNs often suffer from long execution times even with the aid of a GPU. In this paper, we argue that the bandwidth bottleneck between the GPU and GDRAM has to be addressed. To reduce loading time, we propose a DNN acceleration approach that compresses DNN parameters before loading the model onto the GPU and performs decompression on the GPU. Using JPEG compression as an example, the loss of test accuracy can be kept within 4%, while an 8× parameter-size reduction is achieved for VGG16.
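The sketch below is not the authors' implementation; it only illustrates the general idea the abstract describes, namely treating a layer's weight matrix as an 8-bit grayscale image, JPEG-encoding it before transfer, and decoding and de-quantizing it afterwards. The min-max quantization scheme, the JPEG quality setting, and the 4096 × 4096 layer shape are illustrative assumptions, and the decompression here runs on the CPU with Pillow rather than on the GPU.

```python
# Minimal sketch of JPEG-based weight compression (assumptions noted above).
import io
import numpy as np
from PIL import Image

def compress_weights_jpeg(weights, quality=75):
    """Quantize a 2-D float32 weight matrix to 8 bits and JPEG-encode it."""
    w_min, w_max = float(weights.min()), float(weights.max())
    # Map floats to the 0..255 range of an 8-bit grayscale JPEG.
    quantized = np.round((weights - w_min) / (w_max - w_min) * 255).astype(np.uint8)
    buf = io.BytesIO()
    Image.fromarray(quantized, mode="L").save(buf, format="JPEG", quality=quality)
    return buf.getvalue(), (w_min, w_max)

def decompress_weights_jpeg(jpeg_bytes, scale):
    """Decode the JPEG and map the 8-bit values back to floats."""
    w_min, w_max = scale
    quantized = np.asarray(Image.open(io.BytesIO(jpeg_bytes)), dtype=np.float32)
    return quantized / 255.0 * (w_max - w_min) + w_min

# Example: a hypothetical 4096 x 4096 fully connected layer in float32.
weights = np.random.randn(4096, 4096).astype(np.float32)
jpeg_bytes, scale = compress_weights_jpeg(weights)
restored = decompress_weights_jpeg(jpeg_bytes, scale)

print(f"original size   : {weights.nbytes / 1e6:.1f} MB")
print(f"compressed size : {len(jpeg_bytes) / 1e6:.1f} MB")
print(f"max abs reconstruction error: {np.abs(weights - restored).max():.4f}")
```

The compression ratio and reconstruction error obtained by such a sketch depend heavily on the weight distribution and the chosen JPEG quality; the 8× reduction and sub-4% accuracy loss reported in the paper apply to the authors' own pipeline for VGG16, not to this toy example.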

Original language: English
Title of host publication: 2019 IEEE International Conference on Consumer Electronics - Asia, ICCE-Asia 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 78-79
Number of pages: 2
ISBN (electronic): 9781728133362
DOIs
Publication status: Published - Jun 2019
Event: 4th IEEE International Conference on Consumer Electronics - Asia, ICCE-Asia 2019 - Bangkok, Thailand
Duration: 12 Jun 2019 - 14 Jun 2019

Publication series

Name: 2019 IEEE International Conference on Consumer Electronics - Asia, ICCE-Asia 2019

Conference

Conference: 4th IEEE International Conference on Consumer Electronics - Asia, ICCE-Asia 2019
Country/Territory: Thailand
City: Bangkok
Period: 12/06/19 - 14/06/19
