Optimization of GPU Memory Usage for Training Deep Neural Networks

Che-Lun Hung*, Chine-Fu Hsin, Hsiao-Hsi Wang, Chuan-Yi Tang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Recently, Deep Neural Networks have been successfully applied in many domains, especially computer vision. Well-known convolutional neural networks such as VGG, ResNet, and Inception are widely used for tasks including image classification and object detection. The architectures of these state-of-the-art networks have become deeper and more complicated than ever. In this paper, we propose a method to reduce the large memory requirement of training such a model. The experimental results show that the proposed algorithm reduces GPU memory usage significantly.
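The abstract does not reproduce the paper's algorithm, but a common way to trade compute for memory when training deep networks is activation checkpointing: store only a subset of intermediate activations in the forward pass and recompute the rest during the backward pass. The sketch below illustrates that idea on a toy chain of scalar layers y = w * x (the layer type, the `every` parameter, and the function names are illustrative assumptions, not the paper's method).

```python
def forward(x, weights, every=2):
    """Forward through a chain of layers y = w * x, storing only every
    `every`-th input activation (a checkpoint) instead of all of them."""
    ckpt = {0: x}
    for i, w in enumerate(weights):
        if i % every == 0:
            ckpt[i] = x          # checkpoint the input to layer i
        x = w * x
    return x, ckpt

def backward(ckpt, weights, grad_out, every=2):
    """Backward pass: recompute the discarded activations one segment
    at a time from the nearest checkpoint, then apply the chain rule."""
    n = len(weights)
    grad_w = [0.0] * n
    g = grad_out                          # dL/d(current activation)
    starts = sorted(ckpt)                 # checkpoint indices, ascending
    for si in reversed(range(len(starts))):
        s = starts[si]
        e = starts[si + 1] if si + 1 < len(starts) else n
        # recompute the inputs to layers s .. e-1 from the checkpoint
        xs = [ckpt[s]]
        for i in range(s, e - 1):
            xs.append(weights[i] * xs[-1])
        # chain rule backward through this segment
        for i in reversed(range(s, e)):
            grad_w[i] = g * xs[i - s]     # dL/dw_i = g * x_i
            g = g * weights[i]            # dL/dx_i = g * w_i
    return grad_w, g
```

With `every = k`, peak storage drops from n activations to roughly n/k checkpoints plus one segment of recomputed values, at the cost of one extra partial forward pass per segment.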

Original language: English
Title of host publication: Pervasive Systems, Algorithms and Networks - 16th International Symposium, I-SPAN 2019, Proceedings
Editors: Christian Esposito, Jiman Hong, Kim-Kwang Raymond Choo
Publisher: Springer
Pages: 289-293
Number of pages: 5
ISBN (Print): 9783030301422
DOIs
State: Published - 2019
Event: 16th International Symposium on Pervasive Systems, Algorithms and Networks, I-SPAN 2019 - Naples, Italy
Duration: 16 Sep 2019 - 20 Sep 2019

Publication series

Name: Communications in Computer and Information Science
Volume: 1080 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 16th International Symposium on Pervasive Systems, Algorithms and Networks, I-SPAN 2019
Country/Territory: Italy
City: Naples
Period: 16/09/19 - 20/09/19

Keywords

  • Convolutional Neural Networks
  • Deep Neural Network
  • GPU
