Memory capacity aware non-blocking data transfer on GPGPU

Hao Wei Liu, Hsien Kai Kuo, Kuan Ting Chen, Bo-Cheng Lai

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    4 Scopus citations

    Abstract

    The massive data demand of GPGPUs requires expensive memory modules, such as GDDR, to support high data bandwidth. The high cost constrains the total memory capacity available to a GPGPU, so data must be transferred between the host CPU and the GPGPU. The long latency of these transfers incurs significant performance overhead. To alleviate this issue, modern GPGPUs implement non-blocking data transfer, which allows a GPGPU to compute while data are being transmitted. This paper proposes a capacity-aware scheduling algorithm that exploits the non-blocking data transfer of modern GPGPUs. By effectively taking advantage of non-blocking transfers, the proposed algorithm achieves an average performance improvement of 24.01% over existing approaches that only consider memory capacity.
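    The non-blocking transfer mechanism the abstract refers to is exposed in CUDA as asynchronous copies on streams. The sketch below is not the paper's scheduling algorithm; it is a minimal double-buffering illustration, with assumed chunk counts and a placeholder `scale` kernel, of how a transfer for one chunk can overlap computation on another:

    ```cuda
    // Minimal sketch (not the paper's scheduler): overlapping host-device
    // transfers with kernel execution via CUDA streams and pinned memory.
    // Chunk sizes and the `scale` kernel are illustrative placeholders.
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void scale(float *d, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= 2.0f;
    }

    int main(void) {
        const int chunks = 4, n = 1 << 20;           // 4 chunks of 1M floats
        float *h;
        cudaMallocHost(&h, chunks * n * sizeof(float)); // pinned host buffer
        for (int i = 0; i < chunks * n; ++i) h[i] = 1.0f;

        float *d[2];                                 // two device buffers
        cudaStream_t s[2];                           // two streams for overlap
        for (int b = 0; b < 2; ++b) {
            cudaMalloc(&d[b], n * sizeof(float));
            cudaStreamCreate(&s[b]);
        }

        for (int c = 0; c < chunks; ++c) {
            int b = c % 2;
            // Non-blocking copy: returns immediately, so the kernel queued on
            // the other stream can run while this chunk is still in flight.
            cudaMemcpyAsync(d[b], h + c * n, n * sizeof(float),
                            cudaMemcpyHostToDevice, s[b]);
            scale<<<(n + 255) / 256, 256, 0, s[b]>>>(d[b], n);
            cudaMemcpyAsync(h + c * n, d[b], n * sizeof(float),
                            cudaMemcpyDeviceToHost, s[b]);
        }
        cudaDeviceSynchronize();                     // wait for all streams
        printf("h[0] = %.1f\n", h[0]);               // each element doubled

        for (int b = 0; b < 2; ++b) { cudaFree(d[b]); cudaStreamDestroy(s[b]); }
        cudaFreeHost(h);
        return 0;
    }
    ```

    A capacity-aware scheduler, as proposed in the paper, would additionally decide which chunks to keep resident given the limited device memory; the mechanism above only shows the overlap that such scheduling exploits.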

    Original language: English
    Title of host publication: 2013 IEEE Workshop on Signal Processing Systems, SiPS 2013
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 395-400
    Number of pages: 6
    ISBN (Print): 9781467362382
    DOIs
    State: Published - 2013
    Event: 2013 IEEE Workshop on Signal Processing Systems, SiPS 2013 - Taipei, Taiwan
    Duration: 16 Oct 2013 - 18 Oct 2013

    Publication series

    Name: IEEE Workshop on Signal Processing Systems, SiPS: Design and Implementation
    ISSN (Print): 1520-6130

    Conference

    Conference: 2013 IEEE Workshop on Signal Processing Systems, SiPS 2013
    Country/Territory: Taiwan
    City: Taipei
    Period: 16/10/13 - 18/10/13

    Keywords

    • GPGPU
    • Memory Optimization
    • Nonblocking data transfer

