Enhancing Data Reuse in Cache Contention Aware Thread Scheduling on GPGPU

Chin Fu Lu, Hsien Kai Kuo, Bo-Cheng Lai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

GPGPUs have been widely adopted as throughput processing platforms for modern big-data and cloud computing. Attaining a high-performance design on a GPGPU requires careful tradeoffs among various design concerns. Data reuse, cache contention, and thread-level parallelism have been demonstrated to be three imperative performance factors for a GPGPU. The correlated performance impacts of these factors pose non-trivial concerns when scheduling threads on GPGPUs. This paper proposes a three-stage scheduling scheme that co-schedules threads with consideration of all three factors. Experimental results on a set of irregular parallel applications demonstrate up to a 70% execution time improvement over previous approaches.
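The abstract does not detail the three scheduling stages, but the general idea of balancing data reuse, cache contention, and thread-level parallelism can be illustrated with a minimal sketch. The sketch below is a hypothetical heuristic, not the paper's actual algorithm: it groups thread blocks by shared data region to promote reuse, splits groups whose combined working set would overflow an assumed per-SM cache budget to limit contention, and then spreads groups across SMs to preserve parallelism. The block descriptor format, cache capacity, and all names are assumptions made for illustration.

# Hypothetical three-stage scheduling sketch (illustrative only; not the
# algorithm from the paper). Stage 1 groups thread blocks by the data region
# they touch (data reuse), stage 2 splits groups whose combined working set
# exceeds an assumed per-SM cache budget (cache contention), and stage 3
# spreads groups across SMs to keep thread-level parallelism balanced.
from collections import defaultdict

CACHE_BYTES = 128 * 1024  # assumed per-SM cache capacity

def schedule(blocks, num_sms):
    """blocks: list of (block_id, data_region_id, working_set_bytes) tuples."""
    # Stage 1: cluster blocks that share a data region to encourage reuse.
    clusters = defaultdict(list)
    for block_id, region, working_set in blocks:
        clusters[region].append((block_id, working_set))

    # Stage 2: split any cluster whose aggregate working set would thrash
    # the cache into smaller co-scheduled groups.
    groups = []
    for members in clusters.values():
        current, size = [], 0
        for block_id, working_set in members:
            if current and size + working_set > CACHE_BYTES:
                groups.append(current)
                current, size = [], 0
            current.append(block_id)
            size += working_set
        if current:
            groups.append(current)

    # Stage 3: assign each group to the least-loaded SM so no SM starves for
    # thread-level parallelism; a real scheduler would also respect per-SM
    # limits on resident blocks, registers, and shared memory.
    assignment = {sm: [] for sm in range(num_sms)}
    for group in sorted(groups, key=len, reverse=True):
        target = min(assignment, key=lambda sm: len(assignment[sm]))
        assignment[target].extend(group)
    return assignment

For example, schedule([(0, 'A', 64*1024), (1, 'A', 96*1024), (2, 'B', 32*1024)], num_sms=2) would split the two 'A' blocks across SMs because their combined footprint exceeds the assumed cache budget, while block 2 is placed wherever load is lowest.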

Original language: English
Title of host publication: Proceedings - 2016 10th International Conference on Complex, Intelligent, and Software Intensive Systems, CISIS 2016
Editors: Leonard Barolli, Fatos Xhafa, Makoto Ikeda
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 351-356
Number of pages: 6
ISBN (Electronic): 9781509009879
DOIs
State: Published - 19 Dec 2016
Event: 10th International Conference on Complex, Intelligent, and Software Intensive Systems, CISIS 2016 - Fukuoka, Japan
Duration: 6 Jul 2016 - 8 Jul 2016

Publication series

Name: Proceedings - 2016 10th International Conference on Complex, Intelligent, and Software Intensive Systems, CISIS 2016

Conference

Conference: 10th International Conference on Complex, Intelligent, and Software Intensive Systems, CISIS 2016
Country/Territory: Japan
City: Fukuoka
Period: 6/07/16 - 8/07/16

Keywords

  • cache
  • GPGPU
  • performance
  • thread scheduling
