Pagoda: Fine-Grained GPU Resource Virtualization for Narrow Tasks

Tsung Tai Yeh, Amit Sabne, Putt Sakdhnagool, Rudolf Eigenmann, Timothy G. Rogers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

21 Scopus citations

Abstract

Massively multithreaded GPUs achieve high throughput by running thousands of threads in parallel. To fully utilize the hardware, workloads spawn work to the GPU in bulk by launching large tasks, where each task is a kernel that contains thousands of threads that occupy the entire GPU. GPUs face severe underutilization and their performance benefits vanish if the tasks are narrow, i.e., they contain < 500 threads. Latency-sensitive applications in network, signal, and image processing that generate a large number of tasks with relatively small inputs are examples of such limited parallelism. This paper presents Pagoda, a runtime system that virtualizes GPU resources, using an OS-like daemon kernel called MasterKernel. Tasks are spawned from the CPU onto Pagoda as they become available, and are scheduled by the MasterKernel at the warp granularity. Experimental results demonstrate that Pagoda achieves a geometric mean speedup of 5.70x over PThreads running on a 20-core CPU, 1.51x over CUDA-HyperQ, and 1.69x over GeMTC, the state-of-the-art runtime GPU task scheduling system.
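
The sketch below illustrates, in plain CUDA, the kind of persistent-daemon scheme the abstract describes: a single long-running kernel whose warps pull narrow tasks from a global queue and execute each task at warp granularity. It is a minimal, hypothetical example, not Pagoda's actual API; all names (Task, g_tasks, master_kernel) are assumptions, and tasks are enqueued before launch for simplicity, whereas Pagoda accepts tasks from the CPU while the daemon is running.

// pagoda_sketch.cu -- hypothetical sketch of warp-granularity task scheduling
// inside one persistent "master" kernel (names are illustrative, not Pagoda's API).
#include <cstdio>
#include <cuda_runtime.h>

constexpr int WARP_SIZE = 32;
constexpr int NUM_TASKS = 64;    // many independent narrow tasks
constexpr int TASK_SIZE = 256;   // "narrow": fewer than 500 elements per task

struct Task {                    // one narrow task: elementwise vector add
    const float *a, *b;
    float *out;
    int n;
};

__device__ Task g_tasks[NUM_TASKS];
__device__ int  g_head = 0;      // next unclaimed task index

// Persistent daemon: each warp repeatedly claims one task and executes it
// with its 32 lanes, then goes back for more until the queue is drained.
__global__ void master_kernel(int num_tasks) {
    const int lane = threadIdx.x % WARP_SIZE;
    while (true) {
        int idx = 0;
        if (lane == 0) idx = atomicAdd(&g_head, 1);   // warp leader claims a task
        idx = __shfl_sync(0xffffffff, idx, 0);        // broadcast the index to the warp
        if (idx >= num_tasks) return;                 // queue drained: warp exits
        Task t = g_tasks[idx];
        for (int i = lane; i < t.n; i += WARP_SIZE)   // warp-wide strided execution
            t.out[i] = t.a[i] + t.b[i];
    }
}

int main() {
    Task h_tasks[NUM_TASKS];
    float *bufs[NUM_TASKS][3];
    float ones[TASK_SIZE], twos[TASK_SIZE];
    for (int i = 0; i < TASK_SIZE; ++i) { ones[i] = 1.f; twos[i] = 2.f; }

    for (int t = 0; t < NUM_TASKS; ++t) {             // build the task queue
        for (int j = 0; j < 3; ++j)
            cudaMalloc(&bufs[t][j], TASK_SIZE * sizeof(float));
        cudaMemcpy(bufs[t][0], ones, sizeof(ones), cudaMemcpyHostToDevice);
        cudaMemcpy(bufs[t][1], twos, sizeof(twos), cudaMemcpyHostToDevice);
        h_tasks[t] = Task{ bufs[t][0], bufs[t][1], bufs[t][2], TASK_SIZE };
    }
    cudaMemcpyToSymbol(g_tasks, h_tasks, sizeof(h_tasks));

    master_kernel<<<4, 128>>>(NUM_TASKS);             // 16 warps service 64 tasks
    cudaDeviceSynchronize();

    float result[TASK_SIZE];
    cudaMemcpy(result, bufs[0][2], sizeof(result), cudaMemcpyDeviceToHost);
    printf("task 0, element 0: %.1f (expected 3.0)\n", result[0]);
    return 0;
}

The point of the sketch is the scheduling granularity: each narrow task occupies only one warp instead of a whole kernel launch, so many small tasks can share the GPU concurrently, which is the underutilization problem the paper targets.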

Original language: English
Title of host publication: PPoPP 2017 - Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
Publisher: Association for Computing Machinery
Pages: 221-234
Number of pages: 14
ISBN (Electronic): 9781450344937
DOIs
State: Published - 26 Jan 2017
Event: 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP 2017 - Austin, United States
Duration: 4 Feb 2017 - 8 Feb 2017

Publication series

Name: Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP

Conference

Conference: 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP 2017
Country/Territory: United States
City: Austin
Period: 4/02/17 - 8/02/17

Keywords

  • GPU runtime system
  • Task parallelism
  • Utilization
