PFACC: An OpenACC-like programming model for irregular nested parallelism

Ming Hsiang Huang, Wuu Yang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

OpenACC is a directive-based programming model that allows programmers to write graphics processing unit (GPU) programs by simply annotating parallel loops. However, OpenACC has poor support for irregular nested parallel loops, which are a natural way to express nested parallelism. We propose PFACC, a programming model similar to OpenACC. PFACC directives can be used to annotate parallel loops and to guide data movement between the different levels of the memory hierarchy. Parallel loops can be arbitrarily nested or placed inside functions that may be called, possibly recursively, from other parallel loops. The PFACC translator translates C programs with PFACC directives into CUDA programs by inserting runtime iteration-sharing and memory allocation routines. The PFACC runtime iteration-sharing routine is a two-level mechanism. Thread blocks dynamically organize loop iterations into batches and execute the batches in a depth-first order. Different thread blocks share iterations with one another through an iteration-stealing mechanism. Because of the depth-first execution order, PFACC generates CUDA programs with reasonable memory usage. The two-level iteration-sharing mechanism is implemented purely in software and fits well with the CUDA thread hierarchy. Experiments show that PFACC outperforms CUDA dynamic parallelism in terms of performance and code size on most benchmarks.
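The abstract does not show PFACC's concrete directive syntax, so the C sketch below uses a hypothetical OpenACC-style spelling, `#pragma pfacc parallel loop`, purely to illustrate the kind of irregular nested parallelism described above: an outer parallel loop whose inner parallel loop has a trip count that varies from one outer iteration to the next.

```c
/* Sketch only: the "#pragma pfacc ..." spelling is a hypothetical placeholder
 * modeled on OpenACC; the actual PFACC directives may differ. */
#include <stdlib.h>

void scale_rows(float *data, const int *row_start, int nrows)
{
    /* Outer parallel loop over rows of a CSR-like structure. */
    #pragma pfacc parallel loop
    for (int i = 0; i < nrows; i++) {
        int begin = row_start[i];
        int end   = row_start[i + 1];

        /* Inner parallel loop whose length differs per row: this is the
         * irregular nested parallelism that plain OpenACC handles poorly. */
        #pragma pfacc parallel loop
        for (int j = begin; j < end; j++) {
            data[j] *= 2.0f;
        }
    }
}
```

Under the scheme the abstract describes, the translator would turn such annotated loops into CUDA code in which each thread block gathers loop iterations into batches and executes them depth-first, while idle thread blocks steal iterations from busy ones, all implemented in software on top of the CUDA thread hierarchy.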

Original language: English
Journal: Software - Practice and Experience
DOIs
State: Accepted/In press - 2020

Keywords

  • dynamic scheduling
  • GPGPU
  • irregular parallelism
  • nested parallelism
  • OpenACC
  • parallel programming model
  • PFACC
