A cache hierarchy aware thread mapping methodology for GPGPUs

Bo-Cheng Lai, Hsien Kai Kuo, Jing Yang Jou

    Research output: Article › peer-review

    6 Citations (Scopus)


    The recently proposed GPGPU architecture has added a multi-level hierarchy of shared cache to better exploit the data locality of general purpose applications. The GPGPU design philosophy allocates most of the chip area to processing cores, and thus results in a relatively small cache shared by a large number of cores when compared with conventional multi-core CPUs. Applying a proper thread mapping scheme is crucial for gaining from constructive cache sharing and avoiding resource contention among thousands of threads. However, due to the significant differences in architectures and programming models, the existing thread mapping approaches for multi-core CPUs do not perform as effectively on GPGPUs. This paper proposes a formal model to capture both the characteristics of threads and the cache sharing behavior of a multi-level shared cache. With appropriate proofs, the model forms a solid theoretical foundation for the proposed cache hierarchy aware thread mapping methodology for multi-level shared cache GPGPUs. The experiments reveal that the three-stage thread mapping methodology can successfully improve the data reuse on each cache level of GPGPUs and achieve an average of 2.3× to 4.3× runtime enhancement when compared with existing approaches.

    Pages (from-to): 884-898
    Journal: IEEE Transactions on Computers
    Publication status: Published - 1 Apr 2015
