TY - GEN
T1 - Optimizing GPU Cache Policies for MI Workloads
AU - Alsop, Johnathan
AU - Sinclair, Matthew D.
AU - Bharadwaj, Srikant
AU - Dutu, Alexandru
AU - Gutierrez, Anthony
AU - Kayiran, Onur
AU - LeBeane, Michael
AU - Potter, Brandon
AU - Puthoor, Sooraj
AU - Zhang, Xianwei
AU - Yeh, Tsung Tai
AU - Beckmann, Bradford M.
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - In recent years, machine intelligence (MI) applications have emerged as a major driver for the computing industry. Optimizing these workloads is important, but complicated. As memory demands grow and data movement overheads increasingly limit performance, determining the best GPU caching policy to use for a diverse range of MI workloads represents one important challenge. To study this, we evaluate 17 MI applications and characterize their behavior using a range of GPU caching strategies. In our evaluations, we find that the choice of caching policy in GPU caches involves multiple performance trade-offs and interactions, and there is no one-size-fits-all GPU caching policy for MI workloads. Based on detailed simulation results, we motivate and evaluate a set of cache optimizations that consistently match the performance of the best static GPU caching policies.
AB - In recent years, machine intelligence (MI) applications have emerged as a major driver for the computing industry. Optimizing these workloads is important, but complicated. As memory demands grow and data movement overheads increasingly limit performance, determining the best GPU caching policy to use for a diverse range of MI workloads represents one important challenge. To study this, we evaluate 17 MI applications and characterize their behavior using a range of GPU caching strategies. In our evaluations, we find that the choice of caching policy in GPU caches involves multiple performance trade-offs and interactions, and there is no one-size-fits-all GPU caching policy for MI workloads. Based on detailed simulation results, we motivate and evaluate a set of cache optimizations that consistently match the performance of the best static GPU caching policies.
KW - GPU caching
KW - execution-driven simulation
KW - machine intelligence
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85083108580&partnerID=8YFLogxK
U2 - 10.1109/IISWC47752.2019.9041977
DO - 10.1109/IISWC47752.2019.9041977
M3 - Conference contribution
AN - SCOPUS:85083108580
T3 - Proceedings of the 2019 IEEE International Symposium on Workload Characterization, IISWC 2019
SP - 243
EP - 248
BT - Proceedings of the 2019 IEEE International Symposium on Workload Characterization, IISWC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 15th IEEE International Symposium on Workload Characterization, IISWC 2019
Y2 - 3 November 2019 through 5 November 2019
ER -