New Two-Level L1 Data Cache Bypassing Technique for High Performance GPUs

  • Received : 2020.09.15
  • Accepted : 2020.11.18
  • Published : 2021.02.28

Abstract

On-chip caches of graphics processing units (GPUs) have contributed to improved GPU performance by reducing long memory access latency. However, cache efficiency remains low even though recent GPUs have considerably mitigated the bottleneck problem of the L1 data cache. Although the cache miss rate is a reasonable metric for cache efficiency, it is not necessarily proportional to GPU performance. In this study, we introduce a second key determinant for predicting the performance gains from the L1 data cache, based on the premise that the miss rate alone is not an accurate predictor. The proposed technique estimates the benefit of the cache by measuring the balance between cache efficiency and throughput. The throughput of the cache is predicted from the warp occupancy information in the warp pool, and the warp occupancy is then used in a second bypass phase when a workload shows an ambiguous miss rate. In the proposed architecture, the L1 data cache is turned off for a long period when the warp occupancy is not high. Our two-level bypassing technique can be applied to recent GPU models; it improves performance by 6% on average over an architecture without bypassing and outperforms conventional bottleneck-based bypassing techniques.
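The two-level decision described in the abstract can be summarized in a short sketch. The simulator-style C++ fragment below is an illustrative reconstruction, not the authors' implementation: the threshold names and values (kLowMissRate, kHighMissRate, kOccupancyCut), the use of ready warps over maximum warps as the occupancy measure, and all identifiers are assumptions introduced for this example.

#include <cstdint>

// Per-interval L1D statistics gathered by the (hypothetical) simulator.
struct CacheStats {
    uint64_t accesses = 0;
    uint64_t misses = 0;
    double missRate() const {
        return accesses ? static_cast<double>(misses) / accesses : 0.0;
    }
};

enum class BypassMode { UseCache, Bypass };

// Level 1: a clearly low or clearly high miss rate decides directly.
// Level 2: for an ambiguous miss rate, low warp occupancy in the warp pool
// predicts low cache throughput, so the L1D is bypassed (effectively turned
// off) for the next long period. All thresholds are assumed values.
BypassMode decideL1DBypass(const CacheStats& stats,
                           int readyWarps, int maxWarps) {
    constexpr double kLowMissRate  = 0.3;  // assumption, not from the paper
    constexpr double kHighMissRate = 0.7;  // assumption, not from the paper
    constexpr double kOccupancyCut = 0.5;  // assumption, not from the paper

    const double missRate = stats.missRate();
    if (missRate < kLowMissRate)  return BypassMode::UseCache;  // cache clearly helps
    if (missRate > kHighMissRate) return BypassMode::Bypass;    // cache clearly hurts

    // Ambiguous miss rate: fall back to warp occupancy as the second determinant.
    const double occupancy = static_cast<double>(readyWarps) / maxWarps;
    return (occupancy >= kOccupancyCut) ? BypassMode::UseCache
                                        : BypassMode::Bypass;
}

The point of the sketch is the ordering: the miss rate is consulted first and remains the primary signal, while warp occupancy is only used as a tie-breaker in the ambiguous middle band.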

Acknowledgement

This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. NRF-2018R1A2B6005740).

References

  1. W. Jia, K. A. Shaw, and M. Martonosi, "MRPB: memory request prioritization for massively parallel processors," in Proceedings of 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA), Orlando, FL, 2014, pp. 272-283.
  2. NVIDIA Corporation, "NVIDIA Tesla P100: GP100 Pascal Architecture," 2016 [Online]. Available: https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf.
  3. C. T. Do, J. M. Kim, and C. H. Kim, "Application characteristics-aware sporadic cache bypassing for high performance GPGPUs," Journal of Parallel and Distributed Computing, vol. 122, pp. 238-250, 2018. https://doi.org/10.1016/j.jpdc.2018.09.001
  4. J. Zhang, Y. He, F. Shen, and H. Tan, "Memory-aware TLP throttling and cache bypassing for GPUs," Cluster Computing, vol. 22, no. 1, pp. 871-883, 2019. https://doi.org/10.1007/s10586-017-1396-0
  5. NVIDIA Corporation, "NVIDA Tesla V100 GPU architecture," 2017 [Online]. Available: http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf.
  6. M. Gebhart, S. W. Keckler, B. Khailany, R. Krashinsky, and W. J. Dally, "Unifying primary cache, scratch, and register file memories in a throughput processor," in Proceedings of 2012 45th Annual IEEE/ACM International Symposium on Microarchitecture, Vancouver, Canada, 2012, pp. 96-106.
  7. X. Xie, Y. Liang, Y. Wang, G. Sun, and T. Wang, "Coordinated static and dynamic cache bypassing for GPUs," in Proceedings of 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), Burlingame, CA, 2015, pp. 76-88.
  8. C. T. Do, J. M. Kim, and C. H. Kim, "Early miss prediction based periodic cache bypassing for high performance GPUs," Microprocessors and Microsystems, vol. 55, pp. 44-54, 2017. https://doi.org/10.1016/j.micpro.2017.09.007
  9. X. Chen, L. W. Chang, C. I. Rodrigues, J. Lv, Z. Wang, and W. M. Hwu, "Adaptive cache management for energy-efficient GPU computing," in Proceedings of 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture, Cambridge, UK, 2014, pp. 343-355.
  10. A. Sethia, D. A. Jamshidi, and S. Mahlke, "Mascar: speeding up GPU warps by reducing memory pitstops," in Proceedings of 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), Burlingame, CA, 2015, pp. 174-185.
  11. G. Koo, Y. Oh, W. W. Ro, and M. Annavaram, "Access pattern-aware cache management for improving data utilization in GPU," in Proceedings of the 44th Annual International Symposium on Computer Architecture, Toronto, Canada, 2017, pp. 307-319.
  12. J. Fang, X. Zhang, S. Liu, and Z. Chang, "Miss-aware LLC buffer management strategy based on heterogeneous multi-core," The Journal of Supercomputing, vol. 75, no. 8, pp. 4519-4528, 2019. https://doi.org/10.1007/s11227-019-02763-3
  13. M. Khairy, A. Jain, T. M. Aamodt, and T. G. Rogers, "A detailed model for contemporary GPU memory systems," in Proceedings of 2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Madison, WI, 2019, pp. 141-142.
  14. NVIDIA Corporation, "NVIDIA GeForce GTX 1080," 2016 [Online]. Available: https://international.download.nvidia.com/geforce-com/international/pdfs/GeForce_GTX_1080_Whitepaper_FINAL.pdf.
  15. Z. Jia, M. Maggioni, B. Staiger, and D. P. Scarpazza, "Dissecting the NVIDIA Volta GPU architecture via microbenchmarking," 2018 [Online]. Available: https://arxiv.org/abs/1804.06826.
  16. M. Bari, L. Stoltzfus, P. Lin, C. Liao, M. Emani, and B. Chapman, "Is data placement optimization still relevant on newer GPUs?," 2018 [Online]. Available: https://www.osti.gov/servlets/purl/1489476.
  17. A. Karki, C. P. Keshava, S. M. Shivakumar, J. Skow, G. M. Hegde, and H. Jeon, "Tango: a deep neural network benchmark suite for various accelerators," in Proceedings of 2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Madison, WI, 2019, pp. 137-138.