Multithreaded and Overlapped Systolic Array for Depthwise Separable Convolution

  • Jongho Yoon (Department of Electrical Engineering, Pohang University of Science and Technology) ;
  • Seunggyu Lee (Hyosung Ventures) ;
  • Seokhyeong Kang (Department of Electrical Engineering, Pohang University of Science and Technology)
  • Received : 2023.12.15
  • Accepted : 2024.01.04
  • Published : 2024.01.31

Abstract

When processing depthwise separable convolution, low utilization of processing elements (PEs) is a key limitation of the systolic array (SA). In this study, we propose a new SA architecture that maximizes throughput in depthwise convolution. In addition, the proposed SA performs the subsequent pointwise convolution on the idle PEs during the depthwise convolution computation to increase utilization. Once all depthwise convolution operations complete, all PEs are used to accelerate the remaining pointwise convolution. Consequently, on MobileNetV3, the proposed 128×128 SA achieves 4.05× and 1.75× speedups and reduces energy consumption by 66.7% and 25.4%, respectively, compared to the baseline SA and RiSA.
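For context, depthwise separable convolution factorizes a standard convolution into a per-channel (depthwise) stage followed by a 1×1 (pointwise) stage that mixes channels. The following NumPy sketch illustrates the two stages; the function name, shapes, and valid-padding/stride-1 choices are illustrative assumptions, not details from the paper or its hardware:

```python
import numpy as np

def depthwise_separable_conv(x, dw_filters, pw_filters):
    """Depthwise separable convolution (valid padding, stride 1).

    x:          (H, W, C_in) input feature map
    dw_filters: (K, K, C_in) one KxK filter per input channel
    pw_filters: (C_in, C_out) 1x1 filters mixing channels
    """
    H, W, C_in = x.shape
    K = dw_filters.shape[0]
    Ho, Wo = H - K + 1, W - K + 1

    # Depthwise stage: each channel is convolved independently,
    # so there is no reduction across channels here.
    dw_out = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):
        for i in range(Ho):
            for j in range(Wo):
                dw_out[i, j, c] = np.sum(x[i:i+K, j:j+K, c] * dw_filters[:, :, c])

    # Pointwise stage: a 1x1 convolution is a per-pixel matrix multiply
    # across channels; this is the part an SA keeps fully busy.
    return dw_out @ pw_filters
```

The pointwise stage maps naturally onto a dense systolic matrix multiply, while the depthwise stage lacks channel-wise reduction, which is why it leaves most PEs idle on a conventional SA.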

Acknowledgement

The EDA Tool was supported by the IC Design Education Center.

References

1. S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran et al., "cuDNN: Efficient Primitives for Deep Learning", arXiv preprint arXiv:1410.0759, 2014.
2. N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal et al., "In-Datacenter Performance Analysis of a Tensor Processing Unit", Int. Symp. on Computer Architecture (ISCA), 2017, pp. 1-12.
3. S. Markidis, S. W. D. Chien, E. Laure, I. B. Peng, J. S. Vetter, "NVIDIA Tensor Core Programmability, Performance & Precision", Int. Parallel and Distributed Processing Symp. Workshops (IPDPSW), 2018, pp. 522-531.
4. A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang et al., "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", arXiv preprint arXiv:1704.04861, 2017.
5. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, L. C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks", Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4510-4520.
6. A. Howard, M. Sandler, G. Chu, L. C. Chen, B. Chen, M. Tan, "Searching for MobileNetV3", Int. Conf. on Computer Vision (ICCV), 2019, pp. 1314-1324.
7. M. Tan, Q. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks", Proc. Machine Learning Research (PMLR), 2019, pp. 6105-6114.
  8. Z. Liu, H. Mao, C. Y. Wu, C. Feichtenhofer, T. Darrell, S. Xie, "A ConvNet for the 2020s", arXiv preprint arXiv:2201.03545, 2022.
9. Z. Dai, H. Liu, Q. V. Le, M. Tan, "CoAtNet: Marrying Convolution and Attention for All Data Sizes", Advances in Neural Information Processing Systems 34, 2021, pp. 3965-3977.
  10. S. Ghodrati, B. H. Ahn, J. Kim, S. Kinzer, B. R. Yatham et al., "Planaria: Dynamic Architecture Fission for Spatial Multi-Tenant Acceleration of Deep Neural Networks", Int. Symp. on Microarchitecture (MICRO), 2020, pp. 681-697.
11. J. Lee, J. Choi, J. Kim, J. Lee, Y. Kim, "Dataflow Mirroring: Architectural Support for Highly Efficient Fine-Grained Spatial Multitasking on Systolic Array NPUs", Design Automation Conf. (DAC), 2021, pp. 247-252.
12. H. Cho, "RiSA: A Reinforced Systolic Array for Depthwise Convolutions and Embedded Tensor Reshaping", ACM Trans. Embedded Computing Systems (TECS), vol. 20, no. 5s, 2021, pp. 1-20. https://doi.org/10.1145/3476984
13. R. Xu, S. Ma, Y. Wang, Y. Guo, "CMSA: Configurable Multi-directional Systolic Array for Convolutional Neural Networks", Int. Conf. on Computer Design (ICCD), 2020, pp. 494-497.
  14. L. Bai, Y. Zhao and X. Huang, "A CNN Accelerator on FPGA Using Depthwise Separable Convolution", IEEE Trans. Circuits and Syst. II, Exp. Briefs, vol. 65, no. 10, pp. 1415-1419, Oct. 2018.
15. R. Xu, S. Ma, Y. Wang, Y. Guo, "HeSA: Heterogeneous Systolic Array Architecture for Compact CNNs Hardware Accelerators", Design, Automation & Test in Europe Conf. & Exhibit. (DATE), 2021, pp. 657-662.
16. H. T. Kung, B. McDanel, S. Q. Zhang, "Adaptive Tiling: Apply Fixed-size Systolic Arrays to Sparse Convolutional Neural Networks", Int. Conf. on Pattern Recognition (ICPR), 2018, pp. 1006-1011.