Research Trends in Domestic and International AI Chips


  • Hyunji Kim (Department of Information and Computer Engineering, Hansung University);
  • Seyoung Yoon (Department of Convergence Security, Hansung University);
  • Hwajeong Seo (Department of Convergence Security, Hansung University)
  • Received : 2024.02.22
  • Accepted : 2024.03.20
  • Published : 2024.03.29

Abstract

Recently, large-scale artificial intelligence (AI) models such as ChatGPT have been developed, and as AI is applied across a wide range of industries, attention has turned to AI chips (semiconductors). AI chips are chips designed to perform the computations required by AI algorithms, and many domestic and international companies, such as NVIDIA, Tesla, and ETRI, are developing them. In this paper, we survey research trends for nine types of AI chips. Most current AI chips focus on improving computational performance, while chips for specific purposes are also being designed. To compare the various AI semiconductors, we analyze each chip in terms of operation unit, speed, power, and energy efficiency, and we introduce existing optimization methodologies for AI computation. Based on this analysis, we present future research directions for AI semiconductors.
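One of the optimization methodologies surveyed is integer quantization (reference 6 in the list below). As a concrete illustration, the following is a minimal Python sketch of symmetric per-tensor int8 quantization, assuming a simple round-to-nearest scheme; the function names and the toy tensor are illustrative only and are not taken from the surveyed works.

    import numpy as np

    def quantize_int8(x):
        """Symmetric per-tensor int8 quantization: x is approximated by scale * q."""
        # Map the largest absolute value onto the int8 range [-127, 127];
        # the small floor guards against division by zero for an all-zero tensor.
        scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q, scale):
        """Recover an approximate float tensor from the int8 codes."""
        return q.astype(np.float32) * scale

    # Toy usage: quantize a random weight tensor and check the rounding error.
    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    print("max abs error:", np.abs(w - dequantize_int8(q, s)).max())

Replacing float32 values with int8 codes in this way is what lets inference-oriented AI chips trade a small accuracy loss for higher throughput and better energy efficiency.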



Acknowledgement

This research was financially supported by Hansung University.

References

  1. Owens, John D., et al., "GPU computing," Proceedings of the IEEE, vol. 96, no. 5, pp. 879-899, 2008. https://doi.org/10.1109/JPROC.2008.917757
  2. Choquette, Jack, et al., "NVIDIA A100 tensor core GPU: Performance and innovation," IEEE Micro, vol. 41, no. 2, pp. 29-35, 2021. https://doi.org/10.1109/MM.2021.3061394
  3. Wang, Yu Emma, Gu-Yeon Wei, and David Brooks, "Benchmarking TPU, GPU, and CPU platforms for deep learning," arXiv preprint arXiv:1907.10701, 2019.
  4. Choi, Yujeong, and Minsoo Rhu, "PREMA: A predictive multi-task scheduling algorithm for preemptible neural processing units," 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2020.
  5. Hoefler, Torsten, et al., "Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks," The Journal of Machine Learning Research, vol. 22, no. 1, pp. 10882-11005, 2021.
  6. Wu, Hao, et al., "Integer quantization for deep learning inference: Principles and empirical evaluation," arXiv preprint arXiv:2004.09602, 2020.
  7. Gou, Jianping, et al., "Knowledge distillation: A survey," International Journal of Computer Vision, vol. 129, pp. 1789-1819, 2021. https://doi.org/10.1007/s11263-021-01453-z
  8. NVIDIA H100 Tensor Core GPU Architecture: Exceptional Performance, Scalability, and Security for the Data Center (2022), https://www.advancedclustering.com/wp-content/uploads/2022/03/gtc22-whitepaper-hopper.pdf (accessed Mar. 18, 2024)
  9. Elster, Anne C., et al., "Nvidia Hopper GPU and Grace CPU Highlights," Computing in Science & Engineering, vol. 24, no. 2, pp. 95-100, 2022.
  10. MTIA v1: Meta's first-generation AI inference accelerator (2023), https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA (accessed Mar. 18, 2024)
  11. Talpes, E., et al., "The microarchitecture of DOJO, Tesla's exa-scale computer," IEEE Micro, vol. 43, no. 3, pp. 31-39, 2023. https://doi.org/10.1109/MM.2023.3258906
  12. Deploying Transformers on the Apple Neural Engine (2022), https://machinelearning.apple.com/research/neural-engine-transformers (accessed Mar. 18, 2024)
  13. Cho, Yong Cheol Peter, et al., "AB9: A neural processor for inference acceleration," ETRI Journal, vol. 42, no. 4, pp. 491-504, Aug. 2020. https://doi.org/10.4218/etrij.2020-0134
  14. Sapeon (2024), https://www.sapeon.com/ (accessed Mar. 18, 2024)
  15. FuriosaAI (2024), https://furiosa.ai/warboy/specs (accessed Mar. 18, 2024)
  16. ATOM: 5nm Versatile Inference SoC, Versatile yet Energy Efficient AI System-on-Chip (2023), http://rebellions.ai/rebellions-product/atom-2/ (accessed Mar. 18, 2024)
  17. Albericio, Jorge, et al., "Cnvlutin: Ineffectual-neuron-free deep neural network computing," ACM SIGARCH Computer Architecture News, vol. 44, no. 3, pp. 1-13, 2016. https://doi.org/10.1145/3007787.3001138
  18. Liu, Shaoli, et al., "Cambricon: An instruction set architecture for neural networks," ACM SIGARCH Computer Architecture News, vol. 44, no. 3, pp. 393-405, 2016. https://doi.org/10.1145/3007787.3001179