AI Processor Technology Trends

  • Published: Oct. 1, 2018

Abstract

The Von Neumann-based architecture of the modern computer has dominated the computing industry for the past 50 years, sparking the digital revolution and propelling us into today's information age. Recent research focus and market trends show a significant effort toward the advancement and application of artificial intelligence technologies. Although artificial intelligence has been studied for decades since the Turing machine was first introduced, the field has recently returned to the spotlight thanks to remarkable milestones such as AlexNet and AlphaGo, whose neural-network-based deep learning methods have achieved groundbreaking performance superior to existing recognition, classification, and decision algorithms. Unprecedented results in a wide variety of applications (drones, autonomous driving, robots, stock markets, computer vision, voice, and so on) have signaled the beginning of a golden age for artificial intelligence after 40 years of relative dormancy. Algorithmic research continues to progress at a breathtaking pace, as evidenced by the rate at which new neural networks are being announced. However, traditional Von Neumann-based architectures have proven inadequate in terms of computational power, and inherently inefficient at processing the vastly parallel computations characteristic of deep neural networks. Consequently, global conglomerates such as Intel, Huawei, and Google, as well as large domestic corporations and fabless companies, are developing dedicated semiconductor chips customized for artificial intelligence computations. The AI Processor Research Laboratory at ETRI is focusing on the research and development of super-low-power AI processor chips. In this article, we present the current trends in computing platforms, parallel processing, AI processors, and the super-threaded AI processor research being conducted at ETRI.
