SNN eXpress: Streamlining Low-Power AI-SoC Development With Unsigned Weight Accumulation Spiking Neural Network

  • Hyeonguk Jang (AI Edge SoC Research Section, Electronics and Telecommunications Research Institute) ;
  • Kyuseung Han (AI Edge SoC Research Section, Electronics and Telecommunications Research Institute) ;
  • Kwang-Il Oh (AI Edge SoC Research Section, Electronics and Telecommunications Research Institute) ;
  • Sukho Lee (AI Edge SoC Research Section, Electronics and Telecommunications Research Institute) ;
  • Jae-Jin Lee (AI Edge SoC Research Section, Electronics and Telecommunications Research Institute) ;
  • Woojoo Lee (School of Electrical and Electronics Engineering, Chung-Ang University)
  • Received: 2024.03.14
  • Revised: 2024.08.13
  • Published: 2024.10.10

Abstract

SoCs built on analog-circuit-based unsigned weight-accumulating spiking neural networks (UWA-SNNs) are a highly promising route to low-power AI-SoCs. This paper addresses the challenges that must be overcome to realize the potential of UWA-SNNs in low-power AI-SoCs: (i) the absence of UWA-SNN learning methods and of an environment for developing applications based on trained SNN models, and (ii) the inherent difficulty of testing and validating applications on the system before the final chip is fabricated, owing to the mixed-signal circuit implementation of UWA-SNN-based SoCs. This paper argues that, by integrating the proposed solutions, an EDA tool enabling the easy and rapid development of UWA-SNN-based SoCs is feasible, and demonstrates this through the development of the SNN eXpress (SNX) tool. SNX automates the generation of RTL code, FPGA prototypes, and a software development kit tailored to UWA-SNN-based application development. Comprehensive details of the SNX development, along with performance evaluation and verification results for two AI-SoCs developed using SNX, are also presented.
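The defining trait of a UWA-SNN is that synaptic weights are non-negative and are only ever accumulated onto the membrane potential. A minimal sketch of this idea for an integrate-and-fire neuron follows; the function name, reset-to-zero rule, and toy values are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an unsigned-weight-accumulating (UWA)
# integrate-and-fire update; the reset rule and names are assumptions.
import numpy as np

def uwa_if_step(v, spikes_in, weights, threshold):
    """One timestep: accumulate unsigned weights of active inputs,
    then fire any neuron whose potential reaches the threshold."""
    assert np.all(weights >= 0), "UWA weights are unsigned (non-negative)"
    v = v + weights @ spikes_in        # accumulation only, never subtraction
    fired = v >= threshold
    v = np.where(fired, 0.0, v)        # reset fired neurons to zero
    return v, fired.astype(np.uint8)

# Toy example: 2 neurons, 3 spike inputs.
w = np.array([[1.0, 2.0, 0.5],
              [0.2, 0.1, 0.3]])
v = np.zeros(2)
v, out = uwa_if_step(v, np.array([1, 0, 1]), w, threshold=1.0)
# Neuron 0 accumulates 1.5 and fires; neuron 1 accumulates 0.5 and does not.
```

Restricting weights to unsigned values simplifies the analog accumulation circuitry, which is what makes this neuron model attractive for mixed-signal low-power SoCs.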

Acknowledgments

This work was supported by the Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korea government (24ZS1230, memory-computation convergence neuromorphic computing technology).
