Design Method for an MLP Neural Network Which Minimizes the Effect of the Quantization of the Weights and the Neuron Outputs

  • Oh-Jun Kwon (Researcher, Technology Survey Team, ETRI)
  • Sung-Yang Bang (Department of Computer Engineering, Pohang University of Science and Technology)
  • Published: 1999.12.01

Abstract

When a multilayer perceptron that has already been trained is implemented in hardware with digital VLSI technology, the weights and the neuron outputs generally have to be quantized. These quantizations eventually cause distortion in the output of the network for a given input. In this paper, we first make a statistical analysis of the effect of this quantization on the output of the network. The analysis reveals that the sum of the squared components of the input pattern and the magnitudes of the weights are the major factors contributing to the quantization effect. Using this result, we present a design method for an MLP that minimizes the quantization effect for a given quantization precision. To show the effectiveness of the proposed method, we compare a network obtained by our method with one trained by ordinary error backpropagation, and confirm that the network designed by our method performs better even at a low quantization precision.
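The central observation of the abstract, that quantizing the weights and neuron outputs of a trained MLP distorts its final output, can be illustrated with a small simulation. The sketch below is not the paper's analysis or design method; the network sizes, the random stand-in weights and data, the uniform quantization scheme, and the helper names `quantize` and `mlp` are all illustrative assumptions. It simply quantizes a toy MLP at several bit precisions and measures the resulting output distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits, x_max):
    """Uniformly quantize x to the given number of bits over [-x_max, x_max]."""
    step = 2.0 * x_max / (2 ** bits - 1)
    return np.clip(np.round(x / step) * step, -x_max, x_max)

def mlp(x, W1, W2, qbits=None):
    """Forward pass; if qbits is given, quantize the weights and neuron outputs."""
    if qbits is not None:
        W1 = quantize(W1, qbits, np.abs(W1).max())
        W2 = quantize(W2, qbits, np.abs(W2).max())
    h = np.tanh(x @ W1)                  # hidden-layer neuron outputs
    if qbits is not None:
        h = quantize(h, qbits, 1.0)      # tanh outputs lie in [-1, 1]
    y = np.tanh(h @ W2)
    if qbits is not None:
        y = quantize(y, qbits, 1.0)
    return y

# Toy "trained" weights and input patterns (random stand-ins).
W1 = rng.normal(scale=1.0, size=(8, 16))
W2 = rng.normal(scale=1.0, size=(16, 3))
X = rng.normal(size=(200, 8))

y_float = mlp(X, W1, W2)
for bits in (4, 6, 8):
    y_q = mlp(X, W1, W2, qbits=bits)
    mse = np.mean((y_float - y_q) ** 2)
    print(f"{bits}-bit quantization: output MSE = {mse:.6f}")
```

Rescaling the stand-in weights in this sketch (e.g. `W1 *= 3`) enlarges the quantization step and hence the output distortion at a fixed precision, which is consistent with the abstract's claim that the weight magnitudes are a major factor in the quantization effect.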

