
A Comparison of Deep Neural Network Structures for Learning Various Motions

  • Received: 2021.11.19
  • Reviewed: 2021.11.26
  • Published: 2021.12.01

Abstract

Recently, in the field of computer animation, methods that generate motion with deep learning have been actively studied, moving away from conventional finite-state-machine and graph-based approaches. The network expressiveness required for motion learning is influenced more by the diversity of the motions contained in the data than by their simple length. This study aims to find an efficient network structure when the types of motions to be learned are diverse. We train and compare four types of networks: a basic fully-connected structure, a mixture-of-experts structure that uses multiple fully-connected layers in parallel, a recurrent neural network (RNN) widely used for seq2seq processing, and a transformer structure recently used in natural language processing to handle sequence-type data.
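To make the mixture-of-experts structure above concrete, here is a minimal NumPy sketch: several fully-connected expert layers run in parallel on the same input, and a gating network produces per-expert weights that blend their outputs. All names, shapes, and initialization choices are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MixtureOfExperts:
    """Sketch: parallel fully-connected experts blended by a gating network."""

    def __init__(self, in_dim, out_dim, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix and bias per expert (the parallel fully-connected layers).
        self.W = rng.standard_normal((n_experts, in_dim, out_dim)) * 0.1
        self.b = np.zeros((n_experts, out_dim))
        # Gating network: maps the input to one blending weight per expert.
        self.Wg = rng.standard_normal((in_dim, n_experts)) * 0.1

    def forward(self, x):
        # x: (batch, in_dim)
        gate = softmax(x @ self.Wg)                                # (batch, n_experts)
        expert_out = np.einsum('bi,eio->beo', x, self.W) + self.b  # (batch, n_experts, out_dim)
        # Weighted sum of expert outputs per sample.
        return np.einsum('be,beo->bo', gate, expert_out)           # (batch, out_dim)

moe = MixtureOfExperts(in_dim=8, out_dim=4, n_experts=3)
y = moe.forward(np.ones((2, 8)))  # y has shape (2, 4)
```

In an actual motion-learning setting the gating input would typically be control or phase features, and both experts and gate would be trained jointly by backpropagation; this sketch only shows the forward blending structure.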


Acknowledgments

This work was supported by NCSOFT AI Center (NCSOFT2021-0767-K) and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00878, SW Computing Industry Source Technology Development Project (SW Star Lab)).
