
Multi-Region based Radial GCN Algorithm for Human Action Recognition

  • Han-Byul Jang (Department of Electronics and Computer Engineering, Chonnam National University)
  • Chil-Woo Lee (Department of Computer and Information Communication Engineering, Chonnam National University)
  • Received: 2022.01.28
  • Accepted: 2022.02.24
  • Published: 2022.02.28

Abstract

This paper describes the Multi-Region based Radial Graph Convolutional Network (MRGCN), a deep-learning algorithm that performs end-to-end action recognition from the optical flow and gradient of the input video. Because the method does not depend on skeleton data, which is difficult to acquire and costly to estimate, it can be used in ordinary CCTV environments where only a video camera is available. MRGCN is novel in two respects: it represents the optical flow and gradient of the input image as directional histograms and then compresses them into six feature vectors to reduce the computational load, and it employs a newly designed radial network structure that hierarchically propagates the motion and shape changes of the human body through spatio-temporal space. Another important feature is that the data acquisition regions are arranged to overlap one another, so the information fed to neighboring nodes is never spatially disconnected. In a performance evaluation on 30 action classes, MRGCN achieved a Top-1 accuracy of 84.78%, comparable to existing GCN-based action recognition methods that take skeleton data as input. This result indicates that MRGCN, which dispenses with hard-to-acquire skeleton information, is the more practical approach for real-world situations that require recognition of complex actions.
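To make the front end concrete, below is a minimal sketch of the input abstraction the abstract describes: magnitude-weighted directional histograms computed from precomputed optical-flow and image-gradient fields, pooled over overlapping acquisition regions into one feature vector per graph node. The bin count, the window sizes, and the six-region layout (one central window plus five on a surrounding ring) are illustrative assumptions, not the paper's published settings.

```python
# Sketch of an MRGCN-style input abstraction: optical flow and image
# gradient -> directional histograms -> six per-region feature vectors.
# All parameters (8 orientation bins, six overlapping windows) are
# illustrative assumptions, not the authors' published configuration.
import numpy as np

N_BINS = 8      # assumed number of orientation bins per histogram
N_REGIONS = 6   # six feature vectors per frame, as stated in the abstract

def directional_histogram(dx, dy, n_bins=N_BINS):
    """Magnitude-weighted orientation histogram of a 2-D vector field
    (applies equally to an optical-flow field or an image-gradient field)."""
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx)                                   # [-pi, pi]
    idx = (((ang + np.pi) / (2 * np.pi)) * n_bins).astype(int) % n_bins
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-8)                          # L1-normalized

def overlapping_regions(h, w, n=N_REGIONS):
    """Assumed layout: one central window plus n-1 windows on a ring around
    it, each covering half the frame so neighbors share pixels (no gaps)."""
    wh, ww = h // 2, w // 2
    centers = [(h // 2, w // 2)] + [
        (int(h / 2 + h / 4 * np.sin(2 * np.pi * k / (n - 1))),
         int(w / 2 + w / 4 * np.cos(2 * np.pi * k / (n - 1))))
        for k in range(n - 1)
    ]
    boxes = []
    for cy, cx in centers:
        y0, x0 = max(cy - wh // 2, 0), max(cx - ww // 2, 0)
        boxes.append((y0, min(y0 + wh, h), x0, min(x0 + ww, w)))
    return boxes                       # overlapping (y0, y1, x0, x1) windows

def frame_to_node_features(flow_x, flow_y, grad_x, grad_y):
    """One feature vector per region: flow histogram ++ gradient histogram."""
    h, w = flow_x.shape
    feats = []
    for y0, y1, x0, x1 in overlapping_regions(h, w):
        f = directional_histogram(flow_x[y0:y1, x0:x1], flow_y[y0:y1, x0:x1])
        g = directional_histogram(grad_x[y0:y1, x0:x1], grad_y[y0:y1, x0:x1])
        feats.append(np.concatenate([f, g]))
    return np.stack(feats)             # shape: (N_REGIONS, 2 * N_BINS)
```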

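Likewise, here is a minimal sketch of how a graph convolution over a radial (star-shaped) adjacency could propagate those region features hierarchically through a shared central node. The hub-and-spoke adjacency, the layer sizes, and the omission of the temporal axis are simplifying assumptions rather than the authors' exact model, which also aggregates in spatio-temporal space.

```python
# Minimal sketch of a graph-convolution layer over a radial (star-shaped)
# adjacency: node 0 is the hub, nodes 1..N-1 are the outer regions. This is
# an illustrative reading of the "radial structure" in the abstract, not the
# authors' exact network.
import torch
import torch.nn as nn

class RadialGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_nodes=6):
        super().__init__()
        A = torch.eye(n_nodes)            # self-loops
        A[0, 1:] = 1.0                    # hub -> spokes
        A[1:, 0] = 1.0                    # spokes -> hub
        d = A.sum(dim=1).pow(-0.5)        # symmetric degree normalization
        self.register_buffer("A_hat", d[:, None] * A * d[None, :])
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                 # x: (batch, n_nodes, in_dim)
        return torch.relu(self.lin(self.A_hat @ x))

# Usage with the six 16-D region vectors from the previous sketch:
#   layer = RadialGCNLayer(in_dim=16, out_dim=32)
#   out = layer(torch.randn(4, 6, 16))   # -> (4, 6, 32)
```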

Acknowledgements

This research was supported by the 2021 Culture Technology R&D Program of the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency (R2020060002).
