• Title/Summary/Keyword: Motion Template (모션 템플릿)


Spatial-Temporal Scale-Invariant Human Action Recognition using Motion Gradient Histogram (모션 그래디언트 히스토그램 기반의 시공간 크기 변화에 강인한 동작 인식)

  • Kim, Kwang-Soo;Kim, Tae-Hyoung;Kwak, Soo-Yeong;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.12
    • /
    • pp.1075-1082
    • /
    • 2007
  • In this paper, we propose a method for recognizing multiple human actions in a video clip. To be invariant to changes in the speed or size of actions, a spatial-temporal pyramid method is applied. The proposed method minimizes procedural complexity by selecting the Motion Gradient Histogram (MGH), a statistically derived feature, as the action representation. For multiple action detection, the Motion Energy Image (MEI), an accumulation of binary frame differences, is adopted, and each detected action region is represented by an MGH. The action MGH is then compared, using the pyramid method, with MGHs learned in advance, and recognition is performed by analyzing the match between the action MGH and the pre-learned MGHs. Ten video clips are used to evaluate the proposed method, with experiments covering single actions, multiple actions, speed and size scale changes, and comparison with a previous method. The results show that the proposed method is simple and efficient for recognizing multiple human actions under scale variations.
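The MEI and MGH features described in this abstract can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea (union of thresholded frame differences, then a normalized histogram of gradient orientations over the silhouette), not the authors' implementation; the threshold, bin count, and toy clip are assumptions.

```python
import numpy as np

def motion_energy_image(frames, thresh=30):
    """Accumulate binary frame differences into a Motion Energy Image (MEI)."""
    mei = np.zeros(frames[0].shape, dtype=bool)
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        mei |= diff > thresh                     # union of moving pixels over the clip
    return mei

def motion_gradient_histogram(mei, n_bins=8):
    """Normalized histogram of gradient orientations over the MEI silhouette."""
    gy, gx = np.gradient(mei.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)       # orientation in [0, 2*pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins[mag > 0], weights=mag[mag > 0], minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist   # normalize so the descriptor is scale-free

# toy clip: a bright square moving right across a dark background
frames = []
for t in range(5):
    f = np.zeros((32, 32), dtype=np.uint8)
    f[10:20, 5 + 3 * t:15 + 3 * t] = 255
    frames.append(f)

mei = motion_energy_image(frames)
mgh = motion_gradient_histogram(mei)
```

Normalizing the histogram is what makes the descriptor comparable across action regions of different sizes, which is the property the paper relies on for scale invariance.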

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with the optical flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face by template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters describing the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are moved using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
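The last step of the pipeline, propagating control-point displacements to the surrounding non-feature points with an RBF, can be sketched as below. This is a generic Gaussian-RBF interpolation in 2D for illustration; the kernel choice, `sigma`, and the toy points are assumptions, not details from the paper.

```python
import numpy as np

def rbf_deform(control_pts, displacements, query_pts, sigma=1.0):
    """Propagate control-point displacements to other vertices with Gaussian RBFs."""
    # pairwise distances between control points
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)
    # solve for RBF weights, one column per displacement coordinate
    w = np.linalg.solve(phi + 1e-8 * np.eye(len(control_pts)), displacements)
    # evaluate the interpolant at the query (non-feature) points
    dq = np.linalg.norm(query_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    return np.exp(-(dq / sigma) ** 2) @ w

control = np.array([[0.0, 0.0], [1.0, 0.0]])
disp = np.array([[0.0, 0.5], [0.0, 0.0]])    # left control point moves up
query = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
moved = query + rbf_deform(control, disp, query, sigma=0.7)
```

The interpolant reproduces the displacement exactly at the control points, while a point midway between them receives a blended, smaller displacement, which is the smooth fall-off the cloning step needs.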

Collaborative Tracking Using Multiple Network Cameras (다수의 네트워크 카메라를 이용한 협동 추적)

  • Jeon, Hyoung-Seok;Jung, Jun-Young;Joo, Young-Hoon;Shin, Sang-Keun
    • Proceedings of the KIEE Conference
    • /
    • 2011.07a
    • /
    • pp.1888-1889
    • /
    • 2011
  • In this paper, we propose a collaborative tracking algorithm using multiple network cameras. First, a motion template technique is used to extract the motion region in the image. Once a motion region is extracted, a collaboration request is sent to the neighboring cameras, and a Kalman filter is used to correct the position of the motion region so that accurate PTZ parameters can be set. A neighboring camera that receives the collaboration request then tracks the moving object cooperatively using the requested PTZ parameters. Finally, experiments on the proposed collaborative tracking algorithm demonstrate its performance and its potential for application.
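The Kalman-filter correction step mentioned in the abstract can be sketched with a standard constant-velocity filter over the detected motion-region centre. This is a generic textbook filter, not the authors' implementation; the state model, noise covariances, and the simulated detections are assumptions.

```python
import numpy as np

class ConstantVelocityKalman:
    """Constant-velocity Kalman filter smoothing a detected motion centre (x, y)."""
    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.x = np.zeros(4)                       # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                     # process noise
        self.R = np.eye(2) * r                     # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured motion-region centre z = [x, y]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # corrected centre used to set PTZ

kf = ConstantVelocityKalman()
rng = np.random.default_rng(0)
est = None
for t in range(50):
    true_pos = np.array([2.0 * t, 1.0 * t])       # object moving diagonally
    z = true_pos + rng.normal(0, 1.0, 2)          # noisy motion-template detection
    est = kf.step(z)
```

The filtered centre, rather than the raw noisy detection, would be converted into pan/tilt/zoom parameters and forwarded in the collaboration request.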

Content design for a guide robot using Flash (플래시를 이용한 안내용 로봇 콘텐츠 개발)

  • Lee, Young-Chul;Lee, Min-Chul;Kim, Ki-Soo;Jeong, Gu-Min;Ahn, Hyun-Sik;Moon, Chan-Woo;Jeong, Hyun-Chul
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.359-360
    • /
    • 2008
  • In this paper, we present a method for producing guide-robot content that runs on the timeline of the Flash tool, using the robot motion commands provided by the Yujin robot. From the robot commands provided by the Yujin robot, the commands needed for guide content are extracted and a command template is built. The content is created with Flash, and the commands provided by the Yujin robot are inserted into the Flash timeline. After the content is tested and verified on a simulator, it is downloaded directly to the robot and executed.

Moving Object Tracking System for Dock Safety Monitoring (선착장 안전 모니터링을 위한 이동 객체 추적 시스템)

  • Park, Mi-Jeong;Hong, Seong-Il;Yoo, Seung-Hyeok;Kim, Kyeong-Og;Song, Jong-Nam;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.10 no.8
    • /
    • pp.867-874
    • /
    • 2015
  • Hoists have been installed at wharfs or on the seashore at the center of fishing village fraternities. A hoist is used at the harbor for loading and unloading fishing gear or seafood, and serves to haul fishing boats onto a breakwater or onto land in case of a typhoon or bad weather. In this paper, we propose an image perception and moving-object tracking system that detects damage to boats, theft, and trespassing at the wharf. The system detects object motion in real time using the motion template and tracks a moving object (person, boat, etc.) with a PTZ camera.

Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.6
    • /
    • pp.471-476
    • /
    • 2016
  • In this paper, a recognition system for continuous human action is described that uses the motion history image and the histogram of oriented gradients together with a spotter model, all based on depth information; the spotter model, which performs action spotting, is proposed to improve recognition performance within this system. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In pre-processing, Depth-MHI-HOG is used to extract space-time template-based features after image segmentation, and the human action and spotter modeling step generates sequences from the extracted features. Human action models appropriate for each defined action, together with the proposed spotter model, are created from these sequences using the hidden Markov model. Continuous human action recognition performs action spotting to separate meaningful actions from meaningless ones with the spotter model in a continuous action sequence, and continuously recognizes human actions by comparing the probability values of the models over the meaningful action sequences. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system.
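The motion history image underlying the Depth-MHI-HOG feature differs from a binary MEI in that each pixel records how recently it moved. A minimal sketch of one MHI update step, assuming a linear decay and toy parameter values (`tau`, `delta`) not taken from the paper:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=32):
    """One MHI update step: moving pixels are set to tau, others decay by delta."""
    decayed = np.maximum(mhi.astype(np.int16) - delta, 0)
    return np.where(motion_mask, tau, decayed).astype(np.uint8)

# toy sequence: a 1-pixel-wide vertical bar sweeping left to right
mhi = np.zeros((8, 8), dtype=np.uint8)
for col in range(4):
    mask = np.zeros((8, 8), dtype=bool)
    mask[:, col] = True
    mhi = update_mhi(mhi, mask)
# recent motion (column 3) is brighter than older motion (column 0)
```

Because intensity encodes recency, gradients of the MHI point along the direction of motion, which is what makes an oriented-gradient histogram over the MHI a useful space-time template feature.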