• Title/Summary/Keyword: Motion Silhouette Image

Search Results: 24

A General Representation of Motion Silhouette Image: Generic Motion Silhouette Image (GMSI) (움직임 실루엣 영상의 일반적인 표현 방식에 대한 연구)

  • Hong, Sung-Jun;Lee, Hee-Sung;Kim, Eun-Tai
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.749-753
    • /
    • 2007
  • In this paper, a generalized version of the Motion Silhouette Image (MSI), called the Generic Motion Silhouette Image (GMSI), is proposed for gait recognition. The GMSI is a gray-level image that encodes the spatiotemporal information of an individual's motion. The GMSI not only generalizes the MSI but also captures the flexible character of a gait sequence. Along with the GMSI, we use Principal Component Analysis (PCA) to reduce its dimensionality and a Nearest Neighbor (NN) classifier for recognition. We apply the proposed feature to the NLPR database and compare it with the conventional MSI. Experimental results show the effectiveness of the GMSI.
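
The pipeline in this abstract reduces gray-level gait images with PCA and matches them with a nearest-neighbor rule. Below is a minimal sketch of that generic pipeline using scikit-learn; the image size, the toy random data, and the specific classes used are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: PCA dimensionality reduction + 1-NN classification of
# flattened gray-level gait images (GMSI-style features).
# Toy random data stands in for real GMSIs from a gait database.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Assume each GMSI is a 64x64 gray-level image, flattened to a vector.
n_subjects, seqs_per_subject, h, w = 5, 4, 64, 64
X = rng.random((n_subjects * seqs_per_subject, h * w))
y = np.repeat(np.arange(n_subjects), seqs_per_subject)

# Reduce dimensionality with PCA, then classify with a nearest neighbor.
pca = PCA(n_components=10).fit(X)
knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)

# A probe GMSI is projected onto the same subspace and matched to the
# nearest gallery sample.
probe = rng.random((1, h * w))
print("predicted subject:", knn.predict(pca.transform(probe))[0])
```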

Gait Recognition using Modified Motion Silhouette Image (개선된 움직임 실루엣 영상을 이용한 발걸음 인식에 관한 연구)

  • Hong Sung-Jun;Lee Hee-Sung;Oh Kyong-Sae;Kim Eun-Tai
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.3
    • /
    • pp.266-270
    • /
    • 2006
  • In this paper, we propose a human identification system based on a Hidden Markov Model (HMM) using gait. Since each gait cycle consists of a set of continuous motion states, and the transitions across states have probabilistic dependencies, an individual's gait can be modeled with an HMM. We assume that an individual's gait consists of transitions among N discrete states, and we propose a gait feature representation, the Modified Motion Silhouette Image (MMSI), to represent and recognize individual gait. The MMSI is defined as a gray-level image that provides both spatial and temporal information. The experimental results show the gait recognition performance of the proposed system.
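
The abstract models gait as probabilistic transitions among N discrete states. The sketch below shows the standard HMM forward algorithm one could use to score an observation sequence against a per-subject model; the state count, the quantized observation symbols, and all matrix values are illustrative assumptions rather than the paper's parameters.

```python
# Minimal sketch: scoring a discrete observation sequence against an HMM,
# as one would score a gait sequence against each subject's model.
# The matrices below are illustrative, not values from the paper.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled HMM forward algorithm returning log P(obs | model).
    obs: observation symbol indices; pi: (N,) initial distribution;
    A: (N, N) transition matrix; B: (N, M) emission matrix."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# N = 4 assumed gait states, M = 6 assumed quantized MMSI observation symbols.
pi = np.full(4, 0.25)
A = np.full((4, 4), 0.1) + 0.6 * np.eye(4)      # mostly self-transitions
B = np.random.default_rng(1).dirichlet(np.ones(6), size=4)

obs = [0, 0, 1, 2, 2, 3, 5, 4]                  # a toy observation sequence
print("log-likelihood:", forward_log_likelihood(obs, pi, A, B))
```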

Gait Recognition using Modified Motion Silhouette Image (개선된 움직임 실루엣 영상을 이용한 발걸음 인식에 관한 연구)

  • Hong Seong-Jun;Lee Hui-Seong;O Gyeong-Se;Kim Eun-Tae
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2006.05a
    • /
    • pp.49-52
    • /
    • 2006
  • In this paper, we propose a gait-based human identification system built on a Hidden Markov Model. An individual's gait can be represented as a set of continuous postures and movements, and because the changes between successive movements are structurally probabilistic, gait can be modeled appropriately with a Hidden Markov Model. We assume that an individual's gait consists of transitions among N discrete postures, and to compute these we propose a gait feature model called the MMSI. The MMSI is a gray-scale image that carries the spatiotemporal information that plays an important role in gait recognition. The experimental results demonstrate Hidden Markov Model-based gait recognition using the MMSI. (A sketch of building such a gray-level image is given after this entry.)

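Both versions of this paper describe the MMSI as a gray-level image whose pixel intensities carry spatiotemporal gait information. Below is a minimal sketch of collapsing a sequence of binary silhouettes into one such gray-level image, where brighter pixels belonged to the silhouette more recently; the recency-weighting rule and the toy frames are assumptions for illustration, not the exact MMSI construction.

```python
# Minimal sketch: collapse binary silhouette frames into a gray-level image
# whose intensity encodes how recently each pixel was part of the silhouette.
# The weighting rule is an illustrative assumption, not the paper's MMSI.
import numpy as np

def motion_silhouette_image(silhouettes):
    """silhouettes: (T, H, W) array of binary (0/1) silhouette frames."""
    T, H, W = silhouettes.shape
    msi = np.zeros((H, W), dtype=np.float32)
    for t, sil in enumerate(silhouettes, start=1):
        weight = t / T                       # later frames get higher intensity
        msi = np.where(sil > 0, weight, msi)
    return (msi * 255).astype(np.uint8)      # gray-level image, 0..255

# Toy example: a square silhouette drifting to the right over 10 frames.
frames = np.zeros((10, 64, 64), dtype=np.uint8)
for t in range(10):
    frames[t, 20:40, 10 + 2 * t:30 + 2 * t] = 1

msi = motion_silhouette_image(frames)
print(msi.shape, int(msi.max()))             # (64, 64) 255
```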

Automatic Detecting of Joint of Human Body and Mapping of Human Body using Humanoid Modeling (인체 모델링을 이용한 인체의 조인트 자동 검출 및 인체 매핑)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.4
    • /
    • pp.851-859
    • /
    • 2011
  • In this paper, we propose a method that automatically extracts the silhouette and joints of the human body from consecutive input images and tracks the joints for interaction between human and computer. The proposed method also represents human actions by mapping the body using the joints. To implement the algorithm, we model the human body with 14 joints, referring to body proportions. The proposed method converts the RGB color image acquired from a single camera into hue, saturation, and value images and extracts the body silhouette from the difference between the background and the input. The joints are then extracted automatically using the corner points of the extracted silhouette and the body model data. The motion of the object is tracked by applying a block-matching method to the areas around the joints across frames, and the human motion is mapped using the joint positions. The proposed method was applied to test videos; the results show that it automatically extracts the joints and effectively maps the human body with the detected joints, and that human actions are aptly expressed by the joint locations.
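
The abstract tracks each joint by block matching in the area around it between consecutive frames. A compact sketch of that block-matching step in plain NumPy is given below; the block size, search radius, and the sum-of-absolute-differences criterion are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch: track a joint between two gray-level frames by block
# matching (sum of absolute differences) inside a small search window.
# Block size and search radius are illustrative assumptions.
import numpy as np

def track_joint(prev, curr, joint, block=8, search=6):
    """Return the position in `curr` whose surrounding block best matches
    the block around `joint` in `prev` (both are 2-D gray-level arrays)."""
    y, x = joint
    ref = prev[y - block:y + block, x - block:x + block]
    best, best_pos = np.inf, joint
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = curr[cy - block:cy + block, cx - block:cx + block]
            if cand.shape != ref.shape:
                continue                     # candidate window fell off the frame
            sad = np.abs(ref.astype(int) - cand.astype(int)).sum()
            if sad < best:
                best, best_pos = sad, (cy, cx)
    return best_pos

# Toy example: a bright patch shifted by (2, 3) pixels between frames.
rng = np.random.default_rng(0)
prev = rng.integers(0, 50, (100, 100))
prev[40:56, 40:56] = 255
curr = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)

print(track_joint(prev, curr, (48, 48)))     # expected (50, 51)
```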

CNN-based Gesture Recognition using Motion History Image

  • Koh, Youjin;Kim, Taewon;Hong, Min;Choi, Yoo-Joo
    • Journal of Internet Computing and Services
    • /
    • v.21 no.5
    • /
    • pp.67-73
    • /
    • 2020
  • In this paper, we present a CNN-based gesture recognition approach that reduces the memory burden of the input data. Most neural network-based gesture recognition methods use a sequence of frame images as input, which causes a memory burden. We instead use a motion history image to define a meaningful gesture. The motion history image is a grayscale image into which the temporal motion information is collapsed by synthesizing silhouette images of a user over the period of one meaningful gesture. In this paper, we first summarize previous traditional and neural network-based approaches to gesture recognition. We then explain the data preprocessing procedure for building the motion history image and the neural network architecture with three convolution layers for recognizing the meaningful gestures. In the experiments, we trained five types of gestures, namely charging power, shooting left, shooting right, kicking left, and kicking right. The gesture recognition accuracy was measured while adjusting the number of filters in each layer of the proposed network. Using a grayscale image with 240 × 320 resolution to define one meaningful gesture, we achieved a gesture recognition accuracy of 98.24%.
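
The paper feeds a single 240 × 320 grayscale motion history image into a network with three convolution layers and five gesture classes. A minimal Keras sketch of such an architecture follows; the filter counts, kernel sizes, pooling, and dense layer are assumptions (the paper itself varies the number of filters), not the authors' exact configuration.

```python
# Minimal sketch: a CNN with three convolution layers that maps a single
# 240x320 gray-level motion history image to one of five gesture classes.
# Filter counts, kernel sizes, and the dense layer are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(240, 320, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),   # five gesture classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use (N, 240, 320, 1) motion history images and integer labels:
# model.fit(mhi_images, labels, epochs=..., validation_split=0.2)
```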

Model-based Body Motion Tracking of a Walking Human (모델 기반의 보행자 신체 추적 기법)

  • Lee, Woo-Ram;Ko, Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.75-83
    • /
    • 2007
  • A model-based approach to tracking the limbs of a walking human subject is proposed in this paper. The tracking process begins by building a database of conditional probabilities of motions between the limbs of a walking subject. With a suitable amount of video footage from various human subjects included in the database, a probabilistic model characterizing the relationships between limb motions is developed. The motion tracking of a test subject begins with identifying and tracking limbs in the surveillance video using edge and silhouette detection methods. When occlusion occurs in any of the limbs being tracked, the approach uses the probabilistic motion model in conjunction with the minimum-cost edge and silhouette tracking model to determine the motion of the limb occluded in the image. The method has shown promising results in tracking occluded limbs in the validation tests.
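
The key idea in this abstract is a database of conditional probabilities relating the motions of different limbs, consulted when a limb is occluded. The toy sketch below estimates such a table from quantized motion labels and predicts the most likely motion of an occluded limb; the three motion classes and the single visible/hidden limb pair are simplifying assumptions, not the paper's actual motion representation.

```python
# Minimal sketch: estimate P(left-leg motion | right-leg motion) from paired
# training labels and use it to guess the motion of an occluded limb.
# The three motion classes (0=backward, 1=still, 2=forward) are an assumed
# quantization, not the paper's motion representation.
import numpy as np

N_CLASSES = 3

def conditional_table(visible, hidden):
    """Rows: visible-limb class; columns: P(hidden-limb class | visible)."""
    counts = np.ones((N_CLASSES, N_CLASSES))        # Laplace smoothing
    for v, h in zip(visible, hidden):
        counts[v, h] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy training data: while walking, the legs tend to move in opposition.
rng = np.random.default_rng(0)
right_leg = rng.integers(0, N_CLASSES, 200)
left_leg = np.where(rng.random(200) < 0.8, 2 - right_leg, right_leg)

table = conditional_table(right_leg, left_leg)

# At test time the left leg is occluded; predict its motion from the right leg.
observed_right = 2                                  # right leg swinging forward
print("predicted occluded-limb class:", int(np.argmax(table[observed_right])))
```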

Human Motion Tracking by Combining View-based and Model-based Methods for Monocular Video Sequences (하나의 비디오 입력을 위한 모습 기반법과 모델 사용법을 혼용한 사람 동작 추적법)

  • Park, Ji-Hun;Park, Sang-Ho;Aggarwal, J.K.
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.657-664
    • /
    • 2003
  • Reliable tracking of moving humans is essential to motion estimation, video surveillance, and human-computer interfaces. This paper presents a new approach to human motion tracking that combines appearance-based and model-based techniques. Monocular color video is processed at both the pixel level and the object level. At the pixel level, a Gaussian mixture model is used to train on and classify individual pixel colors. At the object level, a 3D human body model projected onto the 2D image plane is used to fit the image data. Our method does not use inverse kinematics because of the singularity problem. While many others use stochastic sampling for model-based motion tracking, our method relies purely on nonlinear programming. We convert the human motion tracking problem into a nonlinear programming problem. A cost function for parameter optimization estimates the degree of overlap between the foreground input-image silhouette and the projected 3D model body silhouette. The overlap is computed with computational geometry by converting a set of pixels from the image domain into a polygon in the real projection-plane domain. Our method is used to recognize various human motions, and the motion tracking results on video sequences are very encouraging.
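
This abstract casts tracking as nonlinear programming over a cost that measures the overlap between the image silhouette polygon and the projected model silhouette. A toy sketch of that formulation is given below, using shapely for the polygon overlap and a Nelder-Mead optimizer from SciPy; the rectangular "model", its two translation parameters, and the choice of solver are assumptions, not the paper's body model or optimizer.

```python
# Minimal sketch: fit a "model silhouette" (here just a rectangle with an
# unknown translation) to an "image silhouette" polygon by maximizing the
# polygon overlap with a nonlinear optimizer.
import numpy as np
from scipy.optimize import minimize
from shapely.geometry import Polygon

# Foreground silhouette extracted from the image (a fixed toy polygon).
image_silhouette = Polygon([(10, 10), (30, 10), (30, 50), (10, 50)])

def projected_model(params):
    """Stand-in for the projected 3-D body model: a 20x40 rectangle
    translated by the optimization parameters (dx, dy)."""
    dx, dy = params
    return Polygon([(dx, dy), (dx + 20, dy), (dx + 20, dy + 40), (dx, dy + 40)])

def cost(params):
    # Negative overlap area: minimizing this maximizes the overlap.
    return -projected_model(params).intersection(image_silhouette).area

result = minimize(cost, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print("estimated offset:", result.x)      # should approach (10, 10)
print("overlap area:", -result.fun)       # should approach 20 * 40 = 800
```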

Vision-Based Real-Time Motion Capture System

  • Kim, Tae-Ho;Jo, Kang-Hyun;Yoon, Yeo-Hong;Kang, Hyun-Duk;Kim, Dae-Nyeon;Kim, Se-Yoon;Lee, In-Ho;Park, Chang-Jun;Leem, Nan-Hee;Kim, Sung-Een
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2001.10a
    • /
    • pp.171.5-171
    • /
    • 2001
  • Information acquired by sensors attached to the body has commonly been used for three-dimensional real-time motion capture. This paper describes a real-time motion capture algorithm using computer vision. In a real-time image sequence, the human body silhouette is extracted by background subtraction between the input image and the reference background image. Then, whether the human is standing facing forward or backward is estimated by extracting the skin region within the silhouette. Next, the principal axis of the torso is calculated and the face region is estimated along the principal axis. Feature points, which are essential for tracking the human gesture, are obtained ... (A sketch of the principal-axis computation follows this entry.)

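The principal axis of the torso mentioned in this abstract can be computed from the silhouette itself. The sketch below takes the leading eigenvector of the covariance of the foreground pixel coordinates as the principal axis; using the whole silhouette instead of a segmented torso is a simplifying assumption for illustration.

```python
# Minimal sketch: principal axis of a binary silhouette, taken as the leading
# eigenvector of the covariance of the foreground pixel coordinates.
# Using the whole silhouette (not a segmented torso) is a simplification.
import numpy as np

def principal_axis(silhouette):
    """silhouette: 2-D array, nonzero where the person is.
    Returns (centroid, unit axis vector) in (row, col) coordinates."""
    ys, xs = np.nonzero(silhouette)
    pts = np.stack([ys, xs], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]    # direction of largest variance
    return centroid, axis

# Toy example: an upright 60x16 "body" inside a 100x100 frame.
sil = np.zeros((100, 100), dtype=np.uint8)
sil[20:80, 42:58] = 1

centroid, axis = principal_axis(sil)
print("centroid:", centroid)                 # ~ (49.5, 49.5)
print("axis:", axis)                         # ~ (+/-1, 0): vertical major axis
```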

Markerless Motion Capture Algorithm for Lizard Biomimetics (소형 도마뱀 운동 분석을 위한 마커리스 모션 캡쳐 알고리즘)

  • Kim, Chang Hoi;Kim, Tae Won;Shin, Ho Cheol;Lee, Heung Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.9
    • /
    • pp.136-143
    • /
    • 2013
  • In this paper, an algorithm for finding the joints of a small animal such as a lizard from multiple-view silhouette images is presented. The proposed algorithm calculates 3D coordinates so that the locomotion of the lizard can be reconstructed without markers. The silhouette images of the lizard were obtained with an adaptive thresholding algorithm, and the skeleton image of each silhouette was obtained with the Zhang-Suen thinning method. The backbone line and the head and tail points were detected with the A* search algorithm and an algorithm for eliminating ortho-diagonal connections. The shoulder and hip joints of the lizard were found by 3×3 masking of the thickened backbone line, and the foot points were obtained by morphological operations. Finally, the elbow and knee joints were calculated from the orthogonal distances to the lines between the foot points and the shoulder/hip joints. The performance of the proposed algorithm was evaluated through experiments on detecting the joints of a small lizard.
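
The lizard pipeline starts from a thresholded silhouette that is thinned to a one-pixel skeleton before the backbone and joints are searched for. A short sketch of that first stage using scikit-image is shown below (scikit-image's 2-D `skeletonize` defaults to Zhang's thinning algorithm); the toy frame and the fixed global threshold are illustrative assumptions, whereas the paper uses adaptive thresholding.

```python
# Minimal sketch: threshold a gray-level frame into a silhouette and thin it
# to a one-pixel-wide skeleton, the starting point for backbone/joint search.
# The toy frame and the fixed threshold are illustrative assumptions.
import numpy as np
from skimage.morphology import skeletonize

# Toy gray-level frame: a bright T-shaped blob on a dark background.
frame = np.zeros((80, 80), dtype=np.uint8)
frame[20:60, 38:42] = 200          # "backbone"
frame[20:24, 20:60] = 200          # "front legs"

silhouette = frame > 100           # simple global threshold (adaptive in the paper)
skeleton = skeletonize(silhouette)

print("silhouette pixels:", int(silhouette.sum()))
print("skeleton pixels:  ", int(skeleton.sum()))   # far fewer: a thin centerline
# Backbone extraction would then search this skeleton (A* in the paper) and
# mask it to locate the shoulder and hip joints.
```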

Gesture Extraction for Ubiquitous Robot-Human Interaction (유비쿼터스 로봇과 휴먼 인터액션을 위한 제스쳐 추출)

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.12
    • /
    • pp.1062-1067
    • /
    • 2005
  • This paper discusses a skeleton feature extraction method for a ubiquitous robot system. The skeleton features are used for human motion analysis and pose estimation. Unlike conventional feature extraction environments, the ubiquitous robot system requires a more robust feature extraction method because of internal vibration and low image quality. A new hybrid silhouette extraction method and an adaptive skeleton model are proposed to cope with this constrained environment. Skin color is used to extract more refined feature points. Finally, the experimental results show the superiority of the proposed method.
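
The abstract proposes a hybrid silhouette extraction that supplements a motion-based mask with skin color. A toy sketch of combining a background-difference mask with a skin-color mask is given below using OpenCV; the YCrCb skin range and the thresholds are common heuristics assumed for illustration, not values taken from the paper.

```python
# Minimal sketch: a "hybrid" silhouette mask that ORs a background-difference
# mask with a skin-color mask. The YCrCb skin range and thresholds are common
# heuristics assumed for illustration, not values from the paper.
import numpy as np
import cv2

def hybrid_silhouette(frame_bgr, background_bgr, diff_thresh=30):
    # Foreground mask from background differencing.
    diff = cv2.absdiff(frame_bgr, background_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, motion_mask = cv2.threshold(gray, diff_thresh, 255, cv2.THRESH_BINARY)

    # Skin-color mask in YCrCb (a widely used chrominance range).
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    return cv2.bitwise_or(motion_mask, skin_mask)

# Toy frames: a gray background and a frame with a skin-toned patch added.
background = np.full((120, 160, 3), 60, dtype=np.uint8)
frame = background.copy()
frame[40:80, 60:100] = (120, 140, 200)         # roughly skin-toned in BGR

mask = hybrid_silhouette(frame, background)
print("foreground pixels:", int((mask > 0).sum()))
```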