• Title/Summary/Keyword: posture recognition

Learning Similarity between Hand-posture and Structure for View-invariant Hand-posture Recognition (관측 시점에 강인한 손 모양 인식을 위한 손 모양과 손 구조 사이의 학습 기반 유사도 결정 방법)

  • Jang Hyo-Young;Jung Jin-Woo;Bien Zeung-Nam
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.3 / pp.271-274 / 2006
  • This paper deals with a learning-based method for deciding the similarity between the appearance of hand postures and their structures, aimed at improving the performance of vision-based hand-posture recognition systems. Vision-based hand-posture recognition is difficult because the human hand is an object with high degrees of freedom: captured images exhibit complex self-occlusion effects and, even for a single hand posture, varied appearances depending on the viewing direction. Many approaches therefore either limit the relative angle between the cameras and the hand or use multiple cameras. The former restricts the user's operation area, while the latter requires additional consideration of how to merge the results from each camera image into a final recognition result. To recognize hand postures, we use both appearance and structural features and learn the similarity between the two types of features.
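
The abstract above gives only the general idea of learning a similarity between appearance and structural features. As a loose illustration (not the paper's method), the sketch below fits a linear map from a hypothetical appearance-feature space to a hypothetical structure-feature space and scores similarity as the negative distance after projection; all array shapes, names, and the synthetic data are assumptions.

```python
# Hypothetical sketch only: learning a similarity between appearance features
# and structural (e.g. joint-angle) features via a least-squares linear map.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 32))                   # appearance features (placeholder)
M = rng.normal(size=(32, 15))
S = A @ M + 0.1 * rng.normal(size=(200, 15))     # structural features (placeholder)

# Fit W so that A @ W approximates S (ordinary least squares).
W, *_ = np.linalg.lstsq(A, S, rcond=None)

def similarity(appearance_vec, structure_vec):
    """Negative Euclidean distance after projecting appearance into structure space."""
    return -float(np.linalg.norm(appearance_vec @ W - structure_vec))

print(similarity(A[0], S[0]))   # close to 0 for a correctly paired example
print(similarity(A[0], S[1]))   # more negative for a mismatched pair
```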

A Study on Vision-based Robust Hand-Posture Recognition Using Reinforcement Learning (강화 학습을 이용한 비전 기반의 강인한 손 모양 인식에 대한 연구)

  • Jang Hyo-Young;Bien Zeung-Nam
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.39-49 / 2006
  • This paper proposes a hand-posture recognition method that uses reinforcement learning to improve the performance of vision-based hand-posture recognition. The difficulties in vision-based hand-posture recognition lie in viewing-direction dependency and self-occlusion caused by the high degrees of freedom of the human hand. Common ways of dealing with these problems are to use multiple cameras or to limit the relative angle between the cameras and the user's hand. With multiple cameras, however, fusion techniques for reaching a final decision must be considered, while limiting the angle of the user's hand restricts the user's freedom. The proposed method combines angular features and appearance features to describe hand postures using a two-layered data structure and reinforcement learning. The validity of the proposed method is evaluated by applying it to a hand-posture recognition system using three cameras.
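
The abstract does not spell out the reinforcement-learning formulation or the two-layered structure, so the following is only a toy sketch of the general idea of reward-driven cue fusion: the weight given to an appearance score versus an angle score is nudged up or down depending on whether the fused decision was correct. The update rule, constants, and names are all assumptions.

```python
# Toy sketch of reward-driven fusion of two cues; not the paper's RL formulation.
import random

weights = {"appearance": 0.5, "angle": 0.5}   # assumed fusion weights
ALPHA, EPSILON = 0.1, 0.1                     # assumed learning / exploration rates

def fused_score(scores):
    """scores: per-cue match scores in [0, 1] for one candidate posture."""
    return sum(weights[k] * scores[k] for k in weights)

def update(scores, correct):
    """Cues that contributed to a correct decision gain weight, and vice versa."""
    reward = 1.0 if correct else -1.0
    for k in weights:
        if random.random() < EPSILON:
            continue                          # occasionally skip a cue (exploration)
        weights[k] = max(0.05, weights[k] + ALPHA * reward * scores[k])
    total = sum(weights.values())
    for k in weights:
        weights[k] /= total                   # keep the fusion weights normalized
```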

Hand Gesture Recognition Algorithm Robust to Complex Image (복잡한 영상에 강인한 손동작 인식 방법)

  • Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.13 no.7 / pp.1000-1015 / 2010
  • In this paper, we propose a novel algorithm for hand gesture recognition. Hand detection is based on human skin color; boundary energy information is used to locate the hand region accurately, and the moment method is then employed to locate the center of the palm. Hand gesture recognition is separated into two steps. First, hand posture recognition: parallel neural networks (NNs) are employed, and the pattern of a hand posture is extracted with a fitting-ellipses method, which divides the detected hand region with 12 ellipses and calculates the white-pixel rate along each ellipse outline. This pattern is fed to NNs with 12 input nodes and 4 output nodes; each output node produces a value between 0 and 1, so a posture is represented by the resulting 4-value output code. Second, hand gesture tracking and recognition: a Kalman filter predicts the position of the gesture to build a position sequence, and the distance relationships between positions are used to confirm the gesture. Simulations were performed on Windows XP to evaluate the efficiency of the algorithm. For hand posture recognition, 300 training images were used to train the recognizer and 200 images to test it, of which 194 were recognized correctly. For hand tracking and recognition, 1,200 gestures were performed (400 per gesture type), of which 1,002 were recognized correctly. These results show that the proposed algorithm performs reliably in detecting the hand and recognizing its gestures.
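
The 12-ellipse feature described above is concrete enough to sketch. The snippet below (an illustration under assumed parameters, not the authors' code) samples a binary hand mask along 12 concentric ellipses centred on the palm and returns the white-pixel rate on each ellipse outline as a 12-dimensional feature vector; the axis lengths and scaling are invented placeholders.

```python
# Illustrative sketch (not the authors' code) of the 12-ellipse feature:
# the white-pixel rate along each concentric ellipse outline becomes one
# entry of the 12-dimensional input vector for the posture neural network.
import cv2
import numpy as np

def ellipse_features(hand_mask, center, base_axes=(10, 15), n_ellipses=12):
    """hand_mask: binary uint8 image (255 = hand); center: palm centre (x, y)."""
    feats = []
    for i in range(1, n_ellipses + 1):
        ring = np.zeros_like(hand_mask)
        axes = (base_axes[0] * i, base_axes[1] * i)         # assumed axis scaling
        cv2.ellipse(ring, center, axes, 0, 0, 360, 255, 1)  # 1-px ellipse outline
        on_ring = ring == 255
        feats.append(float((hand_mask[on_ring] == 255).mean()) if on_ring.any() else 0.0)
    return np.array(feats)

# Example with a synthetic circular "hand" blob:
mask = np.zeros((480, 640), np.uint8)
cv2.circle(mask, (320, 240), 60, 255, -1)
print(ellipse_features(mask, (320, 240)))
```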

An Eye Location based Head Posture Recognition Method and Its Application in Mouse Operation

  • Chen, Zhe;Yang, Bingbing;Yin, Fuliang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.3 / pp.1087-1104 / 2015
  • An eye-location-based head posture recognition method is proposed in this paper. First, the face is detected using a skin-color method, and the eyebrow and eye areas are located based on the gray gradient within the face. Next, pupil circles are determined using an edge-detection circle method. Finally, head postures are recognized from the eye location information. The proposed method has high recognition precision, is robust to facial expressions and different head postures, and can be used for mouse operation. The experimental results confirm the validity of the proposed method.
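
The paper's own edge-detection circle method is not described in the abstract, so the sketch below uses OpenCV's Hough circle transform purely as a stand-in to show how pupil circles might be located in a cropped grayscale eye region, and how a head-tilt angle could then be derived from the two pupil centres; all parameter values are assumptions.

```python
# Sketch of locating pupil circles in a cropped grayscale eye region; OpenCV's
# Hough circle transform is an illustrative stand-in for the paper's method.
import cv2
import numpy as np

def find_pupils(eye_region_gray):
    """Returns detected (x, y, r) circles in a grayscale crop around the eyes."""
    blurred = cv2.medianBlur(eye_region_gray, 5)            # suppress noise first
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=80, param2=20, minRadius=3, maxRadius=20)
    return [] if circles is None else np.round(circles[0]).astype(int)

def head_tilt_deg(left_pupil, right_pupil):
    """Head roll estimated from the line joining the two pupil centres."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return float(np.degrees(np.arctan2(dy, dx)))
```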

Hand Gesture Recognition using Optical Flow Field Segmentation and Boundary Complexity Comparison based on Hidden Markov Models

  • Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.504-516 / 2011
  • In this paper, we present a method to detect the human hand and recognize hand gestures. To detect the hand region, we use human skin color and a hand feature (boundary complexity) to locate the hand in the input image, and an optical flow algorithm to track the hand movement. Hand gesture recognition consists of two parts: posture recognition and motion recognition. To describe the hand posture feature, we employ the Fourier descriptor method because it is rotation invariant, and we use PCA to extract features across the sequence of gesture frames. An HMM is finally used to classify these features and make the final decision on a hand gesture. Experiments show that the proposed method achieves a 99% recognition rate in an environment with a simple background and no face region in view, which drops to 89.5% in an environment with a complex background and the face in view. These results indicate that the proposed algorithm is suitable for practical application.
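
As a small illustration of the rotation-invariance property mentioned above (not the paper's implementation), the sketch below computes Fourier-descriptor magnitudes of a hand contour extracted from a binary mask; the magnitudes are unaffected by in-plane rotation and are normalised by the first harmonic to discard scale. The PCA and HMM stages are not shown.

```python
# Sketch of a rotation-invariant Fourier descriptor for a hand contour
# (OpenCV >= 4 findContours signature assumed); PCA and HMM stages not shown.
import cv2
import numpy as np

def fourier_descriptor(mask, n_coeffs=16):
    """mask: binary uint8 silhouette; returns scale-normalised FFT magnitudes."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze()  # (N, 2) boundary points
    z = contour[:, 0] + 1j * contour[:, 1]                  # boundary as complex signal
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_coeffs + 1])                   # drop DC term (translation)
    return mags / mags[0]                                   # rotation- and scale-invariant

# Example with a synthetic elliptical blob:
mask = np.zeros((200, 200), np.uint8)
cv2.ellipse(mask, (100, 100), (60, 30), 25, 0, 360, 255, -1)
print(fourier_descriptor(mask)[:5])
```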

Skeletal Joint Correction Method based on Body Area Information for Climber Posture Recognition (클라이머 자세인식을 위한 신체영역 기반 스켈레톤 보정)

  • Chung, Daniel;Ko, Ilju
    • Journal of Korea Game Society / v.17 no.5 / pp.133-142 / 2017
  • Recently, screen climbing content such as sports climbing learning programs and screen climbing games has been emerging, and screen climbing games in particular have been studied extensively. In this paper, we propose a skeleton correction method based on the body area of a climber to improve posture recognition accuracy. The correction method consists of modified skeletal-frame normalization with abnormal skeleton-joint filtering, classification of the body area into joint parts, and a final skeleton-joint correction. The skeletal information obtained by the proposed method can be used to compare the climber's posture with an ideal climbing posture.
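
The abstract gives only the outline of the correction pipeline, so the following is a hedged sketch of one plausible piece of it: rejecting skeleton joints that fall outside the climber's segmented body area and falling back to the previous frame's joint positions. The fallback strategy and data layout are assumptions, not the paper's algorithm.

```python
# Hedged sketch of one plausible correction step (not the paper's algorithm):
# joints outside the segmented body area are treated as abnormal and replaced
# with the previous frame's positions.
import numpy as np

def correct_skeleton(joints, body_mask, prev_joints):
    """joints, prev_joints: (J, 2) integer pixel coords; body_mask: binary image."""
    corrected = joints.copy()
    h, w = body_mask.shape
    for j, (x, y) in enumerate(joints):
        inside = 0 <= x < w and 0 <= y < h and body_mask[y, x] > 0
        if not inside:                         # abnormal joint: outside the body area
            corrected[j] = prev_joints[j]      # assumed temporal fallback
    return corrected
```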

The Hand Posture Recognition Using IR-Sensor Array (적외선센서 어레이를 이용한 손동작 검출 방법)

  • Song, Tae-Houn;Jeong, Soon-Mook;Jung, Hyun-Uk;Kwon, Key-Ho;Jeon, Jae-Wook
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.432-435 / 2009
  • This paper proposes a hand posture recognition method based on pattern matching over a simple infrared (IR) sensor array. The pattern-matching-based recognition is intended to support enjoyable interaction and a good user experience when humans communicate with telecommunication devices, including robots. Our non-contact input device (the IR sensor array) transmits commands to control mobile robots; it can also control Google Earth's map-searching program and other applications.
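
As an illustration of pattern matching over an IR sensor array (the paper's actual sensor layout and templates are not given in the abstract), the sketch below matches a binary array reading against a few invented posture templates by Hamming distance.

```python
# Illustrative sketch: matching a binary IR-sensor-array reading against stored
# posture templates by Hamming distance. The templates are invented placeholders.
import numpy as np

TEMPLATES = {                                   # hypothetical 3x3 IR array patterns
    "left":  np.array([1, 0, 0, 1, 0, 0, 1, 0, 0]),
    "right": np.array([0, 0, 1, 0, 0, 1, 0, 0, 1]),
    "stop":  np.array([1, 1, 1, 1, 1, 1, 1, 1, 1]),
}

def match_posture(reading):
    """reading: binary vector of IR detections; returns the closest template name."""
    return min(TEMPLATES, key=lambda name: int(np.sum(TEMPLATES[name] != reading)))

print(match_posture(np.array([1, 1, 0, 1, 0, 0, 1, 0, 0])))  # -> "left"
```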

Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services / v.12 no.6 / pp.83-94 / 2011
  • A main research issue in affective computing is to give a machine the ability to recognize a person's emotion and react to it properly. Efforts in that direction have mainly focused on facial and vocal cues; postures have recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the saliency of the posture features that play a role in affective expression. Affective postures are first collected from human subjects using a motion capture system, and the emotional features of each posture are described with spatial features. Using standard statistical techniques, we verified that there is a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by observers. Discriminant analysis is used to build affective-posture predictive models and to measure the saliency of the proposed set of posture features in discriminating among six basic emotional states. The proposed features and models are evaluated using the correlation between the actors' and observers' posture sets. Quantitative experimental results show that the proposed feature set discriminates well between emotions and that the predictive models perform well.
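
To make the modelling step concrete, the sketch below fits a linear discriminant analysis classifier to synthetic stand-in data with six emotion classes using scikit-learn; the feature dimensionality and labels are placeholders, not the paper's motion-capture data.

```python
# Sketch of a discriminant-analysis predictive model for six emotion classes,
# fitted to synthetic stand-in data (not the paper's motion-capture features).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))        # spatial posture features (placeholder)
y = rng.integers(0, 6, size=300)      # six basic emotion labels (placeholder)

model = LinearDiscriminantAnalysis()
model.fit(X, y)
print(model.predict(X[:5]))           # predicted emotion classes for five postures
```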

Development of a 2D Posture Measurement System to Evaluate Musculoskeletal Workload (근골격계 부하 평가를 위한 2차원 자세 측정 시스템 개발)

  • Park, Sung-Joon;Park, Jae-Kyu;Choe, Jae-Ho
    • Journal of the Ergonomics Society of Korea / v.24 no.3 / pp.43-52 / 2005
  • A two-dimensional posture measurement system was developed to evaluate the risks of work-related musculoskeletal disorders (MSDs) easily under various work conditions. A posture measurement system is an essential tool for analyzing workload in order to prevent work-related MSDs. Although several posture measurement systems have been developed for workload assessment, restrictions remain in industry because of the difficulty of measuring work postures. In this study, an image recognition algorithm based on a neural network method was developed to measure work posture. Each joint angle of the human body is automatically measured from the recognized images by the algorithm, which makes it possible to evaluate the risks of work-related MSDs easily under various working conditions. A validation test on upper-body postures was carried out to examine the accuracy of the joint-angle data measured by the system, and the results showed good measurement performance for each joint angle. The differences between the directly measured joint angles and the angles measured by the posture measurement software were not statistically significant. The results are expected to help estimate physical workload properly, and the system can be used as a postural analysis tool to evaluate the risk of work-related MSDs in industry.
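
A joint angle computed from 2D landmark coordinates is the core quantity the system measures, so a minimal sketch is given below: the angle at a middle joint formed by two limb segments. The landmark-recognition step that produces the points is not shown, and the example coordinates are arbitrary.

```python
# Minimal sketch: a 2D joint angle from three recognised landmark points
# (e.g. shoulder-elbow-wrist); the image-recognition step is not shown.
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at point b (degrees) formed by the segments b->a and b->c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

print(joint_angle_deg((0, 0), (1, 0), (1, 1)))  # -> 90.0 (right-angle example)
```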

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk;Hong, Kwang-Jin;Jung, Kee-Chul
    • Journal of KIISE: Software and Applications / v.36 no.10 / pp.836-850 / 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, the performing arts, and robot control, and many researchers in pattern recognition and computer vision are working to build efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the feature extraction step, the first stage of a general posture recognition system, is influenced by these variations. To alleviate these variations and improve posture recognition results, this paper presents 3D Star Skeleton and Principal Component Analysis (PCA) based feature extraction methods for a multi-view environment. The proposed system uses 8 projection maps, a kind of depth map, as input data; the projection maps are extracted during the visual hull generation process. From these data, the system constructs a 3D Star Skeleton and extracts a rotation-invariant feature using PCA. In experiments, we extract the feature from the 3D Star Skeleton and recognize human postures using it, demonstrating that the proposed method is robust to human variations.
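
As a loose illustration of using PCA to remove dependence on body orientation (one ingredient of the approach above, not the full visual-hull pipeline), the sketch below expresses a set of 3D points in the frame of their own principal axes; in practice the sign ambiguity of the axes would also need handling.

```python
# Loose illustration: expressing 3D points in the frame of their own principal
# axes removes the dependence on body orientation (axis-sign ambiguity ignored).
import numpy as np

def rotation_invariant_feature(points_3d):
    """points_3d: (N, 3) array, e.g. 3D star-skeleton extremity points."""
    centered = points_3d - points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = principal axes
    aligned = centered @ vt.T                                 # coordinates in the PCA frame
    return aligned.flatten()

pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3], [1, 1, 1], [2, 0, 1]])
print(rotation_invariant_feature(pts).round(2))
```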