• Title/Summary/Keyword: Pose Recognition

Realtime Face Recognition by Analysis of Feature Information (특징정보 분석을 통한 실시간 얼굴인식)

  • Chung, Jae-Mo;Bae, Hyun;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.9
    • /
    • pp.822-826
    • /
    • 2001
  • Statistical analysis of extracted features combined with neural networks is proposed to recognize a human face. In the preprocessing step, a normalized skin-color map built from Gaussian functions is employed to extract the face-candidate region, and the feature information in that region is used to detect the face region. In the recognition step, 120 images of 10 persons are trained with the backpropagation algorithm; the images of each person are captured under various directions, poses, and facial expressions. The input variables of the neural network are the geometrical feature information and the feature information drawn from the eigenface space. Simulation results for the 10 persons show that the proposed method yields high recognition rates.
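
The entry above describes an eigenface-plus-backpropagation pipeline without implementation details. The following is a minimal sketch of that idea using scikit-learn, where the data shapes, component count, and network size are assumptions for illustration rather than values from the paper.

```python
# Minimal eigenface + backpropagation sketch (illustrative; not the authors' code).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Assumed data: 120 face crops of 10 persons, flattened to vectors.
X = np.random.rand(120, 64 * 64)      # placeholder for preprocessed face regions
y = np.repeat(np.arange(10), 12)      # 12 images per person (assumption)

# Eigenface projection: keep a small number of principal components.
pca = PCA(n_components=30, whiten=True)
X_eig = pca.fit_transform(X)

# Backpropagation-trained classifier on the eigenface coefficients.
clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
clf.fit(X_eig, y)

# Recognizing a new face image uses the same projection.
probe = pca.transform(np.random.rand(1, 64 * 64))
print(clf.predict(probe))
```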

A Head Gesture Recognition Method based on Eigenfaces using SOM and PRL (SOM과 PRL을 이용한 고유얼굴 기반의 머리동작 인식방법)

  • Lee, U-Jin;Gu, Ja-Yeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.3
    • /
    • pp.971-976
    • /
    • 2000
  • In this paper a new method for head gesture recognition is proposed. In the first stage, face image data are transformed into low-dimensional vectors by principal component analysis (PCA), which exploits the high correlation between face pose images. A self-organizing map (SOM) is then trained on the transformed face vectors in such a way that nodes at similar locations respond to similar poses. The sequence of poses that makes up each model gesture is passed through PCA and the SOM, and the result is stored in a database. At the recognition stage, any sequence of frames is passed through PCA and the SOM, and the result is compared with the model gestures stored in the database. To improve the robustness of classification, probabilistic relaxation labeling (PRL) is used, which exploits the contextual information embedded in adjacent poses.
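
The paper does not publish code, but the PCA-then-SOM stage described above can be pictured with a small NumPy sketch. The map size, learning schedule, and data shapes below are assumptions, and the relaxation-labeling stage is omitted.

```python
# PCA + tiny self-organizing map sketch (illustrative assumptions throughout).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.random((500, 32 * 32))          # assumed face-pose images, flattened
codes = PCA(n_components=10).fit_transform(frames)

# A 5x5 SOM: each node holds a weight vector in the PCA space.
grid_h, grid_w, dim = 5, 5, codes.shape[1]
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)              # decaying learning rate
    radius = 2.0 * (1 - epoch / 20) + 0.5    # decaying neighborhood radius
    for v in codes:
        dist = np.linalg.norm(weights - v, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
        g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * radius ** 2))
        weights += lr * g[..., None] * (v - weights)

# A gesture then becomes the sequence of best-matching nodes of its frames.
def pose_index(vec):
    d = np.linalg.norm(weights - vec, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```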

Face Recognition Using Feature Information and Neural Network

  • Chung, Jae-Mo;Bae, Hyeon;Kim, Sung-Shin
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회 학술대회논문집)
    • /
    • 2001.10a
    • /
    • pp.55.2-55
    • /
    • 2001
  • Statistical analysis of extracted features combined with neural networks is proposed to recognize a human face. In the preprocessing step, a normalized skin-color map built from Gaussian functions is employed to extract the face-candidate region, and the feature information in that region is used to detect the face region. In the recognition step, 360 images of 30 persons are trained with the backpropagation algorithm; the images of each person are captured under various directions, poses, and facial expressions. The input variables of the neural network are the feature information drawn from the eigenface space. Simulation results for the 30 persons show that the proposed method yields high recognition rates.

A Study on Hand-signal Recognition System in 3-dimensional Space (3차원 공간상의 수신호 인식 시스템에 대한 연구)

  • 장효영;김대진;김정배;변증남
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.3
    • /
    • pp.103-114
    • /
    • 2004
  • This paper deals with a system capable of recognizing hand-signals in 3-dimensional space. The system uses two color cameras as input devices. Vision-based gesture recognition systems are known to be user-friendly because of their contact-free characteristic, but, as with other applications that use a camera as an input device, they face difficulties under complex backgrounds and varying illumination. In order to detect the hand region robustly from an input image under various conditions without any special gloves or markers, the paper uses previous position information and an adaptive hand color model. A hand-signal is defined as a combination of two basic elements, 'hand pose' and 'hand trajectory'. For hand-pose classification, the paper proposes a 2-stage classification method based on a 'small group concept', and it also suggests a complementary feature selection method for the images from the two color cameras. The method is verified with a hand-signal application in a driving simulator.
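
The adaptive hand color model mentioned above is not specified in the abstract; one common realization is an HSV histogram that is back-projected onto each frame and blended with the histogram of the newly detected hand region. The sketch below assumes that form, along with the adaptation rate and bin counts.

```python
# Adaptive hand color model sketch using HSV histogram back-projection.
# Illustrative only; the paper's exact color model is not specified here.
import cv2
import numpy as np

ALPHA = 0.1                         # assumed adaptation rate
hist = None                         # running hue-saturation histogram

def update_model(frame_bgr, hand_mask):
    """Blend the histogram of the current hand region (uint8 mask) into the running model."""
    global hist
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], hand_mask, [32, 32], [0, 180, 0, 256])
    cv2.normalize(h, h, 0, 255, cv2.NORM_MINMAX)
    hist = h if hist is None else (1 - ALPHA) * hist + ALPHA * h

def hand_likelihood(frame_bgr):
    """Back-project the adapted histogram to get a per-pixel hand likelihood map."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.calcBackProject([hsv], [0, 1], hist.astype(np.float32), [0, 180, 0, 256], 1)
```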

Robust Real-time Pose Estimation to Dynamic Environments for Modeling Mirror Neuron System (거울 신경 체계 모델링을 위한 동적 환경에 강인한 실시간 자세추정)

  • Jun-Ho Choi;Seung-Min Park
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.19 no.3
    • /
    • pp.583-588
    • /
    • 2024
  • With the emergence of brain-computer interface (BCI) technology, analyzing mirror neurons has become more feasible. However, evaluating the accuracy of BCI systems that rely on human thought is challenging because of its qualitative nature. To harness the potential of BCI, we propose a new approach to measuring accuracy based on the characteristics of mirror neurons in the human brain, which are influenced by speech speed depending on the ultimate goal of the movement. Chapter 2 of this paper introduces mirror neurons and explains human posture estimation for modeling them. Chapter 3 presents a pose estimation method that is robust to dynamic environments in real time, and proposes a method to analyze the accuracy of BCI in this robotic environment.
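
The abstract refers to real-time human pose estimation but does not name a toolkit; purely as an illustration (the library choice is an assumption, not the paper's method), a minimal real-time skeleton-tracking loop with MediaPipe Pose and OpenCV might look like this:

```python
# Minimal real-time pose estimation loop (assumed tooling: MediaPipe + OpenCV).
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)                     # webcam as the dynamic-scene source

with mp_pose.Pose(model_complexity=1, min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            # Each landmark carries normalized (x, y, z) and a visibility score.
            nose = result.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"nose: ({nose.x:.2f}, {nose.y:.2f}) vis={nose.visibility:.2f}")
        if cv2.waitKey(1) & 0xFF == 27:        # Esc to quit
            break
cap.release()
```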

Face Recognition using Fisherface Method with Fuzzy Membership Degree (퍼지 소속도를 갖는 Fisherface 방법을 이용한 얼굴인식)

  • 곽근창;고현주;전명근
    • Journal of KIISE: Software and Applications
    • /
    • v.31 no.6
    • /
    • pp.784-791
    • /
    • 2004
  • In this study, we deal with face recognition using a fuzzy-based Fisherface method. The well-known Fisherface method is less sensitive to large variations in light direction, face pose, and facial expression than the principal component analysis method. Usually, face recognition methods, including the Fisherface method, give every sample equal importance in determining the face to be recognized, regardless of its typicalness. The main point here is that the proposed method assigns a PCA-transformed feature vector fuzzy membership degrees over the classes rather than assigning it to a single class; the membership degrees are obtained from a fuzzy k-nearest-neighbor (FKNN) initialization. Experimental results on the ORL and Yale face databases show better recognition performance than other methods.
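
The abstract does not spell out how the FKNN initialization assigns membership degrees; the sketch below uses the widely cited Keller-style fuzzy k-NN weighting as a plausible stand-in, with the data shapes and neighbor count assumed.

```python
# Fuzzy k-NN membership initialization sketch (a common formulation; details assumed).
import numpy as np

def fknn_memberships(X, y, n_classes, k=3):
    """Return an (n_samples, n_classes) matrix of fuzzy membership degrees."""
    n = X.shape[0]
    U = np.zeros((n, n_classes))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                               # exclude the sample itself
        neighbors = y[np.argsort(d)[:k]]
        for j in range(n_classes):
            share = np.sum(neighbors == j) / k      # fraction of neighbors in class j
            U[i, j] = 0.51 + 0.49 * share if j == y[i] else 0.49 * share
    return U

# Toy usage with PCA-projected features (placeholder data).
X = np.random.rand(40, 30)          # e.g. 40 faces projected onto 30 eigenfaces
y = np.repeat(np.arange(4), 10)     # 4 persons, 10 images each (assumption)
U = fknn_memberships(X, y, n_classes=4)
print(U.sum(axis=1)[:5])            # each row sums to 1 by construction
```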

Image-based Localization Recognition System for Indoor Autonomous Navigation (실내 자율 비행을 위한 영상 기반의 위치 인식 시스템)

  • Moon, SungTae;Cho, Dong-Hyun;Han, Sang-Hyuck
    • Aerospace Engineering and Technology
    • /
    • v.12 no.1
    • /
    • pp.128-136
    • /
    • 2013
  • Recently, localization systems using various sensors have been studied, driven by growing interest in autonomous flight. In indoor environments where GPS is unavailable, another way to recognize the current position is needed, and among the many ways to estimate the current pose, image-based localization has attracted particular interest. In this paper, we describe a marker-based localization system and its use in an implementation of autonomous flight. For real environments where markers cannot be placed, localization based on real-time 3D map building is also discussed.
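
The abstract describes marker-based localization without naming the marker type; as an illustration only, a fiducial-marker pose estimate with OpenCV's ArUco module and solvePnP could be sketched as below. The marker size, dictionary, and camera intrinsics are assumptions, and newer OpenCV releases expose the same functionality through the ArucoDetector class.

```python
# Marker-based camera pose sketch (assumed: ArUco markers, known camera intrinsics).
import cv2
import numpy as np

MARKER_SIZE = 0.15                                  # marker edge length in meters (assumed)
camera_matrix = np.array([[600., 0., 320.],
                          [0., 600., 240.],
                          [0., 0., 1.]])            # placeholder intrinsics
dist_coeffs = np.zeros(5)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
# 3D corner coordinates of a marker centered at the origin, z = 0.
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * MARKER_SIZE / 2

def marker_pose(gray_image):
    corners, ids, _ = cv2.aruco.detectMarkers(gray_image, aruco_dict)
    if ids is None:
        return None
    # Solve the pose of the first detected marker relative to the camera.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0][0], camera_matrix, dist_coeffs)
    return (ids[0][0], rvec, tvec) if ok else None
```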

Character Recognition and Search for Media Editing (미디어 편집을 위한 인물 식별 및 검색 기법)

  • Park, Yong-Suk;Kim, Hyun-Sik
    • Journal of Broadcast Engineering
    • /
    • v.27 no.4
    • /
    • pp.519-526
    • /
    • 2022
  • Identifying and searching for characters appearing in scenes during multimedia video editing is an arduous and time-consuming process. Applying artificial intelligence to such labor-intensive media editing tasks can greatly reduce production time and improve the efficiency of the creative process. In this paper, a method is proposed that combines existing artificial-intelligence-based techniques to automate character recognition and search for video editing. Object detection, face detection, and pose estimation are used for character localization, while face recognition and color-space analysis are used to extract unique representation information.
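
How the color-space analysis turns a character crop into searchable representation information is not detailed in the abstract; one plausible descriptor, assumed here purely for illustration, is a normalized HSV histogram compared by cosine similarity during search.

```python
# Color-descriptor sketch for character search (bins and metric are assumptions).
import cv2
import numpy as np

def color_descriptor(crop_bgr, bins=(8, 8, 8)):
    """L1-normalized HSV histogram of a character crop."""
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-8)

def similarity(desc_a, desc_b):
    """Cosine similarity between two descriptors (higher = more alike)."""
    return float(np.dot(desc_a, desc_b) /
                 (np.linalg.norm(desc_a) * np.linalg.norm(desc_b) + 1e-8))

# Searching: compare a query character's descriptor against indexed scene crops, e.g.
# ranked = sorted(index, key=lambda d: -similarity(query_descriptor, d))
```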

A Study on Improvement of the Human Posture Estimation Method for Performing Robots (공연로봇을 위한 인간자세 추정방법 개선에 관한 연구)

  • Park, Cheonyu;Park, Jaehun;Han, Jeakweon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.750-757
    • /
    • 2020
  • One of the basic tasks for robots that interact with humans is to grasp human behavior quickly and accurately, so human pose recognition must be both accurate and as fast as possible. However, when the human pose is estimated with deep learning, a representative artificial intelligence technique, accuracy and speed are rarely satisfied at the same time, and it is common to choose either a top-down method with high inference accuracy or a bottom-up method with high processing speed. In this paper, we propose two methods that retain the advantages of both approaches while compensating for their disadvantages: the first performs parallel inference on a server with multiple GPUs, and the second combines a bottom-up method with one-class classification. In our experiments, both methods showed improved speed. If applied to an entertainment robot, they are expected to enable highly reliable interaction with the audience.
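
The multi-GPU parallel inference mentioned above is not detailed in the abstract; one simple way to realize the idea, sketched below with PyTorch, is to keep a model replica per GPU and hand frames out round-robin from a thread pool. The placeholder network, frame shapes, and device handling are assumptions.

```python
# Multi-GPU parallel inference sketch (PyTorch; the pose model itself is a placeholder).
from concurrent.futures import ThreadPoolExecutor
import copy
import torch

devices = ([torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
           or [torch.device("cpu")])                      # fall back to CPU if no GPU

# Placeholder network standing in for a pose-estimation model.
base_model = torch.nn.Sequential(torch.nn.Conv2d(3, 17, 3, padding=1))
replicas = [copy.deepcopy(base_model).to(d).eval() for d in devices]   # one replica per device

@torch.no_grad()
def infer(frame, idx):
    d = devices[idx % len(devices)]                       # round-robin device assignment
    return replicas[idx % len(replicas)](frame.to(d)).cpu()

frames = [torch.rand(1, 3, 256, 256) for _ in range(8)]   # assumed batch of input frames
with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    heatmaps = list(pool.map(infer, frames, range(len(frames))))
print(len(heatmaps), heatmaps[0].shape)
```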

A Real-time Hand Pose Recognition Method with Hidden Finger Prediction (은닉된 손가락 예측이 가능한 실시간 손 포즈 인식 방법)

  • Na, Min-Young;Choi, Jae-In;Kim, Tae-Young
    • Journal of Korea Game Society
    • /
    • v.12 no.5
    • /
    • pp.79-88
    • /
    • 2012
  • In this paper, we present a real-time hand pose recognition method that provides an intuitive user interface through hand poses or movements, without a keyboard or mouse. First, the areas of the right and left hands are segmented from the depth camera image and noise removal is performed. Then the rotation angle and the centroid of each hand area are calculated. Subsequently, circles are expanded at regular intervals from the centroid of the hand, and finger joint points and endpoints are detected from the midpoints of the arcs where each circle crosses the hand region. Lastly, the hand information calculated in this way is matched against the hand model of the previous frame, and the recognized result is used to update the hand model for the next frame. This allows hidden fingers to be predicted from the previous frame's hand model by exploiting temporal coherence between consecutive frames. In experiments on various hand poses with hidden fingers using both hands, the accuracy was over 95% and the performance exceeded 32 fps. The proposed method can be used as a contactless input interface in presentation, advertisement, education, and game applications.
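
The circle-expansion step can be pictured with a small sketch: sample points on circles of growing radius around the hand centroid and record the arcs where the samples fall inside the hand mask; the midpoints of those arcs approximate finger cross-sections, and arcs that vanish as the radius grows suggest fingertips or joints. The radii, sampling density, and mask format below are assumptions.

```python
# Expanding-circle crossing sketch on a binary hand mask (all parameter values assumed).
import numpy as np

def finger_sections(mask, center, radii, n_samples=360):
    """For each radius, return the angles at the middle of each arc lying on the hand.

    mask   : 2-D boolean array (True = hand pixel), e.g. from depth segmentation.
    center : (row, col) centroid of the hand region.
    radii  : iterable of circle radii in pixels.
    """
    cy, cx = center
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    out = []
    for r in radii:
        rows = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, mask.shape[0] - 1)
        cols = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, mask.shape[1] - 1)
        on_hand = mask[rows, cols]
        mids, start = [], None
        for i, v in enumerate(np.append(on_hand, False)):   # sentinel closes a trailing run
            if v and start is None:
                start = i
            elif not v and start is not None:
                mids.append(theta[(start + i - 1) // 2])     # midpoint angle of this arc
                start = None
        out.append((r, mids))                                # candidate finger cross-sections
    return out

# Usage idea: arcs that persist over several consecutive radii trace a finger;
# the radius at which an arc disappears marks a fingertip or joint candidate.
```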