• Title/Summary/Keyword: Skin Tracking performance

The Estimation of Hand Pose Based on Mean-Shift Tracking Using the Fusion of Color and Depth Information for Marker-less Augmented Reality (비마커 증강현실을 위한 색상 및 깊이 정보를 융합한 Mean-Shift 추적 기반 손 자세의 추정)

  • Lee, Sun-Hyoung; Hahn, Hern-Soo; Han, Young-Joon
    • Journal of the Korea Society of Computer and Information / v.17 no.7 / pp.155-166 / 2012
  • This paper proposes a new method of estimating hand pose through a Mean-Shift tracking algorithm that fuses color and depth information for marker-less augmented reality. In marker-less augmented reality, most previous studies detect the hand region using skin color against a simple experimental background, and because finger features must be detected on the hand, the range of hand poses that can be measured from cameras is considerably restricted. The proposed method, by contrast, can easily detect the hand pose against a complex background through a new Mean-Shift tracking method that fuses the color and depth information from a 3D sensor. The hand pose is estimated from the center of gravity and two random points on the hand, without strong constraints. The proposed Mean-Shift tracking method shows roughly 50 pixels less error than a general tracking method that uses color alone. Augmented reality experiments show that the proposed method performs as well as a marker-based approach against a complex background.
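
The central idea in this abstract, weighting Mean-Shift by skin-color likelihood and depth proximity together, can be illustrated with a short OpenCV sketch. This is a minimal illustration under assumed inputs (a hand color histogram `hand_hist`, a depth map `depth_m` registered to the color frame, and a tracking window `track_win`); the weighting scheme and window update below are illustrative, not the authors' exact formulation.

```python
# Minimal sketch: mean-shift hand tracking on a weight map that fuses
# skin-color back-projection with a depth gate around the tracked hand.
# Names (hand_hist, depth_m, track_win) are illustrative assumptions.
import cv2
import numpy as np

def fused_weight_map(frame_bgr, depth_m, hand_hist, hand_depth_m, band_m=0.15):
    """Combine color likelihood and depth proximity into one weight image."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Skin-color likelihood from a pre-computed H-S histogram of the hand.
    color_w = cv2.calcBackProject([hsv], [0, 1], hand_hist, [0, 180, 0, 256], 1)
    # Depth gate: keep only pixels within +/- band_m of the current hand depth.
    depth_w = (np.abs(depth_m - hand_depth_m) < band_m).astype(np.uint8) * 255
    return cv2.bitwise_and(color_w, depth_w)

def track_hand(frame_bgr, depth_m, hand_hist, hand_depth_m, track_win):
    """One mean-shift update of the hand window on the fused weight map."""
    weights = fused_weight_map(frame_bgr, depth_m, hand_hist, hand_depth_m)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, track_win = cv2.meanShift(weights, track_win, criteria)
    x, y, w, h = track_win
    # Update the reference hand depth from the median depth inside the window.
    hand_depth_m = float(np.median(depth_m[y:y + h, x:x + w]))
    return track_win, hand_depth_m
```

The depth gate is what lets the tracker ignore skin-colored clutter in a complex background: pixels at the wrong depth get zero weight even if their color matches.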

A Dynamic Hand Gesture Recognition System Incorporating Orientation-based Linear Extrapolation Predictor and Velocity-assisted Longest Common Subsequence Algorithm

  • Yuan, Min; Yao, Heng; Qin, Chuan; Tian, Ying
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4491-4509 / 2017
  • This paper proposes a novel dynamic hand gesture recognition system. The approach comprises three main steps: detection, tracking and recognition. First, the gesture contour captured by a 2D camera is detected by combining the three-frame difference method with a skin-color elliptic boundary model. Then, the trajectory of the hand gesture is extracted by a gesture-tracking algorithm based on an occlusion-direction oriented linear extrapolation predictor, in which the gesture coordinates in the next frame are predicted from the current occlusion direction. Finally, to overcome the interference of insignificant trajectory segments, the longest common subsequence (LCS) is employed with the aid of velocity information. In addition, to tackle the sub-gesture problem, i.e., that some gestures may also be part of others, the most probable gesture category is identified by comparing the relative LCS length of each gesture, i.e., the proportion between the LCS length and the total length of each template, rather than the raw LCS length. The gesture dataset for the system performance test contains the digits 0 to 9, and experimental results demonstrate the robustness and effectiveness of the proposed approach.
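
The relative-LCS comparison described above (LCS length divided by template length, so that a template that matches only as a sub-gesture is penalized) is simple to state in code. Below is a minimal sketch that assumes trajectories are quantized into direction codes; the quantization scheme, template format, and function names are illustrative choices, not taken from the paper.

```python
# Minimal sketch of relative-LCS gesture matching: classify a trajectory by
# LCS length divided by the template length, which keeps sub-gestures from
# winning over the full gesture. Direction-code quantization is an assumption.
from math import atan2, pi

def direction_codes(points, bins=8):
    """Quantize a 2D point trajectory into chain-code-like direction symbols."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = atan2(y1 - y0, x1 - x0) % (2 * pi)
        codes.append(int(angle / (2 * pi) * bins) % bins)
    return codes

def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def classify(trajectory, templates):
    """Pick the digit whose template has the highest relative LCS score."""
    query = direction_codes(trajectory)
    scores = {digit: lcs_length(query, tmpl) / len(tmpl)
              for digit, tmpl in templates.items()}
    return max(scores, key=scores.get)
```

Here `templates` would map each digit to a pre-recorded direction-code sequence; dividing by `len(tmpl)` is the relative-length step the abstract emphasizes.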

Extraction of Skin Regions through Filtering-based Noise Removal (필터링 기반의 잡음 제거를 통한 피부 영역의 추출)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.672-678 / 2020
  • Ultra-high-speed images that accurately capture the minute movements of objects have become common as low-cost, high-performance cameras capable of high-speed filming have emerged. The method proposed in this paper first removes unexpected noise contained in high-speed input images and then extracts, from the denoised image, an area of interest that can represent personal information, such as skin regions. Noise generated by abnormal electrical signals is removed by applying a bilateral filter, and a color model built through pre-learning is then used to extract the area of interest representing the personal information contained in the image. Experimental results show that the introduced algorithms remove noise from high-speed images and robustly extract the area of interest. The approach is expected to be useful in various computer vision applications, such as image preprocessing, noise elimination, and tracking and monitoring of target areas.
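
As a rough illustration of the pipeline described above, the sketch below applies OpenCV's bilateral filter and then masks skin-colored pixels. The fixed YCrCb thresholds stand in for the paper's pre-learned color model; they are common heuristics, not the authors' learned parameters.

```python
# Minimal sketch: bilateral filtering to suppress sensor noise, followed by a
# skin mask from a simple YCrCb threshold standing in for a pre-learned
# color model. Threshold values are generic heuristics, not learned parameters.
import cv2
import numpy as np

def extract_skin_regions(frame_bgr):
    # Edge-preserving smoothing to remove noise from abnormal electrical signals.
    denoised = cv2.bilateralFilter(frame_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    # Skin likelihood: threshold the chrominance channels in YCrCb space.
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
    # Clean up the mask and keep only the skin-colored pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(denoised, denoised, mask=mask)
```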

Functions and Driving Mechanisms for Face Robot Buddy (얼굴로봇 Buddy의 기능 및 구동 메커니즘)

  • Oh, Kyung-Geune; Jang, Myong-Soo; Kim, Seung-Jong; Park, Shin-Suk
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.270-277 / 2008
  • The development of a face robot fundamentally targets natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a lifelike face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. Its structure and mechanism should also be simple and its production cost low enough. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can produce natural and precise facial expressions and make dynamic gestures driven by a single laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can form its own personality, emotion, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated by experiments and simulations.
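
The "emotional reaction decision model" is only described qualitatively in the abstract. Purely as a hypothetical illustration of what such a model might look like, the sketch below maps a few assumed sensor readings, biased by personality factors, to an emotion state and a facial-expression command. Every name, rule, and weight here is invented for illustration and is not taken from the paper.

```python
# Hypothetical sketch of an emotional reaction decision model: assumed sensor
# readings are scored against personality-weighted rules to pick an emotion,
# which then selects a facial-expression command. Not the authors' model.
from dataclasses import dataclass

@dataclass
class SensorInput:
    face_detected: bool   # from the face-tracking camera
    sound_level: float    # 0.0 (quiet) .. 1.0 (loud)
    touch: bool           # contact sensor on the robot body

# Personality factors bias how strongly each stimulus affects each emotion.
PERSONALITY = {"sociability": 0.8, "timidity": 0.3}

EXPRESSION_FOR_EMOTION = {
    "happy": "smile_and_eye_contact",
    "surprised": "raise_eyebrows",
    "neutral": "idle_blink",
}

def decide_emotion(s: SensorInput) -> str:
    """Score each emotion from the stimuli and return the strongest one."""
    scores = {
        "happy": PERSONALITY["sociability"] * (1.0 if s.face_detected or s.touch else 0.0),
        "surprised": PERSONALITY["timidity"] * s.sound_level,
        "neutral": 0.2,
    }
    return max(scores, key=scores.get)

def facial_expression_command(s: SensorInput) -> str:
    return EXPRESSION_FOR_EMOTION[decide_emotion(s)]

# Example: a user appears in view and speaks quietly -> Buddy smiles.
print(facial_expression_command(SensorInput(face_detected=True, sound_level=0.2, touch=False)))
```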
