• Title/Summary/Keyword: Hand tracking


Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il;Sin, Bong-Kee
    • Journal of KIISE: Software and Applications / v.35 no.4 / pp.265-279 / 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively during the last decade, with a significant amount of qualitative progress that, however, has fallen short of our expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both two-hand and one-hand gestures. Unlike wired glove-based approaches, the success of camera-based methods depends greatly on the image processing and feature extraction results. The proposed DBN-based inference is therefore preceded by fail-safe steps of skin extraction and modeling, and motion tracking. A new gesture recognition model for a set of both one-hand and two-hand gestures is then proposed, based on the dynamic Bayesian network framework, which makes it easy to represent the relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained recognition rates upwards of 99.59% with cross-validation. The proposed model and the related approach are believed to have strong potential for successful application to other related problems such as sign language recognition.
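
The abstract does not give the model details; as a minimal, generic sketch of the DBN idea, the snippet below scores a sequence of quantized hand-feature symbols against per-gesture hidden Markov models (the simplest form of dynamic Bayesian network) and picks the best-scoring gesture. The model parameters (pi, A, B) and the discrete feature codebook are assumptions, not the paper's.

```python
# Sketch: isolated-gesture recognition by per-gesture HMM likelihoods.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | model) via the scaled forward algorithm (discrete symbols)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict with A, then weight by emission
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c                      # rescale to avoid numerical underflow
    return log_lik

def recognize(obs, models):
    """models: {gesture_name: (pi, A, B)}; returns the best-scoring gesture."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))
```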

Real-time moving object tracking and distance measurement system using stereo camera (스테레오 카메라를 이용한 이동객체의 실시간 추적과 거리 측정 시스템)

  • Lee, Dong-Seok;Lee, Dong-Wook;Kim, Su-Dong;Kim, Tae-June;Yoo, Ji-Sang
    • Journal of Broadcast Engineering / v.14 no.3 / pp.366-377 / 2009
  • In this paper, we implement a real-time system that extracts 3-dimensional coordinates from the left and right images captured by a stereo camera and provides users with a sense of reality through a virtual space operated by those 3-dimensional coordinates. In general, all pixels in the correspondence region are compared for disparity estimation. For real-time processing, however, only the central coordinates of the correspondence region are used in the proposed algorithm. In the implemented system, 3D coordinates are obtained from the depth information derived from the estimated disparity, and the user's hand is set as a region of interest (ROI). After the user's hand is detected as the ROI, the system keeps tracking the hand's movement and generates a virtual space controlled by the hand. Experimental results show that the implemented system estimates the disparity in real time with a mean error of less than 0.68 cm within a distance range of 1.5 m. It also achieved more than 90% accuracy in hand recognition.
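
As a worked illustration of the depth-from-disparity step the abstract relies on (not the authors' code), the sketch below triangulates a 3D point from a single left/right correspondence in a rectified stereo pair; the focal length, baseline, and principal point are assumed calibration values.

```python
# Sketch: 3D point recovery from one stereo correspondence (rectified cameras).
def point_from_disparity(xl, xr, y, focal_px, baseline_m, cx, cy):
    """Triangulate (X, Y, Z) in metres from a left/right pixel correspondence."""
    d = xl - xr                          # disparity in pixels (left minus right)
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = focal_px * baseline_m / d        # depth along the optical axis
    X = (xl - cx) * Z / focal_px         # back-project through the left camera
    Y = (y - cy) * Z / focal_px
    return X, Y, Z
```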

Detection Accuracy Improvement of Hand Region using Kinect (키넥트를 이용한 손 영역 검출의 정확도 개선)

  • Kim, Heeae;Lee, Chang Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.11 / pp.2727-2732 / 2014
  • Recently, research on object tracking and recognition using Microsoft's Kinect has been actively pursued. In this setting, human hand detection and tracking is the most basic technique for human-computer interaction. This paper proposes a method for improving the accuracy of the detected hand region's boundary in cluttered backgrounds. To do this, we combine hand detection results based on skin color with the depth image extracted from the Kinect. The experimental results show that the proposed method increases the accuracy of hand region detection compared with detecting the hand region from the depth image alone. If the proposed method is applied to sign language or gesture recognition systems, it is expected to contribute much to accuracy improvement.
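
A minimal sketch of the general fusion idea, not the paper's exact procedure: intersect a skin-colour mask from the colour image with a near-range mask from the Kinect depth map, then clean the result morphologically. The HSV thresholds and the 400-900 mm depth band are illustrative assumptions.

```python
# Sketch: hand-region mask by combining skin colour and depth evidence.
import cv2
import numpy as np

def hand_mask(bgr, depth_mm, near=400, far=900):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))     # rough skin-tone band
    close = cv2.inRange(depth_mm, near, far)                  # plausible hand distance
    mask = cv2.bitwise_and(skin, close)                       # keep pixels passing both
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # remove speckle noise
```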

Development and Application of Automatic Rainfall Field Tracking Methods for Depth-Area-Duration Analysis (DAD 분석을 위한 자동 강우장 탐색기법의 개발 및 적용)

  • Kim, Yeon Su;Song, Mi Yeon;Lee, Gi Ha;Jung, Kwan Sue
    • Journal of Korea Water Resources Association / v.47 no.4 / pp.357-370 / 2014
  • This study aims to develop a rainfall field tracking method for depth-area-duration (DAD) analysis and to assess whether the proposed tracking methods can properly estimate the maximum average areal rainfall (MAAR) within the study area during a rainfall period. We proposed three different rainfall field tracking algorithms (Box-tracking, Point-tracking, and Advanced point-tracking), applied them to a virtual rainfall field with a 1-hour duration, and compared the DAD curves of each method. In addition, we applied the three tracking methods and a traditional GIS-based tool to the typhoon 'Nari' rainfall event over the Yongdam-Dam watershed and then assessed the applicability of the proposed methods for DAD analysis. The results showed that Box-tracking was much faster than the other two methods in searching for the MAAR but could not describe the spatial pattern of rainfall during its tracking process. On the other hand, both Point-tracking and Advanced point-tracking provided the MAAR while considering the spatial distribution of the rainfall fields. In particular, Advanced point-tracking estimated the MAAR more accurately than Point-tracking in the virtual rainfall field, which has two rainfall centers with similar depths. The proposed automatic rainfall field tracking methods can be used as effective tools for analyzing the DAD relationship and for calculating areal reduction factors.
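
As a rough illustration of the Box-tracking idea (not the authors' implementation), the sketch below slides a fixed-size window over a gridded rainfall field using 2D prefix sums and returns the maximum average areal rainfall for that window size; a uniform grid spacing is assumed.

```python
# Sketch: box-search for the maximum average areal rainfall on a regular grid.
import numpy as np

def max_average_areal_rainfall(field, k):
    """field: 2-D array of accumulated rainfall depth; k: window size in cells."""
    s = np.zeros((field.shape[0] + 1, field.shape[1] + 1))
    s[1:, 1:] = field.cumsum(0).cumsum(1)                     # 2-D prefix sums
    window = s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]  # every k-by-k box sum
    return window.max() / (k * k)                              # best box-average depth
```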

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik;Jung, Ha-Young;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the IEEK Conference / 2005.11a / pp.553-556 / 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems rely on simple methods for hand detection, such as background subtraction under assumed static observation conditions, and such methods are not robust against camera motion, illumination changes, and so on. We therefore propose a statistical method to recognize and detect hand regions in images using their geometrical structure. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. Experimental results show the effectiveness of our method.
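
The abstract does not specify the statistical model; as one common realisation of a statistical skin detector (not necessarily the authors'), the sketch below fits a Gaussian to skin pixels in normalised-rg colour space and labels pixels by Mahalanobis distance. The colour space and the distance threshold are assumptions.

```python
# Sketch: Gaussian skin-colour model and pixel-wise skin mask.
import numpy as np

def fit_skin_model(skin_pixels_rgb):
    """skin_pixels_rgb: (N, 3) array of sampled skin pixels."""
    rgb = skin_pixels_rgb.astype(float) + 1e-6
    rg = rgb[:, :2] / rgb.sum(axis=1, keepdims=True)          # illumination-robust r, g
    return rg.mean(axis=0), np.linalg.inv(np.cov(rg.T))

def skin_mask(image_rgb, mean, inv_cov, thresh=9.0):
    rgb = image_rgb.reshape(-1, 3).astype(float) + 1e-6
    rg = rgb[:, :2] / rgb.sum(axis=1, keepdims=True)
    d = rg - mean
    maha = np.einsum('ij,jk,ik->i', d, inv_cov, d)            # squared Mahalanobis distance
    return (maha < thresh).reshape(image_rgb.shape[:2])
```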


Hand Gesture Recognition for Understanding Conducting Action (지휘행동 이해를 위한 손동작 인식)

  • Je, Hong-Mo;Kim, Ji-Man;Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.263-266 / 2007
  • We introduce a vision-based hand gesture recognition method for understanding musical time and patterns without extra special devices. We suggest a simple and reliable vision-based hand gesture recognition approach with two features. First, the motion-direction code is proposed, which is a quantized code for motion directions. Second, the conducting feature point (CFP), the point where the motion changes suddenly, is also proposed. The proposed hand gesture recognition system extracts the human hand region by segmenting the depth information generated by stereo matching of image sequences. It then follows the motion of the center of gravity (COG) of the extracted hand region and generates gesture features such as the CFP and the direction code. Finally, we obtain the current timing pattern of the beat and tempo of the music being played. Experimental results on the test data set show that the musical time pattern and tempo recognition rate is over 86.42% for motion histogram matching and 79.75% for CFP tracking only.
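
A minimal sketch of the two proposed features under assumed definitions (the paper's exact quantisation is not given here): quantise the frame-to-frame motion of the COG into eight direction codes and mark a conducting feature point wherever the code jumps sharply.

```python
# Sketch: motion-direction codes and CFP candidates from a COG trajectory.
import numpy as np

def direction_codes(cog_track):
    """cog_track: (N, 2) array of COG positions; returns N-1 codes in 0..7."""
    v = np.diff(np.asarray(cog_track, float), axis=0)
    ang = np.arctan2(v[:, 1], v[:, 0])                        # angle in -pi .. pi
    return ((ang + np.pi) / (np.pi / 4)).astype(int) % 8      # 8 quantised directions

def conducting_feature_points(codes, min_jump=2):
    """Indices where the direction code changes by at least `min_jump` (circular)."""
    d = np.abs(np.diff(codes))
    d = np.minimum(d, 8 - d)                                  # wrap-around distance
    return np.nonzero(d >= min_jump)[0] + 1
```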


A New Eye Tracking Method as a Smartphone Interface

  • Lee, Eui Chul;Park, Min Woo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.4 / pp.834-848 / 2013
  • To use a smartphone's functions effectively, many kinds of human-phone interfaces are used, such as touch, voice, and gesture. However, the touch interface, the most important of these, cannot be used by people with hand disabilities or when both hands are busy. Although eye tracking is a superb human-computer interface method, it has not been applied to smartphones because of the small screen size, the frequently changing geometric relation between the user's face and the phone screen, and the low resolution of front-facing cameras. In this paper, a new eye tracking method is proposed to act as a smartphone user interface. To maximize eye image resolution, a zoom lens and three infrared LEDs are adopted. Our proposed method has the following novelties. First, appropriate camera specifications and image resolution are analyzed for a smartphone-based gaze tracking method. Second, facial movement is allowed as long as one eye region is included in the image. Third, the proposed method operates in both landscape and portrait screen modes. Fourth, only two LED reflection positions are used to calculate the gaze position, on the basis of the 2D geometric relation between the reflection rectangle and the screen. Fifth, a prototype mock-up module is built to confirm the feasibility of applying the method to an actual smartphone. Experimental results showed that the gaze estimation error was about 31 pixels at a screen resolution of 480×800 and that the average hit ratio for a 5×4 icon grid was 94.6%.
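
As a hedged illustration of the fourth point (not the paper's calibration), the sketch below maps the pupil centre, expressed relative to the two corneal glints of the screen-mounted LEDs, to screen coordinates by simple proportional interpolation; the scale factors sx and sy stand in for a calibration step and are assumptions.

```python
# Sketch: gaze point from pupil position relative to two LED glints.
def gaze_from_glints(pupil, glint_left, glint_right, screen_w, screen_h, sx, sy):
    """All 2-D points are (x, y) in image pixels; returns a point in screen pixels."""
    gx = (glint_left[0] + glint_right[0]) / 2.0               # glint midpoint
    gy = (glint_left[1] + glint_right[1]) / 2.0
    span = max(glint_right[0] - glint_left[0], 1e-6)          # normalise by glint spacing
    u = 0.5 + sx * (pupil[0] - gx) / span                     # 0..1 across the screen
    v = 0.5 + sy * (pupil[1] - gy) / span
    return u * screen_w, v * screen_h
```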

Evaluation of Bond Performance for AC overlay on PCC Pavement (AC / PCC 복합포장 경계면 재료의 부착 성능 평가)

  • Kim, Dong kyu;Hwang, Hyun sik;Christopher, Jabonero;Ryu, Sung woo;Cho, Yoon ho
    • International Journal of Highway Engineering / v.18 no.5 / pp.1-9 / 2016
  • PURPOSES: This study focuses on the evaluation of interface performance with varying surface texture and tack coat application in an asphalt overlay. METHODS: The evaluation is carried out in two phases: a tracking test and an interface bond strength test. Using an image processing tool, the tracking test evaluates the susceptibility of the tack coat material to excessive tracking during application. Using the pull-off test method, the bond strength test determines the ability of the interface layer to resist failure. RESULTS: The results show that the underseal application yields less tracking than the other applications; however, its bond strength is barely within the minimum acceptable value. On the other hand, RSC-4 produces higher bond strength for all surface types, but its drying time is long, which produces excessive tracking. CONCLUSIONS: While the underseal application may be suitable for a trackless condition, its bond strength is less appealing than that of the other available tack applications. RSC-4 demonstrated high and consistent bond strength performance, but more drying time is required to avoid excessive tracking. Tack coat application and surface type combinations produce varying results and should therefore be considered when selecting suitable tack coat application options.

Object Detection Using Predefined Gesture and Tracking (약속된 제스처를 이용한 객체 인식 및 추적)

  • Bae, Dae-Hee;Yi, Joon-Hwan
    • Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.43-53 / 2012
  • In this paper, a gesture-based user interface is proposed that detects an object by means of a predefined gesture and then tracks the detected object. For object detection, moving objects in a frame are computed by comparing multiple previous frames, and the predefined gesture is used to identify the target object among those moving objects. Any object performing the predefined gesture can then be used for control. We also propose an object tracking algorithm, a density-based mean-shift algorithm, that uses the color distribution of the target object. The proposed tracking algorithm tracks a target object crossing a background of similar color more accurately than existing techniques. Experimental results show that the proposed object detection and tracking algorithms achieve higher detection capability with lower computational complexity.
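
For context, the sketch below shows a standard colour-histogram mean-shift tracker in OpenCV, the baseline that the proposed density-based variant builds on; it is not the authors' algorithm, and the hue-only histogram, bin count, and termination criteria are illustrative choices.

```python
# Sketch: baseline colour-histogram mean-shift tracking with OpenCV.
import cv2

def make_target_hist(frame_bgr, window):                      # window = (x, y, w, h)
    x, y, w, h = window
    roi = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [32], [0, 180])     # hue histogram of the target
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track(frame_bgr, hist, window):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)  # target-colour likelihood map
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.meanShift(back, window, crit)              # shift window to the density mode
    return window
```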

Carbonization Characteristics of Phenolic Resin Deteriorated by Tracking (트래킹에 의해 열화된 페놀수지의 탄화 특성)

  • 송길목;최충석;노영수;곽희로
    • The Transactions of the Korean Institute of Electrical Engineers C / v.53 no.1 / pp.1-7 / 2004
  • This paper describes the carbonization characteristics of a phenolic resin deteriorated by tracking under fire conditions. In the experiment, a droplet of 1% NaCl solution was dripped onto the phenolic resin to cause tracking while voltages of 110 V and 220 V were applied. The experimental results indicate that when an insulator is carbonized by an external fire, its structure is amorphous, whereas if it is carbonized by an electrical cause, its structure is crystalline. To observe the surface change of the phenolic resin, the tracking process was analyzed using SEM. When materials are carbonized under heat or fire, the exothermic peak appears around 500 °C, which is one of the important factors in determining the cause of fires. As a result of DTA, the exothermic peaks of an untreated sample appeared at 333.4 °C and 495.7 °C, while those of a sample deteriorated by tracking appeared at 430.6 °C and 457.6 °C at 110 V, and at 456.2 °C and 619.7 °C at 220 V. It is therefore possible to distinguish a virgin sample from carbonized samples (graphite) by the exothermic peak.