• Title/Summary/Keyword: Head tracking

Real-Time Automatic Tracking of Facial Feature (얼굴 특징 실시간 자동 추적)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.6
    • /
    • pp.1182-1187
    • /
    • 2004
  • Robust, real-time, fully automatic tracking of facial features is required for many computer vision and graphics applications. In this paper, we describe a fully automatic system that tracks eyes and eyebrows in real time. The pupils are tracked using the red-eye effect with an infrared-sensitive camera equipped with infrared LEDs. Templates are used to parameterize the facial features. For each new frame, the pupil coordinates are used to extract cropped images of the eyes and eyebrows. The template parameters are recovered by projecting these cropped images onto a PCA basis constructed during a training phase from example images. The system runs at 30 fps and requires no manual initialization or calibration. The system is shown to work well on sequences with considerable head motion and occlusion.
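
A minimal sketch of the per-frame template recovery step described in this abstract, assuming pupil coordinates are already available from the IR camera and that a PCA basis and mean were learned offline; the crop size and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def crop_eye_region(frame, pupil_xy, size=(32, 64)):
    """Crop a fixed-size patch around a detected pupil (illustrative size)."""
    h, w = size
    x, y = int(pupil_xy[0]), int(pupil_xy[1])
    patch = frame[y - h // 2: y + h // 2, x - w // 2: x + w // 2]
    return patch.astype(np.float32).ravel()

def recover_template_params(patch_vec, mean_vec, pca_basis):
    """Recover template parameters by projecting a cropped eye/eyebrow patch
    onto a PCA basis (rows of pca_basis are orthonormal components)."""
    return pca_basis @ (patch_vec - mean_vec)

# Per-frame use (pupil tracking via the red-eye effect is not shown here):
# vec = crop_eye_region(frame, left_pupil_xy)
# params = recover_template_params(vec, mean_vec, pca_basis)
```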

Secure and Robust Clustering for Quantized Target Tracking in Wireless Sensor Networks

  • Mansouri, Majdi;Khoukhi, Lyes;Nounou, Hazem;Nounou, Mohamed
    • Journal of Communications and Networks
    • /
    • v.15 no.2
    • /
    • pp.164-172
    • /
    • 2013
  • We consider the problem of secure and robust clustering for quantized target tracking in wireless sensor networks (WSNs), where the observed system is assumed to evolve according to a probabilistic state-space model. We propose a new method for jointly activating the best group of candidate sensors that participate in data aggregation, detecting the malicious sensors, and estimating the target position. First, we select the appropriate group in order to balance energy dissipation and to provide the required data about the target in the WSN. This selection is also based on the transmission power between a sensor node and a cluster head. Second, we detect the malicious sensor nodes based on the information relevance of their measurements. Then, we estimate the target position using the quantized variational filtering (QVF) algorithm. The selection of the candidate sensor group is based on a multi-criteria function, which is computed using the predicted target position provided by the QVF algorithm, while malicious sensor node detection is based on the Kullback-Leibler distance between the current target position distribution and the predicted sensor observation. The performance of the proposed method is validated by simulation results for target tracking in WSNs.
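
The detection step above compares each sensor's reported data with the observation predicted from the filter's target estimate via the Kullback-Leibler distance. A minimal sketch of that test, assuming both distributions are summarized as univariate Gaussians; the threshold value and the Gaussian assumption are illustrative, not taken from the paper.

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(p || q) for two univariate Gaussians p = N(mu_p, var_p), q = N(mu_q, var_q)."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def flag_malicious(predicted, measurements, threshold=2.0):
    """Flag sensors whose reported measurement diverges too far from the
    observation predicted from the filter's target estimate.

    predicted:    dict sensor_id -> (mean, variance) of the predicted observation
    measurements: dict sensor_id -> (mean, variance) summarizing the reported data
    """
    suspects = []
    for sid, (mu_m, var_m) in measurements.items():
        mu_p, var_p = predicted[sid]
        if kl_gaussian(mu_p, var_p, mu_m, var_m) > threshold:
            suspects.append(sid)
    return suspects
```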

Real-Time Eye Tracking Using IR Stereo Camera for Indoor and Outdoor Environments

  • Lim, Sungsoo;Lee, Daeho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.8
    • /
    • pp.3965-3983
    • /
    • 2017
  • We propose a novel eye tracking method that can estimate 3D world coordinates using an infrared (IR) stereo camera for indoor and outdoor environments. This method first detects dark evidences such as eyes, eyebrows, and mouths by fast multi-level thresholding. Among these evidences, eye-pair evidences are detected by evidential reasoning and geometrical rules. For robust accuracy, two classifiers based on multi-layer perceptrons (MLPs) using gradient local binary patterns (GLBPs) verify whether the detected evidences are real eye pairs. Finally, the 3D world coordinates of the detected eyes are calculated by region-based stereo matching. Compared with other eye detection methods, the proposed method can detect the eyes of people wearing sunglasses owing to the use of the IR spectrum. In particular, when people are in dark environments, such as driving at night, driving in an indoor car park, or passing through a tunnel, human eyes can be robustly detected because we use active IR illuminators. The experimental results show that the proposed method can detect eye pairs with high performance in real time under variable illumination conditions. Therefore, the proposed method can contribute to human-computer interaction (HCI) and intelligent transportation system (ITS) applications such as gaze tracking, windshield head-up displays, and drowsiness detection.
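
The final step above converts matched eye positions into 3D coordinates by stereo matching. A minimal pinhole-model triangulation sketch under the usual rectified-stereo assumption; all calibration values in the example comment are hypothetical.

```python
import numpy as np

def eye_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """Triangulate the 3D position (camera frame) of a matched eye center.

    u, v:      pixel coordinates of the eye in the left IR image
    disparity: horizontal pixel offset between the left and right matches
    fx, fy:    focal lengths in pixels; cx, cy: principal point
    baseline:  distance between the two IR cameras (same unit as the output)
    """
    z = fx * baseline / disparity          # depth from disparity
    x = (u - cx) * z / fx                  # lateral offset
    y = (v - cy) * z / fy                  # vertical offset
    return np.array([x, y, z])

# Example with made-up calibration values (baseline in metres):
# eye_to_3d(412.0, 238.0, disparity=140.0, fx=700.0, fy=700.0,
#           cx=320.0, cy=240.0, baseline=0.12)  # -> about [0.08, -0.002, 0.60]
```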

A Precise Tracking System for Dynamic Object using IR sensor for Spatial Augmented Reality (공간증강현실 구현을 위한 적외선 센서 기반 동적 물체 정밀 추적 시스템)

  • Oh, JiSoo;Park, Jinho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.115-122
    • /
    • 2017
  • As the era of the fourth industrial revolution begins, augmented reality is showing great potential throughout society. However, current augmented reality systems such as head-mounted display and hand-held display systems suffer from problems such as fatigue and nausea, and thus spatial augmented reality, a projector-based augmented reality technology, is attracting attention. Spatial augmented reality requires precise tracking of the dynamic objects onto which virtual images are projected in order to increase the realism of the augmented content and induce the user's immersion. The infrared sensor-based precision tracking algorithm developed in this paper demonstrates robust tracking performance with an average error rate of less than 1.5%. Technically, it opens the way toward advanced augmented reality capabilities such as tracking arbitrary objects; socially, its easy-to-use tracking algorithm allows non-specialists such as designers, students, and children to easily create and enjoy their own augmented reality content.
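
Projector-based (spatial) AR ultimately has to map the tracked object position into projector pixels so the virtual image lands on the object. A minimal sketch of that mapping with a pre-calibrated camera-to-projector homography, assuming a roughly planar projection surface; the calibration step and the IR marker detection are outside this snippet, and the variable names are ours rather than the paper's.

```python
import numpy as np
import cv2

# H_cam2proj: 3x3 homography from the camera image plane to the projector image
# plane, obtained offline (e.g. by projecting and detecting a calibration pattern).
H_cam2proj = np.eye(3, dtype=np.float64)  # placeholder; use your calibration result

def camera_to_projector(pt_cam, H=H_cam2proj):
    """Map a tracked object position (camera pixels) to projector pixels."""
    src = np.array([[pt_cam]], dtype=np.float64)   # shape (1, 1, 2)
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])

# Each frame: detect the IR marker in the camera image, map it with
# camera_to_projector(marker_xy), and draw the virtual content there.
```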

Glint Reconstruction Algorithm Using Homography in Gaze Tracking System (시선 추적 시스템에서의 호모그래피를 이용한 글린트 복원 알고리즘)

  • Ko, Eun-Ji;Kim, Myoung-Jun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.10
    • /
    • pp.2417-2426
    • /
    • 2014
  • A remote gaze tracking system calculates gaze from captured images in which infrared LEDs are reflected on the cornea. A glint is the corneal reflection point of an infrared LED. Recent remote gaze tracking systems use a number of IR LEDs to make the system less sensitive to head movement and to eliminate the calibration procedure. However, in some cases some of the glints cannot be detected, and the gaze then cannot be calculated. This study examines the patterns of glints that are difficult to detect in a remote gaze tracking system. We then propose an algorithm that reconstructs the positions of the missing glints from the other detected glints. Based on this algorithm, we increased the number of valid image frames in gaze tracking experiments and reduced gaze tracking errors by correcting glint distortion in the reconstruction phase.
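
A minimal sketch of the homography-based reconstruction idea in this abstract: fit a homography from the planar IR-LED arrangement to the glints that were detected, then map the undetected LEDs through it. At least four detected glints are required, and the data structures here are illustrative rather than the authors' implementation.

```python
import numpy as np
import cv2

def reconstruct_missing_glints(led_layout, detected):
    """Estimate image positions of undetected glints via a homography.

    led_layout: dict led_id -> (x, y) position in the planar LED arrangement
    detected:   dict led_id -> (u, v) glint position found in the eye image
    Requires at least 4 detected glints to fit the homography.
    """
    ids = list(detected.keys())
    src = np.float32([led_layout[i] for i in ids]).reshape(-1, 1, 2)
    dst = np.float32([detected[i] for i in ids]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC)

    missing = [i for i in led_layout if i not in detected]
    if not missing:
        return {}
    pts = np.float32([led_layout[i] for i in missing]).reshape(-1, 1, 2)
    recon = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return dict(zip(missing, map(tuple, recon)))
```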

Human Spatial Cognition Using Visual and Auditory Stimulation

  • Yu, Mi;Piao, Yong-Jun;Kim, Yong-Yook;Kwon, Tae-Kyu;Hong, Chul-Un;Kim, Nam-Gyun
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.7 no.2
    • /
    • pp.41-45
    • /
    • 2006
  • This paper deals with human spatial cognition using visual and auditory stimulation. More specifically, this investigation observes the relationship between the head and the eye motor systems in localizing the direction of a visual target in space and describes the role of the right-side versus the left-side pinna. In the visual stimulation experiment, nineteen red LEDs (light-emitting diodes, brightness $210\;cd/m^2$) arrayed in the horizontal plane of the surrounding panel are used, with the LEDs located 10 degrees apart from each other. Physiological parameters such as EOG (electro-oculography), head movement, and their synergic control are measured by the BIOPAC system and 3SPACE FASTRAK. In the auditory stimulation experiment, the function of one pinna was intentionally distorted by inserting a short tube into the ear canal, and the localization error caused by right- and left-side pinna distortion was investigated. Since a laser pointer showed much less error (0.5%) in localizing the target position than the commonly used FASTRAK (30%), the laser pointer was used for the pointing task. It was found that harmonic components are not essential for auditory target localization; however, non-harmonic nearby frequency components were found to be more important in localizing the direction of a sound. We found that the right pinna carries out one of the most important functions in localizing target direction and that a pure tone with only one frequency component is difficult to localize. It was also found that the latency time is shorter in self-moved tracking (SMT) than in eye-alone tracking (EAT) and eye-hand tracking (EHT). These results can be used in further studies on the characterization of human spatial cognition.
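
With the target LEDs arranged every 10 degrees in the horizontal plane, the localization error reported above reduces to an angular difference between the pointed direction and the target azimuth. A small illustrative helper for that computation, not code from the study:

```python
import numpy as np

def angular_error_deg(pointed_vec, target_angle_deg):
    """Angular error (degrees) between a pointed direction and a target LED.

    pointed_vec:      (x, y) direction in the horizontal plane, e.g. derived
                      from the laser-pointer spot relative to the subject's head
    target_angle_deg: azimuth of the target LED (LEDs every 10 degrees here)
    """
    pointed_deg = np.degrees(np.arctan2(pointed_vec[1], pointed_vec[0]))
    err = (pointed_deg - target_angle_deg + 180.0) % 360.0 - 180.0
    return abs(err)

# e.g. target at +30 degrees, subject points along (0.82, 0.57):
# angular_error_deg((0.82, 0.57), 30.0)  # ~ 4.8 degrees
```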

Head Detection based on Foreground Pixel Histogram Analysis (전경픽셀 히스토그램 분석 기반의 머리영역 검출 기법)

  • Choi, Yoo-Joo;Son, Hyang-Kyoung;Park, Jung-Min;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.11
    • /
    • pp.179-186
    • /
    • 2009
  • In this paper, we propose a head detection method based on vertical and horizontal pixel histogram analysis in order to overcome the drawbacks of previous head detection approaches that use Haar-like feature-based face detection. In the proposed method, we create vertical and horizontal foreground pixel histogram images from the background subtraction image, which represent the number of foreground pixels at each vertical or horizontal position. We then extract feature points of the head region by applying the Harris corner detection method to the foreground pixel histogram images and by analyzing the corner points. The proposed method shows robust head detection results even for face images in which the forehead is covered by hair, or for back-view images, in which previous approaches cannot detect the head region.
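
A minimal sketch of the histogram-image construction described above: count foreground pixels per column and per row of the background-subtraction mask, render the counts as a binary histogram image, and run Harris corner detection on it. The parameter values are illustrative, and the final corner analysis that yields the head region is omitted.

```python
import numpy as np
import cv2

def head_feature_points(fg_mask):
    """Build foreground-pixel histogram images from a background-subtraction
    mask and locate corner features on them.

    fg_mask: uint8 binary mask (255 = foreground person, 0 = background)
    """
    fg = (fg_mask > 0).astype(np.uint8)
    col_hist = fg.sum(axis=0)            # foreground count per column
    row_hist = fg.sum(axis=1)            # foreground count per row

    # Render the column histogram as a binary image (bars growing upward).
    h, w = fg.shape
    col_img = np.zeros((h, w), dtype=np.uint8)
    for x, c in enumerate(col_hist):
        col_img[h - int(c):, x] = 255

    corners = cv2.cornerHarris(np.float32(col_img), blockSize=5, ksize=3, k=0.04)
    ys, xs = np.where(corners > 0.01 * corners.max())
    return list(zip(xs, ys)), row_hist
```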

Determination of radius of edge round cut of loading head for deformation strength test (변형강도 시험용 하중봉의 원형절삭반경 선정연구)

  • Park, Tae-W.;Doh, Young-S.;Kim, Kwang-W.
    • International Journal of Highway Engineering
    • /
    • v.10 no.2
    • /
    • pp.183-191
    • /
    • 2008
  • This study evaluated the influence of the loading head dimensions on the deformation strength ($S_D$) characteristics of asphalt mixtures. The Kim test and the wheel tracking (WT) test were conducted to evaluate $S_D$ characteristics in relation to WT results for various mixtures. The $S_D$ values and their coefficients of variation for r = 10 mm were smaller than those for r = 10.5 mm. It was also found that $S_D$ values obtained using the r = 10 mm loading head showed high correlations with the rut parameters of the WT test. The analysis of variance indicated that the aggregate size and the radius (r) of the round cut were statistically significant variables for $S_D$ at the ${\alpha}=0.05$ level. However, the interaction of r and aggregate size showed no significance within the $10{\sim}19mm$ aggregate size range at the same level. Therefore, it was concluded that a diameter (D) of 40 mm and a bottom edge radius (r) of 10 mm are suitable dimensions of the loading head for the deformation strength test.
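
A minimal sketch of the kind of two-way analysis of variance referred to above (round-cut radius r and aggregate size as factors, $S_D$ as the response), using the statsmodels formula API; the data values are entirely hypothetical, not results from the paper.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical S_D measurements by round-cut radius r (mm) and aggregate size (mm).
df = pd.DataFrame({
    "SD":  [3.1, 3.0, 3.3, 3.2, 2.8, 2.9, 3.0, 2.7],
    "r":   ["10", "10", "10.5", "10.5", "10", "10", "10.5", "10.5"],
    "agg": ["13", "19", "13", "19", "13", "19", "13", "19"],
})

model = ols("SD ~ C(r) * C(agg)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)   # main effects and the r x agg interaction
print(table)                              # compare each p-value against alpha = 0.05
```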


Localizing Head and Shoulder Line Using Statistical Learning (통계학적 학습을 이용한 머리와 어깨선의 위치 찾기)

  • Kwon, Mu-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.2C
    • /
    • pp.141-149
    • /
    • 2007
  • Associating the shoulder line with the head location of the human body is useful in verifying, localizing, and tracking persons in an image. Since the head line and the shoulder line, which we call the ${\Omega}$-shape, move together in a consistent way within a limited range of deformation, we can build a statistical shape model using the Active Shape Model (ASM). However, when the conventional ASM is applied to ${\Omega}$-shape fitting, it is very sensitive to background edges and clutter because it relies only on the local edge or gradient. Even though appearance is a good alternative feature for matching the target object to the image, it is difficult to learn the appearance of the ${\Omega}$-shape because of the significant differences in people's skin, hair, and clothes, and because the appearance does not remain the same throughout the entire video. Therefore, instead of learning appearance or updating it as it changes, we model a discriminative appearance in which each pixel is classified into head, torso, and background classes, and we update the classifier to obtain the appropriate discriminative appearance in the current frame. Accordingly, we make use of two features in fitting the ${\Omega}$-shape: the edge gradient, which is used for localization, and the discriminative appearance, which contributes to the stability of the tracker. The simulation results show that the proposed method is very robust to pose change, occlusion, and illumination change in tracking the head and shoulder line of people. Another advantage is that the proposed method operates in real time.
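
A minimal sketch of the discriminative-appearance idea above: per-pixel classification into head, torso, and background from quantized color histograms, with the class models updated online as the appearance changes. The quantization, smoothing, and update rate are illustrative choices, not the authors'.

```python
import numpy as np

BINS = 8  # quantize each RGB channel into 8 bins (illustrative)

def quantize(img):
    """Map an HxWx3 uint8 image to flat color-histogram bin indices."""
    q = (img // (256 // BINS)).astype(np.int32)
    return q[..., 0] * BINS * BINS + q[..., 1] * BINS + q[..., 2]

class DiscriminativeAppearance:
    """Per-pixel head/torso/background classifier built from color histograms
    and updated online each frame (a sketch, not the paper's code)."""

    def __init__(self, n_classes=3, alpha=0.1):
        self.hist = np.full((n_classes, BINS ** 3), 1.0)  # smoothed counts
        self.alpha = alpha                                # online update rate

    def classify(self, img):
        idx = quantize(img)
        probs = self.hist / self.hist.sum(axis=1, keepdims=True)
        return np.argmax(probs[:, idx], axis=0)           # HxW label map

    def update(self, img, labels):
        idx = quantize(img)
        for c in range(self.hist.shape[0]):
            counts = np.bincount(idx[labels == c], minlength=BINS ** 3)
            self.hist[c] = (1 - self.alpha) * self.hist[c] + self.alpha * (counts + 1.0)
```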

Implementation of Virtual Reality Engine Using Patriot Tracking Device (Patriot Tracking Device를 이용한 가상현실 엔진 구현)

  • Kim Eun-Ju;Lee Yong-Woog;Song Chang-Geun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2006.05a
    • /
    • pp.143-146
    • /
    • 2006
  • This study designs and implements a low-cost virtual reality game engine that can be installed on a personal PC. Implementing a virtual reality engine requires the main input/output devices: a tracker, an HMD (Head Mounted Display), a joystick, and a mouse. We designed input/output classes to interface with the virtual reality engine, attached a mouse and a joystick as input devices and an HMD as the output device, and implemented the tracker using Polhemus's commercial Patriot tracker.
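
A minimal sketch, in Python for consistency with the other sketches (the original engine would likely be written in C++), of the kind of input/output class design described above: an abstract tracker interface the engine polls each frame, with a placeholder wrapper for the Polhemus Patriot whose actual serial protocol is not reproduced here.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float
    pitch: float
    roll: float

class TrackerDevice(ABC):
    """Abstract input-device class that the engine polls once per frame."""
    @abstractmethod
    def read_pose(self) -> Pose: ...

class PatriotTracker(TrackerDevice):
    """Placeholder wrapper; the real device is read over its own serial/USB
    protocol, which is not reproduced here."""
    def __init__(self, port: str = "COM1"):
        self.port = port  # hypothetical connection parameter

    def read_pose(self) -> Pose:
        # TODO: replace with an actual read from the Patriot tracker.
        return Pose(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

# Engine loop sketch: pose = tracker.read_pose(); update the HMD camera from it.
```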
