• Title/Summary/Keyword: Gaze Recognition


Gaze Direction Estimation Method Using Support Vector Machines (SVMs) (Support Vector Machines을 이용한 시선 방향 추정방법)

  • Liu, Jing;Woo, Kyung-Haeng;Choi, Won-Ho
    • Journal of Institute of Control, Robotics and Systems, v.15 no.4, pp.379-384, 2009
  • A human gaze detection and tracking method is an important requirement for human-machine interfaces (HMI) such as human-serving robots. This paper proposes a novel three-dimensional (3D) human gaze estimation method that combines face recognition, orientation estimation, and Support Vector Machines (SVMs). A total of 2,400 images were used, covering a pan range of $-90^{\circ}$ to $90^{\circ}$ and a tilt range of $-40^{\circ}$ to $70^{\circ}$ at $10^{\circ}$ intervals. A stereo camera was used to obtain the global coordinates of the midpoint between the eyes, and Gabor filter banks with horizontal and vertical orientations at four scales were used to extract facial features. The experimental results show that the error rate of the proposed method is substantially lower than that of Liddell's method.
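
Below is a minimal sketch, not the authors' code, of the kind of pipeline this abstract describes: Gabor filter responses at two orientations and four scales are pooled into a feature vector and fed to an SVM over quantized pan/tilt bins. The kernel size, pooling scheme, and label format are illustrative assumptions.

```python
# Sketch only: Gabor-bank features + SVM over head pan/tilt bins (assumed details).
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(gray_face, scales=(4, 8, 16, 32), thetas=(0.0, np.pi / 2)):
    """Filter a grayscale face crop with horizontal/vertical Gabor kernels at
    four scales and pool the responses into a feature vector."""
    feats = []
    for lambd in scales:              # wavelength plays the role of the scale here
        for theta in thetas:          # two orientations: vertical and horizontal
            kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0)
            response = cv2.filter2D(gray_face, cv2.CV_32F, kernel)
            feats.extend([response.mean(), response.std()])   # simple pooling
    return np.array(feats, dtype=np.float32)

# Hypothetical training data: face crops labelled with quantized (pan, tilt) bins.
# X = np.stack([gabor_features(img) for img in face_crops])
# y = orientation_labels                     # e.g. "pan-30_tilt+10"
# clf = SVC(kernel="rbf", C=10.0).fit(X, y)
# print(clf.predict(gabor_features(test_crop)[None, :]))
```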

Autonomous Wheelchair System Using Gaze Recognition (시선 인식을 이용한 자율 주행 휠체어 시스템)

  • Kim, Tae-Ui;Lee, Sang-Yoon;Kwon, Kyung-Su;Park, Se-Hyun
    • Journal of Korea Society of Industrial Information Systems, v.14 no.4, pp.91-100, 2009
  • In this paper, we propose an autonomous intelligent wheelchair system that recognizes user commands through gaze recognition and avoids obstacles detected by range sensors while driving. The user's commands are recognized by a gaze recognizer that uses the centroid of the pupil and two reflection points, extracted with a camera fitted with an infrared filter and two infrared LEDs, and are used to control the wheelchair through the user interface. The wheelchair detects obstacles with ten ultrasonic sensors and assists the user in avoiding collisions. The proposed system consists of a gaze recognizer, an autonomous driving module, a sensor control board, and a motor control board. The gaze recognizer interprets the user's commands through the user interface, and the motor control board drives the wheelchair according to the recognized commands. Obstacle information from the ultrasonic sensors is passed to the sensor control board and then to the autonomous driving module, which detects the obstacles and sends avoidance commands to the motor control board. The experimental results confirmed that the proposed system improves the efficiency of obstacle avoidance and provides a convenient user interface.
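
A minimal sketch of how a pupil centroid and two infrared reflection points could be mapped onto driving commands, as the abstract outlines; the thresholds, normalization, and command names are assumptions, not the paper's implementation.

```python
# Sketch only: map pupil position relative to the two IR glints to a command.
import numpy as np

def gaze_command(pupil, glint_left, glint_right, thresh=0.25):
    """pupil, glint_*: (x, y) image coordinates from the IR eye camera."""
    glints = np.array([glint_left, glint_right], dtype=float)
    mid = glints.mean(axis=0)                       # reference point between the glints
    scale = np.linalg.norm(glints[1] - glints[0])   # normalize by glint separation
    dx, dy = (np.asarray(pupil, dtype=float) - mid) / scale
    if dx < -thresh:
        return "TURN_LEFT"
    if dx > thresh:
        return "TURN_RIGHT"
    if dy < -thresh:
        return "FORWARD"
    return "STOP"

# print(gaze_command(pupil=(312, 240), glint_left=(300, 250), glint_right=(330, 250)))
```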

Influence of Endorser's Gaze Direction on Consumer's Visual Attention, Attitude and Recognition: Focused on the Eye Movement (광고 모델의 위치와 시선 방향이 소비자의 시각적 주의, 태도 및 재인에 미치는 효과: 안구운동 추적기법을 중심으로)

  • Chung, Hyenyeong;Lee, Ji-Yeon;Nam, Yun-Ju
    • (The) Korean Journal of Advertising, v.29 no.7, pp.29-53, 2018
  • In this study, we investigated the effects of the endorser's position and gaze direction (direct / averted toward the image / averted toward the text) on advertising attitude, purchase intent, and brand recognition using eye tracking. Focusing on printed cosmetics advertisements, in which the role of the endorser is important and the indirect persuasion route is relatively emphasized, we conducted an experiment with 36 participants in their twenties. Consistent with prior studies, our results show that participants attended more, and more quickly, to the specific element the endorser was gazing at, but this was not directly reflected in ad attitude or purchase intent. When the endorser was positioned on the left side, purchase intent was highest in the direct-gaze condition, whereas when the endorser was on the right side, ad attitude was highest in the condition where the endorser gazed at the image. In addition, the brand recognition task that followed the eye-tracking experiment showed higher recognition accuracy only in the condition where the endorser was on the left side looking at the product image. These results demonstrate that the endorser's gaze direction acts as attentional guidance, leading the consumer's attention to a particular region of the printed ad, but that the effect varies with the endorser's position and with the type of information the endorser is gazing at. Therefore, to increase ad attitude and purchase intent, the endorser's position and other elements must be considered together with the gaze direction.

A New Ergonomic Interface System for the Disabled Person (장애인을 위한 새로운 감성 인터페이스 연구)

  • Heo, Hwan;Lee, Ji-Woo;Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea, v.30 no.1, pp.229-235, 2011
  • Objective: To build a new ergonomic interface system based on a camera vision system that helps disabled users in a home environment. Background: The proposed interface system enables disabled users to operate consumer electronics. Method: A wearable device with a near-infrared (NIR) camera and illuminators captures eye images for tracking the gaze position (Heo et al., 2011). A frontal-viewing camera attached to the device recognizes the consumer electronics to be controlled (Heo et al., 2011). The user's eye fatigue is estimated from the eye blink rate, and when the fatigue exceeds a predetermined level, the system automatically switches from the gaze-based interface to manual selection. Results: The experimental results showed that the gaze estimation error of the proposed method was 1.98 degrees, with successful recognition of the target object by the frontal-viewing camera (Heo et al., 2011). Conclusion: We built a new ergonomic interface system based on gaze tracking and object recognition. Application: The proposed system can be used to help disabled users in a home environment.
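
The blink-rate-based mode switch described in the Method could look roughly like the sketch below; the window length, fatigue threshold, and mode names are illustrative assumptions rather than the paper's values.

```python
# Sketch only: switch from gaze selection to manual selection when the blink
# rate over a sliding window exceeds a preset fatigue level.
from collections import deque
import time

class FatigueMonitor:
    def __init__(self, window_s=60.0, fatigue_blinks_per_min=25):
        self.window_s = window_s
        self.threshold = fatigue_blinks_per_min
        self.blink_times = deque()

    def register_blink(self, t=None):
        t = time.time() if t is None else t
        self.blink_times.append(t)
        # Drop blinks that fall outside the sliding window.
        while self.blink_times and t - self.blink_times[0] > self.window_s:
            self.blink_times.popleft()

    def interface_mode(self):
        blinks_per_min = len(self.blink_times) * 60.0 / self.window_s
        return "MANUAL_SELECTION" if blinks_per_min > self.threshold else "GAZE_SELECTION"
```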

Gaze Tracking with Low-cost EOG Measuring Device (저가형 EOG 계측장치를 이용한 시선추적)

  • Jang, Seung-Tae;Lee, Jung-Hwan;Jang, Jae-Young;Chang, Won-Du
    • Journal of the Korea Convergence Society, v.9 no.11, pp.53-60, 2018
  • This paper describes gaze-tracking experiments using a low-cost electrooculogram (EOG) measuring device. The goal of the experiments is to verify whether the low-cost device can support a complex human-computer interaction task such as eye writing. Two experiments were conducted: simple gaze tracking of four-directional eye movements, and eye writing, i.e., drawing letters or shapes in a virtual space. Eye-written alphabets were recorded with two PSL-iEOGs and an Arduino Uno, preprocessed with a wavelet function, and classified by dynamic positional warping. The results show that the expected recognition accuracy of the four-directional task is close to 90% when noise is controlled, and a similar median accuracy (90.00%) was achieved for eye writing when the number of writing patterns was limited to five. In future work, additional algorithms for stabilizing the signal need to be developed.
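
A rough sketch of the classification step: eye-written trajectories matched to letter templates by elastic alignment. The paper uses dynamic positional warping; plain dynamic time warping stands in here, and the template format is an assumption.

```python
# Sketch only: nearest-template classification of eye-written trajectories
# under a DTW distance (stand-in for dynamic positional warping).
import numpy as np

def dtw_distance(a, b):
    """a, b: (N, 2) arrays of (horizontal, vertical) EOG-derived positions."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(trajectory, templates):
    """templates: dict mapping letter -> (N, 2) reference trajectory."""
    return min(templates, key=lambda letter: dtw_distance(trajectory, templates[letter]))
```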

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society, v.8 no.4, pp.266-272, 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The Active Shape Model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in practical applications, because feature positions are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework for emotion recognition that combines ASM with Lucas-Kanade (LK) optical flow, which is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which are a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
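
The tracking stage could be sketched as follows, assuming the landmarks are first obtained by an ASM fit (not shown): pyramidal Lucas-Kanade optical flow propagates them frame to frame, and lost points signal that a re-fit is needed. The OpenCV parameters are illustrative, not the authors' settings.

```python
# Sketch only: propagate facial landmarks between frames with pyramidal LK optical flow.
import cv2
import numpy as np

def track_landmarks(prev_gray, next_gray, landmarks):
    """landmarks: (N, 2) float array of facial feature positions in prev_gray."""
    prev_pts = landmarks.reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None,
                                                     winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    # Caller re-runs the ASM fit when too few points are tracked successfully.
    return next_pts.reshape(-1, 2), ok

# Geometric features computed from the tracked landmarks would then feed a
# k-NN or SVM emotion classifier (e.g. sklearn.svm.SVC), as the abstract states.
```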

Explosion Casting: An Efficient Selection Method for Overlapped Virtual Objects in Immersive Virtual Environments (몰입 가상현실 환경에서 겹쳐진 가상객체들의 효율적인 선택을 위한 펼침 시각화를 통한 객체 선택 방법)

  • Oh, JuYoung;Lee, Jun
    • The Journal of the Korea Contents Association, v.18 no.3, pp.11-18, 2018
  • To interact with a virtual object in an immersive virtual environment, the target object must be selected quickly and accurately. The conventional 3D ray-casting method, which uses the direction of the user's hand or head, lets the user select an object quickly, but its accuracy degrades when the target is occluded by other objects. In this paper, we propose a region-of-interest-based selection method that combines gaze tracking and hand-gesture recognition to select an object from a group of overlapping objects. When the user looks at such a group, the proposed method takes the gaze input and sets a region of interest around it. If the user then performs an activation gesture, the system relocates and visualizes all objects in the region on a virtual active window, and the user selects one of them with a selection gesture. Our experiments verified that users can select objects correctly and accurately with the proposed method.
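
A minimal illustrative sketch of the selection flow, with made-up structures and layout parameters rather than the paper's API: gaze defines a region of interest, and an activation gesture spreads the occluded candidates onto a flat "virtual active window" where a selection gesture picks one.

```python
# Sketch only: gather objects near the gaze hit point, then lay them out on a grid.
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualObject:
    name: str
    position: tuple          # original 3D position (x, y, z)

def objects_in_gaze_roi(objects: List[VirtualObject], gaze_hit, radius=0.3):
    """Return objects whose positions fall inside a sphere around the gaze hit point."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return [o for o in objects if dist(o.position, gaze_hit) <= radius]

def explode_onto_window(candidates: List[VirtualObject], cols=3, spacing=0.2):
    """Lay the occluded candidates out on a flat grid ('virtual active window')."""
    layout = {}
    for i, obj in enumerate(candidates):
        row, col = divmod(i, cols)
        layout[obj.name] = (col * spacing, -row * spacing)   # 2D slot on the window
    return layout
```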

A Study on Face Detection Parameter Measurement Technology Using 3D Image Object Recognition (3D영상 객체인식을 통한 얼굴검출 파라미터 측정기술에 대한 연구)

  • Choi, Byung-Kwan;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information, v.16 no.10, pp.53-62, 2011
  • With the development of high-tech IT convergence, video object recognition has evolved from a specialized technology into one deployed on personal portable devices such as smartphones. 3D face detection and recognition, which recognizes objects through intelligent video recognition, has likewise developed rapidly on top of image recognition technology. In this paper, face recognition technology for detecting the human face is applied through image processing on an IP camera, and a measurement technique for the resulting face detection parameters is proposed. The study shows that: 1) a face-model-based face tracking technique was developed and applied; 2) the developed PC-based algorithm can track the basic parameters of the detected face while measuring the CPU load; and 3) the bilateral (inter-eye) distance and the gaze angle can be tracked in real time, demonstrating the method's effectiveness.
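
As a loose illustration only, the "bilateral distance" mentioned in result 3) could be computed from the two detected eye centers as below; the roll angle of the eye line is used as a stand-in, since the abstract does not specify how the gaze angle is defined.

```python
# Sketch only: inter-eye distance and eye-line angle from two detected eye centers.
import math

def face_parameters(left_eye, right_eye):
    """left_eye, right_eye: (x, y) pixel coordinates from the face detector."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    inter_eye_dist = math.hypot(dx, dy)          # bilateral distance in pixels
    roll_deg = math.degrees(math.atan2(dy, dx))  # in-plane tilt of the eye line
    return inter_eye_dist, roll_deg

# print(face_parameters((220, 240), (300, 238)))
```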

Effective Real-time Gaze Identification Using a Bayesian Statistical Network (베이지안 통계적 방안 네트워크를 이용한 효과적인 실시간 시선 식별)

  • Kim, Sung-Hong;Seok, Gyeong-Hyu
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.11 no.3, pp.331-338, 2016
  • In this paper, we propose a GRNN (Generalized Regression Neural Network) based eye and face recognition and identification system to address the existing problem that facial movements make it difficult to identify the user's gaze. Using a Kalman filter, structural information about the facial features is used to verify that the face is genuine, and the future head location is estimated from the current location information, so that the horizontal and vertical facial elements can be detected relatively quickly by histogram analysis. An infrared illuminator is configured so that the pupil can be detected in real time despite lighting effects, and the tracked pupil is used to extract the feature vector.
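
A minimal sketch of the Kalman prediction/update cycle over the head position that the abstract alludes to, using a constant-velocity state model; the noise parameters and state layout are assumptions, not the paper's values.

```python
# Sketch only: constant-velocity Kalman filter predicting the 2D head position.
import numpy as np

class HeadKalman:
    def __init__(self, dt=1 / 30.0):
        self.x = np.zeros(4)                                  # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                             # process noise (assumed)
        self.R = np.eye(2) * 1.0                              # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                     # predicted head position

    def update(self, measured_xy):
        z = np.asarray(measured_xy, dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```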

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.9C, pp.756-760, 2010
  • This paper proposes a novel facial behavior recognition system for detecting driver fatigue. Facial behavior is expressed through various facial features such as facial expression, head pose, gaze, and wrinkles, but it is difficult to discriminate a specific behavior clearly from these features alone, because human behavior is complicated and the face does not provide enough unambiguous information. The proposed system first detects facial features through eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and represents the extracted features as Action Units (AUs) of the Facial Action Coding System (FACS). Based on the obtained AUs, it infers the probability of each state through a Bayesian network.
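
The inference step could be illustrated with a naive-Bayes style calculation over binary Action Unit observations, as below; the AU list, probabilities, and the single "fatigued" state are made-up placeholders, not the paper's Bayesian network.

```python
# Sketch only: posterior probability of a "fatigued" state given observed AUs.
def posterior_fatigue(observed_aus, prior=0.2):
    # (P(AU present | fatigued), P(AU present | alert)) for a few example AUs.
    likelihood = {
        "AU43_eyes_closed": (0.7, 0.1),
        "AU45_blink":       (0.6, 0.3),
        "head_nod":         (0.5, 0.1),
    }
    p_f, p_a = prior, 1.0 - prior
    for au, (p_given_f, p_given_a) in likelihood.items():
        present = au in observed_aus
        p_f *= p_given_f if present else (1.0 - p_given_f)
        p_a *= p_given_a if present else (1.0 - p_given_a)
    return p_f / (p_f + p_a)

# print(posterior_fatigue({"AU43_eyes_closed", "AU45_blink"}))
```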