• Title/Summary/Keyword: 3D 시선 (3D gaze)

Search results: 73

3D Gaze Estimation and Interaction Technique (3차원 시선 추출 및 상호작용 기법)

  • Ki, Jeong-Seok; Jeon, Kyeong-Won; Kim, Sung-Kyu; Sohn, Kwang-Hoon; Kwon, Yong-Moo
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.431-440 / 2006
  • Several studies have addressed 2D gaze tracking on 2D screens for human-computer interaction, but gaze-based interaction with stereo images or stereo content has scarcely been reported. 3D display techniques are now emerging for realistic services, and 3D interaction techniques are all the more necessary in 3D content service environments. This paper addresses gaze-based 3D interaction techniques on stereo displays, such as parallax-barrier or lenticular displays, and presents our research on 3D gaze estimation and gaze-based interaction with stereo displays.

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably and has many applications. Most previous approaches rely solely on image-processing algorithms, so they require long processing times and impose many constraints. In our work, we implement gaze detection as a computer vision system with a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features by 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by those 3D feature positions. In addition, a trained neural network detects the gaze position due to eye movement. Experimentally, we obtain the facial and eye gaze position on a monitor with an RMS error of about 4.2 cm between the computed and actual positions.
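The "3D rotation and translation estimation" step can be illustrated with the standard Kabsch algorithm, which recovers a rigid transform from corresponding 3D feature points. This is a minimal sketch with hypothetical coordinates, not the paper's exact procedure:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Estimate rotation R and translation t such that Q ~ R @ P + t (Kabsch)."""
    # P, Q: (N, 3) arrays of corresponding 3D feature points
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical feature points (eyes, nostrils, lip corners), in cm
P = np.array([[0, 0, 0], [6, 0, 0], [2, -3, 1], [4, -3, 1], [3, -6, 0]], float)
theta = np.deg2rad(10)                       # simulated 10-degree head rotation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0]) # rotated and translated features
R, t = estimate_rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1, 2, 3]))
```

With noisy real detections the same formula gives the least-squares rigid transform rather than an exact recovery.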

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong; Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.23-35 / 2002
  • Gaze detection locates the position on a monitor screen where a user is looking. In our work, we implement it as a computer vision system with a single camera above the monitor; the user rotates and/or translates his face to gaze at different positions on the monitor. To detect the gaze position, we automatically locate the facial region and facial features (both eyes, nostrils, and lip corners) in 2D camera images. From the feature points detected in the initial images, we compute their initial 3D positions by camera calibration and a parameter-estimation algorithm. When the user then rotates and/or translates his face to gaze at a position on the monitor, the moved 3D positions of those features are computed by 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by those moved 3D feature positions. Experimentally, on a 19-inch monitor, the RMS error between the computed and actual gaze positions is about 2.01 inches.
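The final step, intersecting the normal of the facial-feature plane with the monitor plane, can be sketched as follows (hypothetical coordinates; the paper's calibrated geometry is more involved):

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3, monitor_z=0.0):
    """Intersect the facial-plane normal ray with the monitor plane z = monitor_z."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)        # normal of the feature plane
    n = n / np.linalg.norm(n)
    if n[2] > 0:                          # make the normal point toward the monitor (-z)
        n = -n
    origin = (p1 + p2 + p3) / 3.0         # start the gaze ray at the feature centroid
    s = (monitor_z - origin[2]) / n[2]    # ray parameter at the monitor plane
    return origin + s * n

# Hypothetical 3D feature positions (camera coordinates, cm), face ~50 cm away
p = gaze_point_on_monitor([-3, 1, 50], [3, 1, 50], [0, -4, 50])
print(p)  # feature plane parallel to the monitor -> gaze lands straight ahead
```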

A Study on Manipulating Method of 3D Game in HMD Environment by using Eye Tracking (HMD(Head Mounted Display)에서 시선 추적을 통한 3차원 게임 조작 방법 연구)

  • Park, Kang-Ryoung; Lee, Eui-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.49-64 / 2008
  • Recently, much research in human-computer interfaces has aimed at more comfortable input devices based on gaze detection. However, system cost is high due to complicated hardware, and complicated user-calibration procedures make such systems difficult to use. In this paper, we propose a new gaze detection method based on 2D analysis and a simple user calibration. Our method uses a small USB (Universal Serial Bus) camera attached to an HMD (Head-Mounted Display), a hot mirror, and an IR (infrared) illuminator. Because the HMD moves together with the user's head, the system's performance is unaffected by facial movement. We apply the system to a 3D first-person shooting game: the gaze direction of the game character is controlled by our gaze detection method, allowing the player to target and shoot enemy characters, which increases immersion and interest. Experiments showed that the game and the gaze detection system run together at real-time speed on a single desktop computer, with a gaze detection accuracy of 0.88 degrees, indicating that our gaze detection technology can replace the conventional mouse in a 3D first-person shooting game.
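A 2D-analysis gaze mapping with a simple calibration can be sketched as a least-squares fit from pupil-image coordinates to screen coordinates. The bilinear mapping and all numbers below are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def fit_gaze_mapping(pupil_xy, screen_xy):
    """Fit a bilinear mapping pupil coords -> screen coords by least squares."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y])  # bilinear basis
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs

def map_gaze(coeffs, px, py):
    """Map one pupil position to a screen position."""
    return np.array([1.0, px, py, px * py]) @ coeffs

# Hypothetical calibration: the user fixates four known screen targets
pupil = np.array([[10, 10], [30, 10], [10, 25], [30, 25]], float)     # image px
screen = np.array([[0, 0], [1920, 0], [0, 1080], [1920, 1080]], float)  # screen px
c = fit_gaze_mapping(pupil, screen)
print(map_gaze(c, 20, 17.5))  # pupil at the calibration centre -> screen centre
```

With more calibration targets the same least-squares fit averages out detection noise instead of interpolating exactly.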

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.79-88 / 2004
  • Gaze detection uses computer vision to locate the position on a monitor screen where a user is looking. Gaze detection systems have numerous applications, such as man-machine interfaces that help handicapped users operate computers, and view control in three-dimensional simulation programs. In our work, we implement gaze detection as a computer vision system with a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features by 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by those 3D feature positions. In addition, a trained neural network detects the gaze position due to eye movement. Experimentally, we obtain the facial and eye gaze position on a monitor with an RMS error of about 4.8 cm between the computed and actual positions.

Evaluation of Gaze Depth Estimation using a Wearable Binocular Eye tracker and Machine Learning (착용형 양안 시선추적기와 기계학습을 이용한 시선 초점 거리 추정방법 평가)

  • Shin, Choonsung; Lee, Gun; Kim, Youngmin; Hong, Jisoo; Hong, Sung-Hee; Kang, Hoonjong; Lee, Youngho
    • Journal of the Korea Computer Graphics Society / v.24 no.1 / pp.19-26 / 2018
  • In this paper, we propose a gaze depth estimation method based on a binocular eye tracker for virtual reality and augmented reality applications. The method collects a wide range of information about each eye from the eye tracker, such as the pupil center, gaze direction, and inter-pupil distance. It then builds gaze estimation models using a multilayer perceptron that infers gaze depth from the eye-tracking information. We evaluated the method with 13 participants in two ways: performance with individual per-participant models and performance with a generalized model. The proposed method recognized gaze depth with 90.1% accuracy using the 13 individual models and with 89.7% accuracy using the generalized model covering all participants.
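The paper infers depth with a learned model; as a purely geometric point of comparison (my illustration, not the paper's method), the vergence angle between the two gaze rays also yields a depth estimate:

```python
import numpy as np

def gaze_depth_from_vergence(ipd_cm, left_dir, right_dir):
    """Estimate fixation depth from the vergence angle between the two gaze rays."""
    l = np.asarray(left_dir) / np.linalg.norm(left_dir)
    r = np.asarray(right_dir) / np.linalg.norm(right_dir)
    vergence = np.arccos(np.clip(l @ r, -1.0, 1.0))   # angle between the rays
    return ipd_cm / (2.0 * np.tan(vergence / 2.0))    # symmetric-fixation geometry

# Hypothetical case: 6.5 cm inter-pupil distance, fixation 50 cm away on the midline
half = np.arctan(3.25 / 50.0)                          # half the vergence angle
d = gaze_depth_from_vergence(6.5, [np.sin(half), 0, np.cos(half)],
                                  [-np.sin(half), 0, np.cos(half)])
print(round(d, 2))
```

In practice eye-tracker gaze directions are noisy, which is one motivation for learning the depth mapping instead of relying on raw vergence.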

A Study on Gaze Tracking Based on Pupil Movement, Corneal Specular Reflections and Kalman Filter (동공 움직임, 각막 반사광 및 Kalman Filter 기반 시선 추적에 관한 연구)

  • Park, Kang-Ryoung; Ko, You-Jin; Lee, Eui-Chul
    • The KIPS Transactions: Part B / v.16B no.3 / pp.203-214 / 2009
  • In this paper, we compute the user's gaze position simply from the 2D relations between the pupil center and four corneal specular reflections formed by four IR illuminators attached to the corners of a monitor, without modeling the complex 3D relations among the camera, monitor, and pupil coordinates. The objectives are therefore to detect the pupil center and the four corneal specular reflections exactly and to compensate for the error factors that affect gaze accuracy. Our method compensates for the kappa error between the gaze position calculated through the pupil center and the actual gaze vector, using a one-time user calibration performed at system start. We also robustly detect the four corneal specular reflections, which are essential for calculating the gaze position, using a Kalman filter that tolerates abrupt changes of eye movement. Experimental results showed a gaze detection error of about 1.0 degree even under abrupt eye movement.
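The 2D relation between the four corneal reflections (produced by illuminators at the monitor corners) and the screen can be modeled as a homography. The sketch below assumes that mapping and uses hypothetical pixel coordinates, so it is an illustration rather than the paper's exact formulation:

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 H with dst ~ H @ src from four point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    return Vt[-1].reshape(3, 3)            # null-space vector, up to scale

def apply_h(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical glint positions in the eye image (px) and the monitor corners (px)
glints = [(100, 80), (160, 82), (158, 130), (98, 128)]
corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
H = homography(glints, corners)
print(apply_h(H, (130, 105)))  # detected pupil centre mapped onto the screen
```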

Vision based 3D Position and Attitude Determination using Landmarks' Line-of-Sight Measurements (랜드마크 시선각 측정값을 이용한 3D 비전항법해 결정)

  • Kim, Young-Sun; Ji, Hyun-Min; Hwang, Dong-Hwan
    • Proceedings of the KIEE Conference / 2011.07a / pp.1938-1939 / 2011
  • In this paper, we derive navigation equations for computing a 3D vision navigation solution from line-of-sight angle measurements of landmarks and show how the navigation solution is determined. First, we derive the three-dimensional navigation equations from the relationship between line-of-sight angles measured in the camera frame and the navigation frame, and show that at least three landmarks must be observed to compute the navigation solution. The paper also describes in detail the procedure for computing the vehicle's position and attitude using the navigation equations and geometry.
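The position part of such a solution can be sketched as a least-squares problem: each unit line-of-sight vector to a known landmark constrains the vehicle position along that ray. The sketch below assumes the LOS directions are already resolved in the navigation frame and omits attitude determination:

```python
import numpy as np

def position_from_los(landmarks, los_dirs):
    """Least-squares vehicle position from unit LOS vectors to known landmarks."""
    A, b = [], []
    for L, u in zip(landmarks, los_dirs):
        u = np.asarray(u) / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector orthogonal to the LOS
        A.append(P)                      # constraint: P @ (L - p) = 0
        b.append(P @ L)
    A, b = np.vstack(A), np.hstack(b)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Hypothetical landmark positions and true vehicle position (navigation frame)
landmarks = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
p_true = np.array([1.0, 2.0, 3.0])
los = [L - p_true for L in landmarks]     # noise-free LOS directions
p_hat = position_from_los(landmarks, los)
print(p_hat)
```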


Maneuvering Target Tracking With 3D Variable Turn Model and Kinematic Constraint (3D 가변 선회 모델 및 기구학적 구속조건을 사용한 기동표적 추적)

  • Kim, Lamsu; Lee, Dongwoo; Bang, Hyochoong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.11 / pp.881-888 / 2020
  • In this paper, we study the estimation of the states of a target of interest from line-of-sight (LOS) angle measurements; the target's position, velocity, and acceleration are chosen as the states of interest. The LOS measurement is highly non-linear, which makes the target dynamic model hard to implement in a filter. To resolve this, a pseudomeasurement equation is applied to the LOS measurement equation, which allows a 3D variable-turn target dynamic model to be used in the filter. For better performance, a kinematic constraint is also incorporated into the filter model. The filter is the Bias Compensation Pseudomeasurement Filter (BCPMF), known for its robustness to initial conditions, and a Two-Stage Kalman Filter (TSKF) form is adopted to benefit from parallel computation. The resulting TBCPMF 3DVT-KC filter is proposed and its performance is assessed in simulation.
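The pseudomeasurement step effectively turns the non-linear angle measurement into a linear one, after which standard linear Kalman filtering applies. The sketch below is a minimal constant-velocity filter consuming such linearized position measurements; it is my illustration of the principle, far simpler than the BCPMF/TSKF combination in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, q, r = 0.1, 0.01, 0.5
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # linearized position measurement
Q = q * np.eye(4)                                   # process noise covariance
R = r * np.eye(2)                                   # measurement noise covariance

x = np.array([0.0, 0.0, 1.0, 0.5])      # true state: position and velocity
xh, P = np.zeros(4), 10 * np.eye(4)     # filter state and covariance
for _ in range(200):
    x = F @ x                            # true target motion
    z = H @ x + rng.normal(0, np.sqrt(r), 2)
    xh, P = F @ xh, F @ P @ F.T + Q      # predict
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    xh = xh + K @ (z - H @ xh)           # update
    P = (np.eye(4) - K @ H) @ P
print(np.linalg.norm(x[:2] - xh[:2]))    # position error after 200 steps
```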

A Study on Function of the Delineation System by Pattern for Safety Audit on Road Exit Ramp (국도유출부 안전진단을 위한 시선유도시설의 유형별 기능검토)

  • Kum, Ki-Jung; Kim, Hong-Sang; Min, Kyeong-Tae; Yang, Gye-Seung
    • International Journal of Highway Engineering / v.8 no.4 s.30 / pp.1-11 / 2006
  • Road mobility has recently improved through the national road improvement program. Nevertheless, the delineation system, a facility that enhances driver safety, is often installed inconsistently or is absent on road exit ramps, so a functional investigation was judged necessary. This study proposes installation types for the delineation system based on a field study, a review of the applicable legal standards, and consideration of drivers' cognitive behavior. A 3D simulation was built so that the installation types could be compared objectively, with conspicuity and visibility selected as the comparison variables because they characterize drivers' visual cognition behavior. An eye-marker recorder was used to measure gaze frequency quantitatively and objectively, and ANOVA was used to test the significance of differences between installation types. Conspicuity was analyzed for the installation types (safety mark, obstacle sign, warning light, and tubular marker) at the road exit ramp, and gaze frequency was measured as the measure of effectiveness for visibility. The analysis showed a statistically significant difference in visibility between the delineation installation types.
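The significance test between installation types is a one-way ANOVA over gaze-frequency samples; a minimal sketch with hypothetical counts (the study's actual data are not reproduced here):

```python
import numpy as np

def one_way_anova(groups):
    """One-way ANOVA F statistic over k groups of gaze-frequency samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical gaze-frequency counts per drive for four delineation types
safety_mark   = [12, 14, 13, 15]
obstacle_sign = [9, 10, 8, 11]
warning_light = [16, 18, 17, 15]
tubular       = [7, 6, 8, 7]
F, df_b, df_w = one_way_anova([safety_mark, obstacle_sign, warning_light, tubular])
print(F, df_b, df_w)  # compare F against the F(df_b, df_w) critical value
```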
