• Title/Summary/Keyword: eye-position detection


3D View Controlling by Using Eye Gaze Tracking in First Person Shooting Game (1 인칭 슈팅 게임에서 눈동자 시선 추적에 의한 3차원 화면 조정)

  • Lee, Eui-Chul;Cho, Yong-Joo;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.10
    • /
    • pp.1293-1305
    • /
    • 2005
  • In this paper, we propose a method for controlling the gaze direction of a 3D FPS game character by eye gaze detection from successive images captured by a USB camera attached beneath an HMD. The proposed method is composed of three parts. In the first part, we detect the user's pupil center with a real-time image processing algorithm applied to the successive input images. In the second part, calibration, the geometric relationship is determined between positions on the monitor and the detected eye positions while the user gazes at those monitor positions. In the last part, the final gaze position on the HMD monitor is tracked, and the 3D view in the game is controlled by the gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the game player with that of the game character.

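The calibration step in the abstract above determines a geometric relationship between detected pupil centers and gazed monitor positions. A minimal sketch of one common choice, an affine mapping fitted by least squares over a few calibration points (the paper's actual geometric model may differ; all function names here are illustrative):

```python
import numpy as np

def fit_affine_calibration(pupil_pts, screen_pts):
    """Fit a 2D affine map screen = A @ pupil + b from calibration pairs.

    pupil_pts, screen_pts: (N, 2) point lists collected while the user
    gazes at N known monitor positions (N >= 3).
    """
    pupil = np.asarray(pupil_pts, dtype=float)
    screen = np.asarray(screen_pts, dtype=float)
    # Augment pupil coordinates with a 1 so the bias b is fitted jointly.
    X = np.hstack([pupil, np.ones((len(pupil), 1))])
    # Least-squares solve for the 3x2 parameter matrix [A^T; b^T].
    params, *_ = np.linalg.lstsq(X, screen, rcond=None)
    return params

def pupil_to_screen(params, pupil_xy):
    """Map one detected pupil center to a monitor gaze position."""
    x, y = pupil_xy
    return np.array([x, y, 1.0]) @ params
```

With more calibration points the same least-squares fit absorbs measurement noise, which is why more than the minimum three points are usually collected.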

Automatic face detection using chromaticity space and deformable templates

  • Lee, Kwansu;Lee, Sung-Oh;Lee, Byung-Ju;Park, Gwi-Tae
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.28.1-28
    • /
    • 2001
  • Automatic face recognition (AFR) of individuals is a significant problem in the development of computer vision. An AFR system consists of two major parts, detection of the face region and the recognition process, and the overall performance of AFR is determined by both. In this paper, the face region is acquired using chromaticity space, but this region is a simple rectangle that does not consider shape information. By applying deformable templates to the face region, we can locate the position of the eyes in images. With the face region and the eye location information, a more precise face region can be extracted from the image. Because processing time is critical in a real-time system, we use simplified eye templates and a modified energy function for efficiency. We obtain good detection performance in experiments.

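The chromaticity-space face detection above can be illustrated with a rough sketch: thresholding in normalized rg-chromaticity (which discards intensity) and taking the bounding rectangle of the resulting mask. The thresholds below are illustrative guesses, not the paper's values:

```python
import numpy as np

def skin_mask_rg(image):
    """Very rough skin segmentation in normalized rg-chromaticity space.

    image: (H, W, 3) RGB array. Skin pixels cluster in a compact region
    of the rg plane regardless of brightness.
    """
    rgb = image.astype(float) + 1e-6       # avoid division by zero
    s = rgb.sum(axis=2)
    r = rgb[..., 0] / s
    g = rgb[..., 1] / s
    return (r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.4)

def bounding_box(mask):
    """Rectangle (top, left, bottom, right) around the mask, or None."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
```

The rectangle this produces ignores face shape, which is exactly why the paper refines it with deformable eye templates afterwards.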

Forward Vehicle Detection Algorithm Using Column Detection and Bird's-Eye View Mapping Based on Stereo Vision (스테레오 비전기반의 컬럼 검출과 조감도 맵핑을 이용한 전방 차량 검출 알고리즘)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Kim, Jong-Hwan
    • The KIPS Transactions:PartB
    • /
    • v.18B no.5
    • /
    • pp.255-264
    • /
    • 2011
  • In this paper, we propose a forward vehicle detection algorithm using column detection and bird's-eye view mapping based on stereo vision. The algorithm can detect forward vehicles robustly in real, complex traffic situations. It consists of three steps: road feature-based column detection, bird's-eye view mapping-based obstacle segmentation, and obstacle area remerging with vehicle verification. First, we extract a road feature using the most frequent values in the v-disparity map, and perform column detection using the road feature as a new criterion. The road feature is a more appropriate criterion than the median value because it is not affected by the road traffic situation, for example changes in obstacle size or in the number of obstacles. However, multiple obstacles may still remain within one obstacle area. Thus, we perform bird's-eye view mapping-based obstacle segmentation to divide the obstacles accurately. Segmentation is straightforward because a bird's-eye view mapping can represent the position of each obstacle on a planar plane using the depth map and camera information. Additionally, we perform obstacle area remerging because separately segmented areas may belong to the same obstacle. Finally, we verify whether the obstacles are vehicles using a depth map and a gray image. We conduct experiments that demonstrate the vehicle detection performance by applying our algorithm to real, complex traffic situations.
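The v-disparity map used above accumulates, for each image row, a histogram of the disparity values in that row; the road surface then appears as a dominant slanted line. A minimal sketch assuming an integer-valued disparity image (function names are illustrative):

```python
import numpy as np

def v_disparity(disp, max_disp):
    """Accumulate a v-disparity histogram: one row per image row,
    one column per integer disparity value."""
    h, _ = disp.shape
    vmap = np.zeros((h, max_disp + 1), dtype=int)
    for v in range(h):
        row = disp[v]
        valid = (row >= 0) & (row <= max_disp)
        vmap[v] = np.bincount(row[valid].astype(int),
                              minlength=max_disp + 1)
    return vmap

def road_feature(vmap):
    """Per-row most frequent disparity, the road criterion the
    abstract prefers over the per-row median."""
    return vmap.argmax(axis=1)
```

Because the most frequent disparity per row is dominated by the road surface rather than by obstacles, it stays stable when obstacle size or count changes, which matches the abstract's argument against the median.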

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.494-504
    • /
    • 2004
  • Research on gaze detection has advanced considerably, with many applications. Most previous work relies only on image processing algorithms, so it takes much processing time and has many constraints. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by facial movement is then computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from eye movement. In experiments, we obtain the facial and eye gaze position on a monitor, and the gaze position accuracy between the computed positions and the real ones is about 4.2 cm RMS error.
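The facial-gaze step above takes the normal vector of the plane through the computed 3D feature positions as the gaze direction. A minimal geometric sketch (the monitor is assumed to lie in the plane z = 0 for illustration):

```python
import numpy as np

def face_plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D facial feature points,
    used here as the facial gaze direction."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def intersect_monitor(origin, direction, monitor_z=0.0):
    """Intersect the gaze ray with the monitor plane z = monitor_z."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    t = (monitor_z - origin[2]) / direction[2]
    return origin + t * direction
```

In practice the normal's sign must be chosen so the ray points toward the monitor, and the ray origin would be a reference point on the face rather than an arbitrary feature.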

Position Detection and Gathering Swimming Control of Fish Robot Using Color Detection Algorithm (색상 검출 알고리즘을 활용한 물고기로봇의 위치인식과 군집 유영제어)

  • Akbar, Muhammad;Shin, Kyoo Jae
    • Annual Conference of KIPS
    • /
    • 2016.10a
    • /
    • pp.510-513
    • /
    • 2016
  • Detecting an object in image processing is essential, but it depends on the object itself and on the environment. An object can be detected either by its shape or by its color. Color is essential for pattern recognition and computer vision; it is an attractive feature because of its simplicity, its robustness to scale changes, and its usefulness for detecting the position of an object. Generally, the perceived color of an object depends on the characteristics of the perceiving eye and brain; physically, objects can be said to have color because of the light leaving their surfaces. We conducted experiments in an aquarium fish tank, where fish robots of different colors mimic the natural swimming of fish. Unfortunately, in the underwater medium the colors are modified by attenuation, and it is difficult to identify the color of moving objects. We treat the fish motion as a moving object and find its coordinates at every instant in the aquarium, detecting the position of the fish robot using OpenCV color detection. In this paper, we propose to identify the position of each fish robot by its color and to use the position data to make the fish robots gather at one point in the tank, commanded over serial communication using an RF module. The approach was verified by a performance test detecting the positions of the fish robots.
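The position-by-color step above can be sketched without the OpenCV dependency: threshold pixels near a target color and take the centroid of the matching region. This is a stand-in for the paper's OpenCV color detection (which would typically threshold in HSV space); names and tolerances are illustrative:

```python
import numpy as np

def color_centroid(image, target_rgb, tol=40.0):
    """Pixel centroid (row, col) of the region whose RGB values lie
    within Euclidean distance `tol` of target_rgb; None if no match."""
    diff = image.astype(float) - np.asarray(target_rgb, dtype=float)
    dist = np.linalg.norm(diff, axis=2)
    ys, xs = np.nonzero(dist < tol)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()
```

Per-robot centroids like this would then feed the gathering controller, which steers each robot toward a common target point.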

Deep Learning-based Gaze Direction Vector Estimation Network Integrated with Eye Landmark Localization (딥 러닝 기반의 눈 랜드마크 위치 검출이 통합된 시선 방향 벡터 추정 네트워크)

  • Joo, Heeyoung;Ko, Min-Soo;Song, Hyok
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.748-757
    • /
    • 2021
  • In this paper, we propose a gaze estimation network in which eye landmark position detection and gaze direction vector estimation are integrated into one deep learning network. The proposed network uses the Stacked Hourglass Network as a backbone and is composed of three parts: a landmark detector, a feature map extractor, and a gaze direction estimator. The landmark detector estimates the coordinates of 50 eye landmarks, and the feature map extractor generates a feature map of the eye image for estimating the gaze direction. The gaze direction estimator then estimates the final gaze direction vector by combining the two outputs. The proposed network was trained on virtual synthetic eye images and landmark coordinates generated with the UnityEyes dataset, and the MPIIGaze dataset, consisting of real human eye images, was used for performance evaluation. In the experiments, the gaze estimation error was 3.9, and the estimation speed of the network was 42 FPS (frames per second).
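The abstract reports a gaze estimation error of 3.9 without stating units; the standard MPIIGaze metric is the angular error in degrees between predicted and ground-truth gaze vectors, which can be sketched as follows (assuming that is indeed the metric used):

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Angle in degrees between a predicted and a ground-truth 3D gaze
    direction vector, the usual MPIIGaze evaluation metric."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    # Clip guards against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```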

Gaze Detection System using Real-time Active Vision Camera (실시간 능동 비전 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.12
    • /
    • pp.1228-1238
    • /
    • 2003
  • This paper presents a new and practical computer vision method for detecting the monitor position where the user is looking. In general, the user moves both face and eyes in order to gaze at a certain monitor position. Previous research used only one wide-view camera capturing the user's whole face; in that case the image resolution is too low, and the fine movements of the user's eyes cannot be detected exactly. We therefore implement the gaze detection system with a dual camera system (a wide-view and a narrow-view camera). To locate the user's eye position accurately, the narrow-view camera provides auto focusing and auto panning/tilting based on the 3D facial feature positions detected from the wide-view camera. In addition, we use dual IR-LED illuminators to detect facial features, and especially eye features. As experimental results, we implement a real-time gaze detection system, and the gaze position accuracy between the computed positions and the real ones is about 3.44 cm RMS error.
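The auto panning/tilting described above must aim the narrow-view camera at a 3D feature position reported by the wide-view camera. A minimal sketch of that geometry, assuming a camera at the origin with its optical axis along +z (angle conventions are illustrative, not the paper's):

```python
import math

def pan_tilt_to_target(x, y, z):
    """Pan (rotation about the vertical axis) and tilt angles, in
    degrees, that point the optical axis (+z) at the 3D point (x, y, z)."""
    pan = math.degrees(math.atan2(x, z))
    # Tilt is measured against the distance in the horizontal plane.
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))
    return pan, tilt
```

A real pan/tilt unit would also need the offset between the two cameras and the unit's own rotation center folded into the transform.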

Intelligent Drowsiness Drive Warning System (지능형 졸음 운전 경고 시스템)

  • Joo, Young-Hoon;Kim, Jin-Kyu;Ra, In-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.2
    • /
    • pp.223-229
    • /
    • 2008
  • In this paper, we propose a real-time vision system that judges drowsy driving based on the driver's level of fatigue. The proposed system prevents traffic accidents by warning of drowsiness and carelessness using face-image analysis and a fuzzy logic algorithm. We find the face position and eye areas using a fuzzy skin filter and a virtual face model in order to develop a real-time face detection algorithm, and we measure the eye blinking frequency and eye closure duration from this information. We then propose a method that uses fuzzy logic to estimate the driver's level of fatigue from the measured data and to decide whether the driver is drowsy. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
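The two fatigue measurements named above, blinking frequency and eye closure duration, can be extracted from a per-frame eye-closed signal produced by the eye detector. A minimal sketch (the fuzzy inference that maps these numbers to a fatigue level is not reproduced here):

```python
def blink_stats(eye_closed, fps):
    """Blink count and longest eye-closure duration in seconds, from a
    per-frame eye-closed boolean sequence sampled at `fps` frames/s."""
    blinks, run, longest = 0, 0, 0
    for closed in eye_closed:
        if closed:
            run += 1
            longest = max(longest, run)
        else:
            if run > 0:          # a closure just ended: one blink
                blinks += 1
            run = 0
    if run > 0:                  # closure still open at sequence end
        blinks += 1
    return blinks, longest / fps
```

A long maximum closure relative to normal blink durations is the classic drowsiness cue; blink frequency drifting away from the driver's baseline is the second input the abstract feeds to the fuzzy logic.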

Computer Interface Using Head-Gaze Tracking (응시 위치 추적 기술을 이용한 인터페이스 시스템 개발)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jae-Hee
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.516-519
    • /
    • 1999
  • Gaze detection finds the position on a monitor screen where a user is looking, using image processing and computer vision technology. We developed a computer interface system using this gaze detection technology. The system enables a user to control the computer without using the hands, so it will help the handicapped use a computer and is also useful for people whose hands are busy with another job, especially in factory tasks. For practical use, a command signal such as mouse clicking is necessary, and we used eye winking to give this command signal to the system.

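The wink-as-click idea above needs to distinguish a deliberate wink (one eye held closed) from an ordinary blink (both eyes closed briefly). One simple way, sketched here with illustrative names and an assumed per-eye closed flag per frame, is to require a single-eye closure to persist for several consecutive frames:

```python
def detect_wink_click(left_closed, right_closed, min_frames=5):
    """Return 'left' or 'right' when exactly one eye stays closed for
    `min_frames` consecutive frames (a deliberate wink), else None.
    left_closed/right_closed: per-frame boolean sequences from the
    eye detector."""
    run_left = run_right = 0
    for l, r in zip(left_closed, right_closed):
        # A simultaneous closure of both eyes is a blink, so each
        # counter only advances when the other eye is open.
        run_left = run_left + 1 if (l and not r) else 0
        run_right = run_right + 1 if (r and not l) else 0
        if run_left >= min_frames:
            return "left"
        if run_right >= min_frames:
            return "right"
    return None
```

The `min_frames` threshold trades click latency against false triggers from asymmetric blinks; at a 30 fps camera, 5 frames is roughly a 170 ms hold.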

3D First Person Shooting Game by Using Eye Gaze Tracking (눈동자 시선 추적에 의한 3차원 1인칭 슈팅 게임)

  • Lee, Eui-Chul;Park, Kang-Ryoung
    • The KIPS Transactions:PartB
    • /
    • v.12B no.4 s.100
    • /
    • pp.465-472
    • /
    • 2005
  • In this paper, we propose a method for controlling the gaze direction of a 3D FPS game character by eye gaze detection from successive images captured by a USB camera attached beneath the HMD. The proposed method is composed of three parts. First, we detect the user's pupil center with a real-time image processing algorithm applied to the successive input images. In the second part, calibration, the geometric relationship between gazed positions on the monitor and the detected pupil center positions is determined while the user gazes at the monitor plane. In the last part, the final gaze position on the HMD monitor is tracked, and the 3D view in the game is controlled by the gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the game player with the view direction of the game character.