• Title/Summary/Keyword: eye-position tracking

Search results: 52

Detection of Face Direction by Using Inter-Frame Difference

  • Jang, Bongseog;Bae, Sang-Hyun
    • Journal of Integrative Natural Science
    • /
    • v.9 no.2
    • /
    • pp.155-160
    • /
    • 2016
  • Applying image processing techniques to education, a system has been developed that photographs the learner's face, detects expression and movement from video, and estimates the learner's degree of concentration. For a single learner, such a measuring system estimates the degree of concentration from the direction of the learner's line of sight and the condition of the eyes. For multiple learners, the concentration level of every learner in the classroom must be measured, but using one camera per learner is inefficient. In this paper, the position of the face region is estimated from video of learners in class by means of the inter-frame difference along the direction of motion, and a system is proposed that detects the face direction through face-part detection by template matching. From the inter-frame difference computed on the first image of the video, frontal face detection is performed with the Viola-Jones method. The direction of the motion arising in the face region is then estimated from the displacement, and the face region is tracked. Face parts are detected along the tracked region, and finally the face direction is estimated from the results of face tracking and face-part detection.
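
The pipeline described in this abstract combines inter-frame differencing with Viola-Jones frontal face detection. Below is a minimal sketch of that combination using OpenCV; the video path, motion threshold, and cascade file are illustrative assumptions rather than values from the paper.

```python
import cv2

# Hypothetical input video and parameters (not from the paper).
VIDEO_PATH = "classroom.mp4"
DIFF_THRESHOLD = 25

# Viola-Jones frontal face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(VIDEO_PATH)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Inter-frame difference: moving regions show up as bright pixels.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)

    # Run Viola-Jones detection only on frames with enough motion.
    if cv2.countNonZero(motion_mask) > 500:
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    prev_gray = gray

cap.release()
```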

Processing of syntactic dependency in Korean relative clauses: Evidence from an eye-tracking study (안구이동추적을 통해 살펴본 관계절의 통사처리 과정)

  • Lee, Mi-Seon;Yong, Nam-Seok
    • Korean Journal of Cognitive Science
    • /
    • v.20 no.4
    • /
    • pp.507-533
    • /
    • 2009
  • This paper examines the time course and processing patterns of filler-gap dependencies in Korean relative clauses, using an eye-tracking method. Participants listened to a short story while viewing four pictures of entities mentioned in the story. Each story was followed by an auditorily presented question involving a relative clause (subject relative or dative relative), and participants' eye movements in response to the question were recorded. Results showed that the proportion of looks to the picture corresponding to a filler noun significantly increased at the relative verb affixed with a relativizer, and was largest at the filler, where the fixation duration on the filler picture significantly increased. These results suggest that online resolution of the filler-gap dependency starts only at the relative verb marked with a relativizer and is finally completed at the filler position. Accordingly, they partly support the filler-driven parsing strategy for Korean, as for head-initial languages. In addition, the different patterns of eye movements between subject relatives and dative relatives indicate the role of case markers in parsing Korean sentences.
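
The central measure in this study is the proportion of looks to each picture over time. The following is a minimal sketch of how such a proportion-of-looks curve can be computed from a fixation log; the file name, column names, and 50 ms bin size are assumptions for illustration, not details of the study.

```python
import pandas as pd

# Hypothetical fixation log: one row per eye-tracking sample.
# Assumed columns: trial, time_ms, looked_at ('filler', 'competitor', ...).
samples = pd.read_csv("fixations.csv")

BIN_MS = 50  # assumed analysis bin size
samples["bin"] = (samples["time_ms"] // BIN_MS) * BIN_MS

# Proportion of samples per time bin that fall on the filler picture.
prop_filler = (
    samples.assign(on_filler=samples["looked_at"].eq("filler"))
           .groupby("bin")["on_filler"]
           .mean()
)
print(prop_filler.head())
```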

A Study of Contents Arrangement in Conning Display (선박항법기기 화면의 배치에 관한 연구)

  • Yoon, Hoon-Yong;Kim, Kyung-Hoon
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.33 no.2
    • /
    • pp.154-161
    • /
    • 2010
  • The conning display, located on the ship's bridge, shows various important information such as ship position, ship speed, track data, rate of turn, and thruster rpm, and is one of the components of an Integrated Bridge System (IBS). In this study, a survey of ten officers was conducted to determine the importance and frequency of use of the information shown on the conning display. The results showed that drift speed, ship speed, wind direction and force, rate of turn, sea water depth, ship position, heading, thruster rpm, alarms, and rudder command and angle received high scores, meaning that this information is very important and frequently used during navigation. An optimized arrangement of the conning display contents was suggested based on the importance and frequency of use of the information. An experiment using an eye-tracking system was then conducted to compare performance time and error rate over nine scenarios for the suggested arrangement and three existing displays. The results showed that the suggested arrangement was the best in both performance time and error rate. The scenario concerning wind direction and speed showed faster performance time and a lower error rate than the other scenarios. Subjects' eye movements tended to start the search from the center and to avoid the corners, the so-called 'corner effect.' It is expected that the results of this study will help bridge staff grasp sailing information easily and cope promptly with given situations.
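
The eye-tracking experiment compares performance time and error rate across nine scenarios and four display layouts. A minimal sketch of that comparison is shown below; the file name and column names are assumed for illustration, not taken from the study.

```python
import pandas as pd

# Hypothetical trial log (assumed columns: display, scenario, time_s, error).
trials = pd.read_csv("conning_trials.csv")

# Mean performance time and error rate per display layout and scenario.
summary = (
    trials.groupby(["display", "scenario"])
          .agg(mean_time_s=("time_s", "mean"), error_rate=("error", "mean"))
          .reset_index()
)
print(summary.sort_values("mean_time_s"))
```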

Visual servo control of robots using fuzzy-neural-network (퍼지신경망을 이용한 로보트의 비쥬얼서보제어)

  • 서은택;정진현
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1994.10a
    • /
    • pp.566-571
    • /
    • 1994
  • This paper presents an image-based visual servo control scheme for tracking a workpiece with a hand-eye coordinated robotic system using a fuzzy neural network. The goal is to control the relative position and orientation between the end-effector and a moving workpiece using a single camera mounted on the end-effector of the robot manipulator. We develop a fuzzy neural network that consists of a network-model fuzzy system and supervised learning rules. The fuzzy neural network is applied to approximate the nonlinear mapping that transforms the image features and their changes into the desired camera motion. In addition, a control strategy for real-time relative motion control based on this approximation is presented. Computer simulation results illustrate the effectiveness of the fuzzy neural network method for visual servoing of a robot manipulator.
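
The core idea above is to learn the nonlinear mapping from image-feature errors to camera motion. The sketch below substitutes a plain two-layer perceptron trained by supervised learning for the authors' fuzzy neural network, purely to illustrate learning such a mapping; the feature and motion dimensions and the training data are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 8 image-feature errors in, 6-DOF camera velocity out.
N_IN, N_HID, N_OUT, LR = 8, 32, 6, 1e-2

W1 = rng.normal(0, 0.1, (N_IN, N_HID))
W2 = rng.normal(0, 0.1, (N_HID, N_OUT))

def forward(x):
    h = np.tanh(x @ W1)          # hidden layer
    return h, h @ W2             # predicted camera velocity

# Synthetic supervision: pairs of (feature error, desired camera motion).
# In practice these would come from a teacher controller or recorded motions.
X = rng.normal(size=(1000, N_IN))
Y = rng.normal(size=(1000, N_OUT))

for epoch in range(100):
    h, y_hat = forward(X)
    err = y_hat - Y
    # Backpropagation for the two-layer network (mean-squared-error loss).
    gW2 = h.T @ err / len(X)
    gh = err @ W2.T * (1 - h**2)
    gW1 = X.T @ gh / len(X)
    W2 -= LR * gW2
    W1 -= LR * gW1
```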

The General Analysis of an Active Stereo Vision with Hand-Eye Calibration (핸드-아이 보정과 능동 스테레오 비젼의 일반적 해석)

  • Kim, Jin Dae;Lee, Jae Won;Sin, Chan Bae
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.5
    • /
    • pp.83-83
    • /
    • 2004
  • The analysis of the relative pose (position and orientation) between stereo cameras is very important for determining the solution that provides three-dimensional information about an arbitrary moving target with respect to the robot end-effector. In a free camera model, the rotational parameters act as non-linear factors in obtaining a kinematic solution. In this paper, a general active-stereo solution that gives the three-dimensional pose of a moving object is presented. The focus is to derive a linear equation between the robot end-effector and the active stereo cameras. The equation is derived consistently from vectors in quaternion space, and the camera calibration is also derived in this space. Computer simulations and error-sensitivity results demonstrate the successful operation of the solution. The proposed solution can also be applied to more complex real-time tracking and, being quite general, is applicable in various stereo vision fields.
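
The abstract emphasizes deriving a linear equation in quaternion space. One standard way such a linearization arises (a sketch of the general idea, not necessarily the exact derivation in the paper) is the rotational part of the hand-eye equation $AX = XB$ written with unit quaternions:

$$ q_A \otimes q_X = q_X \otimes q_B . $$

Because quaternion multiplication is linear in $q_X$, this can be rewritten with the left- and right-multiplication matrices $L(\cdot)$ and $R(\cdot)$ as

$$ \bigl(L(q_A) - R(q_B)\bigr)\, q_X = 0 , $$

a homogeneous linear system; stacking several motion pairs gives an over-determined system that can be solved for $q_X$, for example by SVD.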

Implementation to eye motion tracking system using convolutional neural network (Convolutional neural network를 이용한 눈동자 모션인식 시스템 구현)

  • Lee, Seung Jun;Heo, Seung Won;Lee, Hee Bin;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.703-704
    • /
    • 2018
  • An artificial neural network design that traces the pupil for people with Lou Gehrig's disease is introduced. It determines the pupil position required for a communication system. TensorFlow is used to build and train the neural network, and the pupil position is determined through the trained network. A convolutional neural network (CNN) consisting of two convolutional layers and two fully connected layers is implemented for the system.
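
The abstract specifies a TensorFlow CNN with two convolutional layers and two fully connected layers. A minimal Keras sketch with that layer count is shown below; the input size, filter counts, pooling layers, and the number of pupil-position classes are assumptions, not values from the paper.

```python
import tensorflow as tf

# Assumed input: 64x64 grayscale eye image; assumed 9 pupil-position classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    # Two convolution stages, as described in the abstract.
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    # Two fully connected layers.
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(9, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```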

Effect of Sensory Integration Therapy Combined with Eye Tracker on Sensory Processing and Visual Perception of Children with Developmental Disabilities (아이트래커를 병행한 감각통합치료가 발달장애아동의 감각처리 및 시지각에 미치는 영향)

  • Kwon, So-Hyun;Ahn, Si-Nae
    • The Journal of Korean Academy of Sensory Integration
    • /
    • v.21 no.3
    • /
    • pp.39-53
    • /
    • 2023
  • Objective: The purpose was to examine the effect of sensory integration therapy combined with an eye tracker on the sensory processing and visual perception of children with developmental disabilities. Methods: This was a single-subject study with a multiple-baseline design across subjects, and the intervention applied sensory integration therapy combined with an eye tracker. Visual-motor speed and saccadic eye movements were assessed at each session of the baseline and intervention periods. As pre- and post-evaluations, the Sensory Profile, the Korean Developmental Test of Visual Perception, and the Trail Making Test were administered. The results of each session evaluation and of the pre- and post-evaluations were examined for intervention effectiveness through visual analysis and trend-line analysis. Results: In the session-by-session evaluations, the slope of the trend line for visual-motor speed and saccadic eye movement increased sharply for all children during the intervention compared to the baseline. In the pre- and post-evaluations, the sensory processing of movement, body position, and vision changed from 'more than peers' to a level similar to that of peers. In visual perception, all children's visual closure ability increased. In the Trail Making Test, conducted to confirm improvement in visual tracking and visual-motor abilities, all children showed a decrease in performance time from pre-test to post-test. Conclusion: Sensory integration therapy combined with an eye tracker was confirmed to have an effect on the sensory processing and visual perception of children with developmental disabilities. It is expected to play an important clinical role, as it can stimulate children's interest and motivation in line with recent technological improvements and the spread of smart devices.
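
The session-by-session measures above are examined through trend-line analysis, i.e. comparing the slope of a fitted line in the baseline phase with the slope in the intervention phase. A minimal sketch of such a slope comparison is given below; the numbers are placeholders, not data from the study.

```python
import numpy as np

# Placeholder session scores (not real data): baseline phase, then intervention.
baseline = np.array([12, 13, 12, 14])
intervention = np.array([15, 18, 21, 24, 27, 30])

def trend_slope(scores):
    """Slope of the least-squares trend line over the session index."""
    sessions = np.arange(len(scores))
    slope, _intercept = np.polyfit(sessions, scores, 1)
    return slope

print("baseline slope:", trend_slope(baseline))
print("intervention slope:", trend_slope(intervention))
```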

Estimating Location in Real-world of a Observer for Adaptive Parallax Barrier (적응적 패럴랙스 베리어를 위한 사용자 위치 추적 방법)

  • Kang, Seok-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1492-1499
    • /
    • 2019
  • This paper proposes a method for tracking the position of an observer in order to control the viewing zone of an adaptive parallax barrier. The face pose is estimated using a Constrained Local Model based on a shape model and landmarks, enabling robust eye-distance measurement across face poses. The camera's correlation converts the measured distance and horizontal location into centimeters. The pixel pitch of the adaptive parallax barrier is adjusted according to the position of the observer's eyes, and the barrier is shifted to adjust the viewing area. The proposed method tracks the observer in the range of 60 cm to 490 cm, and the error, measurable range, and fps are measured for several camera image resolutions. As a result, the observer's position could be measured within an average absolute error of 3.1642 cm, and the measurable range was about 278 cm at 320×240, about 488 cm at 640×480, and about 493 cm at 1280×960, depending on the image resolution.
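
The conversion above maps a measured inter-eye distance in pixels to an observer position in centimeters. A common way to do this (a sketch of the general principle, not necessarily the paper's exact camera correlation) is the pinhole camera model; the focal length and average inter-pupillary distance below are assumed values.

```python
# Pinhole-model sketch: observer distance grows as the eye span in pixels
# shrinks. Both constants are assumptions, not values from the paper.
FOCAL_LENGTH_PX = 800.0   # assumed camera focal length, in pixels
REAL_IPD_CM = 6.3         # assumed average inter-pupillary distance, in cm

def observer_distance_cm(eye_span_px: float) -> float:
    """Estimate observer distance (cm) from the inter-eye distance in pixels."""
    return FOCAL_LENGTH_PX * REAL_IPD_CM / eye_span_px

def observer_horizontal_cm(eye_center_x_px: float, image_width_px: int,
                           distance_cm: float) -> float:
    """Horizontal offset (cm) of the eye midpoint from the optical axis."""
    offset_px = eye_center_x_px - image_width_px / 2
    return offset_px * distance_cm / FOCAL_LENGTH_PX

d = observer_distance_cm(50.0)                  # eyes 50 px apart in the image
print(d, observer_horizontal_cm(400.0, 640, d))
```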

Mega Irises: Per-Pixel Projection Illumination Compensation for the moving participant in projector-based visual system (Mega Irises: 프로젝터 기반의 영상 시스템상에서 이동하는 체험자를 위한 화소 단위의 스크린 투사 밝기 보정)

  • Jin, Jong-Wook;Wohn, Kwang-Yun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.17 no.4
    • /
    • pp.31-40
    • /
    • 2011
  • Projector-based visual systems are widely used for VR and experiential display applications, but illumination irregularity on the screen surface, caused by the screen material and its light-reflection properties, sometimes degrades the user experience. This phenomenon is particularly troublesome when participants continually move around the system, as in a head-tracked VR system such as a CAVE or a motion-platform experience system. One cause of the illumination irregularity is the projector-screen specular reflection component relative to the participant's eye position, and analyzing it exactly is computationally expensive. Similar to computing a specular lighting term in a GPU programmable shader, our method adjusts every pixel's brightness at runtime, given a 3D model of the screen, to reduce the illumination irregularity. To do so, an angle-based brightness compensation function is defined for the specific screen installation and modified for GPU-friendly computation and memory access. Two aspects are implemented: one is transforming the function's argument from an angular form to a dot product, and the other is a piecewise linear interpolation approximation.
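
The two implementation aspects above are (1) evaluating the angle-based compensation from a dot product instead of an explicit angle, and (2) approximating the compensation curve by piecewise linear interpolation. The numpy sketch below illustrates both ideas on the CPU, whereas the actual system runs in a GPU shader; the sample compensation table is an assumed placeholder.

```python
import numpy as np

# Assumed lookup table: cos(angle) between view direction and screen normal
# -> brightness gain, interpolated piecewise-linearly between samples.
COS_SAMPLES = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
GAIN_SAMPLES = np.array([1.00, 1.05, 1.15, 1.35, 1.60])  # placeholder values

def brightness_gain(view_dirs, normals):
    """Per-pixel gain computed from a dot product (no acos needed)."""
    # view_dirs, normals: (H, W, 3) unit vectors toward the eye / screen normal.
    cos_angle = np.clip(np.sum(view_dirs * normals, axis=-1), 0.0, 1.0)
    return np.interp(cos_angle, COS_SAMPLES, GAIN_SAMPLES)

def compensate(image, view_dirs, normals):
    """Scale each pixel so that perceived brightness is more uniform."""
    gain = brightness_gain(view_dirs, normals)[..., None]
    return np.clip(image * gain, 0.0, 1.0)
```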