• Title/Summary/Keyword: gaze control

Resolution Estimation Technique in Gaze Tracking System for HCI (HCI를 위한 시선추적 시스템에서 분해능의 추정기법)

  • Kim, Ki-Bong;Choi, Hyun-Ho
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.1
    • /
    • pp.20-27
    • /
    • 2021
  • Eye tracking is a natural user interface (NUI) technology that determines where the user is gazing. It allows users to enter text or control a GUI, and gaze analysis can further be applied to commercial advertising. In an eye-tracking system, the allowable range varies with the image quality and with the user's freedom of movement, so a method for estimating the accuracy of eye tracking in advance is needed. Besides hardware variables, that accuracy is strongly affected by how the eye-tracking algorithm is implemented. Accordingly, this paper proposes a method to estimate how many degrees the gaze direction changes when the pupil center moves by one pixel, by estimating the maximum possible travel distance of the pupil center in the image.
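
The abstract's estimate reduces to simple arithmetic: if the pupil center can travel at most D pixels in the image while the gaze sweeps a known angular range, the resolution is roughly range/D degrees per pixel. A minimal sketch with hypothetical numbers (the paper's actual geometry is not given in the abstract):

```python
# Back-of-the-envelope gaze-resolution estimate (hypothetical numbers,
# not the paper's actual derivation).

def gaze_resolution_deg_per_px(gaze_range_deg: float,
                               max_pupil_travel_px: float) -> float:
    """Degrees of gaze change per one-pixel shift of the pupil center.

    gaze_range_deg      -- total horizontal gaze sweep (e.g. looking from
                           the left screen edge to the right edge)
    max_pupil_travel_px -- maximum pupil-center displacement in the image
                           over that same sweep
    """
    return gaze_range_deg / max_pupil_travel_px

# Example: a 40-degree sweep moves the pupil center at most 80 px
# in the camera image -> 0.5 degrees of gaze per pixel.
print(gaze_resolution_deg_per_px(40.0, 80.0))  # 0.5
```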

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.11
    • /
    • pp.465-472
    • /
    • 2022
  • In this paper, a style synthesis network is trained together with a video synthesis network to generate style-synthesized video, using StyleGAN for the style synthesis. To address the problem that gaze and expression do not transfer stably, 3D face reconstruction is applied so that key attributes of the head, such as pose, gaze, and expression, can be controlled from 3D face information. In addition, by training the Head2head network's discriminators for dynamics, mouth shape, image, and gaze, a stable style-synthesized video with greater plausibility and consistency can be generated. Using the FaceForensics and MetFaces datasets, we confirmed improved performance in converting one video into another while maintaining consistent movement of the target face and generating natural results through video synthesis using the 3D face information of the source video's face.
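
The abstract mentions separate discriminators for dynamics, mouth shape, image, and gaze. As a rough illustration of how such aspect-specific adversarial losses are typically summed for the generator (a hedged sketch, not the authors' actual Head2head training code), consider:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of combining several aspect-specific discriminators,
# in the spirit of the dynamics/mouth/image/gaze discriminators mentioned
# in the abstract. Not the authors' actual Head2head code.

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-level real/fake scores
        )

    def forward(self, x):
        return self.net(x)

# All take 3-channel crops here for simplicity; a real dynamics
# discriminator would take stacked consecutive frames instead.
discriminators = {name: PatchDiscriminator(3)
                  for name in ("image", "mouth", "gaze", "dynamics")}
bce = nn.BCEWithLogitsLoss()

def generator_adv_loss(fake_crops: dict) -> torch.Tensor:
    """Sum of adversarial losses over all aspect discriminators.

    fake_crops maps each aspect name to the generated frames (or crops,
    e.g. the mouth region) that its discriminator judges.
    """
    loss = torch.zeros(())
    for name, crop in fake_crops.items():
        score = discriminators[name](crop)
        loss = loss + bce(score, torch.ones_like(score))  # fool each D
    return loss
```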

A Study on The Screen Cursor Control using Gaze Tracking (응시 위치 추적을 이용한 스크린 커서 제어)

  • Jang, Dong-Hyun;Kim, Chung-Kyue
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2006.11a
    • /
    • pp.113-116
    • /
    • 2006
  • In step with the rapid pace of computer development, demand is growing for more convenient and natural human interfaces that recognize a user's position, gestures, and emotions. Among human-interface studies, gaze tracking determines where the user is currently looking by means of computer vision. Among the various affective computer interfaces, this paper describes a method of indirectly controlling a computer input device through the eyes. Images captured from a web camera are used to detect the movement of the pupils and thereby control the mouse.
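
A minimal sketch of the described pipeline, assuming an OpenCV Haar cascade for eye detection, a dark-blob centroid for the pupil, and PyAutoGUI for the cursor (the paper's actual detectors are not specified in the abstract):

```python
import cv2
import pyautogui

# Illustrative webcam-based cursor control: find the eye, locate the dark
# pupil blob, map its position to the screen. Real systems need
# calibration and smoothing; this is a sketch only.

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in eyes[:1]:
        roi = gray[y:y + h, x:x + w]
        # The pupil is the darkest region; threshold and take the centroid.
        _, thresh = cv2.threshold(roi, 40, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(thresh)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # Naive linear mapping from eye-ROI coordinates to the screen.
            pyautogui.moveTo(screen_w * cx / w, screen_h * cy / h)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```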

Disparity compensation for vergence control of active stereo camera (배경시차 보정을 이용한 스테레오 시각장치의 주시각제어)

  • 박순용;이용범;진성일
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.9
    • /
    • pp.67-76
    • /
    • 1997
  • This paper describes the development of a stereo camera system (KASS-1) and the control of its vergence so as to fixate on a moving object in real time using stereo disparity. The motion energy and stereo disparity of the moving object in the stereo images are used to control the vergence so that the stereo disparity is kept constant. The disparity observed by the rotating stereo camera arises not only from the moving object but also from the background. In this paper, the background disparity error caused by the vergence control of the stereo camera is eliminated by a compensation algorithm, so that the vergence of the stereo camera system can be controlled continuously using the disparity of the moving object alone.
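
The compensation idea can be sketched as a proportional control loop: predict the background disparity induced by the previous rotation, subtract it from the measured disparity, and correct vergence using the remainder only. Focal length, baseline, and gain below are hypothetical:

```python
# Hypothetical proportional vergence-control loop in the spirit of the
# paper: rotate the cameras so the tracked object's disparity stays
# constant, after removing the background disparity caused by the
# rotation itself.

FOCAL_PX = 800.0  # assumed focal length in pixels

def predicted_background_disparity(delta_vergence_rad: float) -> float:
    """Disparity shift (px) that rotating the cameras by delta_vergence
    induces on distant background points (small-angle approximation)."""
    return FOCAL_PX * delta_vergence_rad

def vergence_step(measured_disparity_px: float,
                  last_delta_vergence_rad: float,
                  gain: float = 0.001) -> float:
    """One control step: compensate the background term, then apply a
    proportional correction driven by the object's disparity alone."""
    object_disparity = (measured_disparity_px
                        - predicted_background_disparity(last_delta_vergence_rad))
    return gain * object_disparity  # next vergence increment (rad)

# Example: 12 px of raw disparity, of which the previous 0.005 rad
# rotation explains 4 px -> correct only for the remaining 8 px.
print(vergence_step(12.0, 0.005))  # 0.008 rad
```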

The Characteristics of Driving Behavior and Eye-Movement According to Driving Speed and Navigation-Position while Operation of the Navigation in Driving (주행 중 네비게이션 조작 상황에서 주행속도와 네비게이션 위치에 따른 운전행동 및 안구운동 특성)

  • Hong, Seung-Hee;Kang, Jin-Kyu;Kim, Bo-Seong;Min, Cheol-Kee;Chung, Soon-Cheol;Doi, Shun'ich;Min, Byung-Chan
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.34 no.4
    • /
    • pp.35-41
    • /
    • 2011
  • The purpose of this study was to examine drivers' driving behaviors and eye movements according to driving speed and navigation position while operating a navigation device during driving. For this purpose, two driving conditions (low speed and high speed) and two navigation positions (top and bottom of the center console) were set. Drivers' driving behaviors (speed, speed variation, coefficient of variation, and number of collisions) and eye movements (overall eye pattern, average navigation scanning time, and number of gazes off the road for more than 2 seconds) were measured. As a result, when the navigation device was located at the bottom of the console, difficulties in lateral control appeared in the low-speed condition and difficulties in longitudinal control appeared in the high-speed condition. In addition, this placement lengthened drivers' navigation scanning times, increased the number of gazes off the road for more than 2 seconds, and made the overall eye pattern monotonous. These results can be interpreted as showing that manipulating a navigation device at the bottom of the console reduces attentional capacity due to cognitive load.
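
Two of the reported measures are easy to make concrete; a sketch with illustrative data formats and the 2-second threshold from the study:

```python
from statistics import mean, stdev

# Sketch of the study's driving-behavior and glance measures computed
# from logged data. Sample values and field names are illustrative.

def coefficient_of_variation(speeds: list) -> float:
    """Speed variability normalized by mean speed (the study's CV measure)."""
    return stdev(speeds) / mean(speeds)

def long_offroad_glances(glances: list, threshold_s: float = 2.0) -> int:
    """Count glances away from the road lasting more than `threshold_s`
    seconds. `glances` is a list of (region, duration_s) pairs."""
    return sum(1 for region, dur in glances
               if region != "road" and dur > threshold_s)

speeds = [62.0, 64.5, 61.0, 58.5, 66.0]          # km/h samples
glances = [("road", 5.2), ("navigation", 2.6),
           ("road", 3.1), ("navigation", 1.4)]
print(round(coefficient_of_variation(speeds), 3))  # 0.047
print(long_offroad_glances(glances))               # 1
```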

Enhancing the Awareness of Decentralized Cooperative Mobile Robots through Active Perceptual Anchoring

  • Guirnaldo, Sherwin A.;Watanabe, Keigo;Izumi, Kiyotaka
    • International Journal of Control, Automation, and Systems
    • /
    • v.2 no.4
    • /
    • pp.450-462
    • /
    • 2004
  • In this paper, we describe a system for controlling the perceptual processes of two cooperative mobile robots that addresses the issue of enhancing perceptual awareness. We define awareness here as knowing the location of the other robots in the environment. The proposed system benefits from a formalism called perceptual anchoring. Here, perceptual anchoring enhances the awareness of the system through an anchor-based active gaze control strategy, or active perceptual anchoring, which directs the perceptual effort according to what is important at a given time. By anchoring, we extend the notion of awareness to knowing what the symbols in the control module represent, by connecting them to the objects or features in the environment. We demonstrate the system through a simulation of two nonholonomic mobile robots performing cooperative transportation, carrying a cargo to a target location while two other robots move about. The system is able to focus the perceptual effort efficiently and thus carry the cargo safely to the target position.
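
A toy sketch of anchor-based active gaze control under our own simplifying assumptions (urgency grows linearly with time since the last observation, weighted by task importance; the paper's actual strategy is richer):

```python
import time
from dataclasses import dataclass, field

# Toy sketch of anchor-based active gaze control: each anchor ties a
# symbol (e.g. "robot_B") to a perceived object, and the camera is
# pointed at whichever anchor most needs re-observation. Priorities and
# growth rates here are illustrative, not the paper's formulation.

@dataclass
class Anchor:
    symbol: str                 # what the symbol in the control module names
    importance: float           # task relevance right now
    last_seen: float = field(default_factory=time.monotonic)

    def urgency(self) -> float:
        # Positional uncertainty grows with time since last observation.
        staleness = time.monotonic() - self.last_seen
        return self.importance * staleness

def next_gaze_target(anchors: list) -> Anchor:
    """Pick the anchor whose re-observation is most valuable."""
    return max(anchors, key=Anchor.urgency)

anchors = [Anchor("robot_B", importance=1.0),
           Anchor("moving_obstacle_1", importance=3.0),
           Anchor("cargo", importance=0.5)]
target = next_gaze_target(anchors)
target.last_seen = time.monotonic()  # observed: uncertainty resets
print(target.symbol)  # typically "moving_obstacle_1", given its weight
```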

Research on Content Control Technology using Hand Gestures to Improve the Usability of Holographic Realistic Content

  • Sangwon LEE;Hyun Chang LEE
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.163-168
    • /
    • 2024
  • Technologies considered part of the fourth industrial revolution include holograms, augmented reality, and virtual reality, and as these technologies advance, the scale of the industry is growing quickly as well. Although technology for direct interaction is developing slowly, awareness of floating holograms, regarded as realistic content, is growing along with the industry's scale and pace of technological advancement. In particular, holograms installed in museums and exhibition halls are static content that viewers gaze at passively, and their use in education remains limited, with a low utilization rate. Therefore, to improve usability for viewers of realistic content in venues such as exhibition halls and museums, this study introduces a content control technology that recognizes hands using a machine learning framework. It is anticipated that letting viewers manipulate realistic content on their own will enhance comprehension of the objects presented as realistic content and broaden its applicability in the industrial and educational domains.
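
The abstract names a machine learning framework for hand recognition but not which one; assuming MediaPipe Hands (a common choice, and an assumption on our part), a minimal pinch-gesture trigger might look like:

```python
import cv2
import mediapipe as mp

# Minimal pinch-to-command sketch using MediaPipe Hands. The framework
# choice and the gesture-to-command mapping are our assumptions, not
# necessarily the authors' setup.

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

def is_pinch(landmarks, threshold: float = 0.05) -> bool:
    """Thumb tip (index 4) close to index tip (index 8), normalized coords."""
    t, i = landmarks.landmark[4], landmarks.landmark[8]
    return ((t.x - i.x) ** 2 + (t.y - i.y) ** 2) ** 0.5 < threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        if is_pinch(results.multi_hand_landmarks[0]):
            print("pinch -> e.g. grab/rotate the displayed hologram object")
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```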

Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun;Lee, Jongwoo;Lim, Soon-Bum
    • Journal of Multimedia Information System
    • /
    • v.7 no.4
    • /
    • pp.249-256
    • /
    • 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning by people with disabilities. Pointer control is the most important consideration for people with disabilities when controlling a device and the content of an existing graphical user interface (GUI) environment; however, difficulties in using a pointer arise depending on the type of disability. Although the specifics differ for blind users, low-vision users, and users with upper-limb disabilities, problems with the accuracy of object selection and execution arise in all three cases. We present a multimodal interface pilot solution that enables people with various disability types to control web interactions more easily. First, we classify the types of web interaction on digital devices and derive the essential web interactions among them. Second, to solve the problems that occur when performing these web interactions, we present the technology required for the characteristics of each disability type. Finally, we propose a pilot multimodal interface solution for each disability type. We identified three disability types and developed a solution for each: a remote-control voice interface for blind users, a voice output interface applying a selective focusing technique for low-vision users, and a gaze-tracking and voice-command interface for GUI operation for users with upper-limb disabilities.
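
The classification into essential web interactions suggests a thin dispatch layer between modalities and actions. A toy illustration of our own, not the paper's pilot implementation:

```python
# Toy dispatch layer routing multimodal events (voice command plus gaze
# fixation) to essential web interactions. Entirely illustrative; the
# paper's per-disability-type pilot solutions are more elaborate.

from typing import Callable, Dict

actions: Dict[str, Callable[[str], None]] = {
    "click":  lambda target: print(f"click {target}"),
    "scroll": lambda target: print(f"scroll {target}"),
    "read":   lambda target: print(f"speak aloud: {target}"),  # voice output
}

def handle_event(modality: str, command: str, gaze_target: str) -> None:
    """Voice supplies the verb; gaze supplies the object it applies to."""
    if modality == "voice" and command in actions:
        actions[command](gaze_target)

# Example: the user fixates a link and says "click".
handle_event("voice", "click", "link#news-headline")
```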

Clinical Convergence Study on Attention Processing of Individuals with Social Anxiety Tendency : Focusing on Positive Stimulation in Emotional Context (사회불안성향자의 주의 과정에 관한 임상 융합 연구 : 정서맥락에서 긍정 자극을 중심으로)

  • Park, Ji-Yoon;Yoon, Hyae-Young
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.3
    • /
    • pp.79-90
    • /
    • 2018
  • The purpose of this study was to investigate how individuals with a social anxiety tendency differ from normal controls in attentional processing of positive facial stimuli depending on the presence of emotional context. To do this, we examined attentional processing of positive face stimuli in conditions without and with emotional context. The SADS and CES-D were administered to 800 undergraduate students in D city, and a social anxiety group (SA, n=24) and a normal control group (NC, n=24) were selected. To measure two components of the attention process (attentional engagement and attentional disengagement), the direction and duration of the first gaze were measured through eye-movement tracking. The results show that the SA group disengaged attention from positive face stimuli faster than the NC group in the condition without context. However, when a positive context was presented with the positive face stimuli, there was no difference between the SA and NC groups. This result suggests that a positive background affects emotional processing in social anxiety.
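
The two attention measures, first gaze direction and first gaze time, can be computed from a stream of gaze samples labeled by area of interest (AOI); a sketch with an illustrative sample format:

```python
# Sketch of the study's two eye-movement measures: which AOI the first
# gaze lands on, and how long that first visit lasts. The sample format
# and AOI names are illustrative.

def first_gaze(samples: list) -> tuple:
    """samples: (timestamp_s, aoi) pairs. Returns the first face AOI hit
    (first gaze direction) and the duration of that first visit
    (first gaze time)."""
    faces = {"positive_face", "neutral_face"}
    for idx, (t0, aoi) in enumerate(samples):
        if aoi in faces:
            end = t0
            for t, a in samples[idx:]:
                if a != aoi:
                    break
                end = t
            return aoi, end - t0
    return "none", 0.0

samples = [(0.00, "background"), (0.12, "positive_face"),
           (0.24, "positive_face"), (0.36, "background")]
print(first_gaze(samples))  # ('positive_face', 0.12)
```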

A Study on Visibility Evaluation for Cabin Type Combine (캐빈형 콤바인의 시계성 평가에 관한 연구)

  • Choi, C.H.;Kim, J.D.;Kim, T.H.;Mun, J.H.;Kim, Y.J.
    • Journal of Biosystems Engineering
    • /
    • v.34 no.2
    • /
    • pp.120-126
    • /
    • 2009
  • The purpose of this study was to develop a visibility evaluation system for a cabin-type combine. The human field of view was classified into five levels (perceptive, effective, stable gaze, induced, and auxiliary) depending on the rotation of the head and eyes. The divider, reaper lever, gearshift, dashboard, and conveying part were taken as the major viewpoints of the combine. The visibility of the combine was evaluated quantitatively using these viewpoints and the field-of-view levels. The visibility evaluation system consisted of a laser pointer, stepping motors to control the direction of view, gyro sensors to measure horizontal and vertical angles, and an I/O interface to acquire the signals. Tests were conducted with four postures ('sitting straight', 'sitting with 15° tilt', 'standing straight', and 'standing with 15° tilt'). LSD (least significant difference) multiple comparison tests showed that the visibility of the viewpoints differed significantly as the operator's posture changed. The results showed that standing with a 15° tilt provided the best visibility for operators. The divider was invisible in many postures because it was blocked by the cabin frame. The reaper lever showed good visibility when sitting or standing with a 15° tilt. The gearshift, dashboard, and conveying part had reasonable visibility when sitting with a 15° tilt. However, most viewpoints of the combine fell outside the stable-gaze field-of-view level. Modifications of the combine design will be required to enhance visibility during harvesting operations for farmers' safety and convenience.
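
Classifying each viewpoint into the five field-of-view levels amounts to banding its angular offset from the operator's line of sight. The abstract does not give the boundary angles, so the thresholds in this sketch are hypothetical:

```python
# Sketch of classifying a viewpoint into the five field-of-view levels by
# its angular offset from the line of sight. The boundary angles below
# are hypothetical; the study's actual definitions are not in the abstract.

FOV_LEVELS = [             # (upper bound in degrees, level name)
    (5.0,   "perceptive"),   # fine detail discriminated
    (30.0,  "effective"),    # reachable with eye movement only
    (90.0,  "stable gaze"),  # eye plus comfortable head rotation
    (100.0, "induced"),      # presence noticed, detail poor
    (200.0, "auxiliary"),    # stimulation barely detected
]

def fov_level(offset_deg: float) -> str:
    """Map a viewpoint's angular offset from straight ahead to a level."""
    for bound, name in FOV_LEVELS:
        if offset_deg <= bound:
            return name
    return "invisible"

# Example: a viewpoint sitting 70 degrees off the line of sight.
print(fov_level(70.0))  # stable gaze
```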