• Title/Summary/Keyword: Gaze recognition


Difference in visual attention during the assessment of facial attractiveness and trustworthiness (얼굴 매력도와 신뢰성 평가에서 시각적 주의의 차이)

  • Sung, Young-Shin;Cho, Kyung-Jin;Kim, Do-Yeon;Kim, Hack-Jin
    • Science of Emotion and Sensibility / v.13 no.3 / pp.533-540 / 2010
  • This study was designed to examine the difference in visual attention between evaluations of facial attractiveness and facial trustworthiness, which may be the two most fundamental social evaluations for forming first impressions across various types of social interaction. In Study 1, participants evaluated the attractiveness and trustworthiness of 40 novel faces while their gaze directions were recorded with an eye-tracker. The analysis revealed that participants spent significantly longer gaze fixation time on certain facial features, such as the eyes and nose, during the evaluation of facial trustworthiness than during the evaluation of facial attractiveness. In Study 2, participants performed the same face evaluation tasks, except that a word was briefly displayed on a particular facial feature in each trial; the trials were then followed by unexpected recall tests of the previously viewed words. The analysis demonstrated that the recognition rate of words that had been presented on the nose was significantly higher for the facial trustworthiness task than for the facial attractiveness task. These findings suggest that the evaluation of facial trustworthiness may be distinguished from that of facial attractiveness in terms of the allocation of attentional resources.
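The per-feature fixation analysis described in this abstract can be sketched in a few lines: eye-tracker fixations are assigned to areas of interest (AOIs) such as the eyes and nose, and fixation durations are summed per region. The AOI boxes and fixation samples below are illustrative, not data from the study.

```python
def fixation_time_per_aoi(fixations, aois):
    """Sum fixation durations whose (x, y) position falls inside each AOI box.

    fixations: list of (x, y, duration_ms) tuples from an eye-tracker
    aois: dict mapping AOI name -> (x_min, y_min, x_max, y_max)
    """
    totals = {name: 0 for name in aois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break  # assign each fixation to at most one AOI
    return totals

# Hypothetical AOI boxes for a normalized face image and a few sample fixations.
aois = {"eyes": (20, 30, 80, 50), "nose": (40, 50, 60, 75), "mouth": (35, 75, 65, 90)}
fixations = [(50, 40, 300), (50, 60, 250), (10, 10, 100), (45, 80, 150)]
print(fixation_time_per_aoi(fixations, aois))  # {'eyes': 300, 'nose': 250, 'mouth': 150}
```

Comparing these per-AOI totals between the two evaluation tasks is the quantity the study's fixation-time analysis rests on.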

Development of a Web-based Presentation Attitude Correction Program Centered on Analyzing Facial Features of Videos through Coordinate Calculation (좌표계산을 통해 동영상의 안면 특징점 분석을 중심으로 한 웹 기반 발표 태도 교정 프로그램 개발)

  • Kwon, Kihyeon;An, Suho;Park, Chan Jung
    • The Journal of the Korea Contents Association / v.22 no.2 / pp.10-21 / 2022
  • Apart from observation by colleagues or professors, there are few automated methods for improving formal presentation skills, such as those needed for job interviews or for presenting project results at a company. Previous studies have reported that a speaker's stable speech and gaze behavior affect delivery during a presentation, and that appropriate feedback on one's presentation improves the presenter's ability to present. In this paper, considering these positive effects of correction, we developed a program that intelligently corrects the poor presentation habits and attitudes of college students through facial analysis of videos, and we analyzed the proposed program's performance. The program was developed as a web-based system that checks the use of redundant words, performs facial recognition, and transcribes the presentation content into text. To this end, an artificial intelligence classification model was developed; after extracting the video object, facial feature points were recognized based on their coordinates. Then, using 4,000 facial data samples, the performance of the proposed algorithm was compared with facial recognition using a Teachable Machine. The program can help presenters by correcting their presentation attitude.
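The coordinate-based feature-point analysis this program relies on can be illustrated with a small sketch: head roll is estimated from the angle of the line joining the two eye landmarks, and a tilt beyond a threshold is flagged as a posture to correct. The landmark positions and the 10-degree threshold are assumptions for illustration, not values from the paper.

```python
import math

def head_roll_deg(left_eye, right_eye):
    """Head roll estimated from the angle of the inter-eye line, in degrees."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def attitude_flag(left_eye, right_eye, max_roll_deg=10.0):
    """Return True when the head is tilted beyond the allowed roll (hypothetical rule)."""
    return abs(head_roll_deg(left_eye, right_eye)) > max_roll_deg

# Landmark coordinates in image pixels (made up for the example).
print(head_roll_deg((100, 200), (180, 200)))  # level eyes -> 0.0
print(attitude_flag((100, 200), (180, 230)))  # tilted head -> True
```

A real system would obtain the eye coordinates from a face landmark detector per video frame and accumulate such flags over the presentation.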

Comparative Performance Evaluations of Eye Detection algorithm (눈 검출 알고리즘에 대한 성능 비교 연구)

  • Gwon, Su-Yeong;Cho, Chul-Woo;Lee, Won-Oh;Lee, Hyeon-Chang;Park, Kang-Ryoung;Lee, Hee-Kyung;Cha, Ji-Hun
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.722-730 / 2012
  • Recently, eye image information has been widely used for iris recognition and gaze detection in biometrics and human-computer interaction. As long-distance camera-based systems become more common for users' convenience, noise such as eyebrow, forehead, and skin areas, which can degrade the accuracy of eye detection, is included in the captured image. Fast processing speed is also required in such systems, in addition to high eye-detection accuracy. We therefore compared the most widely used eye detection algorithms: the AdaBoost eye detection algorithm, adaptive template matching combined with AdaBoost, CAMShift combined with AdaBoost, and a rapid eye detection method. These methods were compared, in terms of accuracy and processing speed, on images including lighting changes, unaided eyes, and cases where subjects wore contact lenses or eyeglasses.
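A comparison like the one in this paper needs a harness that scores each detector on both accuracy (here, intersection-over-union against ground-truth eye boxes) and mean per-image processing time. The sketch below uses a placeholder detector; real candidates such as the AdaBoost-based methods would be plugged in the same way.

```python
import time

def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def benchmark(detector, dataset, iou_thresh=0.5):
    """Return (accuracy, mean_ms_per_image) over (image, ground_truth_box) pairs."""
    hits, start = 0, time.perf_counter()
    for image, gt in dataset:
        if iou(detector(image), gt) >= iou_thresh:
            hits += 1
    mean_ms = (time.perf_counter() - start) * 1000 / len(dataset)
    return hits / len(dataset), mean_ms

# Placeholder detector that always predicts the same box (stand-in for a real one).
def fixed_detector(image):
    return (10, 10, 20, 20)

dataset = [(None, (12, 12, 20, 20)), (None, (50, 50, 20, 20))]
acc, ms = benchmark(fixed_detector, dataset)
print(acc)  # 0.5: one of the two ground-truth boxes overlaps enough
```

Running the same dataset through each candidate detector yields directly comparable accuracy/speed pairs, mirroring the paper's evaluation axes.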

A Study for Detecting a Gazing Point Based on Reference Points (참조점을 이용한 응시점 추출에 관한 연구)

  • Kim, S.I.;Lim, J.H.;Cho, J.M.;Kim, S.H.;Nam, T.W.
    • Journal of Biomedical Engineering Research / v.27 no.5 / pp.250-259 / 2006
  • Eye movement information is used in various fields such as psychology, ophthalmology, physiology, rehabilitation medicine, web design, and human-machine interfaces (HMI). Various devices for detecting eye movement have been developed, but they are expensive. Common eye movement tracking methods include electro-oculography (EOG), Purkinje image trackers, the scleral search coil technique, and video-oculography (VOG). The purpose of this study is to implement an algorithm that tracks the location of the gazing point from the pupil. Two kinds of location data were compared to track the gazing point: reference points (infrared LEDs) reflected from the eyeball, and the center point of the pupil obtained with a CCD camera. The reference points were captured with the CCD camera under infrared light, which is invisible to the human eye. Images captured both with and without infrared light on the eyeball were saved, and the reflected reference points were detected from the brightness difference between the two saved images. The circumcenter theorem of a triangle was used to find the center of the pupil, and the location of the gazing point was expressed relative to the center of the pupil and the reference points.
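The circumcenter construction mentioned in the abstract is straightforward to compute: given three points on the pupil boundary, the circle through them has its center at the circumcenter of the triangle they form. A minimal implementation of the standard closed-form formula:

```python
def circumcenter(a, b, c):
    """Circumcenter of triangle abc, each vertex a (x, y) tuple.

    This is the center of the unique circle through the three points,
    which is how a pupil center can be recovered from boundary points.
    """
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

# Three points on a circle of radius 5 centered at (3, 4).
print(circumcenter((8, 4), (3, 9), (-2, 4)))  # -> (3.0, 4.0)
```

With three detected pupil-boundary pixels as input, the returned point serves as the pupil center used in the gaze computation.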

A Study on Acting Approaches based on Characteristics of Zoom Theater - Focused on the Production Process of Project, Hong-Do 2020 (줌(Zoom)연극의 특성에 따른 배우의 연기 접근 방법 연구 - 프로젝트, 홍도(2020)의 제작 과정을 중심으로)

  • Jung, Eunyoung
    • The Journal of the Korea Contents Association / v.21 no.12 / pp.842-854 / 2021
  • Performing arts industries in Korea and abroad have attempted a wide range of artistic experiments using online platforms since the COVID-19 pandemic. Accordingly, this study sheds light on the functional characteristics of Zoom when used as a creative tool for theater performances. First, after examining theater performances presented in Korea and abroad using Zoom and their characteristics, the production of a Zoom play is analyzed in the following stages: a research-based pre-production stage, a scene workshop stage that composes each scene based on the script, a recording stage that films each scene on Zoom, and a streaming stage for presenting the show. Furthermore, the actor's approaches to acting in this production process were identified as separation of gaze, re-recognition of space, utilization of expressive gestures, and reaction as an active action. As a result, the study proposes the possibility of ongoing development of theatrical work using Zoom and the evolution of actors' approaches to acting in work created via Zoom.

The process of estimating user response to training stimuli of joint attention using a robot (로봇활용 공동 주의 훈련자극에 대한 사용자 반응상태를 추정하는 프로세스)

  • Kim, Da-Young;Yun, Sang-Seok
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.10 / pp.1427-1434 / 2021
  • In this paper, we propose a psychological state estimation process that computes children's attention and tension in response to training stimuli. Joint attention was adopted as the training stimulus required for behavioral intervention, and the discrete trial training (DTT) technique was applied as the training protocol. Three types of training stimulation content were composed to check the user's attention and tension levels and were provided on a character-shaped tabletop robot. The gaze response to the training stimulus is then estimated with a vision-based head pose recognition and geometric calculation model, and the nervous system response is analyzed from PPG and GSR bio-signals using heart rate variability (HRV) and histogram techniques. Through experiments using the robot, it was confirmed that users' psychological responses to joint attention training content can be quantified.
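The HRV side of the analysis can be sketched with the standard time-domain metrics SDNN and RMSSD, computed from RR intervals (which in a system like this would be derived from the PPG signal's beat timings). The interval values below are made up for illustration.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals in milliseconds (SDNN)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR interval differences (RMSSD)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]  # hypothetical RR intervals in ms
print(round(sdnn(rr), 2))   # 7.07
print(round(rmssd(rr), 2))  # 14.36
```

Lower RMSSD is commonly read as higher sympathetic arousal, which is the kind of tension signal the proposed process quantifies alongside gaze-based attention.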

Inspection of guided missiles applied with parallel processing algorithm (병렬처리 알고리즘 적용 유도탄 점검)

  • Jung, Eui-Jae;Koh, Sang-Hoon;Lee, You-Sang;Kim, Young-Sung
    • Journal of Advanced Navigation Technology / v.25 no.4 / pp.293-298 / 2021
  • In general, the guided weapon's seeker and guidance control device process target, search, recognition, and capture information to indicate the state of the guided missile, and they control the operation of the guided weapon. The signals required for guided weapons are the line-of-sight change rate, the visual signal, and the end-stage fuselage orientation signal. To process the complex, hard-to-process signals of recent missiles in real time, the data processing speed of missile inspection must be increased. This study measured processing speed after applying the stop-and-go and inverse enumeration algorithms, among the parallel algorithm methods of PINQ, and compared the real-time processing speed of the signal data required for the guided missile using the guided missile inspection program. Based on the derived data processing results, we propose an effective method for processing missile data with a parallel processing algorithm by comparing the processing speed and CPU core utilization of the multi-core and single-core processing methods.
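The single-core vs. multi-core comparison described here can be sketched as a timing harness that runs the same per-sample check sequentially and through a worker pool, verifying that both paths agree before comparing wall-clock times. The check function and data are placeholders; a thread pool is used here for portability, whereas a CPU-bound inspection workload like the paper's would typically use a process pool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def check_signal(sample):
    """Placeholder per-sample inspection (e.g. a range check on signal data)."""
    time.sleep(0.001)  # stand-in for real per-sample processing latency
    return 0 <= sample <= 100

samples = list(range(0, 200, 2))  # hypothetical signal samples

t0 = time.perf_counter()
seq = [check_signal(s) for s in samples]           # single-core path
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:    # parallel path
    par = list(pool.map(check_signal, samples))
t_par = time.perf_counter() - t0

assert seq == par  # both paths must agree on the inspection result
print(f"sequential {t_seq * 1000:.1f} ms, parallel {t_par * 1000:.1f} ms")
```

The same harness shape (identical work, two execution strategies, agreement check, timing comparison) is what makes the paper's single-core vs. multi-core speed figures directly comparable.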