• Title/Summary/Keyword: Head movements

Development of a computer mouse by tracking head movements and eyeblink (머리 움직임과 눈 깜박임을 이용한 컴퓨터 마우스 개발)

  • Park, Min-Je;Kang, Shin-Wook;Kim, Soo-Chan
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.1107-1108
    • /
    • 2008
  • The purpose of this study is to develop a computer mouse that uses head movements and eye blinks to help people with disabilities who cannot move their hands or feet because of a car accident or cerebral apoplexy. The mouse is composed of two gyro sensors and a photo sensor. The gyro sensors detect the horizontal and vertical angular velocities of the head, respectively. The photo sensor detects eye blinks to perform click and double-click actions and to reset the head position. In the results, we could control the mouse pointer in real time using the proposed system.

  • PDF
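The control loop the abstract describes (gyro angular velocities integrated into pointer motion, a photo-sensor dip treated as a blink for clicking) can be sketched as follows. This is an illustration only, not the authors' implementation: the class name, gain, sample rate, and blink threshold are assumptions.

```python
class HeadMouse:
    """Toy sketch of a gyro + photo-sensor head mouse (illustrative values)."""

    def __init__(self, gain=5.0, blink_threshold=0.5, dt=0.02):
        self.gain = gain                    # deg/s -> pixels scaling (assumed)
        self.blink_threshold = blink_threshold
        self.dt = dt                        # sample period, 50 Hz assumed
        self.x, self.y = 0.0, 0.0           # cursor position in pixels

    def update(self, omega_h, omega_v, photo_level):
        """omega_h/omega_v: head angular velocities (deg/s); photo_level: 0..1."""
        # Integrate angular velocity into cursor displacement.
        self.x += self.gain * omega_h * self.dt
        self.y += self.gain * omega_v * self.dt
        # A drop in the photo-sensor signal is treated as an eye blink -> click.
        clicked = photo_level < self.blink_threshold
        return (round(self.x), round(self.y), clicked)

mouse = HeadMouse()
pos = None
for _ in range(50):                         # 1 s of a 20 deg/s rightward head turn
    pos = mouse.update(20.0, 0.0, photo_level=0.9)
```

With these assumed values, one second of a 20 deg/s head turn moves the pointer 100 pixels horizontally without triggering a click.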

Man-machine interface using eyeball movement

  • Takami, Osamu;Morimoto, Kazuaki;Ochiai, Tsumoru;Ishimatsu, Takakazu
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1995.10a
    • /
    • pp.195-198
    • /
    • 1995
  • In this paper, we propose a computer interface device for handicapped people. The input signals of the device are movements of the user's eyeballs and head, which are detected by an image processing system. One feature of our system is that the operator is not obliged to wear any burdensome device such as glasses or a helmet. The sensing performance of the image processing of the eyeballs and head is evaluated through experiments, and the experimental results reveal the applicability of our system.

  • PDF

The Movements of Vocal Folds during Voice Onset Time of Korean Stops

  • Hong, Ki-Hwan;Kim, Hyun-Ki;Yang, Yoon-Soo;Kim, Bum-Kyu;Lee, Sang-Heon
    • Speech Sciences
    • /
    • v.9 no.1
    • /
    • pp.17-26
    • /
    • 2002
  • Voice onset time (VOT) is defined as the time interval from the oral release of a stop consonant to the onset of glottal pulsing in the following vowel. VOT is a temporal characteristic of stop consonants that reflects the complex timing of glottal articulation relative to supraglottal articulation. There have been many reports on efforts to clarify the acoustical and physiological properties that differentiate the three types of Korean stops, including acoustic, fiberscopic, aerodynamic, and electromyographic studies. In the acoustic and fiberscopic studies of stop consonants, the voice onset time and glottal width during stop production have been found to be longest and largest in the heavily aspirated type, followed by the slightly aspirated and unaspirated types. The thyroarytenoid and posterior cricoarytenoid muscles were physiologically inter-correlated in differentiating these types of stops. However, a review of the English literature shows that the fine movement of the mucosal edges of the vocal folds during stop production has not been well documented. In recent years, a new method for high-speed recording of laryngeal dynamics using a digital recording system has allowed observation with fine time resolution. The movements of the vocal fold edges were documented during the period of stop production using a fiberscopic system of high-speed digital images. By observing the glottal width and the visual vibratory movements of the vocal folds before voice onset, the heavily aspirated stop was characterized as more prominent and dynamic than the slightly aspirated and unaspirated stops.

  • PDF

Several imageries classification with EEG

  • Choi, Kyoung-Ho;Jung, Sung-Jae;Kim, Il-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.450-452
    • /
    • 2004
  • Every movement, perception, and thought we perform is associated with distinct neural activation patterns. Neurons in the brain communicate with each other by sending electrical impulses that produce currents. These currents give rise to electrical fields that can be measured outside the head and appear as variations in the electroencephalographic (EEG) signal. The EEG signals measured from the head surface are a sum of all momentary brain activation, so it is difficult to distinguish the patterns correlated with a particular event. However, a brain-control interface system must discriminate the patterns associated with specific events. In this experiment, we studied the human sensory-motor cortex, examining activation related to several imagined movements on both sides of the sensory-motor cortices; the activation patterns during imagination of movements resemble those during preparation of movements. The results show that a system based on the optimal filters discriminated at least 60% of the mental imageries.

  • PDF
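As an illustration of the kind of event-related EEG feature such a system must extract from the summed scalp signal, the sketch below computes signal power at the roughly 10 Hz mu rhythm, which motor imagery is known to suppress. The synthetic signals and the single-bin DFT are assumptions for illustration, not the paper's optimal-filter method.

```python
import math

def band_power(samples, fs, freq):
    """Power of `samples` (sampled at fs Hz) at frequency `freq` (Hz),
    computed as a single discrete Fourier transform bin."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(samples))
    im = -sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(samples))
    return (re * re + im * im) / n

fs = 100
t = [k / fs for k in range(200)]                 # 2 s of synthetic "EEG"
rest = [math.sin(2 * math.pi * 10 * x) for x in t]           # strong mu rhythm
imagery = [0.2 * math.sin(2 * math.pi * 10 * x) for x in t]  # mu suppression

# Motor imagery suppresses the mu rhythm, so its 10 Hz power is lower.
suppressed = band_power(imagery, fs, 10) < band_power(rest, fs, 10)
```

A real classifier would compare such band-power features across electrodes over the two hemispheres rather than on a single synthetic channel.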

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea
    • /
    • v.16 no.3
    • /
    • pp.1-10
    • /
    • 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of an eye-controlled human/computer interface and in virtual environments. We propose a video-based head tracking system. A camera was mounted on the subject's head and captured the front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on a computer monitor. The reference points were captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A suitable camera calibration method for providing accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the 3-dimensional camera positions is about 0.53 cm, the angular errors of camera orientations are less than $0.55^{\circ}$, and the data acquisition rate is about 10 Hz. The results of this study can be applied to the tracking of head movements for eye-controlled human/computer interfaces and virtual environments.

  • PDF
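The third step above, recovering the camera position from the DLT matrix, can be sketched with standard projective geometry: for a projection matrix P = [M | p4], the camera center C satisfies P·[C; 1] = 0, so C = -M⁻¹·p4. The notation and the toy matrix below are assumptions for illustration, not the paper's code.

```python
def inv3(m):
    """Inverse of a 3x3 matrix given as nested lists (adjugate method)."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def camera_center(P):
    """Camera position C = -inv(M) @ p4 for a 3x4 DLT matrix P = [M | p4]."""
    M = [row[:3] for row in P]
    p4 = [row[3] for row in P]
    Minv = inv3(M)
    return [-sum(Minv[r][k] * p4[k] for k in range(3)) for r in range(3)]

# Identity rotation/intrinsics, camera at (1, 2, 3): P = [I | -C].
P = [[1, 0, 0, -1],
     [0, 1, 0, -2],
     [0, 0, 1, -3]]
C = camera_center(P)
```

With calibrated intrinsics, the same decomposition also yields the rotation (orientation) from the left 3x3 block of P.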

Standardization Trend of 3DoF+ Video for Immersive Media (이머시브미디어를 3DoF+ 비디오 부호화 표준 동향)

  • Lee, G.S.;Jeong, J.Y.;Shin, H.C.;Seo, J.I.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.6
    • /
    • pp.156-163
    • /
    • 2019
  • As a primitive immersive video technology, a three degrees of freedom (3DoF) $360^{\circ}$ video can currently render viewport images that depend on the rotational movements of the viewer. However, rendering a flat $360^{\circ}$ video, that is, one supporting head rotations only, may cause visual discomfort, especially when objects close to the viewer are rendered. 3DoF+ supports head movements for a seated person by adding horizontal, vertical, and depth translations. The 3DoF+ $360^{\circ}$ video is positioned between 3DoF and six degrees of freedom and can realize motion parallax with relatively simple virtual reality software in head-mounted displays. This article introduces the standardization trends for 3DoF+ video in the MPEG-I Visual group.

Eye Gaze Tracking System Under Natural Head Movements (머리 움직임이 자유로운 안구 응시 추정 시스템)

  • Matthew, Sked;Qiang, Ji
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.41 no.5
    • /
    • pp.57-64
    • /
    • 2004
  • We propose an eye gaze tracking system that works under natural head movements, consisting of one narrow-field-of-view CCD camera, two mirrors whose reflection angles are controlled, and active infrared illumination. The mirror angles are computed by geometric and linear algebra calculations to keep the pupil images on the optical axis of the camera. Our system allows the subject's head to move 90 cm horizontally and 60 cm vertically, with spatial resolutions of about $6^{\circ}$ and $7^{\circ}$, respectively. The frame rate for estimating gaze points is 10~15 frames/sec. As the gaze mapping function, we use a hierarchical generalized regression neural network (H-GRNN) based on a two-pass GRNN. The gaze accuracy of the H-GRNN was 94%, a 9% improvement over the 85% of a plain GRNN, even when the head or face was slightly rotated. Our system does not have a high spatial gaze resolution, but it allows natural head movements while providing robust and accurate gaze tracking. In addition, there is no need to re-calibrate the system when the subject changes.
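The core of a GRNN gaze mapping function is a Nadaraya-Watson estimate: the predicted gaze point is a Gaussian-kernel-weighted average of the calibration targets. The sketch below illustrates that idea only; the calibration pairs, bandwidth, and function names are invented for illustration and do not reproduce the paper's two-pass H-GRNN.

```python
import math

def grnn(train_x, train_y, query, sigma=1.0):
    """Nadaraya-Watson estimate: weight each training output by a Gaussian
    kernel on the distance between the query and that training input."""
    weights = []
    for x in train_x:
        d2 = sum((q - xi) ** 2 for q, xi in zip(query, x))
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(weights)
    return [sum(w * y[k] for w, y in zip(weights, train_y)) / total
            for k in range(len(train_y[0]))]

# Pupil-feature -> screen-point calibration pairs (illustrative values).
features = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
screen   = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]

# A query feature midway between the calibration points maps to screen centre.
gaze = grnn(features, screen, query=(0.5, 0.5), sigma=0.5)
```

Because the estimate needs no iterative training, a GRNN can be set up from a single short calibration session, which fits the paper's claim that no re-calibration is needed per subject change at the network level.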

Mild Bradykinesia Due to an Injury of Corticofugal-Tract from Secondary Motor Area in a Patient with Traumatic Brain Injury

  • Lee, Han Do;Seo, Jeong Pyo
    • The Journal of Korean Physical Therapy
    • /
    • v.33 no.6
    • /
    • pp.304-306
    • /
    • 2021
  • Objectives: We report on a patient who showed mild bradykinesia due to injury of the corticofugal tract (CFT) from the secondary motor area following direct head trauma, demonstrated on diffusion tensor tractography (DTT). Case summary: A 58-year-old male patient underwent conservative management for subarachnoid hemorrhages caused by direct head trauma from a six-meter fall, at the department of neurosurgery of a local hospital. His Glasgow Coma Scale score was 3. He developed mildly slowed movements following the head trauma and visited the rehabilitation department of a university hospital ten weeks after the fall. The patient exhibited mild bradykinesia during walking and arm movements, with mild weakness in all four extremities (G/G-). Results: On ten-week DTT, narrowing of the right CFT from the supplementary motor area (SMA-CFT), and partial tearing of the left SMA-CFT, the left CFT from the dorsal premotor cortex (dPMC-CFT), and both corticospinal tracts (CSTs) at the subcortical white matter were observed. Conclusion: This case demonstrated abnormalities in both CSTs (partial tearing at the subcortical white matter and narrowing), both SMA-CFTs (narrowing and partial tearing), and the left dPMC-CFT. We believe our findings suggest the necessity of assessing the CFTs from the secondary motor area in patients with unexplained bradykinesia following direct head trauma.

Gaze Detection by Wearable Eye-Tracking and NIR LED-Based Head-Tracking Device Based on SVR

  • Cho, Chul Woo;Lee, Ji Woo;Shin, Kwang Yong;Lee, Eui Chul;Park, Kang Ryoung;Lee, Heekyung;Cha, Jihun
    • ETRI Journal
    • /
    • v.34 no.4
    • /
    • pp.542-552
    • /
    • 2012
  • In this paper, a gaze estimation method is proposed for use with a large-sized display at a distance. Our research has the following four novelties: this is the first study on gaze-tracking for large-sized displays and large Z (viewing) distances; our gaze-tracking accuracy is not affected by head movements since the proposed method tracks the head by using a near infrared camera and an infrared light-emitting diode; the threshold for local binarization of the pupil area is adaptively determined by using a p-tile method based on circular edge detection irrespective of the eyelid or eyelash shadows; and accurate gaze position is calculated by using two support vector regressions without complicated calibrations for the camera, display, and user's eyes, in which the gaze positions and head movements are used as feature values. The root mean square error of gaze detection is calculated as $0.79^{\circ}$ for a 30-inch screen.
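The p-tile binarization mentioned above picks a threshold so that a fixed fraction p of the darkest pixels (the pupil region) falls below it, rather than using a fixed grey level. The sketch below shows only that thresholding idea; the toy "image" and p value are invented, and the paper's version additionally constrains the region with circular edge detection.

```python
def p_tile_threshold(pixels, p):
    """Return a grey level such that about fraction p (0..1) of the pixels
    are darker than it: sort intensities and index at the p-th percentile."""
    ordered = sorted(pixels)
    k = max(0, min(len(ordered) - 1, int(p * len(ordered))))
    return ordered[k]

# 8-pixel toy "image": two dark pupil pixels among a bright background.
img = [12, 15, 200, 210, 220, 230, 240, 250]
t = p_tile_threshold(img, p=0.25)        # darkest 25% assumed to be pupil
pupil = [v for v in img if v < t]
```

Because the threshold adapts to the intensity distribution, it is less sensitive to overall illumination changes than a fixed cutoff, which is why it tolerates eyelid and eyelash shadows better.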

Driving behavior Analysis to Verify the Criteria of a Driver Monitoring System in a Conditional Autonomous Vehicle - Part I - (부분 자율주행자동차의 운전자 모니터링 시스템 안전기준 검증을 위한 운전 행동 분석 -1부-)

  • Son, Joonwoo;Park, Myoungouk
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.1
    • /
    • pp.38-44
    • /
    • 2021
  • This study aimed to verify the criteria of the driver monitoring systems proposed by the UNECE ACSF informal working group and the Ministry of Land, Infrastructure and Transport of South Korea using driving behavior data. To verify the criteria, we investigated the safety regulations for driver monitoring systems in a conditional autonomous vehicle and found that the driver monitoring measures were related to eye blink frequency, head movements, and eye-closed duration. We therefore used two sets of experimental data from previous studies, covering real-world driving and simulator-based drowsy driving. The real-world driving data were used to analyze blink counts and head-movement intervals, and the drowsiness data were used for eye-closed duration. In the real-world driving study, 52 drivers drove approximately 11.0 km of rural road (about 20 min), 7.9 km of urban road (about 25 min), and 20.8 km of highway (about 20 min). The results suggested that the appropriate number of blinks during the last 60 seconds is 4, and the head-movement interval is 35 seconds. The results from the drowsy driving data will be presented in another paper - part 2.
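The two thresholds the study arrives at (at least 4 blinks in the last 60 seconds, head-movement gaps no longer than 35 seconds) could be checked over a rolling window as sketched below. The function name and event timestamps are invented for illustration; this is not the monitoring system from the paper.

```python
def driver_attentive(blink_times, head_move_times, now,
                     window=60.0, min_blinks=4, max_head_gap=35.0):
    """blink_times / head_move_times: ascending event timestamps in seconds.
    Returns True if both criteria hold at time `now`."""
    # Criterion 1: enough blinks inside the trailing 60 s window.
    recent_blinks = [t for t in blink_times if now - window <= t <= now]
    # Criterion 2: no gap between consecutive head movements (including the
    # time elapsed since the last one) longer than 35 s.
    moves = [t for t in head_move_times if t <= now] + [now]
    gaps = [b - a for a, b in zip(moves, moves[1:])]
    return len(recent_blinks) >= min_blinks and max(gaps, default=0.0) <= max_head_gap

blinks = [5, 20, 35, 50, 58]     # 5 blinks in the last minute
moves = [0, 30, 55]              # longest head-movement gap: 30 s
ok = driver_attentive(blinks, moves, now=60.0)
```

A production system would of course run these checks continuously on sensor-derived events rather than on precomputed timestamp lists.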