• Title/Summary/Keyword: Human Computer Interface (HCI)


Study about Windows System Control Using Gesture and Speech Recognition (제스처 및 음성 인식을 이용한 윈도우 시스템 제어에 관한 연구)

  • 김주홍;진성일;이남호;이용범
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1289-1292
    • /
    • 1998
  • HCI (human-computer interface) technologies have often been implemented using a mouse, keyboard, and joystick. Because the mouse and keyboard can be used only in limited situations, more natural HCI methods, such as speech-based and gesture-based methods, have recently attracted wide attention. In this paper, we present a multi-modal input system to control the Windows system for practical use of a multimedia computer. Our multi-modal input system consists of three parts. The first is a virtual-hand mouse, which replaces mouse control with a set of gestures. The second is a Windows control system using speech recognition. The third is a Windows control system using gesture recognition. We introduce neural network and HMM methods to recognize speech and gestures. The results of the three parts interface directly with the CPU and with Windows.

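The three-part architecture above can be sketched as a simple dispatcher that routes recognizer outputs (speech or gesture labels) to window-control commands. All labels and command names below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical multi-modal dispatch table: recognizer results arrive as
# (modality, label) pairs and are mapped to window-control commands.
COMMANDS = {
    ("speech", "open window"): "open",
    ("speech", "close window"): "close",
    ("gesture", "point"): "move_cursor",
    ("gesture", "grab"): "click",
}

def dispatch(modality, label):
    """Return the window command for a recognizer result, or None if
    the (modality, label) pair is not bound to any command."""
    return COMMANDS.get((modality, label))
```

In a real system the gesture labels would come from the HMM recognizer and the speech labels from the neural network the abstract mentions; here the table simply stands in for both.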

Design of Korean eye-typing interfaces based on multilevel input system (단계식 입력 체계를 이용한 시선 추적 기반의 한글 입력 인터페이스 설계)

  • Kim, Hojoong;Woo, Sung-kyung;Lee, Kunwoo
    • Journal of the HCI Society of Korea
    • /
    • v.12 no.4
    • /
    • pp.37-44
    • /
    • 2017
  • Eye-typing is a kind of human-computer input system driven by gaze-location data. It is widely used as an input system for paralyzed users because it requires no physical motion other than eye movement. However, no eye-typing interface based on Korean characters has been suggested yet. Thus, this research aims to implement an eye-typing interface optimized for Korean. To begin with, design objectives were established based on two features of eye-typing: significant noise and the Midas touch problem. A multilevel input system was introduced to deal with noise, and an area free from input buttons was applied to solve the Midas touch problem. Then, two types of eye-typing interfaces were suggested based on the phonology of Korean, where each syllable is generated from a combination of several phonemes. Named the consonant-vowel integrated interface and the separated interface, the two interfaces are designed to input Korean in phases through grouped phonemes. Finally, evaluation consisted of comparative experiments against the conventional Double-Korean keyboard interface and an analysis of the flow of gaze. As a result, the newly designed interfaces showed potential as practical tools for eye-typing.
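The phased, grouped-phoneme input described above ultimately has to compose the selected phonemes into a syllable. A minimal sketch follows, using the standard Unicode Hangul composition formula; the grouping and two-level selection flow are simplified assumptions, not the paper's actual interface layouts:

```python
# Indices follow the standard Unicode Hangul Jamo ordering:
# 19 initial consonants (choseong), 21 vowels (jungseong),
# 28 finals (jongseong, index 0 = no final).
def compose_syllable(cho, jung, jong=0):
    """Compose one Hangul syllable from phoneme indices."""
    return chr(0xAC00 + (cho * 21 + jung) * 28 + jong)

def two_level_select(groups, first_gaze, second_gaze):
    """Multilevel selection (toy model): the first dwell picks a
    phoneme group, the second dwell picks an item inside it."""
    return groups[first_gaze][second_gaze]
```

For example, choseong index 18 (ㅎ), jungseong index 0 (ㅏ), and jongseong index 4 (ㄴ) compose to the syllable 한.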

MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition (손 자세 인식을 이용한 MPEG-U 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.19 no.1
    • /
    • pp.83-95
    • /
    • 2014
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the HCI (human-computer interaction) field. In this paper, we introduce a hand posture recognition method using a depth camera. Moreover, the method is incorporated into an MPEG-U based advanced user interaction (AUI) interface system, which can provide a natural interface with a variety of devices. The proposed method first detects the positions and lengths of all open fingers; it then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when the user makes a gesture representing a pattern of the AUI data format specified in MPEG-U part 2. The AUI interface system represents the user's hand posture in a compliant MPEG-U schema structure. Experimental results show the performance of the hand posture recognition, and they verify that the AUI interface system is compatible with the MPEG-U standard.

Double Threshold Method for EMG-based Human-Computer Interface (근전도 기반 휴먼-컴퓨터 인터페이스를 위한 이중 문턱치 기법)

  • Lee Myungjoon;Moon Inhyuk;Mun Museong
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.6
    • /
    • pp.471-478
    • /
    • 2004
  • The electromyogram (EMG) signal generated by voluntary contraction of muscles is often used in rehabilitation devices such as an upper-limb prosthesis because of its distinct output characteristics compared to other bio-signals. This paper proposes an EMG-based human-computer interface (HCI) for controlling an above-elbow prosthesis or a wheelchair. To control such rehabilitation devices, the user generates four commands by combining voluntary contractions of two different muscles, such as the levator scapulae and the flexor/extensor carpi ulnaris. A muscle contraction is detected by comparing the mean absolute value of the EMG signal with a preset threshold. However, since a time difference in muscle firing can occur when the patient attempts simultaneous co-contraction of the two muscles, it is difficult to determine whether the patient's intention is co-contraction. Hence, a comparison using a single threshold value is not feasible for recognizing such co-contraction. Here, we propose a novel method using double threshold values composed of a primary threshold and an auxiliary threshold. Using the double-threshold method, the co-contraction state is easily detected, and diverse interface commands can be used for the EMG-based HCI. Experimental results with real-time EMG processing showed that the double-threshold method is feasible for an EMG-based HCI controlling a myoelectric prosthetic hand and a powered wheelchair.
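One plausible reading of the double-threshold idea can be sketched as follows: a channel crossing the primary threshold signals contraction, while the lower auxiliary threshold on the other channel absorbs small firing-time differences so a developing co-contraction is not misread as a single-muscle command. The thresholds and the exact decision rule here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mav(emg, win=32):
    """Moving mean-absolute-value of a raw EMG trace."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(emg), kernel, mode="same")

def classify(mav_a, mav_b, th_primary, th_aux):
    """Per-sample state from two MAV traces: 'rest', 'A', 'B', or 'co'.
    A channel above the auxiliary (lower) threshold while the other
    crosses the primary one is treated as a developing co-contraction."""
    states = []
    for a, b in zip(mav_a, mav_b):
        if a >= th_primary and b >= th_aux:
            states.append("co")
        elif b >= th_primary and a >= th_aux:
            states.append("co")
        elif a >= th_primary:
            states.append("A")
        elif b >= th_primary:
            states.append("B")
        else:
            states.append("rest")
    return states
```

With a single threshold, the second branch of each co-contraction would first register as a lone "A" or "B" command; the auxiliary threshold is what lets the classifier tolerate that onset lag.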

Unconstrained e-Book Control Program by Detecting Facial Characteristic Point and Tracking in Real-time (얼굴의 특이점 검출 및 실시간 추적을 이용한 e-Book 제어)

  • Kim, Hyun-Woo;Park, Joo-Yong;Lee, Jeong-Jick;Yoon, Young-Ro
    • Journal of Biomedical Engineering Research
    • /
    • v.35 no.2
    • /
    • pp.14-18
    • /
    • 2014
  • This study concerns an e-Book control program based on a human-computer interaction (HCI) system for physically handicapped persons. Background knowledge of HCI shows that, with a vision-based interface, current computer input devices can be replaced by extracting a characteristic point and tracking it. We chose the between-eyes point as the characteristic point by analyzing facial images from a webcam. However, because of the three-dimensional structure of glasses, tracking the between-eyes point was not suitable for users wearing glasses. So we changed the characteristic point to the bridge of the nose after detecting the between-eyes point. With this technique, we could track head rotation in real time regardless of glasses. To test the program's usefulness, we conducted an experiment analyzing its performance in actual use. Consequently, analyzing the results from 20 subjects, we obtained a 96.5% success rate for controlling the e-Book under proper conditions.

The Design of Knowledge-Emotional Reaction Model considering Personality (개인성을 고려한 지식-감정 반응 모델의 설계)

  • Shim, Jeong-Yon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.1
    • /
    • pp.116-122
    • /
    • 2010
  • As the importance of HCI (Human-Computer Interface), driven by rapidly developing computer technology, keeps growing, so does the requirement for designing human-friendly systems. Above all, personality and emotional factors should be considered when implementing more human-friendly systems. Many studies on knowledge, emotion, and personality have been made, but combined methods connecting these three factors have not been investigated much yet. It is known that the memorizing process involves not only knowledge but also emotion, and that the emotional state strongly affects reasoning and decision-making. Accordingly, to implement a more human-friendly, efficient, and sophisticated intelligent system, a system considering these three factors should be modeled and designed. In this paper, a knowledge-emotion reaction model was designed. Five types are defined for representing personality, and an emotion reaction mechanism is proposed that calculates an emotion vector based on the thought threads extracted by type-matching selection. This system is applied to a virtual memory, and its emotional reactions are simulated.

Extraction of gaze point on display based on EOG for general paralysis patients (전신마비 환자를 위한 EOG 기반 디스플레이 상의 응시 좌표 산출)

  • Lee, D.H.;Yu, J.H.;Kim, D.H.
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.5 no.1
    • /
    • pp.87-93
    • /
    • 2011
  • This paper proposes a method for extracting the gaze point on a display using the EOG (electrooculography) signal. Based on the linear property of the EOG signal, the proposed method corrects the scaling, rotation, and origin differences between the EOG-signal coordinate system and the display coordinate system, without adjustment using head movement. The performance of the proposed method was evaluated by measuring the difference between the extracted gaze point and a displayed circle on a monitor with 1680*1050 resolution. Experimental results show average distance errors at the gaze points of 3% (56 pixels) on the x-axis and 4% (47 pixels) on the y-axis. This method can be used as a human-computer interface pointing device for general paralysis patients, or as an HCI for VR game applications.
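Because the method relies on the linearity of the EOG signal, correcting scaling, rotation, and origin differences amounts to fitting an affine map from EOG coordinates to display pixels. A minimal least-squares sketch follows; the calibration procedure and point counts are assumptions, not the paper's protocol:

```python
import numpy as np

def fit_affine(eog_pts, screen_pts):
    """Least-squares affine map (scaling, rotation, translation) from
    EOG-derived coordinates to display pixels. Both inputs are (N, 2)
    arrays of matched points collected during a calibration run."""
    n = len(eog_pts)
    A = np.hstack([np.asarray(eog_pts, float), np.ones((n, 1))])  # [x y 1]
    M, *_ = np.linalg.lstsq(A, np.asarray(screen_pts, float), rcond=None)
    return M  # (3, 2) matrix

def to_screen(M, eog_xy):
    """Map one EOG coordinate pair to display pixels."""
    x, y = eog_xy
    return np.array([x, y, 1.0]) @ M
```

Three non-collinear calibration points determine the map exactly; extra points let least squares average out EOG noise.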

A Study on the LED Button Guide to improve the IPTV's Usability (IPTV 사용성 향상을 위한 LED 버튼 가이드)

  • Kim, Sung-Hee;Kim, You-Min;Jung, Jae-Wook;Lee, Dong-Wook;Ryu, Won;Hahn, Min-Soo
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.933-937
    • /
    • 2009
  • The IPTV, which has been commercialized and is currently being serviced to customers, has a complicated GUI (graphical user interface) to provide two-way services, and a remote control containing more than 40 buttons, unlike a conventional TV. Accordingly, the remote control has become one of the causes of the IPTV's poor usability. In this paper, we suggest an LED button guide system as a solution to improve the usability of the IPTV, and analyze the effects of the interface on user behavior, based on a user evaluation.


A Study on New Gameplay Based on Brain-Computer Interface (BCI 기반의 새로운 게임 플레이 연구)

  • Ko, Min-Jin;Bae, Kyoung-Woo;Oh, Gyu-Hwan
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.749-755
    • /
    • 2009
  • BCI (Brain-Computer Interface) is a way to control a computer using human brain waves. As brain-wave hardware has developed, the formerly expensive and bulky brain-wave measuring devices have recently become much smaller and cheaper, making it possible for individuals to buy them. This suggests they will be applied in various fields of the multimedia industry. This paper gives insight into whether a BCI device can be used as a new gaming device, approaching the question from a game-design point of view. First, we propose gameplay elements that can effectively utilize BCI devices and produce a game prototype adopting a BCI device based on these elements. Next, we verify through statistical data analysis that the prototype adopting the BCI device gives clearer and more efficient control in its gameplay than a game adopting only keyboard and mouse. The results provide a guideline for effective game-design methodology for BCI-based games.

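A common pattern when driving game actions from a noisy scalar such as an attention estimate is hysteresis thresholding, so the action does not flicker near a single cutoff. This is a generic sketch under assumed thresholds, not the prototype's actual control scheme:

```python
def attention_to_action(levels, on=0.7, off=0.4):
    """Hysteresis thresholding for a hypothetical attention value in
    [0, 1]: the action turns on above `on` and stays on until the value
    drops below `off`, avoiding flicker near either threshold."""
    active = False
    out = []
    for v in levels:
        if not active and v >= on:
            active = True
        elif active and v <= off:
            active = False
        out.append(active)
    return out
```

The gap between the two thresholds trades responsiveness for stability; a wider gap holds the action steady through larger dips in the signal.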