• Title/Summary/Keyword: Human-Computer Interaction Analysis

Search Results: 133

Morphological Hand-Gesture Recognition Algorithm (형태론적 손짓 인식 알고리즘)

  • Choi Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.8 / pp.1725-1731 / 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction, and has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are the simplification of the algorithm and the reduction of processing time. Mathematical morphology, which is based on geometrical set theory, is well suited to this kind of processing. The key idea of the algorithm proposed in this paper is to apply morphological shape decomposition. The primitive elements extracted from a hand gesture carry important information about the directivity of the gesture. Based on this characteristic, we propose a morphological gesture recognition algorithm that uses feature vectors computed from the lines connecting the center point of the main primitive element to the center points of the sub-primitive elements. Experiments demonstrate the efficiency of the proposed algorithm. Coupling natural interactions such as hand gestures with an appropriately designed interface is a valuable and powerful component in building TV switch navigation and video content browsing systems.
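
As a rough illustration of the directivity features described above, the sketch below extracts a main element and sub-elements from a binary hand mask and returns the angles of the lines joining their centers. It is a minimal stand-in, not the paper's algorithm: the decomposition here uses a distance transform and connected components, and the `disk_radius_ratio` and area threshold are assumed values.

```python
# Hypothetical sketch: direction features from a morphological-style decomposition.
import cv2
import numpy as np

def direction_features(hand_mask: np.ndarray, disk_radius_ratio: float = 0.9):
    """Return angles (radians) of lines from the main element center to sub-element centers."""
    mask = (hand_mask > 0).astype(np.uint8)

    # Main primitive element: maximal inscribed disk found via the distance transform.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)           # center = (x, y) of the disk
    main_center = np.array(center, dtype=float)

    # Remove the main element; what remains approximates the sub-primitive elements.
    disk = np.zeros_like(mask)
    cv2.circle(disk, (int(center[0]), int(center[1])), int(radius * disk_radius_ratio), 1, -1)
    residue = cv2.subtract(mask, disk)

    # Treat each connected component of the residue as one sub-element.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(residue)
    angles = []
    for i in range(1, n):                                 # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 20:               # ignore specks (assumed threshold)
            continue
        d = centroids[i] - main_center
        angles.append(np.arctan2(d[1], d[0]))             # directivity of this sub-element
    return angles
```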

An Agent-based System for Character Motion Animation Control (캐릭터 동작 애니메이션 제어를 위한 에이전트 시스템)

  • Kim, Ki-Hyun;Kim, Sang-Wook
    • Journal of KIISE:Computing Practices and Letters / v.7 no.5 / pp.467-474 / 2001
  • When a user wants to animate more than one character, unexpected motion such as a collision between characters may occur. This problem must therefore be resolved with a proper control mechanism. This paper proposes an agent-based system that controls character motion animation so as to represent an animation scenario reflecting the user's intention. The system provides a method that coordinates the type of motion and avoids collisions between characters according to each character's moving path in three-dimensional space. Agents communicate with one another for motion synchronization, and are extended into several intelligent agents that coordinate character motion. The agent system enables not only the intended motion animation but also the scheduling of motions across an entire character animation. An automata model is designed with a Petri-net analysis tool for the agents' interaction, as a method for passing agent information and inferring the current state of the agents. We implement this agent system to control character motion using agent technology and show an example of controlling the motion of a human character model to demonstrate the feasibility of motion control.
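
The sketch below illustrates the general idea of agents that advance along their paths only when the next step is collision-free. It is a simplified, assumption-level example, not the paper's Petri-net-based automata model; the paths, radii, and the priority-by-iteration-order rule are all placeholders.

```python
# Hypothetical sketch: path-following agents with a simple collision-avoidance check.
import numpy as np

class MotionAgent:
    def __init__(self, name, path, radius=0.4):
        self.name, self.path, self.radius = name, list(path), radius
        self.step = 0

    def position(self):
        return np.asarray(self.path[self.step], dtype=float)

    def advance(self, others):
        """Move one step along the path unless that would collide with another agent."""
        if self.step + 1 >= len(self.path):
            return False                                   # already at the end of the path
        target = np.asarray(self.path[self.step + 1], dtype=float)
        for other in others:
            if np.linalg.norm(target - other.position()) < self.radius + other.radius:
                return False                               # wait this frame (simple avoidance)
        self.step += 1
        return True

# Two characters whose paths cross at (1, 0); B waits one frame while A passes.
agents = [MotionAgent("A", [(0, 0), (1, 0), (2, 0)]),
          MotionAgent("B", [(1, 1), (1, 0), (1, -1)])]
for frame in range(4):
    for a in agents:
        a.advance([o for o in agents if o is not a])
    print(frame, {a.name: tuple(a.position()) for a in agents})
```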


The effects of the usability of products on user's emotions - with emphasis on suggestion of methods for measuring user's emotions expressed while using a product -

  • Jeong, Sang-Hoon
    • Archives of design research / v.20 no.2 s.70 / pp.5-16 / 2007
  • The main objective of our research is to analyze a user's emotional changes while using a product, in order to reveal the influence of usability on human emotions. In this study we extracted emotional words that can arise during user interaction with a product and that reveal emotional changes, using three methods. In the end, we extracted 88 emotional words for measuring the emotions users express while using products, and categorized the 88 words into six groups using factor analysis. The six categories extracted as a result of this study were found to be users' representative emotions expressed while using products. The emotional words and representative emotions extracted in this study are expected to serve as subjective evaluation data for measuring a user's emotional changes while using a product. We also propose effective methods for measuring the emotions expressed while using a product, in an environment that is natural and accessible for the design field, using the emotion mouse and the Eyegaze. A participant performs several tasks with the emotion mouse on a mobile phone simulator shown on a computer monitor connected to the Eyegaze. During testing, the emotion mouse senses the user's EDA and PPG and transmits the data to the computer, the Eyegaze observes changes in pupil size, and a video camera records the user's facial expressions. After each test, the user performs a subjective evaluation of his or her emotional changes using the emotional words extracted above. We aim to evaluate the satisfaction level of the product's usability and compare it with the actual experimental results. Through continued studies based on this research, we hope to provide a basic framework for developing interfaces that take the user's emotions into consideration.
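
For the factor-analysis step mentioned above (88 words grouped into 6 categories), a minimal sketch with scikit-learn is shown below. The ratings matrix and word labels are random placeholders, not the study's data; only the 88-word and 6-factor dimensions follow the abstract.

```python
# Hypothetical sketch: grouping emotion-word ratings into six factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(60, 88)).astype(float)   # 60 participants x 88 words (dummy data)
words = [f"word_{i}" for i in range(88)]                     # placeholder word labels

fa = FactorAnalysis(n_components=6, random_state=0)
fa.fit(ratings)

# Assign each word to the factor on which it loads most strongly.
loadings = fa.components_                                    # shape: (6, 88)
groups = {k: [] for k in range(6)}
for j, word in enumerate(words):
    groups[int(np.argmax(np.abs(loadings[:, j])))].append(word)
for k, members in groups.items():
    print(f"factor {k}: {len(members)} words")
```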


Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee;Ju, Jin-Sun;Kim, Eun-Yi;Kurata, Takeshi;Jain, Anil K.;Park, Se-Hyun;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems / v.12 no.3 / pp.60-68 / 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component analysis (CCs). Thereafter, the user's eyes are localized using a neural network (NN)-based texture classifier, and then the remaining facial features are localized using heuristics. After the facial features are detected, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
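
The sketch below illustrates only the face-localization front end described above (a skin-color model plus connected-component analysis). The YCrCb thresholds and minimum-area filter are illustrative assumptions, not the paper's trained skin model, and the eye classifier and decision tree are not shown.

```python
# Hypothetical sketch: largest skin-colored component as a face-region candidate.
import cv2
import numpy as np

def largest_skin_region(bgr: np.ndarray):
    """Return the bounding box (x, y, w, h) of the largest skin-colored component, or None."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))     # rough Cr/Cb skin range (assumed)
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    n, _, stats, _ = cv2.connectedComponentsWithStats(skin)
    if n <= 1:
        return None
    areas = stats[1:, cv2.CC_STAT_AREA]                          # skip the background label
    i = 1 + int(np.argmax(areas))
    x, y, w, h = stats[i, :4]
    return int(x), int(y), int(w), int(h)
```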


Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions:PartB / v.16B no.5 / pp.341-346 / 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. To enable its popular use, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a distinctive feature: it can be built at low cost because it does not use the expensive motion-sensing fibers employed in conventional approaches, which makes easy production and widespread use possible. The approach adopts a visual method, obtained by improving conventional optical motion capture technology, instead of a mechanical method using motion-sensing fibers. Compared to conventional visual methods, the proposed method has the following advantages and original features. First, conventional visual methods use many cameras and much equipment to reconstruct 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that allows simple and low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they are weak against occlusion, whereas the proposed approach can reconstruct occluded parts by using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image analysis algorithms, which are inconvenient in terms of initialization and computation time, whereas the proposed method removes these inconveniences by using a closed-form image analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation, which leads to low accuracy and restricted applicability due to singularities, whereas the proposed method avoids these disadvantages through an original formulation in which the closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
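
Since the closed-form algorithm above is built on exponential-form twist coordinates, the sketch below shows the standard exponential map from a twist to a 4x4 rigid transform. It is background illustration only, not the paper's pose-reconstruction algorithm.

```python
# Background sketch: exponential map from twist coordinates to a homogeneous transform.
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]], dtype=float)

def twist_exp(xi, theta):
    """Map twist coordinates xi = (v, w) and angle theta to a 4x4 rigid transform."""
    v, w = np.asarray(xi[:3], float), np.asarray(xi[3:], float)
    T = np.eye(4)
    if np.allclose(w, 0):                      # pure translation
        T[:3, 3] = v * theta
        return T
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)   # Rodrigues' formula
    T[:3, :3] = R
    T[:3, 3] = (np.eye(3) - R) @ (W @ v) + np.outer(w, w) @ v * theta
    return T
```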

Eye Tracking Using Neural Network and Mean-shift (신경망과 Mean-shift를 이용한 눈 추적)

  • Kang, Sin-Kuk;Kim, Kyung-Tai;Shin, Yun-Hee;Kim, Na-Yeon;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.1 / pp.56-63 / 2007
  • In this paper, an eye tracking method is presented that uses a neural network (NN) and the mean-shift algorithm to accurately detect and track a user's eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. Thereafter, the eye regions are localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables our method to accurately detect users' eyes even if they wear glasses. Once the eye regions are localized, they are continuously and accurately tracked by the mean-shift algorithm. To assess the validity of the proposed method, it was applied to an interface system using eye movement and tested with a group of 25 users playing an 'aligns' game. The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and provides user-friendly, convenient access to a computer in real-time operation.
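
The tracking stage described above can be illustrated with OpenCV's mean-shift, as in the sketch below: a hue histogram of the detected eye region is back-projected onto each new frame and the search window is shifted toward the density peak. The detection stage with the NN texture classifier is omitted, and the histogram settings are assumed values.

```python
# Hypothetical sketch: mean-shift tracking of an already-detected eye window.
import cv2
import numpy as np

def track_eye(frames, init_box):
    """Yield the tracked window (x, y, w, h) for each frame after the first."""
    x, y, w, h = init_box
    roi = frames[0][y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [32], [0, 180])   # hue histogram of the eye region
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    window = init_box
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(back_proj, window, term)      # shift toward the density peak
        yield window
```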

Affordance in Consideration of a Feature of Platform Action Game (플랫폼 액션 게임의 특징을 고려한 어포던스)

  • Song, Seung-Keun
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.62-69 / 2013
  • A great deal of research on affordance in HCI (Human-Computer Interaction), product design, and cognitive science has been conducted recently. In addition, the concept of affordance has been applied to games in incremental attempts to understand the relationship between gamers and systems. However, applying it to games is problematic because many researchers take into account ease of use, consistency, and usefulness, which are mainly handled in HCI, rather than the properties of the game itself. Consequently, the objective of this study is to investigate affordances in consideration of game features such as fantasy, variety, and fun, based on the concept of affordance suggested in ecological psychology. A protocol analysis was conducted through the think-aloud method on full gameplay sessions of a platform action game, the basic genre of many games. The results reveal static and movable affordances as fixed states; continuously transforming, appearing, and disappearing affordances as variable states; and both physical and cognitive affordances. The results of this research are expected to offer essential design guidelines for game design methodology.

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion using facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed that the correct classification rate over the four emotions was 62.7% when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
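
A minimal sketch of the classification step (linear discriminant analysis over per-region temperature differences) is shown below. The feature matrix and labels are random placeholders, not the study's measurements; only the region list, sample count, and four emotion classes follow the abstract.

```python
# Hypothetical sketch: LDA over per-region temperature-change features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: eyes, mouth, glabella, forehead, nose, cheeks (temperature change per region, dummy data).
X = rng.normal(0.0, 0.5, size=(231, 6))
y = rng.integers(0, 4, size=231)            # 0=anger, 1=fear, 2=boredom, 3=neutral

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")   # near chance here, since X is random
```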

Design and Performance Analysis of ML Techniques for Finger Motion Recognition (손가락 움직임 인식을 위한 웨어러블 디바이스 설계 및 ML 기법별 성능 분석)

  • Jung, Woosoon;Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems / v.25 no.2 / pp.129-136 / 2020
  • Recognizing finger movements has been used as an intuitive means of human-computer interaction. In this study, we implement a wearable device for finger motion recognition and evaluate the accuracy of several ML (machine learning) techniques. Not only the HMM (hidden Markov model) and DTW (dynamic time warping) techniques traditionally used for time-series data analysis, but also an NN (neural network) technique, are applied so that the accuracy of each can be compared and analyzed. To minimize the computational requirements, we also apply pre-processing to each ML technique. Our extensive evaluations demonstrate that the NN-based gesture recognition system achieves 99.1% recognition accuracy, while the HMM and DTW achieve 96.6% and 95.9%, respectively.
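
As one illustration of the baselines compared above, the sketch below shows a plain dynamic-programming DTW distance and a nearest-template classifier built on it. It is a generic textbook version under simple assumptions (1-D sequences, absolute-difference cost), not the authors' implementation.

```python
# Generic sketch: DTW distance and nearest-template classification.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two 1-D sequences (O(n*m) DP)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify(query, templates):
    """Label a query with the class of its closest template; templates: list of (sequence, label)."""
    return min(templates, key=lambda t: dtw_distance(query, t[0]))[1]
```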

Design requirements of mediating device for total physical response - A protocol analysis of preschool children's behavioral patterns (체감형 학습을 위한 매개 디바이스의 디자인 요구사항 - 프로토콜 분석법을 통한 미취학 아동의 행동 패턴 분석)

  • Kim, Yun-Kyung;Kim, Hyun-Jeong;Kim, Myung-Suk
    • Science of Emotion and Sensibility / v.13 no.1 / pp.103-110 / 2010
  • TPR (Total Physical Response) is a representative new learning method for children's education. Today's approaches to TPR focus on signals from the user that become input data in human-computer interaction, but sensing of body signals (e.g., motion and voice) is not yet accurate enough to apply reliably in an education system. To overcome these limits, we suggest a mediating interface device that can detect the user's motion using precise numerical values such as acceleration and angular speed. In addition, we derive new design requirements for the mediating device by analyzing children's behavior as human factors through ethnographic research and protocol analysis. As a result, we found that children are unskilled at physical control when they use objects, tend to lean on an object unconsciously through touch, and behave in a restricted way when they use objects. Therefore, a mediating device should satisfy new design requirements: it should compensate for unskilled handling and support familiar and natural physical activity.
