• Title/Abstract/Keyword: gesture

933 search results (processing time 0.021 seconds)

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho; Lim, Ji-Hyoun; Moon, Sung-Hyun
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.551-556 / 2012
  • Objective: This study presents the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it uses advanced sensor technology and allows users to interact naturally with their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while being given the three types of visual feedback in turn. Results: Participants made longer hand traces and took more time to complete a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was the most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. Large feedback produces a shorter motion trace, less time, and faster motion than smaller feedback when people perform gestures to control a device, so large visual feedback is recommended for situations requiring fast actions, whereas smaller visual feedback is recommended for situations requiring elaborate actions.

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min; Lee, Jeong-Ju
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.499-505 / 2012
  • Objective: This study examines how interaction between mobile devices and users is being transformed, through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and now touch and gesture recognition are being researched and used. In the future, the technology is expected to evolve into multi-modal interfaces that fuse the visual and auditory senses, and into 3D multi-modal interfaces that use three-dimensional virtual worlds and brain waves. Method: Within the development of computer interfaces, which follows the evolution of mobile devices, the trends and development of actively researched gesture interfaces and related technologies are studied comprehensively. Based on how gesture information is gathered, they are separated into four categories: sensor-based, touch-based, visual, and multi-modal gesture interfaces. Each category is examined through its technology trends and existing real-world examples. Through these methods, the transformation of the interaction between mobile devices and humans is studied. Conclusion: Gesture-based interface technology adds intelligent communication to the interaction between existing static machines and users; thus, it is an important element technology that will make the interaction between humans and machines more dynamic. Application: The results of this study may help to develop gesture interface designs currently in use.

Hand Gesture Recognition Suitable for Wearable Devices using Flexible Epidermal Tactile Sensor Array

  • Byun, Sung-Woo; Lee, Seok-Pil
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 4 / pp.1732-1739 / 2018
  • With the explosion of digital devices, interaction technologies between humans and devices are required more than ever. Hand gesture recognition is particularly advantageous in that it can be used easily. It is divided into two groups: contact-sensor and non-contact-sensor approaches. Compared with non-contact gesture recognition, the advantage of contact gesture recognition is that it can classify gestures that disappear from the sensor's sight; also, since there is direct contact with the user, relatively accurate information can be acquired. Electromyography (EMG) and force-sensitive resistors (FSRs) are the typical methods used for contact gesture recognition based on muscle activity. These sensors, however, are generally too sensitive to environmental disturbances such as electrical noise and electromagnetic signals. In this paper, we propose a novel contact gesture recognition method based on a Flexible Epidermal Tactile Sensor Array (FETSA) that measures electrical signals according to movements of the wrist. To recognize gestures using FETSA, we extracted feature sets, and the gestures were subsequently classified using a support vector machine. The performance of the proposed gesture recognition method is very promising in comparison with two previous non-contact and contact gesture recognition studies.
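
The abstract above does not give the FETSA feature set or classifier settings, so the following is only a hedged sketch of the feature-extraction-plus-SVM stage it describes, using common time-domain features (mean absolute value, RMS, waveform length) as assumed placeholders.

```python
# Hypothetical sketch of a feature-extraction + SVM pipeline for windowed
# multi-channel tactile signals; the feature set and channel count are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, n_channels) segment of tactile sensor signals."""
    mav = np.mean(np.abs(window), axis=0)                   # mean absolute value per channel
    rms = np.sqrt(np.mean(window ** 2, axis=0))             # root mean square per channel
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)    # waveform length per channel
    return np.concatenate([mav, rms, wl])

def train_gesture_classifier(windows, labels):
    """windows: list of (n_samples, n_channels) arrays; labels: gesture classes."""
    X = np.stack([extract_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```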

The Effects of Visual Stimulation and Body Gesture on Language Learning Achievement and Course Interest

  • CHOI, Dongyeon; KIM, Minjeong
    • Educational Technology International / Vol. 16, No. 2 / pp.141-166 / 2015
  • The purpose of this study was to examine the effects of using visual stimulation and gesture, namely embodied language learning, on learning achievement and learners' course interest in the EFL classroom. To investigate this, thirty-two third-grade elementary school students participated and were assigned to four English learning class conditions (i.e., animated graphics with gestures, animated graphics only, still pictures with gestures, and a control condition). The research questions for this study were: (1) What differences are there in post and delayed learning achievement between the gesture-imitating group and the non-imitating one, and between the animated graphic group and the still picture one? (2) What differences are there in course interest between the gesture-imitating group and the non-imitating one, and between the animated graphic group and the still picture one? The embodiment-based English learning system for this study was designed using Microsoft's Kinect sensing devices. The results revealed that students in the gesture-imitating group memorized and retained words and sentence structures better than those in the other groups. As for the course interest measurement, the gesture-imitating group showed a highly positive response on attention, relevance, and satisfaction with the curriculum, and the use of animated graphics influenced satisfaction as well. This finding can be attributed to embodied cognition, which proposes that the body and the mind are inseparable in the constitution of cognition; thus students using visual stimulation and imitating related gestures regard the embodied language learning approach as more satisfactory and acceptable than conventional ones.

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung; Pan, Younghwan
    • 대한인간공학회지 / Vol. 34, No. 5 / pp.411-426 / 2015
  • Objective: The goal of this study is to design an interaction structure and framework for a sign language recognition system. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. Therefore, in order to interpret the meaning of an individual gesture correctly, an interaction structure and framework are needed that can segment out individual gestures. Method: We analyzed 700 sign language words to structure the sign language gesture interaction. First, we analyzed the transformational patterns of the hand shape. Second, we analyzed the movement of these transformational patterns. Third, we analyzed the types of gestures other than hand gestures. Based on this, we designed a framework for sign language interaction. Results: We elicited 8 hand-shape patterns based on whether the shape changes from the starting point to the ending point of the gesture. We then analyzed hand movement based on 3 elements: the pattern of movement, its direction, and whether the movement repeats. Moreover, we defined 11 movements of gestures other than hand gestures and classified 8 types of interaction. The framework for sign language interaction designed on this basis applies to more than 700 individual signs and can classify an individual gesture even within a sequence of continuous gestures. Conclusion: For sign language interaction, this study structured 3 aspects: the transformational patterns of the hand shape from starting point to ending point, hand movement, and gestures other than the hands. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture appears within a sequence of continuous gestures. Application: The interaction framework can be applied when developing a sign language recognition system. The structured gestures can be used for building a sign language database, developing an automatic recognition system, and studying action gestures in other areas.
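
As a rough illustration only, the sketch below encodes the elements named in the abstract (8 hand-shape transformation patterns; movement described by pattern, direction, and repetition; gestures other than the hands; 8 interaction types) as a data structure a recognition system could work with. The concrete labels and field names are hypothetical, not taken from the paper.

```python
# Hypothetical encoding of the sign-language gesture framework described above.
from dataclasses import dataclass
from enum import IntEnum

class HandShapePattern(IntEnum):
    """One of the 8 patterns, distinguished by whether the hand shape changes
    between the starting point and the ending point of the gesture."""
    PATTERN_1 = 1
    PATTERN_2 = 2
    # ... patterns 3 to 7 omitted for brevity ...
    PATTERN_8 = 8

@dataclass
class Movement:
    pattern: str        # movement pattern (placeholder labels, e.g. "straight", "arc")
    direction: str      # movement direction (e.g. "up", "forward")
    repeating: bool     # whether the movement is repeated

@dataclass
class SignGesture:
    hand_shape: HandShapePattern   # one of the 8 hand-shape transformation patterns
    movement: Movement             # pattern / direction / repetition
    non_hand: list                 # movements other than the hands (11 defined types)
    interaction_type: int          # one of the 8 interaction classes
```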

A Dynamic Hand Gesture Recognition System Incorporating Orientation-based Linear Extrapolation Predictor and Velocity-assisted Longest Common Subsequence Algorithm

  • Yuan, Min; Yao, Heng; Qin, Chuan; Tian, Ying
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 9 / pp.4491-4509 / 2017
  • This paper proposes a novel dynamic hand gesture recognition system. The approach comprises three main steps: detection, tracking, and recognition. First, the gesture contour captured by a 2D camera is detected by combining the three-frame difference method with a skin-color elliptic boundary model. Then, the trajectory of the hand gesture is extracted via a gesture-tracking algorithm based on an occlusion-direction-oriented linear extrapolation predictor, in which the gesture coordinate in the next frame is predicted by judging the current occlusion direction. Finally, to overcome the interference of insignificant trajectory segments, the longest common subsequence (LCS) is employed with the aid of velocity information. In addition, to tackle the sub-gesture problem, i.e., that some gestures may also be part of others, the most probable gesture category is identified by comparing the relative LCS length of each gesture, i.e., the proportion between the LCS length and the total length of each template, rather than the raw LCS length. The gesture dataset used for the system performance test contains the digits 0 to 9, and experimental results demonstrate the robustness and effectiveness of the proposed approach.
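
A simplified sketch of the matching step described above: the quantized trajectory is compared against each digit template with the longest common subsequence (LCS), and the class is chosen by the relative LCS length (LCS length divided by template length), which is the abstract's remedy for the sub-gesture problem. The velocity weighting and the direction-quantization details are omitted here and would need to follow the paper.

```python
# Classic LCS plus relative-length classification; direction symbols are assumed
# to have been produced by an upstream quantization step.
def lcs_length(a, b):
    """Dynamic-programming length of the longest common subsequence of a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def classify(trajectory, templates):
    """trajectory: symbol sequence; templates: {label: template symbol sequence}."""
    # Relative LCS length penalizes long templates whose prefix happens to match,
    # which is how the sub-gesture ambiguity is reduced.
    scores = {label: lcs_length(trajectory, tmpl) / len(tmpl)
              for label, tmpl in templates.items()}
    return max(scores, key=scores.get)
```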

Marionette Control System using Gesture Mode Change (제스처 할당 모드를 이용한 마리오네트 조정 시스템)

  • 천경민; 곽수희; 류근호
    • 제어로봇시스템학회논문지 / Vol. 21, No. 2 / pp.150-156 / 2015
  • In this paper, a marionette control system using wrist and finger gestures captured through an IMU sensor is studied. The signals from the sensor device are conditioned and recognized, and the commands are then sent via Bluetooth to the 8 motors of the marionette (5 motors control the motion of the marionette, and 3 motors control its location). It turns out that the degrees of freedom of the fingers are not independent of each other, so some gestures can hardly be made. Gesture mode changes are therefore proposed for finger postures that are difficult because of this lack of finger DOF: the gesture mode change switches the assignment of a gesture as required. Experimental results show that the gesture mode change successfully produces the appropriate marionette postures.
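
A minimal sketch of the gesture mode change idea: the same recognized gesture maps to different motor commands depending on the currently selected mode, so postures that the fingers cannot form independently can simply be reassigned. The gesture names, the mode-switch gesture, and the motor indices below are illustrative assumptions, not details from the paper.

```python
# Motors 0-4 drive the marionette's motion and motors 5-7 its location, per the
# abstract; everything else here is an assumed placeholder.
GESTURE_MAP = {
    "motion":   {"pinch": (0, +10), "wrist_up": (1, +10)},   # motion motors 0-4
    "location": {"pinch": (5, +10), "wrist_up": (6, +10)},   # location motors 5-7
}

class MarionetteController:
    """Maps a recognized gesture to a (motor, step) command under the current mode."""

    def __init__(self):
        self.mode = "motion"

    def handle(self, gesture: str):
        if gesture == "mode_switch":   # a dedicated gesture toggles the assignment mode
            self.mode = "location" if self.mode == "motion" else "motion"
            return None
        # The returned command would then be sent to the motors over Bluetooth.
        return GESTURE_MAP[self.mode].get(gesture)
```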

Gesture Recognition using Training-effect on image sequences (연속 영상에서 학습 효과를 이용한 제스처 인식)

  • 이현주; 이칠우
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 하계종합학술대회 논문집(4) / pp.222-225 / 2000
  • Humans frequently communicate non-linguistic information through gestures, so efficient and fast gesture recognition algorithms are needed for more natural human-computer interaction. However, recognizing gestures automatically is difficult because the human body is a three-dimensional object with a very complex structure. In this paper, we suggest a method that detects key frames and frame changes and classifies an image sequence into gesture groups. Gestures can be classified according to the moving part of the body. First, we detect frames in which the motion areas change abruptly, save them as key frames, and use them to classify sequences. We then symbolize each image of a classified sequence using Principal Component Analysis (PCA) and a clustering algorithm, since it is better to use fewer components to represent gestures. The symbols are used as input symbols for a Hidden Markov Model (HMM), and the gesture is recognized by probability calculation.
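
A rough sketch of the symbolization and scoring pipeline described above: frames are projected with PCA, clustered into discrete symbols, and the resulting symbol sequence is scored against a gesture's HMM with a plain forward pass. The number of components and clusters and the unscaled forward algorithm are assumptions made for illustration.

```python
# PCA + clustering turn image frames into discrete symbols; the forward algorithm
# then gives log P(symbols | one gesture's HMM parameters).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def fit_symbolizer(frames: np.ndarray, n_components: int = 10, n_symbols: int = 16):
    """frames: (n_frames, height * width) flattened grayscale images."""
    pca = PCA(n_components=n_components).fit(frames)
    kmeans = KMeans(n_clusters=n_symbols, n_init=10).fit(pca.transform(frames))
    return pca, kmeans

def symbolize(frames: np.ndarray, pca: PCA, kmeans: KMeans) -> np.ndarray:
    """Turn an image sequence into the discrete observation symbols for the HMM."""
    return kmeans.predict(pca.transform(frames))

def forward_log_likelihood(obs, start_p, trans_p, emit_p) -> float:
    """Unscaled forward algorithm (fine for short sequences; long ones need scaling)."""
    alpha = start_p * emit_p[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]
    return float(np.log(alpha.sum()))
```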


A Study on Dynamic Hand Gesture Recognition Using Neural Networks (신경회로망을 이용한 동적 손 제스처 인식에 관한 연구)

  • 조인석; 박진현; 최영규
    • 대한전기학회논문지:시스템및제어부문D / Vol. 53, No. 1 / pp.22-31 / 2004
  • This paper deals with dynamic hand gesture recognition based on computer vision using neural networks. It proposes a global search method and a local search method to recognize the hand gesture. The global search recognizes a hand among the hand candidates by searching the entire image, and the local search recognizes and tracks only the hand through a block search. The dynamic hand gesture recognition method is based on skin-color and shape analysis using invariant moments and direction information. The starting point and ending point of the dynamic hand gesture are obtained from the hand shape. Experiments have been conducted for hand extraction, hand recognition, and dynamic hand gesture recognition, and the experimental results show the validity of the proposed method.
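
An illustrative sketch of the kind of hand-extraction features mentioned above, combining a skin-color mask with Hu invariant moments as the shape descriptor. The YCrCb skin thresholds and the use of OpenCV are assumptions for this sketch, not details taken from the paper.

```python
# Skin-color segmentation followed by translation/scale/rotation-invariant Hu moments.
import cv2
import numpy as np

SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)     # illustrative YCrCb lower bound
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)  # illustrative YCrCb upper bound

def hand_shape_features(bgr_frame: np.ndarray) -> np.ndarray:
    """Return the 7 Hu invariant moments of the skin-colored region of a frame."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)       # binary skin-color mask
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
    # Log-scale the moments for numerical stability while preserving their sign.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```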

PC User Authentication using Hand Gesture Recognition and Challenge-Response

  • Shin, Sang-Min; Kim, Minsoo
    • 한국정보기술학회 영문논문지 / Vol. 8, No. 2 / pp.79-87 / 2018
  • Current PC user authentication uses a character password based on the user's knowledge. However, this can easily be exploited by password-cracking or key-logging programs. In addition, using a difficult password and changing it periodically makes the password hard to remember, so users often end up exposing it around the PC. To overcome this, we propose authentication based on hand gesture recognition and challenge-response. We apply the user's hand gestures instead of a character password. In the challenge-response method, authentication is performed by responding to a quiz, rather than by using the same password every time. To apply hand gestures to challenge-response authentication, each gesture is recognized and symbolized so that it can be used in the quiz response. We show that this method can be applied to PC user authentication.
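
A minimal sketch of the challenge-response idea described above: instead of a fixed password, the verifier issues a fresh random challenge and the user answers it with recognized gesture symbols, so the response differs every time. The enrollment format, gesture alphabet, and challenge scheme here are illustrative assumptions, not the paper's protocol.

```python
# Toy verifier: challenge = random positions of the enrolled gesture sequence,
# response = the gesture symbols the user performs for those positions.
import secrets

# Assumed result of an enrollment step where the user registers a gesture sequence.
ENROLLED = ["palm", "fist", "ok", "point", "wave"]

def issue_challenge(k: int = 3) -> list:
    """Verifier picks k random positions of the enrolled sequence as the quiz."""
    return [secrets.randbelow(len(ENROLLED)) for _ in range(k)]

def verify(challenge, recognized) -> bool:
    """Accept only if the recognized gestures match the enrolled ones at those positions."""
    expected = [ENROLLED[i] for i in challenge]
    return list(recognized) == expected

# Usage: positions = issue_challenge(); the user performs the corresponding gestures,
# the recognizer symbolizes them, and verify(positions, symbols) decides authentication.
```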