• Title/Summary/Keyword: Gesture Analysis

Speech-Oriented Multimodal Usage Pattern Analysis for TV Guide Application Scenarios (TV 가이드 영역에서의 음성기반 멀티모달 사용 유형 분석)

  • Kim Ji-Young;Lee Kyong-Nim;Hong Ki-Hyung
    • MALSORI
    • /
    • no.58
    • /
    • pp.101-117
    • /
    • 2006
  • The development of efficient multimodal interfaces and fusion algorithms requires knowledge of usage patterns that show how people use multiple modalities. We analyzed multimodal usage patterns for TV-guide application scenarios (or tasks). To collect usage patterns, we implemented a collection system with two input modalities: speech and touch-gesture. Fifty-four subjects participated in our study. Analysis of the collected usage patterns shows a positive correlation between task type and multimodal usage pattern. In addition, we analyzed the timing between speech utterances and their corresponding touch-gestures, i.e., when a touch-gesture occurs relative to the duration of the speech utterance. We believe that, before developing efficient multimodal fusion algorithms for an application, a multimodal usage pattern analysis for that application, similar to our work for the TV guide application, has to be carried out.

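One concrete way to quantify the speech/touch timing relationship analyzed above is to normalize each touch-gesture onset by the start and duration of its corresponding speech utterance. The sketch below is an illustrative calculation only; the field names and the classification thresholds are assumptions, not values taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    speech_start: float   # seconds
    speech_end: float     # seconds
    touch_time: float     # seconds (onset of the touch-gesture)

def relative_touch_offset(ev: Interaction) -> float:
    """Touch onset relative to the speech utterance:
    0.0 = at speech onset, 1.0 = at speech offset,
    <0 = touch before speech, >1 = touch after speech."""
    duration = ev.speech_end - ev.speech_start
    return (ev.touch_time - ev.speech_start) / duration

def classify_pattern(ev: Interaction) -> str:
    """Coarse sequential/simultaneous labelling (thresholds are illustrative)."""
    r = relative_touch_offset(ev)
    if r < 0.0:
        return "touch-before-speech"
    if r > 1.0:
        return "touch-after-speech"
    return "overlapping (simultaneous)"

# Example usage
ev = Interaction(speech_start=2.0, speech_end=3.5, touch_time=3.8)
print(relative_touch_offset(ev), classify_pattern(ev))  # 1.2 touch-after-speech
```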

Gesture Recognition Using Higher Correlation Feature Information and PCA

  • Kim, Jong-Min;Lee, Kee-Jun
    • Journal of Integrative Natural Science
    • /
    • v.5 no.2
    • /
    • pp.120-126
    • /
    • 2012
  • This paper describes an algorithm that lowers dimensionality, preserves gesture recognition performance and significantly reduces the eigenspace construction time by combining higher-correlation feature information with Principal Component Analysis. Because the suggested method requires much less computation than methods using geometric information or stereo images, experiments show that it is well suited to building real-time systems. In addition, since the existing point-to-point method, a simple distance calculation, produces many errors, the recognition error is reduced by using several successive input images as one recognition unit with a K-Nearest Neighbor classifier, an improved class-to-class method, thereby improving the recognition rate.
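
The core of the approach, projecting gesture images into a PCA eigenspace and classifying several successive frames as one unit with K-NN, can be sketched roughly as below. This is a generic reconstruction, not the authors' code; the feature pipeline and the majority-vote aggregation over frames are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# X_train: (n_samples, n_pixels) flattened gesture images, y_train: gesture labels
def train(X_train, y_train, n_components=20, k=3):
    pca = PCA(n_components=n_components)          # build the eigenspace
    Z = pca.fit_transform(X_train)                # project training images
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(Z, y_train)
    return pca, knn

def recognize_sequence(pca, knn, frames):
    """Classify several successive frames as one recognition unit
    (here: majority vote over per-frame K-NN decisions)."""
    Z = pca.transform(np.asarray(frames))         # project each frame
    votes = knn.predict(Z)
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```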

On-line Korean Sign Language (KSL) Recognition Using Fuzzy Min-Max Neural Network and Feature Analysis

  • Bien, Zeungnam;Kim, Jong-Sung
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1995.10b
    • /
    • pp.85-91
    • /
    • 1995
  • This paper presents a system which recognizes Korean Sign Language (KSL) and translates it into normal Korean speech. A sign language is a method of communication for deaf people that uses gestures, especially of the hands and fingers. Since human hands and fingers differ in physical dimensions, the same gesture produced by two signers may not yield the same numerical values when measured through electronic sensors. In this paper, we propose a dynamic gesture recognition method based on feature analysis for efficient classification of hand motions, and on a fuzzy min-max neural network for on-line pattern recognition.

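The fuzzy min-max classifier referred to here is commonly built from hyperboxes, each defined by a per-class min point and max point, with a fuzzy membership function that degrades gradually outside the box. The sketch below shows one common formulation of that membership/decision step under those assumptions; it is not the authors' implementation and omits the on-line hyperbox expansion/contraction learning.

```python
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Fuzzy membership of sample x in a hyperbox with min point v and max point w.
    Returns 1 inside the box and decays with distance outside (sensitivity gamma)."""
    below = np.maximum(0.0, v - x)        # how far x falls below the min point
    above = np.maximum(0.0, x - w)        # how far x exceeds the max point
    per_dim = 1.0 - np.minimum(1.0, gamma * (below + above))
    return per_dim.mean()

def classify(x, hyperboxes):
    """hyperboxes: list of (v, w, label). Pick the label of the best-matching box."""
    best = max(hyperboxes, key=lambda box: membership(x, box[0], box[1]))
    return best[2]

# Example: one hyperbox per class over normalized glove features in [0, 1]
boxes = [(np.array([0.1, 0.2]), np.array([0.3, 0.4]), "gesture_A"),
         (np.array([0.6, 0.6]), np.array([0.9, 0.8]), "gesture_B")]
print(classify(np.array([0.25, 0.35]), boxes))   # -> gesture_A
```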

An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.561-568
    • /
    • 2006
  • The WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal of ubiquitous computing that includes information processing and network functions and overcomes spatial limitations in acquiring new information. As a way to acquire meaningful dynamic gesture data from haptic devices, a traditional desktop-PC gesture recognizer using a wired communication module has several restrictions, such as dependence on a fixed space, complexity of the transmission media (cabling), limitation of motion and inconvenience of use. Accordingly, to overcome these problems, we implement a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post PC (an embedded ubiquitous environment using a Bluetooth module and the WPS). We also propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of the performance of each recognition system. The proposed gesture recognition system consists of three modules: 1) a gesture input module that converts dynamic hand motion into input data, 2) a Relational Database Management System (RDBMS) module that segments significant gestures from the input data, and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures within continuous/dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.
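
The three-module structure described in the abstract (input, segmentation, recognition) can be pictured as a simple pipeline. The sketch below is only an architectural illustration under assumed interfaces; the actual glove data format, segmentation rule and recognizers are not specified in the abstract.

```python
from typing import Callable, Iterable, List, Sequence

Frame = Sequence[float]   # one sample of glove sensor values (assumed format)

def input_module(raw_stream: Iterable[Frame]) -> List[Frame]:
    """1) Gesture input module: collect dynamic hand-motion samples (e.g. over Bluetooth)."""
    return list(raw_stream)

def segmentation_module(frames: List[Frame], is_moving: Callable[[Frame], bool]) -> List[List[Frame]]:
    """2) Segmentation module: split the stream into significant gesture segments
    (here: contiguous runs of 'moving' frames; the paper uses an RDBMS for this step)."""
    segments, current = [], []
    for f in frames:
        if is_moving(f):
            current.append(f)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def recognition_module(segment: List[Frame], classifier: Callable[[List[Frame]], str]) -> str:
    """3) Recognition module: fuzzy max-min or neural-network classifier plugged in here."""
    return classifier(segment)
```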

A Research for Interface Based on EMG Pattern Combinations of Commercial Gesture Controller (상용 제스처 컨트롤러의 근전도 패턴 조합에 따른 인터페이스 연구)

  • Kim, Ki-Chang;Kang, Min-Sung;Ji, Chang-Uk;Ha, Ji-Woo;Sun, Dong-Ik;Xue, Gang;Shin, Kyoo-Sik
    • Journal of Engineering Education Research
    • /
    • v.19 no.1
    • /
    • pp.31-36
    • /
    • 2016
  • These days, ICT-related products are pouring out due to the development of mobile technology and the spread of smartphones. Among them, wearable devices are in the spotlight with the advent of the hyper-connected society. In this paper, a body-attached wearable device using EMG (electromyography) sensors is studied. Research on EMG sensors falls into two areas: the medical area and the control-device area. This study belongs to the latter, i.e., methods of transmitting the user's manipulation intention to robots, games or computers through EMG measurement. We used the commercial MYO device developed by Thalmic Labs in Canada and matched the EMG of arm muscles with the gesture controller. In the experiment, various arm motions for controlling devices are first defined. Finally, we identified several distinguishable motions through analysis of the EMG signals and used them to replace a joystick.
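
A common way to turn armband EMG into joystick-like commands is to compute a short-window amplitude feature (e.g. RMS) per channel and match the resulting pattern against motion templates. The sketch below only illustrates that idea; the thresholds, template matching and motion-to-command mapping are assumptions, not values taken from the paper.

```python
import numpy as np

def rms_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, n_channels) raw EMG; returns one RMS value per channel."""
    return np.sqrt(np.mean(window.astype(float) ** 2, axis=0))

def classify_motion(features: np.ndarray, templates: dict, reject_threshold: float = 0.5) -> str:
    """Nearest-template matching of the per-channel RMS pattern.
    templates: {motion_name: reference RMS vector}; returns 'rest' if nothing is close."""
    best_name, best_dist = "rest", float("inf")
    for name, ref in templates.items():
        d = float(np.linalg.norm(features - ref))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < reject_threshold else "rest"

# Hypothetical mapping from a recognized motion to a joystick-style command
COMMANDS = {"wrist_flex": "LEFT", "wrist_extend": "RIGHT", "fist": "SELECT", "rest": "NONE"}
```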

Hand Gesture Sequence Recognition using Morphological Chain Code Edge Vector (형태론적 체인코드 에지벡터를 이용한 핸드 제스처 시퀀스 인식)

Lee, Kang-Ho;Choi, Jong-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.4 s.32
    • /
    • pp.85-91
    • /
    • 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction, which has motivated a very active research area concerned with computer-vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are simplification of the algorithm and reduction of processing time. Mathematical morphology, based on geometrical set theory, is well suited to this processing. The key idea of the proposed algorithm is to track the trajectory of the center points of the primitive elements extracted by morphological shape decomposition. The trajectory of morphological center points carries information about shape orientation. Based on this characteristic, we propose a morphological gesture sequence recognition algorithm using feature vectors calculated from the trajectory of morphological center points. Experiments demonstrate the efficiency of the proposed algorithm.

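The edge-vector/chain-code idea, encoding the trajectory of successive center points as a sequence of quantized directions, can be sketched as below. This is a generic 8-direction chain-code encoder, offered as an illustration rather than the paper's exact feature definition.

```python
import math
from typing import List, Tuple

def chain_code(points: List[Tuple[float, float]]) -> List[int]:
    """Encode a trajectory of center points as an 8-direction chain code:
    0 = east, 2 = north, 4 = west, 6 = south (counter-clockwise)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)          # direction of each step
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

# Example: an L-shaped trajectory -> mostly east (0) then north (2)
trajectory = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(chain_code(trajectory))   # [0, 0, 2, 2]
```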

Analysis of Movement in Shirley: Visions of Reality (2013) (<셜리에 관한 모든 것>(2013)에 나타난 움직임 분석)

  • Moon, Jae-Cheol;Lee, Jin-Young
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.6
    • /
    • pp.43-52
    • /
    • 2020
  • This paper is a study of Gustav Deutsch's film Shirley: Visions of Reality (2013). The film turns the paintings of Edward Hopper into an homage, giving the impression that the pictures are moving. In this regard, it raises the issue of 'remediation' between film and painting. In this study, we ask how the film dealt with movement in turning Hopper's paintings into cinema. To this end, we look at two aspects of movement: the actor's movement and the screen's movement, drawing on the concepts of the tableau vivant and Giorgio Agamben's notions of gesture and mediation. The actor's movement in the film is not an act of creating and developing events; it is a gesture that presents the body and expression themselves. It is not story-oriented acting, but gesture in Agamben's sense. Editing and camera movements are used while maintaining frontality, which suggests that the movement of the screen stands in for the eye of the audience. At first glance, this embodies the voyeuristic gaze of the original paintings. But the audience is not looking at the image unilaterally, as in mainstream fiction films; they are also being seen by that image. Also, the camera's movement toward the details of the screen shows the movement itself rather than serving merely as a means to reveal those details. The 'vision of reality' in a film is made through movement. The film questions the vision of reality between painting and film, between words and images. Movement is a means of mediating reality, but by revealing its own mediated nature the film regains the 'lost gesture' Agamben once described. This tells us that the vision of reality appears when its mediated nature is obscured.

The Relationship between the Mental Model and the Depictive Gestures Observed in the Explanations of Elementary School Students about the Reason Why Seasons change (계절의 변화 원인에 대한 초등학생들의 설명에서 확인된 정신 모델과 묘사적 몸짓의 관계 분석)

  • Kim, Na-Young;Yang, Il-Ho;Ko, Min-Seok
    • Journal of the Korean Society of Earth Science Education
    • /
    • v.7 no.3
    • /
    • pp.358-370
    • /
    • 2014
  • The purpose of this study is to analyze the relationship between the mental models and the depictive gestures observed in elementary school students' explanations of why seasons change. Analyzing the gestures associated with each mental model, the CM-type remembered the mental model as "motion" and showed more "exophoric" gestures that express the gesture itself as language. The CF-type remembered it in "writings or pictures," and used metaphoric gestures when explaining some alternative concepts. The CF-UM type explained in detail with language and showed many "lexical" gestures. Analyzing the depictive gestures, even within sub-categories such as rotation, revolution and meridian altitude, a great many kinds of gestures were expressed, such as indicating with fingers, palms, arms, ball-point pens or fists, or drawing, spinning and pointing. Through this we could check the students' understanding of the concepts. In addition, as we analyzed inconsistencies among external representations, such as between spoken language and gesture, writing and gesture, and pictures and gesture, we found that gestures can help in understanding students' mental models, and that information which cannot be conveyed by verbal explanations or pictures is sometimes expressed in gestures. Additionally, we examined two participants who showed conspicuous differences: one seemed to be wrong because he used his own expressions, yet his gestures were precise, while the other seemed to be accurate, yet analysis of his gestures revealed fanciful conceptions.

A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications
    • /
    • v.9 no.1
    • /
    • pp.57-64
    • /
    • 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that do not use any physical devices. One of them is vision-based gesture recognition, which this paper deals with. We define gestures for interacting with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from low-resolution hand images captured by a webcam. Skin color is modeled by two Gaussian distributions in RGB color space, and a blob-matching method is used to detect and track the hands. Applying the flood-fill algorithm, we extract hand silhouettes and recognize the hand shapes Thumb-Up, Palm and Cross by detecting and analyzing their modes. Then, by analyzing the context of hand movement, we recognize five predefined one-hand or both-hand gestures. Assuming that a single main user is present for accurate hand detection, the proposed gesture recognition method has proved efficient and accurate in many real-time demos.
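
The preprocessing step described above, a two-Gaussian skin-color model in RGB followed by flood-fill silhouette extraction, can be approximated as below. The mixture parameters and threshold are placeholders to be fit on real skin samples, and OpenCV's floodFill stands in for whatever implementation the paper used.

```python
import cv2
import numpy as np

def skin_probability(img_rgb: np.ndarray, means, covs, weights) -> np.ndarray:
    """Per-pixel skin likelihood under a 2-component Gaussian mixture in RGB.
    means/covs/weights describe the two components (fit on training skin pixels)."""
    h, w, _ = img_rgb.shape
    x = img_rgb.reshape(-1, 3).astype(np.float64)
    prob = np.zeros(x.shape[0])
    for mu, cov, pi in zip(means, covs, weights):
        diff = x - mu
        inv, det = np.linalg.inv(cov), np.linalg.det(cov)
        expo = -0.5 * np.sum(diff @ inv * diff, axis=1)       # Mahalanobis term
        prob += pi * np.exp(expo) / np.sqrt(((2 * np.pi) ** 3) * det)
    return prob.reshape(h, w)

def hand_silhouette(img_rgb, seed, skin_thresh, means, covs, weights) -> np.ndarray:
    """Threshold the skin map, then flood-fill from a seed inside the tracked hand blob."""
    mask = (skin_probability(img_rgb, means, covs, weights) > skin_thresh).astype(np.uint8) * 255
    ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)  # floodFill needs a +2 border
    cv2.floodFill(mask, ff_mask, seed, 128)                   # relabel the connected skin region
    return (mask == 128).astype(np.uint8)                     # 1 = hand silhouette
```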