• Title/Summary/Keyword: Gesture Input


Recognizing Human Facial Expressions and Gesture from Image Sequence (연속 영상에서의 얼굴표정 및 제스처 인식)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research
    • /
    • v.20 no.4
    • /
    • pp.419-425
    • /
    • 1999
  • In this paper, we present a real-time algorithm for recognizing facial expressions and gestures in gray-level image sequences. A combination of template matching and knowledge-based geometric analysis of the face is used to locate the face region in the input image, and an optical-flow method is then applied to that region to recognize facial expressions. We also propose a hand-region detection algorithm that separates the hand from the background by analyzing image entropy; with this modified detector, hand gestures can be recognized as well. The experiments showed that the proposed algorithm recognizes facial expressions and hand gestures well by detecting the dominant motion region in the images, without being constrained by the background (see the illustrative sketch below).

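The pipeline above combines template-matching face localization, optical flow on the face region, and entropy-based hand-region detection. Below is a minimal sketch of those three steps using OpenCV; the face template, flow parameters, and histogram bin count are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the paper's code): locate a face by template matching,
# estimate motion in that region with dense optical flow, and score a candidate
# hand region by gray-level entropy. Template and thresholds are assumed.
import cv2
import numpy as np

def locate_face(gray, face_template):
    """Return the top-left corner and size of the best template match."""
    res = cv2.matchTemplate(gray, face_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    h, w = face_template.shape
    return max_loc[0], max_loc[1], w, h

def face_motion(prev_gray, gray, box):
    """Mean optical-flow magnitude inside the face box (Farneback dense flow)."""
    x, y, w, h = box
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y:y+h, x:x+w],
                                        gray[y:y+h, x:x+w],
                                        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())

def region_entropy(patch, bins=32):
    """Shannon entropy of the gray-level histogram; higher values suggest a
    textured foreground (e.g. a hand) rather than a flat background."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```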

Stroke Based Hand Gesture Recognition by Analyzing a Trajectory of Polhemus Sensor (Polhemus 센서의 궤적 정보 해석을 이용한 스트로크 기반의 손 제스처 인식)

  • Kim, In-Cheol;Lee, Nam-Ho;Lee, Yong-Bum;Chien, Sung-Il
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.36C no.8
    • /
    • pp.46-53
    • /
    • 1999
  • We have developed a glove-based hand gesture recognition system for recognizing the 3D gestures of operators in a remote work environment. A Polhemus sensor attached to a PinchGlove is used to obtain the sequence of 3D positions along a hand trajectory, and these 3D data are encoded as input to the recognition system. We propose using strokes, modeled by HMMs, as the basic units: gesture models are constructed by concatenating stroke HMMs, so HMMs for newly defined gestures can be created without retraining their parameters. Using stroke models rather than whole-gesture models therefore improves the extensibility of the system. Experimental results for 16 different gestures show that our stroke-based composite HMM outperforms a conventional gesture-based HMM (see the illustrative sketch below).

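The key idea above is composing gesture HMMs from reusable stroke HMMs so that new gestures need no retraining. The sketch below shows one way to concatenate left-to-right stroke HMMs and score a quantized observation sequence with the forward algorithm; the bridge probability and model sizes are assumptions, not the paper's trained parameters.

```python
# Illustrative sketch (assumed parameters, not the paper's trained models):
# build a gesture HMM by concatenating per-stroke HMMs, then score a sequence
# of quantized 3D direction symbols with the scaled forward algorithm.
import numpy as np

def concat_stroke_hmms(strokes, bridge=0.3):
    """Concatenate left-to-right stroke HMMs (A, B, pi) into one gesture HMM.
    'bridge' is the assumed probability of leaving a stroke's last state
    for the first state of the next stroke."""
    A_list, B_list, _ = zip(*strokes)
    n = sum(a.shape[0] for a in A_list)
    m = B_list[0].shape[1]
    A = np.zeros((n, n)); B = np.zeros((n, m)); pi = np.zeros(n)
    off = 0
    for i, (a, b, p) in enumerate(strokes):
        k = a.shape[0]
        A[off:off+k, off:off+k] = a
        B[off:off+k] = b
        if i == 0:
            pi[off:off+k] = p
        if i + 1 < len(strokes):              # link last state to next stroke
            A[off+k-1] *= (1.0 - bridge)
            A[off+k-1, off+k] = bridge
        off += k
    return A, B, pi

def forward_log_likelihood(obs, A, B, pi):
    """Standard scaled forward algorithm; obs is a list of symbol indices."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll
```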

Design and Implementation for Korean Character and Pen-gesture Recognition System using Stroke Information (획 정보를 이용한 한글문자와 펜 제스처 인식 시스템의 설계 및 구현)

  • Oh, Jun-Taek;Kim, Wook-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.765-774
    • /
    • 2002
  • This paper presents the design and implementation of a Korean character and pen-gesture recognition system for multimedia terminals, PDAs, and similar devices, which demand both fast processing and a high recognition rate. To handle the writing styles of different users, the Korean character recognizer uses a database built on the characteristic information of Korean and the stroke information composing each phoneme; it also achieves high speed through phoneme segmentation based on successive and backtracking processes. The pen-gesture recognizer matches the classification features extracted from an input pen gesture against those of the 15 pen-gesture types defined in the gesture model. The classification features are writer-insensitive stroke properties: the positional relation between two strokes, the crossing number, the direction transition, the direction vector, the number of direction codes, and the ratio of the distance between the starting and ending points of each stroke. The experiments yielded a high recognition rate and fast speed (see the illustrative sketch below).
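The pen-gesture matcher above compares stroke-level classification features against 15 stored gesture types. The sketch below extracts a few of the listed features (direction transitions, direction codes, direction vector, start-to-end distance ratio) from a single stroke and does nearest-template matching; the chain-code quantization and the template store are assumptions, not the paper's definitions.

```python
# Illustrative sketch: simple stroke features from a pen trajectory, matched
# against stored pen-gesture templates (templates are assumed to exist).
import numpy as np

def stroke_features(points):
    """points: (N, 2) array of pen samples for one stroke (N >= 2)."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)
    # 8-direction chain code for each segment
    codes = np.round(np.arctan2(d[:, 1], d[:, 0]) / (np.pi / 4)).astype(int) % 8
    transitions = int(np.count_nonzero(np.diff(codes)))   # direction transitions
    n_codes = int(len(np.unique(codes)))                   # number of direction codes
    vec = pts[-1] - pts[0]                                  # overall direction vector
    path_len = float(np.linalg.norm(d, axis=1).sum())
    dist_ratio = float(np.linalg.norm(vec) / path_len) if path_len else 0.0
    return np.array([transitions, n_codes, vec[0], vec[1], dist_ratio])

def classify(stroke, templates):
    """Nearest-template matching; 'templates' maps gesture name -> feature vector."""
    feats = stroke_features(stroke)
    return min(templates, key=lambda name: np.linalg.norm(feats - templates[name]))
```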

Mobile Game Control using Gesture Recognition (제스처 인식을 활용한 모바일 게임 제어)

  • Lee, Yong-Cheol;Oh, Chi-Min;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.629-638
    • /
    • 2011
  • Mobile games have the advantages of mobility, portability, and a simple interface, which suit gesture-recognition-based games that should not require large amounts of content or a complex interface. This paper proposes gesture-recognition-based mobile game content in which user movement can be applied directly to the game wherever a recognition system is available. Gestures are recognized by extracting the user region from the depth image of a TOF camera and classifying EOH (edge orientation histogram) features of that region with an SVM (support vector machine). We confirmed that the recognized gestures can serve as user input for mobile game content. The proposed technique can be applied to a variety of content; here it is demonstrated with a simple game consisting of moving and jumping (see the illustrative sketch below).
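The recognition step above classifies EOH features of the segmented user region with an SVM. The sketch below computes a basic edge orientation histogram from Sobel gradients and trains an sklearn SVC; the bin count, kernel, and hyper-parameters are assumptions, and the depth-based user segmentation is taken as given.

```python
# Illustrative sketch: edge orientation histogram (EOH) features for a
# pre-segmented gray user-region patch, classified with an SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

def eoh_features(user_region_gray, bins=8):
    """Histogram of gradient orientations weighted by edge magnitude."""
    gx = cv2.Sobel(user_region_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(user_region_gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx * gx + gy * gy)
    ang = np.arctan2(gy, gx) % np.pi                       # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)                       # normalize

def train_gesture_svm(patches, labels):
    """patches: list of gray user-region images; labels: gesture ids (assumed)."""
    feats = np.array([eoh_features(p) for p in patches])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")           # assumed hyper-parameters
    return clf.fit(feats, labels)
```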

Gesture Interface for Controlling Intelligent Humanoid Robot (지능형 로봇 제어를 위한 제스처 인터페이스)

  • Bae Ki Tae;Kim Man Jin;Lee Chil Woo;Oh Jae Yong
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.10
    • /
    • pp.1337-1346
    • /
    • 2005
  • In this paper, we describe an algorithm that automatically recognizes human gestures for human-robot interaction. Earlier gesture recognition systems operated only under heavily restricted conditions. To remove these restrictions, we propose APM, a representation that encodes 3D and 2D gesture information simultaneously and is less sensitive to noise and appearance variation. First, feature vectors are extracted using APM. Next, a gesture space is constructed by analyzing the statistics of the training images with PCA. Input images are then compared to this model and each is symbolized as one region of the model space. Finally, the symbol sequences are recognized with HMMs as one of the model gestures. The experimental results indicate that the proposed algorithm recognizes gestures efficiently and is easy to apply to humanoid robots and intelligent interface systems (see the illustrative sketch below).

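The method above projects APM feature vectors into a PCA gesture space, symbolizes each frame, and recognizes the symbol sequence with HMMs. The sketch below covers only the middle steps, building the reduced space and symbolizing frames with nearest prototypes; the APM feature extraction and the HMM recognizer are not reproduced, and the component and symbol counts are assumptions.

```python
# Illustrative sketch: build a PCA "gesture space" from per-frame feature
# vectors (assumed to come from the APM step) and symbolize frames by their
# nearest prototype; the resulting symbol sequence would feed per-gesture HMMs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def build_gesture_space(train_features, n_components=10, n_symbols=16):
    """train_features: (N, D) matrix of per-frame feature vectors."""
    pca = PCA(n_components=n_components).fit(train_features)
    projected = pca.transform(train_features)
    # prototypes partition the reduced space; each cluster index is one symbol
    proto = KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit(projected)
    return pca, proto

def symbolize(frame_features, pca, proto):
    """Map a sequence of frame feature vectors to discrete symbol indices."""
    return proto.predict(pca.transform(frame_features))
```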

Dynamic Hand Gesture Recognition using Guide Lines (가이드라인을 이용한 동적 손동작 인식)

  • Kim, Kun-Woo;Lee, Won-Joo;Jeon, Chang-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.5
    • /
    • pp.1-9
    • /
    • 2010
  • Dynamic hand gesture recognition generally consists of a preprocessing step, a hand tracking step, and a hand shape detection step. In this paper, we present an improved dynamic hand gesture recognition method that raises performance in the preprocessing and hand shape detection steps. In preprocessing, we remove noise quickly using a dynamic table and detect skin color accurately against complex backgrounds by adjusting the skin color range in a YCbCr color space detector. In hand shape detection, we increase recognition speed by detecting the Start Image and Stop Image, the key elements of dynamic gesture recognition, using a guideline, i.e., the edge of the input hand image and of the reference hand shape being compared. We ran experiments on nine webcam video clips split between complex and simple backgrounds. Compared with a learning-based recognition method, the results show a similar recognition rate but higher recognition speed, lower CPU usage, and lower memory usage (see the illustrative sketch below).
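The preprocessing above relies on YCbCr skin-color detection with an adjustable range. The sketch below is a common form of that step in OpenCV, with morphological clean-up standing in for the paper's dynamic-table noise removal; the Cr/Cb bounds are typical default values, not the tuned ranges from the paper.

```python
# Illustrative sketch: adjustable YCrCb skin-color segmentation with simple
# morphological noise removal. The Cr/Cb bounds are common defaults (assumed).
import cv2
import numpy as np

def skin_mask(frame_bgr, cr_range=(133, 173), cb_range=(77, 127)):
    """Binary mask of likely skin pixels in a BGR frame."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, cr_range[0], cb_range[0]], dtype=np.uint8)
    upper = np.array([255, cr_range[1], cb_range[1]], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # drop small noise blobs
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    return mask
```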

3D Virtual Reality Game with Deep Learning-based Hand Gesture Recognition (딥러닝 기반 손 제스처 인식을 통한 3D 가상현실 게임)

  • Lee, Byeong-Hee;Oh, Dong-Han;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.5
    • /
    • pp.41-48
    • /
    • 2018
  • The most natural way to increase immersion and allow free interaction in a virtual environment is a gesture interface using the user's hand. However, most studies of hand gesture recognition require specialized sensors or equipment, or show low recognition rates. This paper proposes a three-dimensional DenseNet convolutional neural network that recognizes hand gestures with no sensor or equipment other than an RGB camera for gesture input, and introduces a virtual reality game based on it. Experiments on 4 static and 6 dynamic hand gestures showed an average recognition rate of 94.2% at 50 ms, suitable for a real-time user interface in virtual reality games. The results can serve as a hand gesture interface not only for games but also for education, medicine, and shopping (see the illustrative sketch below).
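The recognizer above is a 3D DenseNet CNN over RGB clips. The sketch below is a miniature PyTorch stand-in that shows the dense 3D connectivity pattern on clips shaped (batch, 3, frames, height, width); the layer sizes, growth rate, and class count are assumptions, not the paper's architecture.

```python
# Illustrative sketch: a tiny 3D CNN with DenseNet-style feature concatenation
# for short gesture clips. All sizes are assumed, not the paper's.
import torch
import torch.nn as nn

class DenseLayer3D(nn.Module):
    def __init__(self, in_ch, growth):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm3d(in_ch), nn.ReLU(inplace=True),
            nn.Conv3d(in_ch, growth, kernel_size=3, padding=1, bias=False))

    def forward(self, x):
        return torch.cat([x, self.block(x)], dim=1)       # dense connectivity

class TinyDenseNet3D(nn.Module):
    def __init__(self, n_classes=10, growth=16, n_layers=4):
        super().__init__()
        self.stem = nn.Conv3d(3, 32, kernel_size=3, stride=(1, 2, 2), padding=1)
        ch = 32
        layers = []
        for _ in range(n_layers):
            layers.append(DenseLayer3D(ch, growth))
            ch += growth
        self.dense = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(ch, n_classes))

    def forward(self, clip):                               # clip: (B, 3, T, H, W)
        return self.head(self.dense(self.stem(clip)))

# Example: logits = TinyDenseNet3D()(torch.randn(1, 3, 16, 112, 112))
```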

An Efficient Hand Gesture Recognition Method using Two-Stream 3D Convolutional Neural Network Structure (이중흐름 3차원 합성곱 신경망 구조를 이용한 효율적인 손 제스처 인식 방법)

  • Choi, Hyeon-Jong;Noh, Dae-Cheol;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.14 no.6
    • /
    • pp.66-74
    • /
    • 2018
  • Recently, hand gesture recognition has been actively studied as a way to increase immersion and provide user-friendly interaction in virtual reality environments. However, most studies require specialized sensors or equipment, or show low recognition rates. This paper proposes a deep-learning-based hand gesture recognition method that recognizes static and dynamic hand gestures with no sensor or equipment other than a camera. First, the sequence of hand gesture input images is converted into high-frequency images; then the RGB images and their high-frequency counterparts are each learned by a DenseNet three-dimensional convolutional neural network. Experiments on 6 static and 9 dynamic hand gestures showed an average recognition rate of 92.6%, a 4.6% improvement over the previous DenseNet approach. A 3D defense game was implemented to verify the results, and gesture recognition at an average of 30 ms was shown to be usable as a real-time user interface for virtual reality applications (see the illustrative sketch below).
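The method above converts frames to high-frequency images and trains one 3D CNN stream per modality. The sketch below shows an assumed high-pass conversion (frame minus its Gaussian-blurred copy) and a simple late-fusion wrapper that averages the logits of two independently defined streams (for example, instances of a small 3D CNN like the one sketched for the previous entry); the filter choice and fusion weighting are assumptions.

```python
# Illustrative sketch: high-frequency image conversion and late fusion of an
# RGB stream and a high-frequency stream. Filter and weights are assumed.
import cv2
import torch
import torch.nn as nn

def high_frequency(frame_gray, sigma=3):
    """High-frequency component: original minus its low-pass (blurred) version."""
    low = cv2.GaussianBlur(frame_gray, (0, 0), sigma)
    return cv2.subtract(frame_gray, low)

class TwoStream3D(nn.Module):
    """Late fusion of two independent 3D-CNN streams by averaging their logits."""
    def __init__(self, rgb_stream: nn.Module, hf_stream: nn.Module):
        super().__init__()
        self.rgb_stream, self.hf_stream = rgb_stream, hf_stream

    def forward(self, rgb_clip, hf_clip):
        return 0.5 * (self.rgb_stream(rgb_clip) + self.hf_stream(hf_clip))
```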

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.145-151
    • /
    • 2023
  • A 3D-CNN is one of the deep learning techniques used to learn time-series data. Such three-dimensional learning can generate many parameters, so high-performance machine learning resources are required and the learning rate can be strongly affected. To improve the efficiency of learning dynamic hand gestures in the spatiotemporal domain with a 3D-CNN, it is necessary to find the optimal conditions of the input video data by analyzing learning accuracy as the spatiotemporal properties of that data change, without altering the structure of the 3D-CNN model. First, the time ratio between dynamic hand gesture actions is adjusted by setting the sampling interval of image frames in the gesture video data. Second, 2D cross-correlation analysis between classes measures and normalizes the similarity between image frames of the input video to obtain an average inter-frame value, which is then related to learning accuracy. Based on this analysis, this work proposes two methods for effectively selecting input video data for 3D-CNN deep learning of dynamic hand gestures. Experimental results showed that the frame sampling interval and the inter-class frame similarity can affect the accuracy of the learned model (see the illustrative sketch below).
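The two factors studied above are the frame sampling interval and the normalized inter-class frame similarity. The sketch below implements plausible versions of both measurements, interval sampling of a clip and zero-mean normalized cross-correlation between frames; the exact normalization and averaging procedure in the paper may differ.

```python
# Illustrative sketch: frame-interval sampling and normalized 2D cross-correlation
# between frames, used here as a similarity measure between gesture clips.
import numpy as np

def sample_frames(frames, interval):
    """Keep every 'interval'-th frame of a clip (list/array of 2D gray frames)."""
    return frames[::interval]

def normalized_cross_correlation(a, b):
    """Zero-mean normalized correlation coefficient between two same-size frames."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def mean_similarity(clip_a, clip_b):
    """Average inter-frame similarity between two equal-length sampled clips,
    e.g. representative clips of two gesture classes."""
    return float(np.mean([normalized_cross_correlation(x, y)
                          for x, y in zip(clip_a, clip_b)]))
```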

Analyzing Input Patterns of Smartphone Applications in Touch Interfaces

  • Bahn, Hyokyung;Kim, Jisun
    • International journal of advanced smart convergence
    • /
    • v.10 no.4
    • /
    • pp.30-37
    • /
    • 2021
  • Touch sensor interface has become the most useful input device in a smartphone. Unlike keypad/keyboard interfaces used in electronic dictionaries and feature phones, smartphone's touch interfaces allow for the recognition of various gestures that represent distinct features of each application's input. In this paper, we analyze application-specific input patterns that appear in smartphone's touch interfaces. Specifically, we capture touch input patterns from various Android applications, and analyze them. Based on this analysis, we observe a certain unique characteristics of application's touch input patterns. This can be utilized in various useful areas like user authentications, prevention of executing application by illegal users, or digital forensic based on logged touch patterns.