• Title/Summary/Keyword: Gesture Input


Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network

  • Hoang, Nguyen Ngoc; Lee, Guee-Sang; Kim, Soo-Hyung; Yang, Hyung-Jeong
    • Smart Media Journal / v.9 no.1 / pp.23-29 / 2020
  • This paper presents an approach to dynamic hand gesture recognition using an algorithm based on a 3D Convolutional Neural Network (3D_CNN), later extended to 3D Residual Networks (3D_ResNet), together with neural-network-based key frame selection. Typically, a 3D deep neural network classifies gestures from image frames randomly sampled from a video. In this work, to improve classification performance, we instead feed the classification network key frames that represent the overall video. The key frames are extracted by SegNet rather than by the conventional clustering algorithms for video summarization (VSUMM), which require heavy computation; using a deep neural network, key frame selection can be performed in a real-time system. Experiments are conducted with 3D convolutional models including 3D_CNN, Inflated 3D_CNN (I3D), and 3D_ResNet for gesture classification. Our algorithm achieved up to 97.8% classification accuracy on the Cambridge gesture dataset. The experimental results show that the proposed approach is efficient and outperforms existing methods.
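
As a rough sketch of the key-frame idea summarized above (illustrative only: the paper scores frames with SegNet, whereas random scores stand in for that network here), selecting the k highest-scoring frames while preserving temporal order might look like:

```python
import numpy as np

def select_key_frames(frames, scores, k=8):
    """Pick the k highest-scoring frames, keeping temporal order.

    frames: (T, H, W, C) array of video frames
    scores: (T,) per-frame importance scores (stand-in for the
            output of a summarization/segmentation network)
    """
    top = np.argsort(scores)[-k:]   # indices of the k best frames
    top = np.sort(top)              # restore temporal order
    return frames[top]              # (k, H, W, C) clip for a 3D CNN

video = np.random.rand(30, 64, 64, 3)   # toy 30-frame video
scores = np.random.rand(30)
clip = select_key_frames(video, scores, k=8)
print(clip.shape)                       # (8, 64, 64, 3)
```

The resulting fixed-length clip can then be passed to any 3D convolutional classifier in place of randomly sampled frames.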

A Study on Gesture Recognition Using Principal Factor Analysis (주 인자 분석을 이용한 제스처 인식에 관한 연구)

  • Lee, Yong-Jae; Lee, Chil-Woo
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.981-996 / 2007
  • In this paper, we describe a method that recognizes gestures by extracting motion feature information from sequential gesture images with principal factor analysis. First, a two-dimensional silhouette region containing the human gesture is segmented, and geometric features are extracted from it. Principal factor analysis then selects the global features that most effectively express the gestures as key features. Motion history information, representing the temporal variation of the gestures, is obtained from the extracted features and used to construct a gesture subspace. Finally, the model feature values projected into this gesture space are transformed into discrete state symbols by a grouping algorithm, used as input symbols for an HMM, and an input gesture is recognized as the model gesture with the highest probability. The proposed method achieves a higher recognition rate than appearance-based methods that use only the shape of the human body, or methods that extract features intuitively from complicated gestures, because it builds gesture models from the feature factors with the highest contribution rates.
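
The projection into a low-dimensional gesture subspace described above can be sketched with a plain SVD-based principal component projection (a hedged stand-in: the paper's principal factor analysis differs in detail from PCA):

```python
import numpy as np

def pca_project(X, n_components=3):
    """Project feature vectors onto their leading principal components.

    X: (n_samples, n_features) matrix of per-frame gesture features.
    Returns low-dimensional coordinates in the gesture subspace.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # (n_samples, n_components)

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 20))               # toy geometric features
coords = pca_project(feats, n_components=3)
print(coords.shape)                              # (100, 3)
```

The low-dimensional coordinates would then be quantized into symbols (e.g. by clustering) before being fed to an HMM, as the abstract describes.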


Gesture Recognition using Training-effect on image sequences (연속 영상에서 학습 효과를 이용한 제스처 인식)

  • 이현주; 이칠우
    • Proceedings of the IEEK Conference / 2000.06d / pp.222-225 / 2000
  • Humans frequently communicate non-linguistic information through gesture, so efficient and fast gesture recognition algorithms are needed for more natural human-computer interaction. However, recognizing gestures automatically is difficult because the human body is a three-dimensional object with a very complex structure. In this paper, we suggest a method that detects key frames and frame changes and classifies an image sequence into gesture groups. Gestures can be classified according to the moving body part. First, we detect frames in which the motion areas change abruptly and save them as key frames, then use these frames to classify sequences. We symbolize each image of a classified sequence using Principal Component Analysis (PCA) and a clustering algorithm, since fewer components suffice to represent gestures. The symbols serve as input symbols for a Hidden Markov Model (HMM), and a gesture is recognized by probability calculation.
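
The abrupt-motion key-frame detection step might be sketched as follows (a minimal frame-differencing heuristic with an assumed threshold, not the authors' exact method):

```python
import numpy as np

def detect_key_frames(frames, thresh=0.25):
    """Flag frames where the motion area changes abruptly.

    frames: (T, H, W) grayscale sequence. A frame is a key frame
    when the mean absolute difference to the previous frame
    exceeds thresh.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    return np.where(diffs > thresh)[0] + 1   # indices of key frames

seq = np.zeros((10, 8, 8))
seq[5:] = 1.0                                # abrupt change at frame 5
print(detect_key_frames(seq))                # [5]
```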


Design and Implementation of a Smartphone-based User-Convenience Home Network Control System using Gesture (제스처를 이용한 스마트폰 기반 사용자 편의 홈 네트워크 제어 시스템의 설계 및 구현)

  • Jeon, Byoungchan; Cha, Siho
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.2 / pp.113-120 / 2015
  • As the global penetration of smartphones equipped with a variety of features grows, efficient use of their many functions has become increasingly important. In line with this trend, much research has addressed remote control methods that use smartphones for consumer products in home networks. Current smartphone input methods are typically button-based inputs through touch, which are inconvenient for people unfamiliar with touch interfaces, so research on alternative input schemes is required. In this paper, we propose a gesture-based input method to replace the touch input of existing smartphone applications, and a way to apply it to home networks. The proposed method uses the three-axis acceleration sensor built into smartphones and defines six gesture patterns that may be applied to home network systems, whose recognition rates we measure.
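
A toy illustration of mapping a three-axis acceleration trace to one of six signed-axis patterns (the pattern names and the dominant-axis rule are assumptions for illustration; the paper defines its own six gestures and recognition procedure):

```python
import numpy as np

# Hypothetical labels for six gesture patterns, one per signed axis.
PATTERNS = {(0, 1): "right", (0, -1): "left",
            (1, 1): "up",    (1, -1): "down",
            (2, 1): "push",  (2, -1): "pull"}

def classify_gesture(samples):
    """samples: (N, 3) array of 3-axis acceleration readings."""
    peak = samples[np.abs(samples).sum(axis=1).argmax()]  # strongest sample
    axis = int(np.abs(peak).argmax())                     # dominant axis
    sign = 1 if peak[axis] > 0 else -1
    return PATTERNS[(axis, sign)]

swipe = np.array([[0.1, 0.0, 0.0], [2.5, 0.3, 0.1], [0.2, 0.0, 0.0]])
print(classify_gesture(swipe))   # right
```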

Deep Learning Based 3D Gesture Recognition Using Spatio-Temporal Normalization (시 공간 정규화를 통한 딥 러닝 기반의 3D 제스처 인식)

  • Chae, Ji Hun; Gang, Su Myung; Kim, Hae Sung; Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.21 no.5 / pp.626-637 / 2018
  • Humans exchange information not only through words but also through body and hand gestures, which can be used to build effective interfaces for mobile, virtual reality, and augmented reality applications. Past 2D gesture recognition research suffered information loss from projecting 3D information into 2D. Although recognition in 3D offers a wider recognition range than in 2D space, it also increases the complexity of gesture recognition. In this paper, we propose a real-time deep learning model and application for gesture recognition in 3D space. First, to recognize gestures in 3D space, data are constructed and collected using the Unity game engine. Second, the input vectors are normalized for training the deep-learning-based 3D gesture recognition model. Third, the SELU (Scaled Exponential Linear Unit) function is applied as the neural network's activation function for faster learning and better recognition performance. The proposed system is expected to be applicable to various fields such as rehabilitation care, game applications, and virtual reality.
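
The SELU activation mentioned above is a standard function and can be written directly (constants from the original SELU formulation by Klambauer et al., 2017):

```python
import numpy as np

# Fixed SELU constants chosen to make activations self-normalizing.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """Scaled Exponential Linear Unit:
    SCALE * x            for x > 0
    SCALE * ALPHA * (exp(x) - 1)  otherwise
    """
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

x = np.array([-1.0, 0.0, 2.0])
print(selu(x))
```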

A Measurement System for 3D Hand-Drawn Gesture with a PHANToM™ Device

  • Ko, Seong-Young; Bang, Won-Chul; Kim, Sang-Youn
    • Journal of Information Processing Systems / v.6 no.3 / pp.347-358 / 2010
  • This paper presents a measurement system for 3D hand-drawn gesture motion. Many pen-type input devices with Inertial Measurement Units (IMUs) have been developed to estimate 3D hand-drawn gestures from the measured acceleration and/or angular velocity of the device. A crucial step in developing these devices is measuring and analyzing their motion or trajectory: to verify the trajectory estimated by an IMU-based input device, it must be compared with the real trajectory. To measure the real trajectory of the pen-type device, a PHANToM™ haptic device is used because it can measure the 3D motion of an object in real time. Although the PHANToM™ measures the position of the hand gesture well, poor initialization can produce a large amount of error. Therefore, this paper proposes a calibration method that minimizes measurement errors.
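
A minimal sketch of comparing an estimated trajectory against ground-truth measurements: removing the best constant offset before computing the RMSE is a crude stand-in for the calibration step described above (the paper's actual calibration is more involved):

```python
import numpy as np

def aligned_rmse(est, ref):
    """RMSE between an estimated and a reference 3D trajectory after
    removing the constant offset. A poor initialization shows up as a
    large residual that offset removal alone cannot explain away.
    """
    offset = (ref - est).mean(axis=0)   # best constant correction
    resid = est + offset - ref
    return float(np.sqrt((resid ** 2).mean()))

t = np.linspace(0.0, 1.0, 50)
ref = np.stack([t, t ** 2, np.zeros_like(t)], axis=1)  # true trajectory
est = ref + np.array([0.1, -0.05, 0.02])               # biased estimate
print(round(aligned_rmse(est, ref), 6))                # 0.0 (pure bias)
```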

Implementation of Pen-Gesture Recognition System for Multimodal User Interface (멀티모달 사용자 인터페이스를 위한 펜 제스처인식기의 구현)

  • 오준택; 이우범; 김욱현
    • Proceedings of the IEEK Conference / 2000.11c / pp.121-124 / 2000
  • In this paper, we propose a pen gesture recognition system for the user interface of multimedia terminals, which require fast processing and a high recognition rate. It is a real-time, interactive system connecting a graphic module and a text module. Text editing is performed either by pen gestures in the graphic module or by direct editing in the text module, and supports 14 editing functions in all. Pen gestures are recognized by matching classification features extracted from the input strokes against a pen gesture model. The model is built from classification features (crossing number, direction change, direction code number, positional relation, and distance ratio) for 15 defined gesture types. The proposed system achieved a 98% recognition rate and an average processing time of 30 ms in recognition experiments.
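
The direction-code feature mentioned above resembles a classic 8-direction chain code, which can be sketched as follows (an assumption for illustration; the paper's exact feature definitions are not given here):

```python
import numpy as np

def direction_codes(points, n_dirs=8):
    """8-direction chain code for a pen stroke.

    points: (N, 2) array of (x, y) pen positions. Each segment is
    quantized to the nearest of n_dirs equally spaced directions
    (0 = right, increasing counter-clockwise).
    """
    d = np.diff(points.astype(float), axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0])            # angle of each segment
    codes = np.round(ang / (2 * np.pi / n_dirs)).astype(int) % n_dirs
    return codes

stroke = np.array([[0, 0], [1, 0], [2, 1], [2, 2]])  # right, up-right, up
print(direction_codes(stroke))                        # [0 1 2]
```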


A Study of Pattern-based Gesture Interaction in Tabletop Environments (테이블탑 환경에서 패턴 기반의 제스처 인터렉션 방법 연구)

  • Kim, Gun-Hee; Cho, Hyun-Chul; Pei, Wen-Hua; Ha, Sung-Do; Park, Ji-Hyung
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.696-700 / 2009
  • In this paper, we present a framework that enables users to interact naturally with hand gestures on a digital table. In typical tabletop applications, each gesture is mapped to one function or command, so users must know these mappings and produce predefined gestures as input. In contrast, our system lets users make input gestures without that cognitive load. Instead of burdening users, the system holds knowledge about gesture interaction and proactively infers users' gestures and intentions: when a user makes a gesture on the digital surface, the system analyzes it and designs its response according to the user's intention.


A Study on Gesture Recognition using Edge Orientation Histogram and HMM (에지 방향성 히스토그램과 HMM을 이용한 제스처 인식에 관한 연구)

  • Lee, Kee-Jun
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2647-2654 / 2011
  • In this paper, we describe an algorithm that recognizes gestures by encoding the feature information obtained through edge orientation histograms and principal component analysis as low-dimensional gesture symbols. Since the proposed method requires far less computation than existing geometric-feature-based or appearance-based methods, and maintains a high recognition rate using minimal information, it is well suited to building real-time systems. In addition, to reduce recognition errors, the model feature values projected into the gesture space are organized into discrete state symbols by a clustering algorithm and used as input symbols for hidden Markov models. Any input gesture is then recognized as the gesture model with the highest probability.
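
An edge orientation histogram of the kind used above can be sketched with gradient orientations weighted by edge magnitude (a generic formulation, not necessarily the paper's exact variant):

```python
import numpy as np

def edge_orientation_histogram(img, n_bins=8):
    """Histogram of gradient orientations, weighted by edge magnitude."""
    gy, gx = np.gradient(img.astype(float))        # gradients along y, x
    mag = np.hypot(gx, gy)                         # edge strength
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist             # normalize to sum to 1

img = np.tile(np.arange(8.0), (8, 1))              # ramp: vertical edges
h = edge_orientation_histogram(img)
print(h.argmax())                                  # dominant orientation bin
```

Such normalized histograms give a compact, pose-insensitive descriptor that can be projected with PCA and quantized into HMM symbols, as the abstract describes.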