• Title/Summary/Keyword: Gesture Classification


Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network

  • Hoang, Nguyen Ngoc;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • Smart Media Journal / v.9 no.1 / pp.23-29 / 2020
  • This paper presents an approach for dynamic hand gesture recognition using an algorithm based on a 3D Convolutional Neural Network (3D_CNN), later extended to 3D Residual Networks (3D_ResNet), together with neural-network-based key frame selection. Typically, a 3D deep neural network classifies gestures from image frames randomly sampled from video data. In this work, to improve classification performance, we employ key frames, which represent the overall video, as the input of the classification network. The key frames are extracted by SegNet instead of the conventional clustering algorithms for video summarization (VSUMM), which require heavy computation. By using a deep neural network, key frame selection can be performed in a real-time system. Experiments are conducted using 3D convolutional kernels such as 3D_CNN, Inflated 3D_CNN (I3D), and 3D_ResNet for gesture classification. Our algorithm achieved up to 97.8% classification accuracy on the Cambridge gesture dataset. The experimental results show that the proposed approach is efficient and outperforms existing methods.
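
The paper selects key frames with SegNet; as a rough illustration of the underlying idea that a few representative frames can stand in for a whole video, the sketch below uses simple farthest-point sampling over per-frame feature vectors. The feature vectors, distance metric, and sampling strategy are all illustrative assumptions, not the paper's method.

```python
# Illustrative farthest-point key-frame sampling over per-frame feature
# vectors; the paper itself uses SegNet for key frame selection.
import math

def frame_distance(a, b):
    """Euclidean distance between two per-frame feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_key_frames(features, num_keys):
    """Pick frames that are maximally spread out in feature space."""
    keys = [0]  # always keep the first frame
    while len(keys) < num_keys:
        # Choose the frame farthest from every frame kept so far.
        best = max(
            (i for i in range(len(features)) if i not in keys),
            key=lambda i: min(frame_distance(features[i], features[k]) for k in keys),
        )
        keys.append(best)
    return sorted(keys)

# Toy per-frame features: frames 0-1 and 2-3 are near-duplicates.
frames = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [9.0, 0.0]]
print(select_key_frames(frames, 3))  # → [0, 2, 4]
```

The selected indices would then be fed to the 3D classification network in place of randomly sampled frames.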

A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo;Kim, Sang-Ho
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.533-540 / 2012
  • Objective: This paper targets a framework for hand gesture based interface design. Background: While modeling of contact-based interfaces has focused on ergonomic interface design and real-time technologies, implementing a contactless interface requires error-free classification as an essential precondition. These trends have led many studies to concentrate on the design of feature vectors and learning models and on their tests. Even though there have been remarkable advances in this field, ignoring ergonomics and users' cognition results in several problems, including uneasy user behaviors. Method: In order to incorporate compatibility, considering users' comfortable behaviors and the device's classification abilities simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. Out of the extracted gestures, compatibility-oriented gestures are acquired through ergonomic and cognitive experiments. The obtained hand gestures are then converted into a series of hand behaviors - Handycon - which is mapped to several functions in a mobile device. Results: The Handycon model guarantees easy user behavior and supports fast understanding as well as a high classification rate. Conclusion and Application: The suggested framework contributes to developing a hand gesture-based contactless interface model that considers compatibility between human and device. The suggested procedures can be applied effectively to other contactless interface designs.

Gesture Classification Based on k-Nearest Neighbors Algorithm for Game Interface (게임 인터페이스를 위한 최근접 이웃알고리즘 기반의 제스처 분류)

  • Chae, Ji Hun;Lim, Jong Heon;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.19 no.5 / pp.874-880 / 2016
  • Gesture classification has been applied to many fields, but it is not efficient in game-interface environments on low-specification devices such as mobile phones and tablets. In this paper, we propose an effective gesture classification method for a realistic game interface using the k-nearest neighbors algorithm. The real-time rendering process in a game interface is time-consuming, so to reduce processing time while preserving accuracy, a reconstruction method that minimizes the error between training and test data sets is also proposed. The experimental results show that the proposed method outperforms conventional methods in both accuracy and time.
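
As a minimal sketch of the k-nearest neighbors classification step described above (the feature vectors and gesture labels below are hypothetical, not the paper's data):

```python
# Minimal k-NN gesture classifier; the feature vectors and labels are
# invented stand-ins for real gesture features.
import math
from collections import Counter

def classify(sample, train, k=3):
    """Return the majority label among the k nearest training vectors."""
    dists = sorted((math.dist(sample, vec), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [
    ([0.0, 0.1], "swipe"), ([0.1, 0.0], "swipe"), ([0.2, 0.1], "swipe"),
    ([1.0, 1.1], "tap"), ([0.9, 1.0], "tap"), ([1.1, 0.9], "tap"),
]
print(classify([0.95, 1.05], train))  # → tap
```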

Hand Gesture Recognition using Multivariate Fuzzy Decision Tree and User Adaptation (다변량 퍼지 의사결정트리와 사용자 적응을 이용한 손동작 인식)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.81-90 / 2008
  • As demand for services for the disabled and the elderly increases, assistive technologies have been developed rapidly. Natural human signals such as voice or gesture have been applied to systems for assisting the disabled and the elderly. As an example of such a human-robot interface, the Soft Remote Control System has been developed by HWRS-ERC at KAIST [1]. This system is a vision-based hand gesture recognition system for controlling home appliances such as televisions, lamps, and curtains. One of the most important technologies of the system is the hand gesture recognition algorithm. The problems that most frequently lower the recognition rate of hand gestures are inter-person variation and intra-person variation. Intra-person variation can be handled by introducing the fuzzy concept. In this paper, we propose a multivariate fuzzy decision tree (MFDT) learning and classification algorithm for hand motion recognition. To recognize the hand gestures of a new user, the most suitable recognition model among several well-trained models is selected using a model selection algorithm and incrementally adapted to the user's hand gestures. For the general performance of MFDT as a classifier, we show the classification rate using benchmark data from the UCI repository. For hand gesture recognition performance, we tested using hand gesture data collected from 10 people over 15 days. The experimental results show that the classification and user adaptation performance of the proposed algorithm is better than that of a general fuzzy decision tree.
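
A fuzzy classifier softens crisp thresholds with membership degrees. The toy sketch below shows only that basic building block - triangular fuzzy membership over a single feature - not the paper's multivariate MFDT or its user adaptation; the fuzzy sets and class names are invented for illustration.

```python
# Toy triangular fuzzy membership classification; the fuzzy sets and
# class names are invented, and the paper's MFDT is far richer.
def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_classify(x, classes):
    """Score each class by its membership degree and pick the maximum."""
    scores = {name: triangular(x, *params) for name, params in classes.items()}
    return max(scores, key=scores.get)

# Hypothetical fuzzy sets over a single normalized hand-shape feature.
classes = {"open": (0.0, 0.2, 0.5), "point": (0.3, 0.6, 0.9), "fist": (0.7, 1.0, 1.3)}
print(fuzzy_classify(0.55, classes))  # → point
```

Because memberships overlap, a sample near a boundary gets partial scores for both neighboring classes, which is what lets the fuzzy approach absorb intra-person variation.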

Implementation of Pen-Gesture Recognition System for Multimodal User Interface (멀티모달 사용자 인터페이스를 위한 펜 제스처인식기의 구현)

  • 오준택;이우범;김욱현
    • Proceedings of the IEEK Conference / 2000.11c / pp.121-124 / 2000
  • In this paper, we propose a pen gesture recognition system for the user interface of a multimedia terminal, which requires fast processing time and a high recognition rate. It is a real-time, interactive system between the graphic and text modules. Text editing in the recognition system is performed by pen gestures in the graphic module or by direct editing in the text module, and supports all 14 editing functions. Pen gesture recognition is performed by matching classification features extracted from input strokes against a pen gesture model. The model is constructed from classification features, i.e., cross number, direction change, direction code number, position relation, and distance ratio information for the 15 defined gesture types. The proposed recognition system achieved a 98% correct recognition rate and a 30 ms average processing time in a recognition experiment.
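
Direction codes of the kind listed among the classification features can be sketched by quantizing each stroke segment's angle; the 8-direction quantization and the sample stroke below are assumptions for illustration, not the paper's exact feature definition.

```python
# Quantizing stroke segments into 8 direction codes; the code count and
# the sample stroke are illustrative assumptions.
import math

def direction_codes(points, n_dirs=8):
    """Map each stroke segment's angle to one of n_dirs direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(round(angle / (2 * math.pi / n_dirs)) % n_dirs)
    return codes

# An L-shaped stroke: two segments to the right, then two straight up.
stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(direction_codes(stroke))  # → [0, 0, 2, 2]
```

A direction-change feature then falls out as the number of positions where consecutive codes differ.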

Gesture Recognition Method using Tree Classification and Multiclass SVM (다중 클래스 SVM과 트리 분류를 이용한 제스처 인식 방법)

  • Oh, Juhee;Kim, Taehyub;Hong, Hyunki
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.6 / pp.238-245 / 2013
  • Gesture recognition has been one of the most widely studied research areas for natural user interfaces. This paper presents a novel gesture recognition method using tree classification and a multiclass SVM (Support Vector Machine). In the learning step, the 3D trajectories of human gestures obtained by a Kinect sensor are classified into tree nodes according to their distributions. The gestures are resampled, and the histogram of the chain code is obtained from the normalized data. Then a multiclass SVM is applied to the classified gestures in each node. An input gesture classified using the constructed tree is recognized with the multiclass SVM.
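
A toy stand-in for routing a gesture to a coarse tree node before per-node SVM classification: the paper assigns nodes by trajectory distribution, whereas this sketch uses only the net displacement direction, an invented simplification.

```python
# Invented simplification: route a gesture to a coarse tree node by its
# net displacement direction; a per-node multiclass SVM would refine it.
def route_to_node(trajectory):
    """Assign a 2D gesture trajectory to one of four coarse tree nodes."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "horizontal-right" if dx >= 0 else "horizontal-left"
    return "vertical-up" if dy >= 0 else "vertical-down"

print(route_to_node([(0, 0), (1, 2), (1, 5)]))  # → vertical-up
```

The point of the two-stage design is that each SVM only has to separate the smaller set of gestures that share a node, which keeps the multiclass problem tractable.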

TextNAS Application to Multivariate Time Series Data and Hand Gesture Recognition (textNAS의 다변수 시계열 데이터로의 적용 및 손동작 인식)

  • Kim, Gi-duk;Kim, Mi-sook;Lee, Hack-man
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.518-520 / 2021
  • In this paper, we propose a hand gesture recognition method that modifies TextNAS, originally used for text classification, so that it can be applied to multivariate time series data. The method can be applied to various fields, such as behavior recognition, emotion recognition, and hand gesture recognition, through multivariate time series classification. In addition, it automatically finds a deep learning model suitable for classification through training, reducing the burden on users while achieving high classification accuracy. By applying the proposed method to the DHG-14/28 and SHREC'17 hand gesture recognition datasets, higher classification accuracy was obtained than with existing models: 98.72% and 98.16% for DHG-14/28, and 97.82% and 98.39% for SHREC'17, on the 14-class and 28-class settings respectively.

A Decision Tree based Real-time Hand Gesture Recognition Method using Kinect

  • Chang, Guochao;Park, Jaewan;Oh, Chimin;Lee, Chilwoo
    • Journal of Korea Multimedia Society / v.16 no.12 / pp.1393-1402 / 2013
  • Hand gesture is one of the most popular communication methods in everyday life. In human-computer interaction applications, hand gesture recognition provides a natural way of communication between humans and computers. There are mainly two methods of hand gesture recognition: glove-based methods and vision-based methods. In this paper, we propose a vision-based hand gesture recognition method using Kinect. Using depth information makes the hand detection process efficient and robust. Finger labeling lets the system classify poses according to finger names and the relationships between the fingers, which also makes the classification more effective and accurate. Two kinds of gesture sets can be recognized by our system. According to the experiments, the average accuracy on the American Sign Language (ASL) number gesture set is 94.33%, and that on the general gesture set is 95.01%. Since our system runs in real time and has a high recognition rate, it can be embedded in various applications.
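
A minimal sketch of the kind of depth-based hand segmentation such systems build on, assuming the hand is the object inside a fixed depth band; the band limits and toy depth map are hypothetical, and the paper's actual detection and finger labeling are more involved.

```python
# Minimal depth-band hand segmentation; the band limits (in mm) and the
# toy depth map are hypothetical.
def segment_hand(depth_map, near=400, far=600):
    """Keep pixels whose depth falls inside the assumed hand range."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth_map]

depth = [
    [900, 900, 450],
    [900, 500, 480],
    [900, 900, 900],
]
print(segment_hand(depth))  # → [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

Because the background sits outside the depth band, this mask is robust to lighting and skin-color variation, which is the main advantage depth gives over plain RGB segmentation.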

Optical Flow Orientation Histogram for Hand Gesture Recognition (손 동작 인식을 위한 Optical Flow Orientation Histogram)

  • Aurrahman, Dhi;Setiawan, Nurul Arif;Oh, Chi-Min;Lee, Chil-Woo
    • Korean HCI Society Conference Proceedings / 2008.02a / pp.517-521 / 2008
  • The hand motion classification problem is considered a basis for sign or gesture recognition. We promote optical flow as the main feature extracted from image sequences, using its magnitude to segment the motion area and its orientation to characterize the motion directions simultaneously. We use the flow orientation histogram as the motion descriptor. A motion is encoded by concatenating the flow orientation histograms from several frames. We utilize simple histogram matching to classify the motion sequences. Experiments show the feasibility of our method for hand motion localization and classification.
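
The descriptor can be sketched as a histogram over flow directions, with a magnitude threshold playing the role of the motion-area segmentation, and histogram intersection as one simple matching score; the bin count, threshold, and sample flow vectors are assumptions, not the paper's settings.

```python
# Orientation histogram over flow vectors with a magnitude threshold
# standing in for motion segmentation; bin count, threshold, and the
# sample flow field are illustrative assumptions.
import math

def orientation_histogram(flow, bins=8, mag_thresh=0.5):
    """Count flow vectors per direction bin, skipping near-static ones."""
    hist = [0] * bins
    for dx, dy in flow:
        if math.hypot(dx, dy) < mag_thresh:
            continue  # magnitude too small: treat as background
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi / bins)) % bins] += 1
    return hist

def histogram_intersection(h1, h2):
    """Simple matching score: larger means more similar histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

flow = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.0)]
print(orientation_histogram(flow))  # → [2, 0, 1, 0, 0, 0, 0, 0]
```

Per the paper's encoding, per-frame histograms would then be concatenated over several frames before matching.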

Dynamic Gesture Recognition using SVM and its Application to an Interactive Storybook (SVM을 이용한 동적 동작인식: 체감형 동화에 적용)

  • Lee, Kyoung-Mi
    • The Journal of the Korea Contents Association / v.13 no.4 / pp.64-72 / 2013
  • This paper proposes a dynamic gesture recognition algorithm using an SVM (Support Vector Machine), which is suitable for multi-dimensional classification. First, the proposed algorithm locates the beginning and end of each gesture in the video frames from the Kinect camera, spots the meaningful gesture frames, and normalizes the number of frames. Then, for gesture recognition, the algorithm extracts gesture features from the normalized frames using body-part positions and the relations among the parts, based on the human model. A C-SVM for each dynamic gesture is trained on training data consisting of positive and negative examples. The final gesture is chosen as the one with the largest C-SVM value. The proposed gesture recognition algorithm can be applied to the interactive storybook as a gesture interface.
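
Two of the steps described above - normalizing the number of frames and picking the gesture with the largest per-gesture C-SVM value - can be sketched as follows; the resampling scheme and the score values are illustrative, and the SVM training itself is omitted.

```python
# Sketch of frame-count normalization and the final argmax over
# per-gesture C-SVM scores; scheme and scores are illustrative.
def normalize_frames(frames, target=8):
    """Resample a variable-length gesture to a fixed number of frames."""
    n = len(frames)
    return [frames[int(i * n / target)] for i in range(target)]

def decide(scores):
    """Pick the gesture whose one-vs-rest SVM score is largest."""
    return max(scores, key=scores.get)

print(normalize_frames([0, 1, 2, 3]))  # → [0, 0, 1, 1, 2, 2, 3, 3]
print(decide({"wave": 0.42, "clap": 1.37, "jump": -0.20}))  # → clap
```

Fixing the frame count up front is what lets every gesture, fast or slow, map to a feature vector of the same dimension for the SVMs.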