• Title/Summary/Keyword: Gesture Pattern Recognition

Mobile Gesture Recognition using Dynamic Time Warping with Localized Template (지역화된 템플릿기반 동적 시간정합을 이용한 모바일 제스처인식)

  • Choe, Bong-Whan; Min, Jun-Ki; Jo, Seong-Bae
    • Journal of KIISE: Computing Practices and Letters / v.16 no.4 / pp.482-486 / 2010
  • Recently, gesture recognition methods based on dynamic time warping (DTW) have been actively investigated as more mobile devices are equipped with accelerometers. DTW needs no additional training step since it uses the given samples themselves as matching templates. However, it is difficult to apply DTW in mobile environments because of the computational cost of its matching step, in which the input pattern has to be compared with every template. To address this problem, this paper proposes a DTW-based gesture recognition method that uses a localized subset of templates. The k-means clustering algorithm divides each class into subclasses, and the sample closest to the center of each subclass is employed as its localized template. This increases recognition speed by reducing the number of matches while minimizing errors by preserving the diversity of the training patterns. Experimental results showed that the proposed method was about five times faster than DTW with all training samples, and more stable than using randomly selected templates.
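
A rough sketch of the localized-template scheme described above: k-means splits each class, the training sample nearest each cluster center becomes that class's template, and classification takes the minimum DTW cost. This is an illustration under assumed data layout (equal-length 3-axis samples) and assumed names, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of localized-template DTW classification.
import numpy as np
from sklearn.cluster import KMeans

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping on multivariate sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def localized_templates(samples, k=3):
    """Cluster one class's samples and keep the sample closest to each cluster
    center as a localized template. Assumes equal-length samples so they can be
    flattened into fixed-length feature vectors for k-means."""
    feats = np.array([s.flatten() for s in samples])
    km = KMeans(n_clusters=k, n_init=10).fit(feats)
    templates = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        center = km.cluster_centers_[c]
        best = idx[np.argmin(np.linalg.norm(feats[idx] - center, axis=1))]
        templates.append(samples[best])
    return templates

def classify(x, templates_by_class):
    """Label of the class whose nearest localized template has the smallest DTW cost."""
    return min(templates_by_class,
               key=lambda lab: min(dtw_distance(x, t) for t in templates_by_class[lab]))
```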

Three-Dimensional Direction Code Patterns for Hand Gesture Recognition (손동작인식을 위한 3차원 방향 코드 패턴)

  • Park, Jung-Hoo; Kim, Young-Ju
    • Proceedings of the Korean Society of Computer Information Conference / 2013.07a / pp.21-22 / 2013
  • This paper proposes a method for detecting feature patterns, implemented as three-dimensional direction codes, which serve as the feature values needed for gesture recognition. Straight lines are formed between the detected data coordinates, and feature inflection points are extracted using the sum of the angles between those lines. Lines are then generated between the extracted inflection points, and a 24-direction code, obtained by merging an 8-direction code with depth values, is mapped onto them. The mapped direction codes are assembled into a single pattern. To remove directional noise that is unnecessary for recognition, rule-based filtering is applied to the generated pattern to obtain a filtered pattern. Compared with the existing '8-direction pattern using banner codes', the proposed method was confirmed to extract more effective patterns.
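
One plausible reading of the 24-direction code described above is 8 planar directions combined with 3 depth states (8 x 3 = 24). The sketch below follows that reading; the depth threshold, code layout, sign convention, and function name are guesses, not taken from the paper.

```python
# Illustrative sketch only: 24-direction code as 8 planar directions x 3 depth states.
import math

def direction_code_24(p0, p1, depth_threshold=10.0):
    """p0, p1: (x, y, z) coordinates of consecutive inflection points."""
    dx, dy, dz = (p1[i] - p0[i] for i in range(3))
    # planar 8-direction code: 0..7, each covering a 45-degree sector
    angle = math.atan2(dy, dx) % (2 * math.pi)
    planar = int(((angle + math.pi / 8) % (2 * math.pi)) // (math.pi / 4))
    # depth state: 0 = flat, 1 = moving closer, 2 = moving away (assumed sign convention)
    if abs(dz) < depth_threshold:
        depth = 0
    elif dz < 0:
        depth = 1
    else:
        depth = 2
    return depth * 8 + planar   # code in 0..23

print(direction_code_24((0, 0, 0), (5, 5, 0)))   # planar "upper-right", flat depth -> 1
```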

Research of Gesture Recognition Technology Based on GMM and SVM Hybrid Model Using EPIC Sensor (EPIC 센서를 이용한 GMM, SVM 기반 동작인식기법에 관한 연구)

  • Chen, Cui; Kim, Young-Chul
    • Proceedings of the Korea Contents Association Conference / 2016.05a / pp.11-12 / 2016
  • SVM (Support Vector Machine) is a powerful machine-learning method that outperforms traditional methods in multi-dimensional nonlinear pattern classification. To address the low efficiency of SVM model training on large sample sets, this paper proposes combining it with the statistical parameters of a GMM-UBM (Gaussian Mixture Model with a Universal Background Model). This is very effective in solving the large-sample problem of SVM training. Experiments were carried out on four dynamic hand gestures using EPIC sensors, and the results show that the improved dynamic hand gesture recognition system achieves a high recognition rate of up to 96.75%.
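
A hedged sketch of the general GMM-UBM plus SVM recipe the abstract refers to: fit a universal background GMM on pooled frames, turn each variable-length gesture into a fixed-length vector of posterior-weighted mean offsets, and train an SVM on those vectors. Component counts, kernel, and function names below are assumptions, not the authors' configuration.

```python
# Rough sketch, not the authors' code: GMM-UBM statistics feeding an SVM classifier.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fit_ubm(all_frames, n_components=16):
    """all_frames: (N, d) array of frames pooled over every training gesture."""
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(all_frames)

def supervector(ubm, frames):
    """Fixed-length representation of one variable-length gesture:
    posterior-weighted mean offsets from the UBM means (a simple MAP-style statistic)."""
    post = ubm.predict_proba(frames)                 # (T, K) responsibilities
    counts = post.sum(axis=0) + 1e-8                 # (K,)
    means = post.T @ frames / counts[:, None]        # (K, d) adapted means
    return (means - ubm.means_).ravel()

def train(gestures, labels, n_components=16):
    ubm = fit_ubm(np.vstack(gestures), n_components)
    X = np.array([supervector(ubm, g) for g in gestures])
    clf = SVC(kernel="rbf").fit(X, labels)
    return ubm, clf

def predict(ubm, clf, gesture):
    return clf.predict(supervector(ubm, gesture).reshape(1, -1))[0]
```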

On-line Korean Sign Language (KSL) Recognition using Fuzzy Min-Max Neural Network and Feature Analysis

  • Bien, Zeungnam; Kim, Jong-Sung
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1995.10b / pp.85-91 / 1995
  • This paper presents a system that recognizes Korean Sign Language (KSL) and translates it into normal Korean speech. A sign language is a method of communication for the deaf that uses gestures, especially of the hands and fingers. Since human hands and fingers differ in physical dimensions, the same form of gesture produced by two signers may not yield the same numerical values when measured with electronic sensors. In this paper, we propose a dynamic gesture recognition method based on feature analysis for efficient classification of hand motions, and on a fuzzy min-max neural network for on-line pattern recognition.
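
For orientation, a minimal sketch of the fuzzy min-max idea (Simpson-style hyperbox membership with winner-take-all classification); the gamma value is assumed, and the hyperbox expansion/contraction steps of full fuzzy min-max training are omitted.

```python
# Hedged sketch of fuzzy min-max classification over pre-built hyperboxes.
import numpy as np

def hyperbox_membership(a, v, w, gamma=4.0):
    """a: input vector in [0,1]^n; v, w: hyperbox min/max points.
    Returns 1.0 inside the box, decaying with distance outside it."""
    left = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - a)))
    right = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, a - w)))
    return float(np.mean((left + right) / 2))

def classify(a, boxes):
    """boxes: list of (v, w, label); winner-take-all over hyperbox memberships."""
    return max(boxes, key=lambda b: hyperbox_membership(a, b[0], b[1]))[2]
```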

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su; Cho, Yong-Suk; Kim, Jae-Hong; Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem undertaken previously was recognizing the acceleration signal patterns of 10 handwritten digits; most prior studies dealt with sets of 8~10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To increase the discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly among performers. To tackle this problem, online incremental learning is applied so that the system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that, as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults of 30~50 years old. Each alphabet letter was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. Major confusion pairs are D (88%) and P (74%), I (81%) and U (75%), N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. Comparing with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures) and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and active reaction to the service with our gesture interface. To verify the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service. The results showed that those who used the gesture-interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g. touch screen, vision and voice.
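
The reference-pattern pruning described at the end of the abstract can be sketched as follows; the contribution counters, thresholds, and data structures are illustrative assumptions, not the authors' algorithm.

```python
# Sketch under assumptions: prune IBL reference patterns by positive/negative contribution.
class ReferenceSet:
    def __init__(self, pos_min=1, neg_max=3):
        self.refs = []            # list of dicts: {pattern, label, pos, neg}
        self.pos_min = pos_min    # prune if positive contribution stays below this
        self.neg_max = neg_max    # prune if negative contribution exceeds this

    def add(self, pattern, label):
        self.refs.append({"pattern": pattern, "label": label, "pos": 0, "neg": 0})

    def record_match(self, ref, true_label):
        """Call after each classification with the reference pattern that won the match."""
        if ref["label"] == true_label:
            ref["pos"] += 1       # supported a correct decision
        else:
            ref["neg"] += 1       # contributed to a false positive

    def prune(self):
        """Run periodically: keep only references with enough positive and
        little enough negative contribution."""
        self.refs = [r for r in self.refs
                     if r["pos"] >= self.pos_min and r["neg"] <= self.neg_max]
```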

EPS Gesture Signal Recognition using Deep Learning Model (심층 학습 모델을 이용한 EPS 동작 신호의 인식)

  • Lee, Yu ra; Kim, Soo Hyung; Kim, Young Chul; Na, In Seop
    • Smart Media Journal / v.5 no.3 / pp.35-41 / 2016
  • In this paper, we propose hand-gesture signal recognition based on EPS (Electronic Potential Sensor) using a deep learning model. Signals extracted from the electric-field-based EPS contain a large amount of noise, which must be removed in pre-processing. After the noise is removed with a frequency-domain filter, the signals are reconstructed through a dimensional transformation to overcome the limitation of having only a one-dimensional voltage feature, so that convolution operations can be applied. The reconstructed signal data are then classified and recognized using a multi-layer deep learning model. Since a statistical model based on probability is sensitive to initial parameters, its results can change after training in the modeling phase; a deep learning model can overcome this problem thanks to its multiple training layers. In the experiments, we used two different deep learning structures, a convolutional neural network and a recurrent neural network, and compared them with a statistical model on four kinds of gestures. The recognition result of the convolutional neural network was better than the other algorithms for EPS gesture signal recognition.
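
A toy sketch of the dimensional transformation plus CNN classification the abstract describes: a 1-D filtered EPS voltage sequence is reshaped into a 2-D map so 2-D convolutions apply, then classified by a small network. The sequence length, network shape, and four-class setup are assumptions, not the paper's architecture.

```python
# Toy sketch (assumed shapes, not the authors' network).
import numpy as np
import tensorflow as tf

SEQ_LEN, SIDE, N_CLASSES = 1024, 32, 4     # 1024 filtered samples reshaped to 32x32

def to_image(signal):
    """Dimensional transformation: 1-D filtered signal -> 2-D single-channel map."""
    return np.asarray(signal, dtype="float32").reshape(SIDE, SIDE, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SIDE, SIDE, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20)   # train_* prepared elsewhere
```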

Design and Implementation of e-Commerce User Authentication Interface using the Mouse Gesture (마우스 제스처를 이용한 전자상거래 사용자 인증 인터페이스)

  • 김은영; 정옥란; 조동섭
    • Journal of Korea Multimedia Society / v.6 no.3 / pp.469-480 / 2003
  • Accurate user authentication technology is becoming one of the most important issues in today's information society. Most authentication technologies identify users based on their distinctive characteristics. This paper builds an e-commerce shopping mall on top of conventional e-commerce systems and proposes and implements a user authentication interface that uses mouse gestures as a new form of authentication based on what users possess. The interface displays the recognition status directly on the screen by comparing the stored pattern values with the unique pattern values entered by the user. When users purchase products through the shopping mall and enter this additional signature information together with payment information, security is further increased. Experimental results show that our mouse gesture interface can provide additional security for the e-commerce server.
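
One way the stored-versus-entered pattern comparison could look is sketched below: encode each mouse trace as 8-direction codes and accept when the edit distance is under a threshold. The encoding, step size, and threshold are assumptions; the paper does not publish its exact matching rule.

```python
# Illustrative sketch only: matching an enrolled mouse-gesture pattern against a new one.
import math

def to_direction_codes(points, step=8):
    """points: list of (x, y) mouse coordinates; returns a string of 8-direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-step], points[step:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(str(int(((angle + math.pi / 8) % (2 * math.pi)) // (math.pi / 4))))
    return "".join(codes)

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def authenticate(stored_points, entered_points, threshold=0.25):
    """Accept when the entered gesture is close enough to the enrolled one."""
    s, e = to_direction_codes(stored_points), to_direction_codes(entered_points)
    return edit_distance(s, e) <= threshold * max(len(s), len(e), 1)
```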

On-line dynamic hand gesture recognition system for virtual reality using elementary component classifiers (기본 요소분류기를 이용한 가상현실용 실시간 동적 손 제스처 인식 시스템의 구현에 관한 연구)

  • 김종성; 이찬수
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.9 / pp.68-76 / 1997
  • This paper presents a system that recognizes dynamic hand gestures for virtual reality (VR). A dynamic hand gesture is a means of communication between a human and a computer that uses gestures, especially of the hands and fingers. Since human hands and fingers differ in physical dimensions, the same form of gesture produced by two people may not yield the same numerical values when measured with electronic sensors. In this paper, we apply a fuzzy min-max neural network and a feature analysis method using fuzzy logic for on-line pattern recognition.

Recognizing Hand Digit Gestures Using Stochastic Models

  • Sin, Bong-Kee
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.807-815 / 2008
  • A simple, efficient method for spotting and recognizing hand gestures in video is presented, using a network of hidden Markov models and a dynamic programming search algorithm. The description starts with designing a set of isolated trajectory models which are stochastic and robust enough to characterize highly variable patterns like human motion, handwriting, and speech. Those models are interconnected to form a single large network, termed a spotting network or spotter, that models a continuous stream of gestures as well as non-gestures. Inference over the model is based on dynamic programming. The proposed model is highly efficient and can readily be extended to a variety of recurrent pattern recognition tasks. The test results, obtained without any task-specific engineering, show the potential for practical application. At the end of the paper we add related experimental results obtained using a different model, a dynamic Bayesian network, which is also a type of stochastic model.
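
A bare-bones sketch of the dynamic-programming (Viterbi) inference that such a spotting network relies on, for a single discrete-observation HMM; wiring several gesture models and a filler model into one spotting network is omitted, and the toy parameters are made up.

```python
# Minimal Viterbi decoding sketch, not the paper's spotting network.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """obs: sequence of observation indices; returns the most likely state path."""
    n_states, T = len(start_p), len(obs)
    logp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = logp[t - 1] + np.log(trans_p[:, j])
            back[t, j] = np.argmax(scores)
            logp[t, j] = scores[back[t, j]] + np.log(emit_p[j, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Example: toy 2-state left-to-right model over 3 observation symbols.
start = np.array([0.9, 0.1])
trans = np.array([[0.7, 0.3], [0.05, 0.95]])
emit = np.array([[0.8, 0.1, 0.1], [0.1, 0.2, 0.7]])
print(viterbi([0, 0, 2, 2], start, trans, emit))
```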

8-Straight Line Directions Recognition Algorithm for Hand Gestures Using Coordinate Information (좌표 정보를 이용한 손동작 직선 8 방향 인식 알고리즘)

  • Sodgerel, Byambasuren; Kim, Yong-Ki; Kim, Mi-Hye
    • Journal of Digital Convergence / v.13 no.9 / pp.259-267 / 2015
  • In this paper, we propose a straight-line determination method and an algorithm that determines one of 8 directions for a straight line, using coordinate information and the properties of trigonometric functions. In our experiment, each of 8 hand gestures was carried out 100 times, for a total of 800 trials. The 8-direction determination algorithm showed the highest accuracy, 92%, for the upper-left diagonal direction, while the left, upper-right diagonal, and lower-right diagonal directions showed the lowest accuracy, 82%. Unlike existing recognizers that require a learning process, this method makes hand gesture recognition possible using only coordinate information obtained through image processing.
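
A small sketch of the kind of coordinate-based 8-direction test described above, using atan2; the exact angle boundaries used in the paper are not given, so the 45-degree sectors are an assumption.

```python
# Illustrative sketch: classify a line segment into one of 8 directions from coordinates.
import math

DIRECTIONS = ["right", "upper-right", "up", "upper-left",
              "left", "lower-left", "down", "lower-right"]

def eight_direction(p_start, p_end):
    """p_start, p_end: (x, y) in mathematical coordinates (y grows upward)."""
    dx, dy = p_end[0] - p_start[0], p_end[1] - p_start[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360          # trigonometric property
    return DIRECTIONS[int(((angle + 22.5) % 360) // 45)]    # 45-degree sectors

print(eight_direction((0, 0), (-3, 3)))   # -> "upper-left"
```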