• Title/Summary/Keyword: gesture-recognition


A Study on Convergence Development Direction of Gesture Recognition Game (동작 인식 게임의 융합 발전 방향)

  • Lee, MyounJae
    • Journal of the Korea Convergence Society / v.5 no.4 / pp.1-7 / 2014
  • Gesture recognition offers users ease of use and immediacy. Because of these benefits, gesture recognition technology has been applied and fused in many areas, such as the military, health care, and education. Gesture recognition games in particular, which let users play with movements similar to real actions, are being combined with fields such as medicine, the military, and education. Against this background, this paper discusses the future convergence direction of gesture recognition games. It reviews the current state of gesture recognition technology and games, and describes the problems of gesture recognition games and ways to improve them. The paper can help improve the competitiveness of domestic convergence in gesture recognition games.

Proposal of Camera Gesture Recognition System Using Motion Recognition Algorithm

  • Moon, Yu-Sung;Kim, Jung-Won
    • Journal of IKEEE / v.26 no.1 / pp.133-136 / 2022
  • This paper concerns motion gesture recognition and proposes a system and algorithm that address the flaws of current systems by using video images of the entire hand and reading its motion to improve recognition accuracy. The proposed system comprises an image capturing unit that acquires images of the area relevant to gesture reading, a motion extraction unit that extracts the motion region from the image, and a hand gesture recognition unit that reads the motion gestures in the extracted region. The proposed motion gesture algorithm achieves a 20% improvement over the current system.
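As a rough illustration of the three-stage pipeline the abstract describes (image capture, motion extraction, gesture recognition), the Python/OpenCV sketch below uses frame differencing for the motion extraction unit and a trivial area-based placeholder for the recognition unit; none of the function names or thresholds come from the paper.

```python
import cv2

def extract_motion(prev_gray, gray, thresh=25):
    """Motion extraction via frame differencing: binary mask of moving pixels."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def recognize_hand_gesture(mask):
    """Placeholder recognizer: classify by the area of the largest moving region."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "none"
    area = max(cv2.contourArea(c) for c in contours)
    return "large_motion" if area > 5000 else "small_motion"

cap = cv2.VideoCapture(0)                        # image capturing unit
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no camera available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = extract_motion(prev_gray, gray)       # motion extraction unit
    print(recognize_hand_gesture(mask))          # hand gesture recognition unit
    prev_gray = gray
cap.release()
```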

Gesture Recognition Algorithm by Analyzing Direction Change of Trajectory (궤적의 방향 변화 분석에 의한 제스처 인식 알고리듬)

  • Park, Jahng-Hyon;Kim, Minsoo
    • Journal of the Korean Society for Precision Engineering / v.22 no.4 / pp.121-127 / 2005
  • The widespread use of intelligent robots creates a need for communication between robots and human beings, and gesture recognition is being studied as a way to improve that interaction. Previous gesture recognition algorithms, however, tend to require both complicated procedures and a separate training process to reach high recognition rates. This study proposes a computer-vision-based gesture recognition algorithm that is relatively simple and more efficient at recognizing various human gestures. After the hand is traced using a marker, direction changes of the gesture trajectory are analyzed to produce a simple gesture code that carries the minimal information needed for recognition. A map is developed to recognize gestures that can be expressed with different gesture codes. The advantages and disadvantages of the proposed algorithm were evaluated using numerical and geometrical trajectories.
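The direction-change coding idea can be sketched as follows: successive trajectory points are quantized into coarse direction codes, repeated codes are collapsed so that only direction changes remain, and the resulting code sequence is looked up in a gesture map. The 8-direction quantization and the example map below are assumptions for illustration, not the paper's actual codes.

```python
import math

def direction_code(p, q, n_dirs=8):
    """Quantize the direction of the segment p->q into one of n_dirs codes."""
    angle = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
    return int(round(angle / (2 * math.pi / n_dirs))) % n_dirs

def encode_trajectory(points):
    """Keep only direction *changes*, giving a minimal gesture code sequence."""
    codes = [direction_code(p, q) for p, q in zip(points, points[1:])]
    collapsed = [c for i, c in enumerate(codes) if i == 0 or c != codes[i - 1]]
    return tuple(collapsed)

# Illustrative gesture map: code sequences -> gesture labels.
GESTURE_MAP = {
    (0,): "swipe_right",
    (4,): "swipe_left",
    (0, 2): "right_then_up",
}

trajectory = [(0, 0), (1, 0), (2, 0), (3, 0)]   # horizontal marker motion
print(GESTURE_MAP.get(encode_trajectory(trajectory), "unknown"))  # swipe_right
```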

Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition (바디 제스처 인식을 위한 기초적 신체 모델 인코딩과 선택적 / 비동시적 입력을 갖는 병렬 상태 기계)

  • Kim, Juchang;Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.1-7 / 2013
  • Body gesture recognition has been an active research field in Human-Robot Interaction (HRI). Most conventional body gesture recognition algorithms use a Hidden Markov Model (HMM) to model gestures with spatio-temporal variability. HMM-based algorithms, however, have difficulty excluding meaningless gestures. In addition, conventional algorithms must perform gesture segmentation first and then pass the extracted gesture to the HMM for recognition; this separated pipeline introduces a delay between two consecutive gestures and makes the system unsuitable for continuous gesture recognition. To overcome these two limitations, this paper proposes primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, together with a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. Experimental results show that the proposed system can reliably exclude meaningless gestures from continuous body model data while recognizing multiple simultaneous gestures without losing recognition rate compared to previous HMM-based work.
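A minimal sketch, under assumptions, of the two ideas named in the abstract: per-link motion vectors are quantized into a small set of primitive codes, and a per-gesture state machine consumes only the codes relevant to it (selective/asynchronous input) so several machines can run in parallel. The primitive set, link names, and gesture sequence below are illustrative, not the paper's definitions.

```python
import numpy as np

PRIMITIVES = {0: "still", 1: "up", 2: "down", 3: "left", 4: "right"}

def encode_link_motion(delta, eps=0.05):
    """Quantize one link's displacement vector into a primitive code."""
    dx, dy = delta
    if abs(dx) < eps and abs(dy) < eps:
        return 0
    if abs(dy) >= abs(dx):
        return 1 if dy > 0 else 2
    return 4 if dx > 0 else 3

class GestureStateMachine:
    """Advances through a fixed code sequence for one link; resets on mismatch."""
    def __init__(self, name, link, sequence):
        self.name, self.link, self.sequence, self.state = name, link, sequence, 0

    def feed(self, link, code):
        if link != self.link or code == 0:      # selective / asynchronous input
            return None
        self.state = self.state + 1 if code == self.sequence[self.state] else 0
        if self.state == len(self.sequence):
            self.state = 0
            return self.name                    # gesture recognized
        return None

machines = [GestureStateMachine("raise_right_hand", "right_hand", [1, 1])]
stream = [("right_hand", np.array([0.0, 0.2])), ("right_hand", np.array([0.0, 0.3]))]
for link, delta in stream:
    code = encode_link_motion(delta)
    for m in machines:
        hit = m.feed(link, code)
        if hit:
            print("recognized:", hit)
```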

Deep Learning Based 3D Gesture Recognition Using Spatio-Temporal Normalization (시 공간 정규화를 통한 딥 러닝 기반의 3D 제스처 인식)

  • Chae, Ji Hun;Gang, Su Myung;Kim, Hae Sung;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.21 no.5 / pp.626-637 / 2018
  • Humans exchange information not only through words but also through body and hand gestures, which can be used to build effective interfaces for mobile, virtual reality, and augmented reality applications. Earlier 2D gesture recognition research suffered information loss from projecting 3D information into 2D, and although recognition in 3D space covers a wider recognition range than 2D, it also increases the complexity of the task. This paper proposes a real-time deep learning model and application for gesture recognition in 3D space. First, gesture data are constructed and acquired using the Unity game engine. Second, the input vectors are normalized in space and time before the 3D gesture recognition model is trained. Third, the SELU (Scaled Exponential Linear Unit) function is applied as the network's activation function for faster learning and better recognition performance. The proposed system is expected to be applicable to fields such as rehabilitation care, games, and virtual reality.
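The two preprocessing ideas in the abstract can be sketched as follows, with assumed details: a 3D gesture trajectory is spatially centered and scaled and temporally resampled to a fixed number of frames, and the SELU activation (constants from Klambauer et al., 2017) is applied in the network. Frame count and the toy trajectory are illustrative.

```python
import numpy as np

def normalize_gesture(points, n_frames=32):
    """Spatially center/scale a (T, 3) trajectory and resample it to n_frames."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.mean(axis=0)                          # spatial centering
    scale = np.linalg.norm(pts, axis=1).max() or 1.0
    pts /= scale                                     # spatial scaling
    t_old = np.linspace(0, 1, len(pts))
    t_new = np.linspace(0, 1, n_frames)              # temporal resampling
    return np.stack([np.interp(t_new, t_old, pts[:, d]) for d in range(3)], axis=1)

def selu(x, alpha=1.6732632423543772, lam=1.0507009873554805):
    """Scaled Exponential Linear Unit used as the network's activation."""
    x = np.asarray(x, dtype=float)
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1))

gesture = [[0, 0, 0], [1, 2, 0], [2, 4, 1], [3, 6, 1]]   # toy 3D hand trajectory
print(normalize_gesture(gesture).shape)                   # (32, 3)
print(selu([-1.0, 0.0, 1.0]))
```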

A Hand Gesture Recognition Method using Inertial Sensor for Rapid Operation on Embedded Device

  • Lee, Sangyub;Lee, Jaekyu;Cho, Hyeonjoong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.757-770 / 2020
  • We propose a hand gesture recognition method that is compatible with a head-up display (HUD) with limited processing resources. For fast link adaptation with the HUD, gesture recognition must be processed rapidly and the wearable device must send only a minimal amount of driver hand gesture data. We therefore recognize each hand gesture with an inertial measurement unit (IMU) sensor using revised correlation matching: recognition is performed by computing the correlation between every axis of the acquired data set, and classifying against pre-defined gesture values and actions enables rapid recognition. The algorithm can be implanted in wearable bands and requires a minimal processing load. Experiments with pre-defined gestures on a wearable platform device validated the feasibility and effectiveness of the proposed hand gesture recognition system; despite being based on a very simple concept, the algorithm showed good recognition accuracy.
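A minimal sketch of correlation-based template matching for IMU gestures along the lines the abstract describes: each axis of an incoming window is correlated against each pre-defined gesture template and the best match above a threshold is returned. The templates and threshold here are invented for illustration.

```python
import numpy as np

def axiswise_correlation(window, template):
    """Mean Pearson correlation over the sensor axes (arrays of shape (T, A))."""
    corrs = [np.corrcoef(window[:, a], template[:, a])[0, 1] for a in range(window.shape[1])]
    return float(np.nanmean(corrs))

def recognize(window, templates, threshold=0.7):
    """Return the best-matching pre-defined gesture, or None if nothing correlates."""
    scores = {name: axiswise_correlation(window, tpl) for name, tpl in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

T = 50
t = np.linspace(0, 1, T)
templates = {
    "wrist_flick": np.stack([np.sin(2 * np.pi * t)] * 3, axis=1),
    "tap": np.stack([np.exp(-30 * (t - 0.5) ** 2)] * 3, axis=1),
}
window = templates["wrist_flick"] + 0.05 * np.random.randn(T, 3)
print(recognize(window, templates))   # expected: "wrist_flick"
```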

Hand Gesture Recognition Suitable for Wearable Devices using Flexible Epidermal Tactile Sensor Array

  • Byun, Sung-Woo;Lee, Seok-Pil
    • Journal of Electrical Engineering and Technology / v.13 no.4 / pp.1732-1739 / 2018
  • With the explosion of digital devices, interaction technologies between humans and devices are needed more than ever. Hand gesture recognition is particularly attractive because it is easy to use. Approaches fall into two groups: contact sensing and non-contact sensing. Compared with non-contact gesture recognition, contact gesture recognition can classify gestures that leave the sensor's field of view, and because the sensor is in direct contact with the user, relatively accurate information can be acquired. Electromyography (EMG) and force-sensitive resistors (FSRs) are the typical contact methods based on muscle activity, but such sensors are generally too sensitive to environmental disturbances such as electrical noise and electromagnetic signals. In this paper, we propose a novel contact gesture recognition method based on a Flexible Epidermal Tactile Sensor Array (FETSA) that measures electrical signals produced by movements of the wrist. To recognize gestures with FETSA, we extract feature sets and classify the gestures with a support vector machine. The performance of the proposed method is very promising in comparison with two previous non-contact and contact gesture recognition studies.
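The classification stage can be sketched roughly as below: simple per-channel features are extracted from a tactile-sensor window and classified with a support vector machine. The feature choice (mean, standard deviation, peak per channel) and the synthetic data are assumptions for illustration, not the paper's actual feature set.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(signal):
    """signal: (T, C) window of tactile channels -> per-channel mean/std/peak."""
    return np.concatenate([signal.mean(axis=0), signal.std(axis=0), signal.max(axis=0)])

rng = np.random.default_rng(0)

def synthetic_gesture(label, T=100, C=8):
    """Toy wrist-movement signal whose amplitude depends on the gesture label."""
    base = np.sin(np.linspace(0, 4 * np.pi, T))[:, None] * (label + 1)
    return base + 0.1 * rng.standard_normal((T, C))

X = np.array([extract_features(synthetic_gesture(g)) for g in [0, 1, 2] for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([extract_features(synthetic_gesture(1))]))   # expected: [1]
```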

CNN-based Gesture Recognition using Motion History Image

  • Koh, Youjin;Kim, Taewon;Hong, Min;Choi, Yoo-Joo
    • Journal of Internet Computing and Services / v.21 no.5 / pp.67-73 / 2020
  • In this paper, we present a CNN-based gesture recognition approach that reduces the memory burden of the input data. Most neural network-based gesture recognition methods use a sequence of frame images as input, which causes a memory burden. We instead use a motion history image to define a meaningful gesture: a grayscale image into which temporal motion information is collapsed by synthesizing silhouette images of the user over the period of one meaningful gesture. We first summarize previous traditional and neural network-based approaches to gesture recognition, then explain the preprocessing procedure for building the motion history image and the network architecture with three convolution layers for recognizing meaningful gestures. In the experiments, we trained five gesture types: charging power, shooting left, shooting right, kicking left, and kicking right. Recognition accuracy was measured while adjusting the number of filters in each layer of the proposed network. Using a 240 × 320 grayscale image to define one meaningful gesture, we achieved a gesture recognition accuracy of 98.24%.
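A motion history image can be built from binary silhouette frames as in the sketch below: moving pixels are stamped with the current time, older motion fades out, and the whole gesture collapses into one grayscale image suitable as CNN input. The decay parameter and toy frames are assumptions for illustration.

```python
import numpy as np

def update_mhi(mhi, silhouette, timestamp, tau):
    """Stamp moving pixels with the current time; clear pixels older than tau."""
    mhi = np.where(silhouette > 0, timestamp, mhi)
    mhi[mhi < timestamp - tau] = 0
    return mhi

def gesture_to_mhi(silhouettes, tau=None):
    """Collapse a sequence of (H, W) binary silhouettes into one grayscale MHI."""
    tau = tau or len(silhouettes)
    mhi = np.zeros_like(silhouettes[0], dtype=float)
    for t, sil in enumerate(silhouettes, start=1):
        mhi = update_mhi(mhi, sil, t, tau)
    return (255 * mhi / mhi.max()).astype(np.uint8)   # normalize to 0..255

# Toy example: a blob sweeping left to right across a 240x320 frame.
frames = []
for t in range(10):
    frame = np.zeros((240, 320), dtype=np.uint8)
    frame[100:140, 30 * t:30 * t + 40] = 1
    frames.append(frame)
print(gesture_to_mhi(frames).shape)   # (240, 320), ready as CNN input
```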

Improvement of Gesture Recognition using 2-stage HMM (2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구)

  • Jung, Hwon-Jae;Park, Hyeonjun;Kim, Donghan
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.1034-1037 / 2015
  • In recent years, various methods have been developed in robotics to create a close relationship between people and robots, including speech, vision, and biometric recognition as well as gesture-based interaction. These recognition technologies are used for convenience in wearable devices, smartphones, and other electronic devices, and among them gesture recognition is the most commonly used and the most appropriate for wearables. Gesture recognition can be classified as contact or non-contact. This paper proposes contact gesture recognition with IMU and EMG sensors that applies the hidden Markov model (HMM) twice. In the first stage, several simple behaviors are combined into main gestures through an HMM, following the standard HMM pattern recognition process. In the second stage, the sequence of main gestures produced by the first stage forms higher-order gestures through another HMM. In this way, more natural and intelligent gestures can be implemented from simple ones, and this process can play a larger role in gesture recognition-based UX for many wearable and smart devices.
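The cascaded idea can be sketched with a small discrete HMM forward algorithm used twice: a first-stage model set maps windows of sensor symbols to main gestures, and a second-stage model set reads the resulting gesture sequence as observations for higher-order gestures. All model parameters, symbol alphabets, and gesture names below are made up for illustration.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete HMM: log P(obs | model), with scaling."""
    alpha = pi * B[:, obs[0]]
    log_p = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s
    return log_p + np.log(alpha.sum())

def classify(obs, models):
    """Pick the model (gesture) with the highest likelihood for the observation."""
    return max(models, key=lambda name: hmm_log_likelihood(obs, *models[name]))

# Stage 1: two toy "main gesture" HMMs over 3 sensor symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
stage1 = {
    "raise": (pi, A, np.array([[0.8, 0.1, 0.1], [0.6, 0.3, 0.1]])),
    "swing": (pi, A, np.array([[0.1, 0.8, 0.1], [0.1, 0.6, 0.3]])),
}
# Stage 2: higher-order gestures over the alphabet {raise=0, swing=1}.
stage2 = {
    "wave": (pi, A, np.array([[0.2, 0.8], [0.3, 0.7]])),
    "salute": (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]])),
}

segments = [[0, 0, 0, 1], [1, 1, 1, 0], [1, 1, 0, 1]]         # windows of sensor symbols
main_seq = [classify(seg, stage1) for seg in segments]        # first-stage output
codes = [0 if g == "raise" else 1 for g in main_seq]
print(main_seq, "->", classify(codes, stage2))                # second-stage output
```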

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, limiting the application of gesture recognition algorithms in practical scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. A shallow residual structure is then built with shared weights, which avoids vanishing or exploding gradients as the network deepens and makes the model easier to optimise. Additional convolutional layers accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution, and a fully connected cascade softmax classifier completes the gesture recognition. Compared with a densely connected feature-reuse network, the proposed algorithm optimises feature reuse to avoid performance fluctuations caused by feature redundancy. Experimental results on the IsoGD gesture dataset and the Gesture dataset show that the proposed algorithm offers fast convergence and high accuracy.
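A shallow residual block with small convolution kernels, of the kind the abstract describes, can be sketched in PyTorch as follows; the channel sizes, number of blocks, and class count are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)              # identity shortcut eases optimisation

class GestureResNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)   # softmax applied by the loss

    def forward(self, x):
        x = self.pool(self.blocks(self.stem(x))).flatten(1)
        return self.fc(x)

logits = GestureResNet()(torch.randn(1, 3, 112, 112))
print(logits.shape)   # torch.Size([1, 10])
```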