• Title/Summary/Keyword: Motion recognition technique


A Study on Motion and Position Recognition Considering VR Environments (VR 환경을 고려한 동작 및 위치 인식에 관한 연구)

  • Oh, Am-suk
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.12
    • /
    • pp.2365-2370
    • /
    • 2017
  • In this paper, we propose a motion and position recognition technique for experiential VR environments. For motion recognition, multiple AHRS devices are attached to body parts and a coordinate system is defined on that basis. The user's motion is recognized from the 9-axis motion information measured by each AHRS device, and the motion angle is corrected by extracting the joint angles between body segments. For position recognition, walking information is extracted from the inertial sensor of the AHRS device to recognize relative position, and the cumulative error is corrected using a BLE fingerprint. To validate the proposed technique, AHRS-based position recognition and joint-angle extraction tests were performed; the average error was 0.25 m for position recognition and 3.2° for joint-angle extraction.
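
The joint-angle extraction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each AHRS device reports a unit quaternion for its body segment and that each segment's long axis is the z-axis in its own frame.

```python
import numpy as np

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return 2 * np.dot(u, v) * u + (w * w - np.dot(u, u)) * v + 2 * w * np.cross(u, v)

def joint_angle(q_upper, q_lower, axis=np.array([0.0, 0.0, 1.0])):
    """Angle in degrees between two body segments whose orientations
    are given as AHRS quaternions; `axis` is the assumed segment axis."""
    a = rotate(q_upper, axis)
    b = rotate(q_lower, axis)
    cos = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```

With two identical orientations the joint angle is 0°; rotating one segment 90° about a shared axis yields 90°.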

Recognition of Virtual Written Characters Based on Convolutional Neural Network

  • Leem, Seungmin;Kim, Sungyoung
    • Journal of Platform Technology
    • /
    • v.6 no.1
    • /
    • pp.3-8
    • /
    • 2018
  • This paper proposes a technique, based on a convolutional neural network (CNN), for recognizing online handwritten cursive data obtained by tracing the motion trajectory of a user writing in 3D space. Recognizing virtual characters entered in 3D space is difficult because the input contains both character strokes and movement strokes. Building on our previous work on localizing character strokes and movement strokes, we use a labeling technique to divide each syllable into consonant and vowel units. The coordinate information of the separated consonants and vowels is converted into image data, and Korean handwriting recognition is performed with the CNN. After training the network on 1,680 syllables written by five writers, accuracy was measured with new writers who did not contribute training data; phoneme-based recognition accuracy reached 98.9%. The proposed method drastically reduces the amount of training data required compared with syllable-based learning.
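
The "coordinates converted into image data" step can be sketched as a simple rasterization. This is an assumed illustration of the general idea, not the authors' pipeline: stroke points are min-max normalized and plotted onto a small binary canvas suitable for a CNN input.

```python
import numpy as np

def strokes_to_image(points, size=28):
    """Project a list of (x, y) pen coordinates onto a size x size
    binary image, normalizing the trajectory to fill the canvas."""
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)
    scaled = (pts - mins) / span * (size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    for x, y in np.rint(scaled).astype(int):
        img[size - 1 - y, x] = 1  # flip y so the origin is bottom-left
    return img
```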

Image Processing for Video Images of Buoy Motion

  • Kim, Baeck-Oon;Cho, Hong-Yeon
    • Ocean Science Journal
    • /
    • v.40 no.4
    • /
    • pp.213-220
    • /
    • 2005
  • In this paper, an image processing technique that reduces video images of buoy motion to a time series of image coordinates of buoy objects is investigated. The buoy motion images are noisy due to time-varying brightness and non-uniform background illumination, and the occurrence of boats, wakes, and wind-induced whitecaps significantly interferes with recognition of buoy objects. A semi-automated procedure consisting of object recognition and image measurement steps is therefore used, which gives more satisfactory results than a manual process. Spectral analysis shows that the image coordinates of buoy objects represent wave motion well, indicating their usefulness in the analysis of wave characteristics.
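
The core reduction, from a noisy frame to one image coordinate per frame, can be sketched as background subtraction followed by a blob centroid. This is a minimal assumed version; the paper's procedure is semi-automated and more elaborate.

```python
import numpy as np

def buoy_centroid(frame, background, thresh=30):
    """Return the image coordinates (row, col) of the buoy blob after
    subtracting a background estimate (e.g. the per-pixel median over
    many frames) to compensate for non-uniform illumination."""
    diff = frame.astype(float) - background.astype(float)
    mask = diff > thresh
    if not mask.any():
        return None  # no object detected in this frame
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```

Applying this per frame yields the time series of image coordinates used for the spectral analysis.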

A Study on Taekwondo Training System using Hybrid Sensing Technique

  • Kwon, Doo Young
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.12
    • /
    • pp.1439-1445
    • /
    • 2013
  • We present a Taekwondo training system using a hybrid sensing technique that combines a body sensor and a visual sensor. The body sensor (an accelerometer) captures the rotational and inertial motion data that are important for Taekwondo motion detection and evaluation, while the visual sensor (a camera) captures and records sequential images of the performance. Motion chunks are proposed to structure Taekwondo motions and to design HMMs (Hidden Markov Models) for motion recognition. Trainees can evaluate their trial motions numerically by computing the distance to the standard motion performed by a trainer. For the motion training video, the real-time camera images are overlaid with visualized body sensor data so that users can see how the rotational and inertial motion data flow.
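
The numeric evaluation step, comparing a trial motion against the trainer's standard motion, can be sketched as a frame-wise distance. This is an illustrative baseline under the assumption of equal-length, time-aligned sequences, not the paper's exact metric.

```python
import numpy as np

def motion_score(trial, standard):
    """Numeric score for a trial motion: mean Euclidean distance per
    frame between trial and standard sensor sequences of shape
    (frames, channels); lower means closer to the standard."""
    trial = np.asarray(trial, dtype=float)
    standard = np.asarray(standard, dtype=float)
    return float(np.linalg.norm(trial - standard, axis=1).mean())
```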

Implementation of DID interface using gesture recognition (제스쳐 인식을 이용한 DID 인터페이스 구현)

  • Lee, Sang-Hun;Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of Digital Contents Society
    • /
    • v.13 no.3
    • /
    • pp.343-352
    • /
    • 2012
  • In this paper, we implemented a touchless interface for a DID (Digital Information Display) system using a gesture recognition technique that includes both hand motion and hand shape recognition. This touchless interface requires no extra attachments and offers both easier usage and spatial convenience. For hand motion recognition, two parameters of the hand motion, slope and velocity, are measured as a direction-based recognition method. For hand shape recognition, the hand region is extracted using the YCbCr color model together with several image processing methods. These recognition methods are combined to generate commands such as next-page, previous-page, screen-up, screen-down, and mouse-click to control the DID system. Experimental results showed a command recognition rate of 93%, which is high enough to confirm the technique's applicability to commercial products.
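
The YCbCr-based hand extraction can be sketched as follows. The BT.601 RGB-to-YCbCr conversion is standard; the Cb/Cr thresholds below are common skin-segmentation defaults, assumed here rather than taken from the paper.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean hand/skin mask from an RGB image of shape (H, W, 3)
    using fixed Cb/Cr thresholds in the YCbCr color model."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    # ITU-R BT.601 chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

In practice the mask is cleaned up with morphological operations before the hand shape is classified.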

Real-Time Hand Gesture Recognition Based on Deep Learning (딥러닝 기반 실시간 손 제스처 인식)

  • Kim, Gyu-Min;Baek, Joong-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.4
    • /
    • pp.424-431
    • /
    • 2019
  • In this paper, we propose a real-time hand gesture recognition algorithm that eliminates the inconvenience of hand controllers in VR applications. The user's 3D hand coordinates are detected by a Leap Motion sensor, and the coordinates are converted into a two-dimensional image. We classify hand gestures in real time by training on these imaged 3D hand coordinates with the SSD (Single Shot multibox Detector) model, one of the CNN (Convolutional Neural Network) models. We propose using all three channels rather than a single channel, and a sliding window technique is also proposed so that a gesture is recognized in real time as the user makes it. Experiments measuring the recognition rate and learning performance showed that the proposed model achieves 99.88% recognition accuracy and higher usability than the existing algorithm.
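
The coordinate-to-image conversion and sliding window can be sketched as follows. The window length, image size, and channel layout (one channel per x/y/z axis, rows indexed by frame, columns by joint) are assumptions for illustration, not the paper's settings.

```python
from collections import deque
import numpy as np

WINDOW = 20  # frames fed to the classifier at each step (assumed)

def coords_to_image(window, size=28):
    """Map a window of 3D hand coordinates (frames, joints, 3) to a
    size x size x 3 float image: one channel per axis, min-max
    normalized over the window."""
    w = np.asarray(window, dtype=float)
    lo, hi = w.min(), w.max()
    norm = (w - lo) / (hi - lo if hi > lo else 1.0)
    img = np.zeros((size, size, 3))
    f, j, _ = norm.shape
    img[:f, :j, :] = norm
    return img

def stream(frames):
    """Slide a fixed window over the incoming frame stream, yielding
    one classifier input per new frame once the buffer is full."""
    buf = deque(maxlen=WINDOW)
    for frame in frames:
        buf.append(frame)
        if len(buf) == WINDOW:
            yield coords_to_image(list(buf))
```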

Representing Human Motions in an Eigenspace Based on Surrounding Cameras

  • Houman, Satoshi;Rahman, M. Masudur;Tan, Joo Kooi;Ishikawa, Seiji
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2004.08a
    • /
    • pp.1808-1813
    • /
    • 2004
  • Recognition of human motions from their 2-D images has various applications. In this paper, an eigenspace method is employed for representing and recognizing human motions. An eigenspace is created from the images taken by multiple cameras surrounding a human in motion. The image streams obtained from the cameras compose the same number of curved lines in the eigenspace, and these are used for recognizing a human motion in a video image. The performance of the proposed technique is shown experimentally.
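
The eigenspace construction and projection can be sketched with standard PCA. This is the generic technique the abstract names, not the authors' specific multi-camera setup: flattened frames form the data matrix, and each frame projects to a point so that an image stream traces a curve in the eigenspace.

```python
import numpy as np

def build_eigenspace(frames, k=3):
    """PCA eigenspace from a stack of flattened images (n, pixels):
    returns the mean image and the top-k eigenvectors (k, pixels)."""
    X = np.asarray(frames, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def project(frame, mean, basis):
    """Project one flattened image into the eigenspace; an image
    stream then becomes a sequence of such low-dimensional points."""
    return basis @ (np.asarray(frame, dtype=float) - mean)
```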

Combining Dynamic Time Warping and Single Hidden Layer Feedforward Neural Networks for Temporal Sign Language Recognition

  • Thi, Ngoc Anh Nguyen;Yang, Hyung-Jeong;Kim, Sun-Hee;Kim, Soo-Hyung
    • International Journal of Contents
    • /
    • v.7 no.1
    • /
    • pp.14-22
    • /
    • 2011
  • Temporal Sign Language Recognition (TSLR) from hand motion is an active area of gesture recognition research aimed at facilitating efficient communication with deaf people. TSLR systems consist of two stages: a motion sensing step, which extracts useful features from the signer's motion, and a classification step, which classifies these features as a performed sign. This work focuses on two research problems: the unknown, time-varying nature of sign language signals in the feature extraction stage, and the computational complexity and time consumption in the classification stage caused by a very large database of sign sequences. We propose combining Dynamic Time Warping (DTW) with Single hidden Layer Feedforward Neural networks (SLFNs) trained by the Extreme Learning Machine (ELM) to cope with these limitations. DTW has the advantage over other approaches that it can align time series data of different lengths to a common prior size, while ELM is a useful technique for classifying the warped features. Our experiments demonstrate the efficiency of the proposed method, with recognition accuracy up to 98.67%. The approach can be generalized to more detailed measurements so as to recognize hand gestures, body motion, and facial expressions.
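
The DTW component can be sketched with the textbook dynamic-programming recurrence. This is the standard distance form over 1-D sequences; the paper uses DTW to warp sequences to a common size before the ELM classifier, which is not shown here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D
    sequences, via the usual O(len(a) * len(b)) DP table."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Note that sequences differing only in timing (e.g. a repeated sample) still align at zero cost, which is exactly the property that makes DTW suitable for time-varying sign signals.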

Design and Performance Analysis of ML Techniques for Finger Motion Recognition (손가락 움직임 인식을 위한 웨어러블 디바이스 설계 및 ML 기법별 성능 분석)

  • Jung, Woosoon;Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.2
    • /
    • pp.129-136
    • /
    • 2020
  • Recognizing finger movements has been used as an intuitive way of human-computer interaction. In this study, we implement a wearable device for finger motion recognition and evaluate the accuracy of several ML (machine learning) techniques: not only the HMM (Hidden Markov Model) and DTW (Dynamic Time Warping) techniques traditionally used for time series analysis, but also an NN (neural network) technique, comparing and analyzing the accuracy of each. To minimize the computational requirement, we also apply preprocessing to each ML technique. Our extensive evaluations demonstrate that the NN-based gesture recognition system achieves 99.1% recognition accuracy, while HMM and DTW achieve 96.6% and 95.9%, respectively.
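
The preprocessing step can be sketched as follows. Resampling each variable-length sensor signal to a fixed length and standardizing it are typical choices before feeding time-series data to an NN classifier; the specific parameters are assumptions, not the paper's.

```python
import numpy as np

def resample_fixed(seq, n=50):
    """Linearly resample a variable-length 1-D gesture signal to a
    fixed length n, so every sample has the same NN input size."""
    seq = np.asarray(seq, dtype=float)
    src = np.linspace(0.0, 1.0, len(seq))
    dst = np.linspace(0.0, 1.0, n)
    return np.interp(dst, src, seq)

def zscore(x):
    """Standardize a signal to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    s = x.std()
    return (x - x.mean()) / (s if s else 1.0)
```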

WiSee's trend analysis using Wi-Fi (Wi-Fi를 이용한 WiSee의 동향 분석)

  • Han, Seung-Ah;Son, Tae-Hyun;Kim, Hyun-Ho;Lee, Hoon-Jae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.05a
    • /
    • pp.74-77
    • /
    • 2015
  • WiSee is a technique that recognizes users' gestures by exploiting Wi-Fi (802.11n/ac) signals. Current motion recognition schemes rely on dedicated devices (e.g. Leap Motion, Kinect) with a recognition range of about 30 cm to 3.5 m; as the range grows, the recognition rate drops, so users must stay within a limited distance. Because WiSee uses Wi-Fi, motion recognition is possible anywhere Wi-Fi is available, and it can also penetrate obstacles, an advantage over conventional recognition methods. In this paper, we examine how WiSee operates and review its recent trends.