• Title/Summary/Keyword: Gesture Analysis

Real-time Hand Gesture Recognition System based on Vision for Intelligent Robot Control (지능로봇 제어를 위한 비전기반 실시간 수신호 인식 시스템)

  • Yang, Tae-Kyu;Seo, Yong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.10 / pp.2180-2188 / 2009
  • This paper studies a vision-based real-time hand gesture recognition system for intelligent robot control. We propose a recognition system that combines PCA and the BP (backpropagation) algorithm. Recognition of hand gestures consists of two steps: a preprocessing step using the PCA algorithm and a classification step using the BP algorithm. PCA is a technique for reducing multidimensional data sets to lower dimensions for effective analysis; in our experiments it is applied to compute feature projection vectors for a given hand image. The BP algorithm supports parallel distributed processing and fast computation owing to its parallel structure, and it recognizes hand gestures in real time by learning from the trained eigen hand gestures. The combined PCA and BP approach shows improved recognition compared with using the PCA algorithm alone.
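
To illustrate the kind of PCA-plus-backpropagation pipeline this abstract describes (not the authors' actual implementation), here is a minimal sketch. It assumes hand images are already cropped to a fixed size and labeled; the number of PCA components and the network size are placeholders.

```python
# Minimal sketch of a PCA + backpropagation (MLP) gesture classifier.
# `images` is assumed to be an array of fixed-size grayscale hand crops and
# `labels` the corresponding gesture IDs; hyperparameters are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

def train_pca_bp(images, labels, n_components=30):
    X = images.reshape(len(images), -1).astype(np.float64)  # flatten each image to a vector
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
    model = make_pipeline(
        PCA(n_components=n_components),           # project onto an "eigen hand" subspace
        MLPClassifier(hidden_layer_sizes=(64,),   # backpropagation-trained classifier
                      max_iter=500),
    )
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))
    return model
```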

Study on the Hand Gesture Recognition System and Algorithm based on Millimeter Wave Radar (밀리미터파 레이더 기반 손동작 인식 시스템 및 알고리즘에 관한 연구)

  • Lee, Youngseok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.3 / pp.251-256 / 2019
  • In this paper we propose a system and algorithm to recognize hand gestures based on millimeter wave radar in the 65 GHz band. The proposed system consists of a millimeter wave radar board, an analog-to-digital conversion and data capture board, and a notebook computer that runs the gesture recognition algorithms. As feature vectors, the proposed algorithm uses global and local Zernike moment descriptors, which are robust to distortion caused by rotation or scaling of 2D data. In the experiments, the performance of the proposed algorithm is evaluated and compared with that of algorithms using only a global or only a local Zernike descriptor as the feature vector. In a confusion matrix analysis, the proposed algorithm shows better precision, accuracy, and sensitivity; its total performance index is 95.6%, compared with 88.4% and 84% for the other two methods.
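
The abstract compares classifiers through precision, accuracy, and sensitivity derived from their confusion matrices. A minimal sketch of that kind of evaluation follows, assuming true and predicted gesture labels are available; the Zernike-moment feature extraction itself is not reproduced here.

```python
# Sketch of confusion-matrix-based evaluation (precision, accuracy, sensitivity).
# `y_true` and `y_pred` are placeholder gesture labels from any classifier.
from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score

def evaluate(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    print("confusion matrix:\n", cm)
    print("accuracy:   ", accuracy_score(y_true, y_pred))
    print("precision:  ", precision_score(y_true, y_pred, average="macro"))
    # sensitivity is the macro-averaged recall over the gesture classes
    print("sensitivity:", recall_score(y_true, y_pred, average="macro"))
```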

Gesture Control Gaming for Motoric Post-Stroke Rehabilitation

  • Andi Bese Firdausiah Mansur
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.37-43 / 2023
  • The hospital situation, timing, and patient restrictions have become obstacles to an optimum therapy session. The crowdedness of the hospital might lead to a tight schedule and a shorter period of therapy. This condition can put a post-stroke patient in a dilemma, as they need regular treatment to recover their nervous system. In this work, we propose an in-house, uncomplicated serious game system that can be used for physical therapy. A Kinect camera is used to capture the depth image stream of the human skeleton, and the user then controls the game with hand gestures. Voice recognition is deployed to make play easier. Users must complete the given challenges to obtain a more significant outcome from this therapy system. Subjects use their upper limbs and hands to capture 3D objects appearing at different speeds and positions; as the challenge grows, speed and location increase and become random, and each captured object raises the score. The scores are then evaluated to correlate with therapy progress. Users are delighted with the system and eager to use it as their daily exercise. The experimental studies compare score and difficulty, which together characterize the user and the game. Users tend to adapt quickly to the easy and medium levels, while the high level requires better focus and proper hand-eye synchronization to capture the 3D objects. The statistical analysis of the usability test with a confidence level (α: 0.05) shows that the proposed game is accessible even without specialized training. It is useful not only for therapy but also for fitness, because it can be used for body exercise. The experimental results are very satisfying: most users enjoy the system and familiarize themselves with it quickly, and the evaluation study demonstrates user satisfaction and positive perception during testing. Future work on the proposed serious game might involve haptic devices to stimulate physical sensation.
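
The abstract mentions a statistical usability analysis at α = 0.05. A minimal sketch of such a test follows, assuming per-user usability scores are compared against a neutral midpoint; the questionnaire, scale, and exact test the authors used are not specified here and are assumptions.

```python
# Sketch of a usability-score significance test at alpha = 0.05.
# `scores` is a placeholder list of per-user usability ratings on a 1-5 scale;
# the neutral midpoint of 3 is an assumption, not the paper's protocol.
from scipy import stats

def usability_test(scores, midpoint=3.0, alpha=0.05):
    t_stat, p_value = stats.ttest_1samp(scores, popmean=midpoint)
    significant = p_value < alpha and t_stat > 0
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}, "
          f"{'above' if significant else 'not above'} the neutral midpoint")
    return significant
```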

The Relationship between Lexical Retrieval and Coverbal Gestures (어휘인출과 구어동반 제스처의 관계)

  • Ha, Ji-Wan;Sim, Hyun-Sub
    • Korean Journal of Cognitive Science / v.22 no.2 / pp.123-143 / 2011
  • At what point in the process of speech production are gestures involved? According to the Lexical Retrieval Hypothesis, gestures are involved in lexicalization during the formulating stage. According to the Information Packaging Hypothesis, gestures are involved in the conceptual planning of messages during the conceptualizing stage. We investigated these hypotheses using a game situation from a TV program that induced the players to engage in lexicalization and conceptualization simultaneously. The transcription of the verbal utterances was augmented with all arm and hand gestures produced by the players. Coverbal gestures were classified into two types: lexical gestures and motor gestures. As a result, concrete words elicited lexical gestures significantly more frequently than abstract words, and abstract words elicited motor gestures significantly more frequently than concrete words. The difficulty of conceptualization for concrete words was significantly correlated with the number of lexical gestures. However, the number of words and the word frequency were not correlated with the number of either type of gesture. This result supports the Information Packaging Hypothesis. Above all, the importance of motor gestures can be inferred from the finding that abstract words elicited motor gestures more frequently than concrete words. Motor gestures, which have been considered unrelated to verbal production, were excluded from analysis in many gesture studies. This study revealed that motor gestures appear to be connected to abstract conceptualization.
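
The analysis described here correlates gesture counts with word properties such as conceptualization difficulty and word frequency. A minimal sketch of that style of correlation test follows, with hypothetical per-item measurements; a Pearson correlation is used as a stand-in for whatever statistic the authors applied.

```python
# Sketch of a correlation analysis between gesture counts and word properties.
# `difficulty` and `lexical_gesture_counts` are hypothetical per-item measurements.
from scipy.stats import pearsonr

def correlate(difficulty, lexical_gesture_counts, alpha=0.05):
    r, p = pearsonr(difficulty, lexical_gesture_counts)
    print(f"r = {r:.3f}, p = {p:.4f}",
          "(significant)" if p < alpha else "(not significant)")
    return r, p
```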

Verification Process for Stable Human Detection and Tracking (안정적 사람 검출 및 추적을 위한 검증 프로세스)

  • Ahn, Jung-Ho;Choi, Jong-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.3 / pp.202-208 / 2011
  • Recently, technologies that control computer systems through human-computer interaction (HCI) have been widely studied. Their applications usually locate the user's position via face detection and recognize the user's gestures, but face detection performance is often not good enough. When an application cannot locate the user's position stably, user interface performance, such as gesture recognition, decreases significantly. In this paper we propose a new stable face detection algorithm that uses skin color detection and a cumulative distribution of face detection results; its effectiveness was verified by experiments. The proposed algorithm is applicable to human tracking based on correspondence matrix analysis.
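
A minimal sketch of the general idea of verifying face detections with skin color and accumulating results over frames is shown below, using OpenCV's stock Haar cascade as a stand-in detector. The HSV skin range, thresholds, and window length are rough assumptions, not the paper's values.

```python
# Sketch: verify face detections with a skin-color ratio and accumulate
# detections over recent frames to stabilize the reported position.
from collections import deque
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recent = deque(maxlen=10)   # last few verified detections (x, y, w, h)

def skin_ratio(bgr_roi):
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # rough skin range (assumption)
    return cv2.countNonZero(mask) / float(mask.size)

def stable_face(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        if skin_ratio(frame_bgr[y:y + h, x:x + w]) > 0.3:   # skin-color verification
            recent.append((x, y, w, h))
    if len(recent) < 5:        # not enough consistent evidence yet
        return None
    return tuple(np.mean(recent, axis=0).astype(int))        # smoothed face box
```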

Vision-Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • Journal of Korea Multimedia Society / v.8 no.6 / pp.768-775 / 2005
  • Since sign languages are the main means of communication among hearing-impaired people, there are communication difficulties between speech-oriented people and sign-language-oriented people. Automated sign language recognition may resolve these communication problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There has been research on gesture and posture recognition using glove-based devices; however, these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling, and the use of colored patches or gloves also causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, captured hand region images are separated from the background using a skin detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures are then recognized using a two-dimensional grid analysis method. Our recognition system is not sensitive to the size or rotation of the input posture images. By optimizing the weights of the posture features with a genetic algorithm, our system achieved high accuracy matching that of systems based on devices or colored gloves. We applied our posture recognition system to Korean Sign Language finger spelling, achieving better than 93% accuracy.
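
A minimal sketch of a two-dimensional grid analysis of a binary hand mask, roughly in the spirit of the posture features described here, is given below. The grid size is an assumption, the skin-segmentation step is presumed to have already produced the mask, and the genetic-algorithm weight optimization is omitted.

```python
# Sketch: 2D grid features from a binary (0/1) hand mask.
# The mask is cropped to the hand's bounding box and divided into an N x N grid;
# each cell's skin-pixel ratio becomes one feature, which makes the descriptor
# insensitive to overall hand size.
import numpy as np

def grid_features(hand_mask, grid=8):
    ys, xs = np.nonzero(hand_mask)
    if len(xs) == 0:
        return np.zeros(grid * grid)          # no hand pixels found
    crop = hand_mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    rows = np.array_split(np.arange(crop.shape[0]), grid)
    cols = np.array_split(np.arange(crop.shape[1]), grid)
    feats = []
    for r in rows:
        for c in cols:
            cell = crop[np.ix_(r, c)]
            feats.append(cell.mean())         # fraction of skin pixels in this cell
    return np.array(feats)
```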

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointed region in the real world from camera images. In general, an arm-pointing gesture encodes a direction that extends from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. Therefore, the proposed method extracts two end points for the estimation of the pointing direction: one from the user's face region and another from the user's fingertip region. The pointing direction and its target region are then estimated based on the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experimental results on several real video sequences.
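
A minimal sketch of the geometry behind this kind of pointing estimation, under simplifying assumptions: the face and fingertip are triangulated from two calibrated cameras, and the ray through them is intersected with a known target plane. The projection matrices and plane parameters are placeholders, not the authors' calibration.

```python
# Sketch: estimate the pointed location as the intersection of the ray
# through the 3D face and fingertip positions with a known target plane.
# P1, P2 are placeholder 3x4 camera projection matrices from calibration.
import cv2
import numpy as np

def triangulate(P1, P2, pt_cam1, pt_cam2):
    # cv2.triangulatePoints expects 2xN arrays and returns homogeneous 4xN points
    X = cv2.triangulatePoints(P1, P2,
                              np.float64(pt_cam1).reshape(2, 1),
                              np.float64(pt_cam2).reshape(2, 1))
    return (X[:3] / X[3]).ravel()             # dehomogenize to a 3D point

def pointed_target(face_3d, finger_3d, plane_n, plane_d):
    # Ray: face + t * (finger - face); plane: n . x + d = 0
    direction = finger_3d - face_3d
    denom = plane_n @ direction
    if abs(denom) < 1e-9:
        return None                           # ray is parallel to the target plane
    t = -(plane_n @ face_3d + plane_d) / denom
    return face_3d + t * direction            # 3D point on the target plane
```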

Gesture Recognition on Image Sequences (연속 영상에서의 제스처 인식)

  • 이현주;이칠우
    • Proceedings of the IEEK Conference / 2000.09a / pp.443-446 / 2000
  • In everyday life, humans convey a great deal of information through non-verbal means such as gestures and facial expressions. Research on gesture recognition is therefore very important for building natural and intelligent interfaces. In this paper, rather than using the static content of each image in a video sequence, we numerically measure the amount of change between an image and its neighboring images, and we introduce a method that recognizes gestures from these measurements using Principal Component Analysis (PCA) and a Hidden Markov Model (HMM).
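
A minimal sketch of the frame-difference + PCA + HMM pipeline outlined in this abstract, using scikit-learn and the hmmlearn package as stand-ins; the number of PCA components, the number of HMM states, and the exact change measure are assumptions.

```python
# Sketch: gesture recognition from inter-frame change using PCA + HMM.
# Each training sequence is a placeholder (T, H, W) array of grayscale frames;
# one GaussianHMM per gesture class is trained and the best-scoring model wins.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

def diff_features(frames, pca):
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))   # frame-to-frame change
    return pca.transform(diffs.reshape(len(diffs), -1))

def train_models(sequences_by_class, n_components=10, n_states=4):
    # Fit PCA on the frame-difference vectors of all training sequences.
    diffs = [np.abs(np.diff(s.astype(np.float64), axis=0)).reshape(len(s) - 1, -1)
             for seqs in sequences_by_class.values() for s in seqs]
    pca = PCA(n_components=n_components).fit(np.concatenate(diffs))
    models = {}
    for label, seqs in sequences_by_class.items():
        feats = [diff_features(s, pca) for s in seqs]
        models[label] = GaussianHMM(n_components=n_states).fit(
            np.concatenate(feats), [len(f) for f in feats])
    return pca, models

def classify(frames, pca, models):
    feats = diff_features(frames, pca)
    return max(models, key=lambda label: models[label].score(feats))
```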

The Effect of Gesture during the e-Learning Class on Cross-cultural Learners

  • Shin, Sanggyu
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.313-316 / 2018
  • In this paper, the authors examine how a lecturer's gestures affect learners from different cultures in online and in-field teaching sessions, with the aim of improving the service when building an e-Learning system. The study also surveys how learners perceive cultural differences during a presentation, building on sociolinguistic research. Before starting the full-scale research, a preliminary study was conducted as the basis for the initial experiment, and its results were analyzed to inform the main research.