• Title, Summary, Keyword: hand gesture recognition

278 search results

NUI/NUX framework based on intuitive hand motion (직관적인 핸드 모션에 기반한 NUI/NUX 프레임워크)

  • Lee, Gwanghyung;Shin, Dongkyoo;Shin, Dongil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.11-19
    • /
    • 2014
  • The natural user interface/experience (NUI/NUX) provides a natural motion interface that requires no devices or tools such as mice, keyboards, pens, or markers. Until now, typical motion recognition methods have used markers, receiving the coordinates of each marker as relative data and storing each coordinate value in a database. However, recognizing motion accurately requires more markers, and attaching the markers and processing the data take considerable time. In addition, because NUI/NUX frameworks have been developed without intuitiveness, their most important quality, usability problems arise and users are forced to learn the conventions of many different frameworks. To address this problem, in this paper we avoid markers and implement a system that anyone can operate. We also design a multi-modal NUI/NUX framework that controls voice, body motion, and facial expression simultaneously, and propose a new mouse-operation algorithm that recognizes intuitive hand gestures and maps them onto the monitor. We implement it so that users can perform the "hand mouse" operation easily and intuitively.
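The abstract above does not give the mapping algorithm; a minimal hypothetical sketch of a "hand mouse" stage is to project a normalized hand position from camera space onto monitor pixels and smooth the cursor. All names and parameters here are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: map a normalized hand position (0..1 in camera space)
# to monitor coordinates, with exponential smoothing to steady the cursor.

def hand_to_screen(hand_x, hand_y, screen_w=1920, screen_h=1080):
    """Clamp a normalized hand coordinate and scale it to a pixel coordinate."""
    x = min(max(hand_x, 0.0), 1.0) * (screen_w - 1)
    y = min(max(hand_y, 0.0), 1.0) * (screen_h - 1)
    return int(x), int(y)

def smooth(prev, cur, alpha=0.3):
    """Exponential smoothing: blend the new point toward the previous one."""
    return (prev[0] + alpha * (cur[0] - prev[0]),
            prev[1] + alpha * (cur[1] - prev[1]))
```

A real system would feed `smooth` the per-frame output of `hand_to_screen` and inject the result as a pointer event.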

Estimation of Critical Threshold for Rejection in HMM Based Recognition Systems (HMM 기반의 인식시스템에서의 거절기능 수행을 위한 임계 문턱값 추정)

  • 김인철;진성일
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.2
    • /
    • pp.90-94
    • /
    • 2000
  • In this paper, we propose an efficient method of estimating the critical threshold used to reject unreliable patterns in an HMM-based recognition system. Rejection methods based on anti-models, formulated as a statistical hypothesis test, decide whether to accept an input pattern by comparing the likelihood ratio of the HMM and the anti-model to a critical threshold. It is quite difficult to fix a threshold on the probability of an HMM because the range of such probabilities varies severely depending on the chosen class model. We therefore estimate the critical threshold, which is strongly class-dependent, from the likelihood scores on the training database. In our experiments, we applied the proposed threshold-estimation method to an HMM-based 3D hand gesture recognition system and found that it rejects unreliable input gestures successfully regardless of the type of anti-model.
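A minimal sketch of class-dependent rejection thresholding, assuming per-pattern log-likelihood-ratio scores (log P(x | HMM) minus log P(x | anti-model)) are already computed for each class's training set. The specific rule below, mean minus k standard deviations, is an assumption for illustration; the paper estimates its threshold from training likelihood scores but does not fix this exact formula in the abstract.

```python
# Sketch: estimate a class-dependent rejection threshold from training
# log-likelihood-ratio (LLR) scores, then accept/reject new patterns.
from statistics import mean, stdev

def estimate_threshold(train_llrs, k=2.0):
    """Critical threshold: mean minus k standard deviations of training LLRs."""
    return mean(train_llrs) - k * stdev(train_llrs)

def accept(llr, threshold):
    """Accept the input pattern only if its LLR clears the class threshold."""
    return llr >= threshold
```

One threshold would be estimated per gesture class, since the abstract notes the score ranges vary severely across class models.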


Implementation of Multi-touch Tabletop Display for Human Computer Interaction (HCI 를 위한 멀티터치 테이블-탑 디스플레이 시스템 구현)

  • Kim, Song-Gook;Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference
    • /
    • /
    • pp.553-560
    • /
    • 2007
  • This paper describes a tabletop display system, and its implementation algorithm, that recognizes two-handed touch for real-time interaction. The proposed system is built on the FTIR (Frustrated Total Internal Reflection) mechanism and supports multi-touch, multi-user hand-gesture input. The system consists of a beam projector for image projection, an acrylic screen fitted with infrared LEDs, a diffuser, and an infrared camera for image acquisition. The set of gesture commands needed to control the system was defined by analyzing the input/output degrees of freedom of the interaction table and considering convenience, communication, constancy, and completeness. The defined gestures are subdivided by the number, position, and motion of the fingers the user places on the screen. Images captured by the infrared camera undergo simple morphological operations for noise removal and fingertip-region detection before entering the recognition stage, where input gesture commands are compared against predefined hand-gesture models. In detail, the system first counts the fingers touching the screen and determines their regions, then extracts the center point of each region and computes the angles and Euclidean distances between them. The positional changes of the multi-touch points are then compared with the information in the predefined models. The effectiveness of the proposed system is demonstrated by using it to control Google-earth.
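The geometry step described above, centers of fingertip regions plus the angles and Euclidean distances between them, can be sketched as follows. Function names are assumptions; the paper's blob detection itself (morphology on the IR image) is not reproduced here.

```python
# Sketch of the touch-point geometry used for gesture matching:
# blob centroids, pairwise distances, and segment angles.
import math

def centroid(points):
    """Center of mass of one fingertip blob (list of (x, y) pixels)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def distance(a, b):
    """Euclidean distance between two touch points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def angle_deg(a, b):
    """Angle of the segment a->b relative to the x-axis, in degrees."""
    return math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
```

Tracking how these distances and angles change frame to frame is what distinguishes, e.g., a two-finger pinch from a rotation.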


Hand Gesture Recognition Method based on the MCSVM for Interaction with 3D Objects in Virtual Reality (가상현실 3D 오브젝트와 상호작용을 위한 MCSVM 기반 손 제스처 인식)

  • Kim, Yoon-Je;Koh, Tack-Kyun;Yoon, Min-Ho;Kim, Tae-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • /
    • pp.1088-1091
    • /
    • 2017
  • As graphics-based virtual reality technology advances and attracts growing interest, hand gesture recognition is being actively studied as a method for natural interaction with 3D objects. This paper proposes MCSVM-based hand gesture recognition for interaction with 3D objects in virtual reality. Various hand gestures are first captured through a Leap Motion device and preprocessed. The hand data are then coarsely classified with a binary decision tree, resampled, and converted into chain codes, whose histograms form the feature data. Based on these features, a second classification stage trained with an MCSVM recognizes the gesture. Experiments showed an average recognition rate of 99.2% for 16 command gestures for interacting with 3D objects. In an affective evaluation against a mouse interface, the method enabled more intuitive and user-friendly interaction than mouse input, suggesting that it can serve as an input interface in many virtual reality applications such as games, training simulations, design, and medicine, and that it helps increase immersion in virtual reality.
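The chain-code histogram feature mentioned above can be sketched like this: resampled trajectory points become 8-direction Freeman chain codes, and their normalized histogram is the feature vector fed to the classifier. The discretization details below are assumptions, not taken from the paper.

```python
# Sketch: 8-direction Freeman chain codes and their normalized histogram.
import math

def chain_codes(points):
    """Chain code (0..7, counterclockwise from +x) for consecutive points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        ang = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(round(ang / (math.pi / 4))) % 8)
    return codes

def histogram(codes):
    """Normalized 8-bin direction histogram used as the feature vector."""
    h = [0.0] * 8
    for c in codes:
        h[c] += 1.0
    total = sum(h) or 1.0
    return [v / total for v in h]
```

Because the histogram is normalized, the feature is insensitive to how many samples the resampling step produced.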

An ANN-based gesture recognition algorithm for smart-home applications

  • Huu, Phat Nguyen;Minh, Quang Tran;The, Hoang Lai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.5
    • /
    • pp.1967-1983
    • /
    • 2020
  • The goal of this paper is to analyze and build an algorithm that recognizes hand gestures for smart home applications. The proposed algorithm combines image processing techniques with artificial neural network (ANN) approaches to help users interact with computers through common gestures. We use five types of gestures: Stop, Forward, Backward, Turn Left, and Turn Right. Users control devices through a camera connected to a computer. The algorithm analyzes gestures and performs the appropriate action according to users' requests. The results show that the average accuracy of the proposed algorithm is 92.6 percent for images and more than 91 percent for video, both of which satisfy the performance requirements of real-world applications, specifically smart home services. The processing time is approximately 0.098 seconds on datasets at 10 frames/sec. However, the accuracy still depends on the number and resolution of the training images (video).
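A minimal sketch of the final classification stage for the five gestures named above: a linear scoring layer over a preprocessed feature vector. The weights and the single-layer form are placeholders; the paper trains a full ANN on camera frames.

```python
# Sketch: score a feature vector against the five gesture classes and
# return the highest-scoring label (argmax of w.x + b per class).
GESTURES = ["Stop", "Forward", "Backward", "Turn Left", "Turn Right"]

def classify(features, weights, biases):
    """Return the gesture whose linear score is highest."""
    scores = [sum(w * x for w, x in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    return GESTURES[scores.index(max(scores))]
```

In the deployed system the chosen label would be translated into a device command (e.g. Stop halting the controlled appliance).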

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions:PartB
    • /
    • v.16B no.5
    • /
    • pp.341-346
    • /
    • 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. For widespread use, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a special feature: it can be produced at low cost because it does not use the high-cost motion-sensing fibers of conventional approaches, which makes easy production and popular use possible. It adopts a visual method, obtained by improving conventional optical motion capture technology, instead of a mechanical method using motion-sensing fibers. Compared to conventional visual methods, the proposed method has the following advantages and original contributions. First, conventional visual methods use many cameras and much equipment to reconstruct 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that allows simple, low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they are weak against occlusion, whereas the proposed approach can reconstruct occluded parts using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image-analysis algorithms, which are inconvenient in their initialization and computation times; the proposed method removes these inconveniences with a closed-form image-analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation, which leads to low accuracy and applications confined by singularities; the proposed method avoids these disadvantages through an original formulation in which a closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
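The exponential-form twist coordinates mentioned above refer to the standard rigid-body exponential map; the relations below are the textbook form such a closed-form derivation typically builds on (assumed here for context, not reproduced from the paper). For a unit rotation axis $\omega$ with skew-symmetric matrix $\hat{\omega}$, Rodrigues' formula gives the rotation

$$e^{\hat{\omega}\theta} = I + \hat{\omega}\sin\theta + \hat{\omega}^{2}(1 - \cos\theta),$$

and for a full twist $\xi = (v, \omega)$ with $\omega \neq 0$ the rigid-body transform is

$$e^{\hat{\xi}\theta} = \begin{bmatrix} e^{\hat{\omega}\theta} & (I - e^{\hat{\omega}\theta})(\omega \times v) + \omega\omega^{\top} v\,\theta \\ 0 & 1 \end{bmatrix}.$$

Unlike Euler-angle parameterizations, this representation has no gimbal-lock singularity, which matches the fourth advantage claimed in the abstract.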

Platform Independent Game Development Using HTML5 Canvas (HTML5 캔버스를 이용한 플랫폼 독립적인 게임의 구현)

  • Jang, Seok-Woo;Huh, Moon-Haeng
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.12
    • /
    • pp.3042-3048
    • /
    • 2014
  • Recently, HTML5 has drawn much attention since it is considered a next-generation web standard and can implement many graphics- and multimedia-related techniques in a web browser without separately installed programs. In this paper, we implement a game that is independent of platforms such as iOS and Android using the HTML5 canvas. In the game, the main character moves up, down, left, and right to avoid colliding with neighboring enemies. If the character collides with an enemy, the HP (hit point) gauge bar decreases; if the character obtains heart items, the gauge bar increases. In the future, we will add various items to the game and diversify its user interfaces by applying computer vision techniques such as gesture recognition.
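The collision-and-gauge rule described above can be sketched with axis-aligned bounding boxes. This is an illustrative Python sketch of the logic only (the paper's game runs in JavaScript on the canvas); rectangle layout and HP amounts are assumptions.

```python
# Sketch: AABB collision between character and enemies/items, with the
# HP gauge decreasing on enemy contact and increasing on heart pickup.
def overlaps(a, b):
    """AABB overlap test; rects are (x, y, w, h)."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def update_hp(hp, player, enemies, hearts, hit=10, heal=5, hp_max=100):
    """Apply one frame of gauge updates, clamped to [0, hp_max]."""
    for e in enemies:
        if overlaps(player, e):
            hp -= hit
    for h in hearts:
        if overlaps(player, h):
            hp = min(hp_max, hp + heal)
    return max(0, hp)
```

On the canvas, the same rects would also drive the sprite drawing each animation frame.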

Hand Gesture Recognition Regardless of Sensor Misplacement for Circular EMG Sensor Array System (원형 근전도 센서 어레이 시스템의 센서 틀어짐에 강인한 손 제스쳐 인식)

  • Joo, SeongSoo;Park, HoonKi;Kim, InYoung;Lee, JongShill
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.11 no.4
    • /
    • pp.371-376
    • /
    • 2017
  • In this paper, we propose an algorithm that recognizes EMG patterns regardless of sensor position when performing pattern recognition with a circular EMG system. Fourteen features were extracted from eight-channel EMG signals measured for one second across six motions. Principal component analysis was then applied to the resulting 112 features from the 8 channels, and only the most influential components were kept, reduced to 8 input signals. All experiments used a k-NN classifier, and the data were verified with 5-fold cross-validation. In machine learning, results vary greatly depending on what data are learned: an accuracy of 99.3% was confirmed when using the training data of previous studies, but a sensor displacement of only 22.5 degrees dropped the accuracy sharply, to 67.28%. With the proposed method, accuracy remains about 98% even when the sensor position is changed. These results suggest that the convenience of users of the circular EMG system can be greatly increased.
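One common way to make a circular 8-channel array robust to rotation is to try all circular shifts of the per-channel feature vector and keep the alignment closest to a stored template. The abstract's own method relies on PCA-selected features; the shift-matching variant below is an assumption offered for illustration.

```python
# Sketch: align a rotated circular-array feature vector to a template by
# minimizing squared distance over all circular shifts of the channels.
def circular_shifts(channels):
    """All rotations of the channel feature vector."""
    n = len(channels)
    return [channels[i:] + channels[:i] for i in range(n)]

def best_alignment(channels, template):
    """Return the shift that minimizes squared distance to the template."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(circular_shifts(channels), key=lambda s: dist2(s, template))
```

With 8 electrodes, a physical rotation by one sensor spacing (45 degrees) corresponds exactly to one circular shift, which is why this alignment can undo misplacement.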