• Title/Summary/Keyword: 3D hand gesture

Design and Implementation of a Stereoscopic Image Control System based on User Hand Gesture Recognition (사용자 손 제스처 인식 기반 입체 영상 제어 시스템 설계 및 구현)

  • Song, Bok Deuk;Lee, Seung-Hwan;Choi, HongKyw;Kim, Sung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.3 / pp.396-402 / 2022
  • User interactions are being developed in various forms, and interactions using human gestures in particular are being actively studied. Among them, hand gesture recognition is used as a human interface in the field of realistic media based on the 3D hand model. Interfaces based on hand gesture recognition help users access media more easily and conveniently. User interaction based on hand gesture recognition should allow images to be viewed with fast and accurate recognition technology, without restrictions on the computing environment. This paper develops a fast and accurate user hand gesture recognition algorithm using the open-source MediaPipe framework and the k-NN (k-Nearest Neighbor) machine learning algorithm. In addition, to minimize restrictions imposed by the computing environment, a stereoscopic image control system based on user hand gesture recognition was designed and implemented using a web service environment capable of Internet service and a Docker container as a virtualized environment.
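As an illustration of the MediaPipe-plus-k-NN pipeline this abstract describes, the sketch below shows only the k-NN voting step on already-extracted landmark vectors; the feature vectors, labels, and k value are toy assumptions, and the MediaPipe landmark extraction itself is omitted:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a landmark feature vector by majority vote among
    the k nearest training samples (Euclidean distance)."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy training set: flattened (x, y) hand-landmark vectors with gesture labels.
# Real vectors from MediaPipe Hands would have 21 landmarks (42 values).
train = [
    ((0.10, 0.20, 0.10, 0.30), "open_palm"),
    ((0.10, 0.25, 0.12, 0.31), "open_palm"),
    ((0.80, 0.70, 0.90, 0.80), "fist"),
    ((0.82, 0.72, 0.88, 0.79), "fist"),
]
print(knn_classify(train, (0.11, 0.22, 0.11, 0.30)))  # → open_palm
```

With real landmarks, each video frame would yield one such vector, and the vote smooths out single-frame noise.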

Three Dimensional Hand Gesture Taxonomy for Commands

  • Choi, Eun-Jung;Lee, Dong-Hun;Chung, Min-K.
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.483-492 / 2012
  • Objective: The aim of this study is to suggest a three-dimensional (3D) hand gesture taxonomy that systematically organizes the user's intentions behind deriving a certain gesture. Background: With advances in gesture recognition technology, various researchers have focused on deriving intuitive gestures for commands from users. In most previous studies, the users' reasons for deriving a certain gesture for a command were used only as a reference for grouping gestures. Method: A total of eleven studies that categorized gestures accompanied by speech were investigated. A case study with thirty participants was also conducted to understand in detail the gesture features derived from the users. Results: Through the literature review, a total of nine gesture features were extracted. After the case study, the nine gesture features were narrowed down to seven. Conclusion: A three-dimensional hand gesture taxonomy comprising seven gesture features was developed. Application: The taxonomy may be used as a checklist for understanding the users' reasons.

An Efficient Hand Gesture Recognition Method using Two-Stream 3D Convolutional Neural Network Structure (이중흐름 3차원 합성곱 신경망 구조를 이용한 효율적인 손 제스처 인식 방법)

  • Choi, Hyeon-Jong;Noh, Dae-Cheol;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.6 / pp.66-74 / 2018
  • Recently, there have been active studies on hand gesture recognition to increase immersion and provide user-friendly interaction in virtual reality environments. However, most studies require specialized sensors or equipment, or show low recognition rates. This paper proposes a deep learning-based hand gesture recognition method that recognizes static and dynamic hand gestures with no sensors or equipment other than a camera. First, a series of hand gesture input images is converted into high-frequency images; then each of the hand gesture RGB images and their high-frequency counterparts is learned through a DenseNet three-dimensional convolutional neural network. Experimental results on 6 static and 9 dynamic hand gestures showed an average recognition rate of 92.6%, an increase of 4.6% over the previous DenseNet. A 3D defense game was implemented to verify the results, and with an average gesture recognition time of 30 ms the method was found to be usable as a real-time user interface for virtual reality applications.
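The high-frequency input stream described above can be approximated by subtracting a low-pass (blurred) copy of each frame from the original. The abstract does not specify the exact filter, so a plain box-blur high-pass on a grayscale frame is assumed here for illustration:

```python
def box_blur(img, radius=1):
    """3x3 (radius=1) box blur over a 2D grayscale image, clamping at edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
                    n += 1
            out[y][x] = acc / n
    return out

def high_frequency(img):
    """High-pass residual: original minus its low-pass (blurred) version."""
    blurred = box_blur(img)
    return [[p - b for p, b in zip(row, brow)]
            for row, brow in zip(img, blurred)]

flat = [[5.0] * 4 for _ in range(4)]             # uniform region -> no detail
edge = [[0.0, 0.0, 9.0, 9.0] for _ in range(4)]  # vertical edge -> strong detail
print(high_frequency(flat))  # all zeros
```

Uniform regions cancel to zero while edges (fingers, hand contour) survive, which is what makes the second stream complementary to the raw RGB stream.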

Hand Gesture Interface for Manipulating 3D Objects in Augmented Reality (증강현실에서 3D 객체 조작을 위한 손동작 인터페이스)

  • Park, Keon-Hee;Lee, Guee-Sang
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.20-28 / 2010
  • In this paper, we propose a hand gesture interface for the manipulation of augmented objects in 3D space using a camera. Generally, a marker is used to detect 3D movement in 2D images. However, marker-based systems have obvious drawbacks: markers must always appear in the image, or additional equipment is needed to control objects, which reduces immersion. To overcome this problem, we replace the marker with a planar hand shape by estimating the hand pose, and a Kalman filter is used for robust tracking of the hand shape. The experimental results indicate the feasibility of the proposed algorithm for hand-based AR interfaces.
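A minimal sketch of the Kalman filtering idea used here for robust hand tracking, reduced to a scalar random-walk model applied per coordinate; the noise parameters q and r are illustrative assumptions, not values from the paper:

```python
class Kalman1D:
    """Minimal scalar Kalman filter with a random-walk state model.

    x_k = x_{k-1} + w,  w ~ N(0, q)   # process model
    z_k = x_k + v,      v ~ N(0, r)   # measurement model
    """
    def __init__(self, x0, p0=1.0, q=1e-3, r=0.1):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # Predict: state unchanged, uncertainty grows by process noise.
        self.p += self.q
        # Correct: blend prediction and measurement via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Smooth a noisy, roughly constant hand x-coordinate (toy measurements).
measurements = [10.2, 9.8, 10.1, 9.9, 10.3, 9.7]
kf = Kalman1D(x0=measurements[0])
estimates = [kf.update(z) for z in measurements]
```

A tracker as in the paper would typically extend the state with velocity so the filter can follow a moving hand rather than only denoise a static one.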

Deep Learning Based 3D Gesture Recognition Using Spatio-Temporal Normalization (시 공간 정규화를 통한 딥 러닝 기반의 3D 제스처 인식)

  • Chae, Ji Hun;Gang, Su Myung;Kim, Hae Sung;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.21 no.5 / pp.626-637 / 2018
  • Humans exchange information not only through words but also through body and hand gestures, which can be used to build effective interfaces for mobile, virtual reality, and augmented reality applications. Past 2D gesture recognition research suffered information loss from projecting 3D information into 2D. Recognizing gestures in 3D widens the recognition range compared to 2D space, but also increases the complexity of recognition. In this paper, we propose a real-time gesture recognition deep learning model and application in 3D space. First, to recognize gestures in 3D space, data collection is performed using the Unity game engine to construct and acquire the dataset. Second, input vectors are normalized for learning the 3D gesture recognition model based on deep learning. Third, the SELU (Scaled Exponential Linear Unit) function is applied as the neural network's activation function for faster learning and better recognition performance. The proposed system is expected to be applicable to various fields such as rehabilitation care, game applications, and virtual reality.
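The SELU activation named in this abstract has a simple closed form with two fixed constants from its original formulation; a minimal sketch:

```python
import math

# Fixed SELU constants (Klambauer et al., 2017).
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """Scaled Exponential Linear Unit:
    scale * x            for x > 0
    scale * alpha * (e^x - 1)  otherwise
    """
    return SCALE * (x if x > 0 else ALPHA * (math.exp(x) - 1.0))

print(selu(1.0))   # → 1.0507009873554805
print(selu(0.0))   # → 0.0
```

These specific constants are what give SELU its self-normalizing property: repeated layers tend to keep activations near zero mean and unit variance, which is the "faster learning" effect the abstract refers to. The saturation floor for very negative inputs is -scale * alpha ≈ -1.758.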

Comparative Study on the Interface and Interaction for Manipulating 3D Virtual Objects in a Virtual Reality Environment (가상현실 환경에서 3D 가상객체 조작을 위한 인터페이스와 인터랙션 비교 연구)

  • Park, Kyeong-Beom;Lee, Jae Yeol
    • Korean Journal of Computational Design and Engineering / v.21 no.1 / pp.20-30 / 2016
  • Recently, immersive virtual reality (VR) has become popular due to the advanced development of I/O interfaces and related software for effectively constructing VR environments. In particular, natural and intuitive manipulation of 3D virtual objects is still considered one of the most important user interaction issues. This paper presents a comparative study, covering both quantitative and qualitative aspects, on the manipulation of 3D virtual objects using different interfaces and interactions in three VR environments. The three experimental setups are: 1) a typical desktop VR using mouse and keyboard, 2) a hand gesture-supported desktop VR using a Leap Motion sensor, and 3) an immersive VR wearing an HMD, with hand gesture interaction using a Leap Motion sensor. In the desktop VR with hand gestures, the Leap Motion sensor is placed on the desk; in the immersive VR, the sensor is mounted on the HMD so that the user can manipulate virtual objects in front of the HMD. For the quantitative analysis, task completion time and success rate were measured; the experimental tasks require complex 3D transformations such as simultaneous 3D translation and 3D rotation. For the qualitative analysis, various user experience factors such as ease of use, naturalness of interaction, and stressfulness were evaluated. The qualitative and quantitative analyses show that the immersive VR with natural hand gestures provides more intuitive and natural interaction and supports fast, effective task completion, but causes more stressful conditions.

3D Virtual Reality Game with Deep Learning-based Hand Gesture Recognition (딥러닝 기반 손 제스처 인식을 통한 3D 가상현실 게임)

  • Lee, Byeong-Hee;Oh, Dong-Han;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.24 no.5 / pp.41-48 / 2018
  • The most natural way to increase immersion and provide free interaction in a virtual environment is a gesture interface using the user's hands. However, most studies of hand gesture recognition require specialized sensors or equipment, or show low recognition rates. This paper proposes a three-dimensional DenseNet convolutional neural network that recognizes hand gestures with no sensors or equipment other than an RGB camera for hand gesture input, and introduces a virtual reality game based on it. Experimental results on 4 static and 6 dynamic hand gestures showed an average recognition rate of 94.2% at 50 ms, usable as a real-time user interface for virtual reality games. The results of this research can be used as a hand gesture interface not only for games but also for education, medicine, and shopping.

Design and Implementation of Hand Gesture Recognizer Based on Artificial Neural Network (인공신경망 기반 손동작 인식기의 설계 및 구현)

  • Kim, Minwoo;Jeong, Woojae;Cho, Jaechan;Jung, Yunho
    • Journal of Advanced Navigation Technology / v.22 no.6 / pp.675-680 / 2018
  • In this paper, we propose a hand gesture recognizer using a Restricted Coulomb Energy (RCE) neural network and present hardware implementation results for real-time learning and recognition. Since the RCE-NN has a flexible network architecture and a low-complexity real-time learning process, it is suitable for hand gesture recognition applications. A 3D number dataset was created using an FPGA-based test platform, and the designed hand gesture recognizer showed 98.8% recognition accuracy on this dataset. The proposed recognizer is implemented on an Intel-Altera Cyclone IV FPGA and was confirmed to fit in 26,702 logic elements and 258 Kbit of memory. In addition, real-time learning and recognition were verified at an operating frequency of 70 MHz.
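The paper implements the RCE network in FPGA hardware; as a software sketch of the same idea, an RCE classifier stores prototypes with influence-field radii, adding a prototype when a sample is uncovered and shrinking the field of any wrong-class prototype that covers it. The shrink factor, initial radius, and toy data below are assumptions for illustration:

```python
import math

class RCENetwork:
    """Minimal Restricted Coulomb Energy (RCE) classifier sketch."""
    def __init__(self, max_radius=1.0):
        self.max_radius = max_radius
        self.prototypes = []  # list of (center, radius, label)

    def train(self, samples, epochs=3):
        for _ in range(epochs):
            for x, label in samples:
                covered = False
                for i, (c, r, lab) in enumerate(self.prototypes):
                    d = math.dist(c, x)
                    if d <= r:
                        if lab == label:
                            covered = True
                        else:
                            # Conflict: shrink the wrong prototype's field.
                            self.prototypes[i] = (c, d * 0.99, lab)
                if not covered:
                    self.prototypes.append((tuple(x), self.max_radius, label))

    def classify(self, x):
        # Fall back to the nearest prototype's label.
        best = min(self.prototypes, key=lambda p: math.dist(p[0], x))
        return best[2]

net = RCENetwork()
net.train([((0.0, 0.0), "a"), ((0.2, 0.1), "a"),
           ((5.0, 5.0), "b"), ((5.1, 4.9), "b")])
```

This prototype-add/radius-shrink loop is what gives RCE its flexible architecture and cheap online learning, which is why it maps well onto hardware.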

AdaBoost-Based Gesture Recognition Using Time Interval Trajectory Features (시간 간격 특징 벡터를 이용한 AdaBoost 기반 제스처 인식)

  • Hwang, Seung-Jun;Ahn, Gwang-Pyo;Park, Seung-Je;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.17 no.2 / pp.247-254 / 2013
  • The task of 3D gesture recognition for controlling equipment has become highly relevant due to the recent proliferation of 3D smart TVs. In this paper, the AdaBoost algorithm is applied to 3D gesture recognition using a Kinect sensor. By tracking time-interval trajectories of the hand, wrist, and arm with Kinect, the AdaBoost algorithm is used to train and classify 3D gestures. Experimental results demonstrate that the proposed method can successfully extract trained gestures from continuous hand, wrist, and arm motion in real time.
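A minimal AdaBoost over threshold stumps on a one-dimensional toy feature illustrates the boosting step; the real system uses Kinect trajectory features, which are not reproduced here, and the thresholds and labels below are assumptions:

```python
import math

def stump(threshold, polarity):
    """Weak learner: predicts polarity if x >= threshold, else -polarity."""
    return lambda x: polarity if x >= threshold else -polarity

def adaboost(samples, candidates, rounds=5):
    """samples: list of (x, y) with y in {-1, +1}; candidates: stump pool."""
    n = len(samples)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, stump)
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        errs = [(sum(wi for wi, (x, y) in zip(w, samples) if h(x) != y), h)
                for h in candidates]
        err, h = min(errs, key=lambda e: e[0])
        if err >= 0.5:
            break
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Re-weight: increase weight on misclassified samples.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, samples)]
        s = sum(w)
        w = [wi / s for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Toy 1D "trajectory feature", separable at x = 0.5.
samples = [(0.1, -1), (0.2, -1), (0.4, -1), (0.6, 1), (0.8, 1), (0.9, 1)]
candidates = [stump(t, p) for t in (0.3, 0.5, 0.7) for p in (1, -1)]
classify = adaboost(samples, candidates)
```

In the paper's setting each weak learner would threshold one trajectory feature (e.g. a joint position at a given time interval), and the weighted vote of many such stumps yields the gesture decision.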

Hand Gesture based Manipulation of Meeting Data in Teleconference (핸드제스처를 이용한 원격미팅 자료 인터페이스)

  • Song, Je-Hoon;Choi, Ki-Ho;Kim, Jong-Won;Lee, Yong-Gu
    • Korean Journal of Computational Design and Engineering / v.12 no.2 / pp.126-136 / 2007
  • Teleconferences have been used in business sectors to reduce traveling costs. Traditionally, specialized telephones that enabled multiparty conversations were used. With the introduction of high-speed networks, we now have high-definition video that adds more realism through the presence of counterparts who could be thousands of miles away. This paper presents a new technology that adds even more realism by telecommunicating with hand gestures. This technology is part of a teleconference system named SMS (Smart Meeting Space). In SMS, a person can use hand gestures to manipulate meeting data in the form of text, audio, video, or 3D shapes. For detecting hand gestures, a machine learning algorithm called SVM (Support Vector Machine) is used. For the prototype system, a 3D interaction environment has been implemented with OpenGL™, where a 3D human skull model can be grasped and moved in 6-DOF during a remote conversation between distant persons.
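The hinge-loss idea behind the SVM gesture detector can be sketched with Pegasos-style sub-gradient descent on a linear SVM; the abstract does not say which solver or features SMS used, so the training scheme, regularization constant, and toy two-feature "gesture" data here are all assumptions:

```python
import random

def train_linear_svm(samples, lam=0.01, epochs=200):
    """Linear SVM trained by sub-gradient descent on the hinge loss
    (Pegasos-style). samples: list of (features, y) with y in {-1, +1}."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    rng = random.Random(0)
    for _ in range(epochs):
        rng.shuffle(samples)
        for x, y in samples:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Shrink weights (regularization), then step on a hinge violation.
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy "gesture feature" vectors: grab (+1) vs. release (-1).
data = [([1.0, 1.2], 1), ([0.9, 1.0], 1), ([-1.0, -0.8], -1), ([-1.1, -1.2], -1)]
classify = train_linear_svm(list(data))
```

A production system would more likely use an off-the-shelf SVM library with a kernel; the point of the sketch is only the maximum-margin objective the abstract's SVM optimizes.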