• Search query (Title, Summary, Keyword): hand gesture recognition

Search Results: 278

Navigation of a Mobile Robot Using Hand Gesture Recognition (손 동작 인식을 이용한 이동로봇의 주행)

  • Kim, Il-Myeong;Kim, Wan-Cheol;Yun, Gyeong-Sik;Lee, Jang-Myeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.7
    • /
    • pp.599-606
    • /
    • 2002
  • A new method to govern the navigation of a mobile robot using hand gesture recognition is proposed, based on two procedures: one acquires vision information through a 2-DOF camera that serves as the communication medium between a human and the mobile robot, and the other analyzes the recognized hand gesture commands and controls the mobile robot accordingly. In previous research, mobile robots moved passively by following landmarks, beacons, etc. In this paper, to accommodate various changes of situation, a new control system that manages the dynamic navigation of the mobile robot is proposed. Moreover, without the expensive equipment or complex algorithms generally used for hand gesture recognition, a reliable hand gesture recognition system is efficiently implemented that conveys human commands to the mobile robot under a few constraints.
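
A minimal sketch of the command side of such a system: recognized gesture labels are dispatched to robot motion primitives, with unknown gestures defaulting to a safe stop. The label names and velocity values here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical mapping from recognized hand-gesture labels to
# (linear m/s, angular rad/s) velocity commands for a mobile robot.
GESTURE_COMMANDS = {
    "forward":  (0.3, 0.0),
    "backward": (-0.2, 0.0),
    "left":     (0.0, 0.5),
    "right":    (0.0, -0.5),
    "stop":     (0.0, 0.0),
}

def gesture_to_velocity(gesture: str) -> tuple[float, float]:
    """Map a recognized gesture label to a velocity pair.
    Unrecognized labels default to a safe stop."""
    return GESTURE_COMMANDS.get(gesture, (0.0, 0.0))
```

Defaulting to stop on unknown input is the safety-critical design choice here: a misrecognized gesture should never keep the robot moving.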

A Study on Hand Gesture Recognition Algorithm for Immersive Interface Implementation Based on DTV and HNN (몰입형 인터페이스 구현을 위한 DTV와 HNN 기반 손 제스처 인식 알고리즘에 대한 연구)

  • Hyun, Jeong-Hwan;Kang, Dae-Seong
    • The Journal of Korean Institute of Information Technology
    • /
    • v.15 no.3
    • /
    • pp.99-104
    • /
    • 2017
  • In this paper, we suggest an algorithm that learns hand gestures by feeding multiple features, obtained from a Kinect camera and the DTV (Distance Transform Vector), into a hippocampal neural network. First, we propose a technique for detecting the palm area through preprocessing with the Kinect camera and the DTV. We then implement a hippocampal neural network learning algorithm based on this multiple information. Experiments were performed on hand center point, 8-direction gestures, mouse click, and screen gesture settings for hand gesture recognition. The proposed hand gesture recognition algorithm is compared with the conventional learning algorithms BP and HMM, and the experiments show that it achieves a superior recognition rate.
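
The distance-transform step behind palm detection can be sketched with a two-pass city-block (chamfer) transform on a binary hand mask; the pixel farthest from the background is a common heuristic for the palm center. This is a generic illustration of the technique, assuming a 0/1 mask, not the paper's exact DTV computation.

```python
def distance_transform(mask):
    """Two-pass city-block distance transform of a binary mask (1 = hand)."""
    h, w = len(mask), len(mask[0])
    INF = h + w
    d = [[INF if mask[y][x] else 0 for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass (top-left sweep)
        for x in range(w):
            if d[y][x]:
                if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass (bottom-right sweep)
        for x in range(w - 1, -1, -1):
            if d[y][x]:
                if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
                if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

def palm_center(mask):
    """Heuristic palm center: the pixel with maximum distance to background."""
    d = distance_transform(mask)
    return max(((y, x) for y in range(len(d)) for x in range(len(d[0]))),
               key=lambda p: d[p[0]][p[1]])
```

For example, on a 5x5 mask with a 3x3 block of hand pixels, the center pixel of the block has the largest distance and is returned as the palm center.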

PC User Authentication using Hand Gesture Recognition and Challenge-Response

  • Shin, Sang-Min;Kim, Minsoo
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.8 no.2
    • /
    • pp.79-87
    • /
    • 2018
  • Current PC user authentication uses character passwords based on the user's knowledge. However, these can easily be exploited by password cracking or key-logging programs. In addition, requiring a difficult password and periodic password changes makes it easy for users to expose the password around the PC, because it is hard to remember. To overcome this, we propose user authentication with gesture recognition and challenge-response. We apply the user's hand gestures instead of a character password. In the challenge-response method, authentication is performed by responding to a quiz, rather than by using the same password every time. To apply hand gestures to challenge-response authentication, each gesture is recognized and symbolized so that it can be used in the quiz response. We show that this method can be applied to PC user authentication.
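
One way such a scheme can work is sketched below: each enrolled gesture is a symbol, and the verifier challenges the user for a random subset of symbol positions, so the same response is never replayed twice. The protocol details here are illustrative assumptions, not the paper's exact scheme.

```python
import hmac
import secrets

def make_challenge(secret_len, k=2):
    """Verifier side: ask for k random positions of the enrolled sequence."""
    return secrets.SystemRandom().sample(range(secret_len), k)

def respond(enrolled_symbols, challenge):
    """User side: answer with the gesture symbols at the asked positions."""
    return [enrolled_symbols[i] for i in challenge]

def verify(enrolled_symbols, challenge, response):
    """Verifier side: constant-time comparison of expected vs. given symbols."""
    expected = "".join(enrolled_symbols[i] for i in challenge).encode()
    given = "".join(response).encode()
    return hmac.compare_digest(expected, given)
```

Because a fresh challenge selects different positions each time, a key-logger (or shoulder-surfer) that captures one session's response cannot replay it against the next challenge.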

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.551-556
    • /
    • 2012
  • Objective: This study presents the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction receives growing attention because it uses advanced sensor technology and allows users natural interaction through their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while given the three types of visual feedback in turn. Results: People made longer hand traces and took longer to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study could be applied to any device that needs visual feedback for device control. Large feedback yields shorter motion traces, less time, and faster motion than small feedback when people perform gestures, so large visual feedback is recommended for situations requiring fast actions, while smaller visual feedback is recommended for situations requiring elaborate actions.

An Efficient Hand Gesture Recognition Method using Two-Stream 3D Convolutional Neural Network Structure (이중흐름 3차원 합성곱 신경망 구조를 이용한 효율적인 손 제스처 인식 방법)

  • Choi, Hyeon-Jong;Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of Korean Institute of Next Generation Computing
    • /
    • v.14 no.6
    • /
    • pp.66-74
    • /
    • 2018
  • Recently, there have been active studies on hand gesture recognition to increase immersion and provide user-friendly interaction in virtual reality environments. However, most studies require specialized sensors or equipment, or show low recognition rates. This paper proposes a hand gesture recognition method using deep learning, with no sensors or equipment other than a camera, that recognizes both static and dynamic hand gestures. First, a series of hand gesture input images is converted into high-frequency images; then the hand gesture RGB images and their high-frequency counterparts are each learned through a DenseNet three-dimensional convolutional neural network. Experiments on 6 static and 9 dynamic hand gestures showed an average recognition rate of 92.6%, an increase of 4.6% over the previous DenseNet. A 3D defense game was implemented to verify the results, and gesture recognition took an average of 30 ms, showing that the method is usable as a real-time user interface for virtual reality applications.
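
The high-frequency stream can be sketched as a simple high-pass filter: the original grayscale frame minus a box-blurred copy of itself. This stands in for whatever filter the paper actually uses and assumes frames are 2D lists of intensities.

```python
def high_frequency(img, radius=1):
    """High-frequency component of a grayscale image: the original minus
    a box-blurred copy (a basic high-pass filter that keeps edges)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            # average over the (2*radius+1)^2 neighborhood, clipped at borders
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = img[y][x] - acc / n
    return out
```

A flat region subtracts to zero while edges survive, which is why feeding this stream alongside the RGB stream emphasizes hand contours.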

Avatar Control by using hand gesture recognition (Hand Gesture 인식을 이용한 아바타 제어)

  • 최우영;김소연;송백균
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • /
    • pp.616-619
    • /
    • 2004
  • As interest in virtual reality increases, the importance of the HCI (human-computer interaction) field using gesture also grows. However, earlier gesture recognition required high-cost peripheral equipment, which limits users. In this paper we show that using a low-cost USB PC camera gives users more flexibility and lowers cost, so that gesture recognition can be adopted more widely.


HandButton: Gesture Recognition of Transceiver-free Object by Using Wireless Networks

  • Zhang, Dian;Zheng, Weiling
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.787-806
    • /
    • 2016
  • Traditional radio-based gesture recognition approaches usually require the target to carry a device (e.g., an EMG sensor or an accelerometer). However, this requirement cannot be satisfied in many applications. For example, in a smart home, users want to control the lights with specific hand gestures, without finding and pressing a button, especially in dark areas; they will not carry any device in this scenario. To overcome this drawback, in this paper we propose three algorithms able to recognize the target gesture (mainly the human hand gesture) without any carried device, based only on the Radio Signal Strength Indicator (RSSI). Our platform uses just 6 TelosB sensor nodes with a very easy deployment. Experimental results show that the successful recognition ratio reaches around 80% in our system.
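
The core idea of device-free RSSI sensing can be sketched in a few lines: a hand moving through a radio link disturbs the signal, so the RSSI variance in a sliding window rises well above its quiet-air baseline. This single-link detector and its threshold are simplified assumptions, not the paper's three algorithms.

```python
from statistics import pstdev

def detect_gesture(rssi_window, baseline_std, k=3.0):
    """Flag a device-free gesture when RSSI fluctuation in the current
    window exceeds k times the quiet-air baseline deviation.
    rssi_window: recent RSSI samples (dBm) on one link."""
    return pstdev(rssi_window) > k * baseline_std
```

A real system would fuse several links (here, 6 nodes give many link pairs) and classify the fluctuation pattern rather than just thresholding it, but the baseline-vs-disturbance comparison is the starting point.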

Gesture Recognition System using Motion Information (움직임 정보를 이용한 제스처 인식 시스템)

  • Han, Young-Hwan
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.473-478
    • /
    • 2003
  • In this paper, we propose a gesture recognition system that uses motion information from the hand region extracted from images with complex backgrounds. First, we measure the entropy of the difference image between consecutive frames. Using skin-color information within the candidate regions of high entropy, we extract only the hand region from the background image. On the extracted hand region, we detect the contour using the chain code and recognize the hand gesture by applying an improved centroidal profile method. In experiments on 6 kinds of hand gestures, unlike existing methods, the system stably recognizes hand gestures under complex backgrounds and illumination changes without markers. It also shows a recognition rate of more than 95% across people and 90∼100% for each gesture at 15 frames/second.
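
The first step, entropy of the inter-frame difference image, can be sketched as follows: histogram the absolute pixel differences between two consecutive grayscale frames and compute the Shannon entropy of that distribution. High entropy suggests motion in the region. Frames are assumed to be 2D lists of integer intensities; the region-splitting step is omitted.

```python
from math import log2

def frame_entropy(prev, curr, levels=256):
    """Shannon entropy of the absolute difference between two grayscale
    frames; identical frames give 0, motion pushes the value up."""
    h, w = len(curr), len(curr[0])
    hist = [0] * levels
    for y in range(h):
        for x in range(w):
            hist[abs(curr[y][x] - prev[y][x])] += 1
    total = h * w
    return -sum((c / total) * log2(c / total) for c in hist if c)
```

Applying this per candidate region, rather than to the whole frame, is what lets the skin-color test run only where something actually moved.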

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho;Kim, Yoon-Jae;Kim, Tae-Young
    • Journal of Korea Game Society
    • /
    • v.17 no.6
    • /
    • pp.113-120
    • /
    • 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have recently been studied for natural interaction with virtual 3D objects. Most earlier studies of hand-gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand gesture interface for interacting with 3D objects in virtual reality applications. For hand gesture recognition, we first preprocess the various hand data and classify them with a binary decision tree. The classified data are resampled and converted to chain codes, from which hand feature data are constructed using chain-code histograms. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test the proposed hand gesture interface, we implemented a 'Virtual Block' game. Our experiments showed a recognition ratio of about 99.2% over 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
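
The chain-code and histogram steps of such a pipeline can be sketched directly: each segment between consecutive resampled points is quantized to one of 8 directions, and the histogram of those codes becomes the feature vector. The resampling and classifier stages are omitted; this is a generic illustration, not the paper's exact implementation.

```python
import math

def chain_code(points):
    """8-direction chain code of a point sequence: each segment's angle
    is quantized to the nearest of 8 compass directions (0..7)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        codes.append(round(angle / (math.pi / 4)) % 8)
    return codes

def code_histogram(codes, bins=8):
    """Histogram of chain codes: the fixed-length feature vector that a
    classifier (e.g., an SVM) can consume regardless of gesture length."""
    hist = [0] * bins
    for c in codes:
        hist[c] += 1
    return hist
```

The histogram step is what makes gestures of different lengths comparable: any trajectory reduces to the same 8-dimensional vector format.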

Alphabetical Gesture Recognition using HMM (HMM을 이용한 알파벳 제스처 인식)

  • Yoon, Ho-Sub;Soh, Jung;Min, Byung-Woo
    • Proceedings of the Korean Information Science Society Conference
    • /
    • /
    • pp.384-386
    • /
    • 1998
  • The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). Many methods for hand gesture recognition using visual analysis have been proposed, based on syntactical analysis, neural networks (NN), Hidden Markov Models (HMM), and so on. In our research, HMMs are proposed for alphabetical hand gesture recognition. In the preprocessing stage, the proposed approach consists of three procedures: hand localization, hand tracking, and gesture spotting. The hand localization procedure detects candidate regions on the basis of skin color and motion in an image, using color-histogram matching and time-varying edge-difference techniques. The hand tracking algorithm finds the centroid of the moving hand region and connects those centroids to produce a trajectory. For gesture spotting and the feature database, the proposed approach uses the mesh feature code as the codebook of the HMM. In our experiments, 1300 alphabetical gestures and 1300 untrained gestures are used for training and testing, respectively. The experimental results demonstrate that the proposed approach yields a high and satisfying recognition rate for images with different sizes, shapes, and skew angles.
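
A mesh feature code of the kind used as an HMM codebook can be sketched as follows: lay a coarse grid over the binary hand image and record the fraction of foreground pixels in each cell; quantizing those vectors yields discrete symbols for the HMM. The grid size and the quantization step are assumptions for illustration.

```python
def mesh_feature(mask, rows=4, cols=4):
    """Mesh feature: fraction of foreground (hand) pixels in each cell
    of a rows x cols grid laid over a binary image. The resulting
    vector is later quantized into HMM codebook symbols (not shown)."""
    h, w = len(mask), len(mask[0])
    feat = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            cell = sum(mask[y][x] for y in range(y0, y1) for x in range(x0, x1))
            feat.append(cell / max(1, (y1 - y0) * (x1 - x0)))
    return feat
```

Because the grid is fixed, images of different sizes reduce to the same-length feature vector, which is part of why mesh features tolerate the size and skew variations mentioned in the experiments.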
