• Title/Summary/Keyword: hand gesture analysis


Recognition of Hand gesture to Human-Computer Interaction (손 동작을 통한 인간과 컴퓨터간의 상호 작용)

  • Lee, Lae-Kyoung; Kim, Sung-Shin
    • Proceedings of the KIEE Conference / 2000.07d / pp.2930-2932 / 2000
  • In this paper, a robust gesture recognition system is designed and implemented to explore communication methods between humans and computers. Hand gestures in the proposed approach are used to command the computer to perform actions with a high degree of freedom. The user does not need to wear any cumbersome devices such as cyber-gloves, and no assumption is made about whether the user wears ornaments or uses the left or right hand. Image segmentation based on skin color is combined with shape analysis based on invariant moments, and the extracted features are used as input vectors to a radial basis function network (RBFN). Our "Puppy" robot is employed as a testbed. Preliminary results on a set of gestures show recognition rates of about 87% in a real-time implementation.

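The pipeline summarized in this entry (skin-color segmentation, invariant-moment shape features, RBFN classification) can be illustrated with a minimal sketch. The HSV thresholds, the use of OpenCV's Hu moments, and the toy RBFN layer shapes below are assumptions made for illustration, not details taken from the paper.

```python
import cv2
import numpy as np

def hand_features(bgr_frame):
    """Segment the hand by skin color and return Hu-moment shape features."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Assumed skin-color range in HSV; the paper does not give exact thresholds.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Hu moments are invariant to translation, scale, and rotation.
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)  # log scale for stable ranges

def rbf_classify(x, centers, widths, weights):
    """Toy RBFN: Gaussian hidden layer followed by a linear output layer."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return int(np.argmax(weights @ phi))  # index of the predicted gesture class
```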

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.145-151 / 2023
  • 3D-CNN is one of the deep learning techniques for learning time-series data. Such three-dimensional learning generates many parameters, so it requires high-performance hardware and can strongly affect the learning rate. To learn dynamic hand gestures efficiently in the spatiotemporal domain with a 3D-CNN, the optimal conditions of the input video data must be found by analyzing learning accuracy as the input video changes spatiotemporally, without changing the structure of the 3D-CNN model. First, the time ratio between dynamic hand-gesture actions is adjusted by setting the sampling interval of image frames in the gesture video data. Second, the similarity between image frames of the input video is measured by 2D cross-correlation analysis between classes, normalized, and averaged across frames, and the learning accuracy is analyzed against this measure. Based on this analysis, this work proposes two methods to effectively select input video data for 3D-CNN learning of dynamic hand gestures. Experimental results show that the frame sampling interval and the inter-class image-frame similarity both affect the accuracy of the learning model.
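
As a rough illustration of the inter-class frame-similarity measure described in this entry, the sketch below computes a zero-lag normalized cross-correlation between two grayscale frames and averages it over frame pairs of two classes; using the zero-lag coefficient as a stand-in for the full 2D cross-correlation, and the pairwise averaging, are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def frame_similarity(frame_a, frame_b):
    """Zero-lag normalized cross-correlation between two grayscale frames."""
    a = frame_a.astype(np.float64) - frame_a.mean()
    b = frame_b.astype(np.float64) - frame_b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def mean_class_similarity(frames_class1, frames_class2):
    """Average pairwise similarity between frames of two gesture classes."""
    scores = [frame_similarity(f1, f2)
              for f1 in frames_class1 for f2 in frames_class2]
    return float(np.mean(scores))
```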

Real-time Hand Gesture Recognition System based on Vision for Intelligent Robot Control (지능로봇 제어를 위한 비전기반 실시간 수신호 인식 시스템)

  • Yang, Tae-Kyu; Seo, Yong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.10 / pp.2180-2188 / 2009
  • This paper studies a vision-based real-time hand gesture recognition system for intelligent robot control. We propose a recognition system using the PCA and BP algorithms. Recognition of hand gestures consists of two steps: a preprocessing step using the PCA algorithm and a classification step using the BP algorithm. PCA is a technique that reduces multidimensional data sets to lower dimensions for effective analysis; in our simulation, it is applied to compute feature projection vectors for the image of a given hand. The BP algorithm is capable of parallel distributed processing and fast computation because of its parallel structure, and it recognizes hand gestures in real time by learning the trained eigen-hand gestures. The combined PCA and BP approach shows improved recognition compared to the PCA algorithm alone.
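
A minimal sketch of the PCA-plus-backpropagation pipeline summarized above is shown below using scikit-learn; the component count, hidden-layer size, and the choice of scikit-learn itself are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# X: flattened grayscale hand images, one row per image; y: gesture labels.
def train_pca_bp(X, y, n_components=20):
    """Project images onto eigen-hand components, then train a BP (MLP) classifier."""
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(X)                      # eigen-hand projection vectors
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(features, y)                                 # backpropagation training
    return pca, clf

def predict_gesture(pca, clf, image_vector):
    """Classify a single flattened hand image."""
    return clf.predict(pca.transform(image_vector.reshape(1, -1)))[0]
```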

On-line Motion Control of Avatar Using Hand Gesture Recognition (손 제스터 인식을 이용한 실시간 아바타 자세 제어)

  • Kim, Jong-Sung; Kim, Jung-Bae; Song, Kyung-Joon; Min, Byung-Eui; Bien, Zeung-Nam
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.6 / pp.52-62 / 1999
  • This paper presents a system that recognizes dynamic hand gestures on-line for controlling the motion of a human avatar in a virtual environment (VE). A dynamic hand gesture is a method of communication between a computer and a human who uses gestures, especially of both hands and fingers. The human avatar has 32 degrees of freedom (DOF) for natural motion in the VE and navigates using 8 pre-defined dynamic hand gestures. Inverse kinematics and dynamic kinematics are applied for real-time motion control of the avatar. In this paper, we apply a fuzzy min-max neural network and a feature analysis method using fuzzy logic for on-line dynamic hand gesture recognition.

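A fuzzy min-max network classifies a feature vector by its degree of membership in labeled hyperboxes. The sketch below shows only a Simpson-style membership computation as a rough illustration of the classifier family named in the entry above; the ramp-shaped membership, the sensitivity parameter `gamma`, and the hyperbox layout are assumptions, not details from the paper.

```python
import numpy as np

def hyperbox_membership(x, v_min, w_max, gamma=4.0):
    """Membership of feature vector x (values scaled to [0, 1]) in a min-max
    hyperbox with lower corner v_min and upper corner w_max (Simpson-style)."""
    above = np.maximum(0.0, np.minimum(1.0, gamma * (x - w_max)))  # penalty above the box
    below = np.maximum(0.0, np.minimum(1.0, gamma * (v_min - x)))  # penalty below the box
    per_dim = 1.0 - above - below          # 1 inside the box, decaying outside
    return float(per_dim.mean())

def classify(x, boxes):
    """boxes: list of (v_min, w_max, class_label); return label of the best hyperbox."""
    scores = [(hyperbox_membership(x, v, w), label) for v, w, label in boxes]
    return max(scores, key=lambda s: s[0])[1]
```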

Hand Gesture Sequence Recognition using Morphological Chain Code Edge Vector (형태론적 체인코드 에지벡터를 이용한 핸드 제스처 시퀀스 인식)

  • Lee Kang-Ho; Choi Jong-Ho
    • Journal of the Korea Society of Computer and Information / v.9 no.4 s.32 / pp.85-91 / 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction, which has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are the simplification of the algorithm and the reduction of processing time. Mathematical morphology, based on geometrical set theory, is well suited to this processing. The key idea of the proposed algorithm is to track the trajectory of the center points of primitive elements extracted by morphological shape decomposition. The trajectory of morphological center points carries information on shape orientation. Based on this characteristic, we propose a morphological gesture sequence recognition algorithm using feature vectors computed from the trajectory of morphological center points. Experiments demonstrate the efficiency of the proposed algorithm.

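The chain-code idea behind the trajectory features in this entry can be illustrated as follows: successive center points are quantized into one of eight directions (a standard 8-connected chain code), and the direction histogram serves as a simple feature vector. The quantization step and the histogram feature are illustrative assumptions, not the paper's exact feature definition.

```python
import numpy as np

def chain_code(points):
    """Quantize successive center-point displacements into 8-direction chain codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        angle = np.arctan2(y1 - y0, x1 - x0)               # displacement direction
        codes.append(int(round(angle / (np.pi / 4))) % 8)  # 8-connected code 0..7
    return codes

def chain_code_histogram(points):
    """Normalized histogram of chain codes, usable as a trajectory feature vector."""
    codes = chain_code(points)
    hist = np.bincount(codes, minlength=8).astype(float)
    return hist / hist.sum() if hist.sum() > 0 else hist
```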

Combining Object Detection and Hand Gesture Recognition for Automatic Lighting System Control

  • Pham, Giao N.; Nguyen, Phong H.; Kwon, Ki-Ryong
    • Journal of Multimedia Information System / v.6 no.4 / pp.329-332 / 2019
  • Recent smart lighting systems combine sensors and lights: they turn lights on and off and adjust their brightness based on object motion and ambient brightness, and they are often deployed in buildings, rooms, garages, and parking lots. However, such systems rely on separate lighting and motion sensors for illumination and motion detection. In this paper, we propose an automatic lighting control system for buildings, rooms, and garages that uses a single camera. The proposed system integrates the results of digital image processing, namely motion detection and hand gesture detection, to switch and dim the lighting. Experimental results show that the proposed system works well and could be applied to automatic lighting spaces.
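
A rough sketch of the single-camera control loop described above is given below: frame differencing flags motion to switch the light on, and a (stubbed) gesture classifier adjusts brightness. The thresholds, the gesture-to-brightness mapping, and the `classify_gesture` / `set_light` callbacks are hypothetical, introduced only for illustration.

```python
import cv2

MOTION_THRESHOLD = 5000      # assumed pixel-count threshold for "motion present"

def motion_detected(prev_gray, curr_gray):
    """Frame differencing: count pixels whose intensity changed noticeably."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    changed = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    return cv2.countNonZero(changed) > MOTION_THRESHOLD

def control_step(prev_gray, curr_gray, classify_gesture, set_light, brightness):
    """One loop iteration: turn the light on when motion is seen, then let a
    recognized gesture ('up' / 'down') brighten or dim it. `classify_gesture`
    and `set_light` are hypothetical callbacks supplied by the application."""
    if motion_detected(prev_gray, curr_gray):
        gesture = classify_gesture(curr_gray)          # e.g. 'up', 'down', or None
        if gesture == 'up':
            brightness = min(100, brightness + 10)
        elif gesture == 'down':
            brightness = max(10, brightness - 10)
        set_light(on=True, level=brightness)
    return brightness
```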

Gadget Arms: Interactive Data Visualization using Hand Gesture in Extended Reality (가젯암: 확장현실을 위한 손 제스처 기반 대화형 데이터 시각화 시스템)

  • Choi, JunYoung; Jeong, HaeJin; Jeong, Won-Ki
    • Journal of the Korea Computer Graphics Society / v.25 no.2 / pp.31-41 / 2019
  • Extended Reality (XR), such as virtual and augmented reality, has huge potential for immersive data visualization and analysis. In XR, users can interact with data and with other users realistically by navigating a shared virtual space, allowing for more intuitive data analysis. However, creating a visualization in XR is challenging because it requires complicated, low-level programming, which hinders broad adoption in visual analytics. This paper proposes Gadget Arms, a hand-gesture-based interactive authoring tool for immersive data visualization. The proposed system provides a novel user interaction for creating and placing visualizations in the 3D virtual world. This simple but intuitive interaction lets the user design the entire visualization space in XR without a host computer or low-level programming. Our user study confirms that the proposed interaction significantly improves the usability of the visualization authoring tool.

Effect of Input Data Video Interval and Input Data Image Similarity on Learning Accuracy in 3D-CNN

  • Kim, Heeil; Chung, Yeongjee
    • International Journal of Internet, Broadcasting and Communication / v.13 no.2 / pp.208-217 / 2021
  • 3D-CNN is one of the deep learning techniques for learning time-series data. However, this three-dimensional learning can generate many parameters, requiring high-performance hardware or significantly slowing learning. We use a 3D-CNN to learn hand gestures, find the parameters that give the highest accuracy, and then analyze how the accuracy varies with changes to the input data, without any structural change to the 3D-CNN. First, the interval of the input data is chosen, which adjusts the ratio of the stop interval to the gesture interval. Second, the mean interframe similarity is obtained by measuring and normalizing the similarity of images through interclass 2D cross-correlation analysis. The experiments demonstrate that changes in the input data affect learning accuracy without structural changes to the 3D-CNN. This paper proposes these two methods for changing the input data, and experimental results show that the input data can affect the accuracy of the model.
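
To illustrate the first step described above, the sketch below subsamples a gesture clip at a chosen frame interval before it is fed to a 3D-CNN, which changes the ratio of idle to gesture frames in the fixed-length input. The clip length, interval value, and padding-by-repetition behavior are illustrative assumptions, not settings reported in the paper.

```python
import numpy as np

def sample_clip(frames, interval=2, clip_len=16):
    """Take every `interval`-th frame and pad/trim to a fixed clip length,
    so the mix of gesture motion and idle frames in the 3D-CNN input
    changes with the chosen interval."""
    sampled = frames[::interval]
    if len(sampled) >= clip_len:
        return np.stack(sampled[:clip_len])
    # Pad by repeating the last frame to keep a fixed-size 3D-CNN input.
    pad = [sampled[-1]] * (clip_len - len(sampled))
    return np.stack(list(sampled) + pad)

# Usage: clip = sample_clip(video_frames, interval=3)  -> shape (16, H, W[, C])
```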

Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile (피부색과 무게중심 프로필을 이용한 손동작 인식 알고리즘)

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.411-417 / 2021
  • The field that studies human-computer interaction is called HCI (human-computer interaction), an academic discipline that studies how humans and computers communicate and exchange information. This study addresses hand gesture recognition for human interaction: it examines the problems of existing recognition methods and proposes an algorithm to improve the recognition rate. The hand region is extracted from an image containing the shape of a human hand based on skin color information, and the center-of-gravity profile is computed using principal component analysis. We propose a method to increase the recognition rate of hand gestures by comparing the obtained information with predefined shapes. The conventional center-of-gravity profile misrecognizes hand gestures when the hand is deformed by rotation; in this study, the profile is re-anchored so that its starting point is the contour point farthest from the center of gravity, yielding a more robust algorithm. No gloves or sensor-attached markers are used for gesture recognition, and no separate blue screen is installed. Finally, the feature vector at the nearest distance is found to resolve misrecognition, and an appropriate threshold is obtained to distinguish success from failure.
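
The center-of-gravity profile described above can be sketched as a distance signature from the hand centroid to its contour, rotated so that it starts at the farthest contour point. OpenCV contour extraction, the resampling length, and the max-distance normalization are assumptions made for illustration, not the paper's exact procedure.

```python
import cv2
import numpy as np

def cog_profile(hand_mask, n_samples=64):
    """Distance from the hand's center of gravity to its contour, starting at
    the farthest contour point so the profile is less sensitive to rotation."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    m = cv2.moments(hand_mask)
    cog = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])   # center of gravity
    dists = np.linalg.norm(contour - cog, axis=1)
    start = int(np.argmax(dists))                                # farthest contour point
    dists = np.roll(dists, -start)                               # re-anchor the profile
    idx = np.linspace(0, len(dists) - 1, n_samples).astype(int)
    return dists[idx] / dists.max()                              # scale-normalized profile
```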

Skin Color Based Hand and Finger Detection for Gesture Recognition in CCTV Surveillance (CCTV 관제에서 동작 인식을 위한 색상 기반 손과 손가락 탐지)

  • Kang, Sung-Kwan; Chung, Kyung-Yong; Rim, Kee-Wook; Lee, Jung-Hyun
    • The Journal of the Korea Contents Association / v.11 no.10 / pp.1-10 / 2011
  • In this paper, we propose a skin-color-based hand and finger detection technique for gesture recognition in CCTV surveillance. The aim is to present a methodology for hand detection and to propose a finger detection method; the detected hand and fingers can be used to implement a non-contact mouse or to control home devices such as a home theater or television. Skin color is used to segment the hand region from the background, and a contour is extracted from the segmented hand; analysis of the contour gives the location of the fingertip. After the fingertip is detected, the system tracks it using the R channel alone, and recognizing hand motions with a differential image, which removes irrelevant image content, makes the method robust. We describe experiments on fingertip tracking and finger gesture recognition, and the results show an accuracy above 96%.
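
As a rough illustration of contour-based fingertip detection like that described above, the sketch below segments skin pixels, takes the largest contour as the hand, and picks the topmost contour point as a fingertip candidate. The YCrCb skin thresholds and the "topmost point" heuristic are assumptions for illustration, not the paper's exact method.

```python
import cv2
import numpy as np

def detect_fingertip(bgr_frame):
    """Segment skin pixels, take the largest contour as the hand, and return the
    topmost contour point as a simple fingertip candidate (x, y), or None."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # assumed skin range
    mask = cv2.medianBlur(mask, 5)                             # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea).reshape(-1, 2)
    tip = hand[np.argmin(hand[:, 1])]                          # smallest y = topmost point
    return int(tip[0]), int(tip[1])
```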