• Title/Summary/Keyword: 3D hand gesture


A Hand Gesture Recognition System using 3D Tracking Volume Restriction Technique (3차원 추적영역 제한 기법을 이용한 손 동작 인식 시스템)

  • Kim, Kyung-Ho;Jung, Da-Un;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.6 / pp.201-211 / 2013
  • In this paper, we propose a hand tracking and gesture recognition system. Our system employs a depth capture device to obtain 3D geometric information about the user's bare hand. In particular, we build a flexible tracking volume and restrict the hand tracking area, which avoids many of the problems encountered by conventional object detection/tracking systems. The proposed system computes a running average of the hand position, and the tracking volume is actively adjusted according to statistics derived from the uncertainty of the user's hand motion in 3D space. Once the position of the user's hand is obtained, the system detects stretched fingers to recognize finger gestures. To test the proposed framework, we built an NUI system using the proposed technique and verified that it remains stable even when multiple objects are present in a crowded environment or the scene is temporarily occluded. We also verified that the system runs at 24-30 frames per second throughout the experiments.
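
As a rough illustration of the adaptive tracking-volume idea above, the following Python sketch keeps a running average and variance of the hand position and widens or shrinks a box-shaped search region accordingly; the class name, parameters, and update rule are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

class AdaptiveTrackingVolume:
    """Box-shaped tracking volume centred on a running average of the hand
    position; its extent grows with the recent spread of the hand motion."""

    def __init__(self, alpha=0.2, base_half=0.15, k_sigma=2.0):
        self.alpha = alpha            # running-average weight
        self.base_half = base_half    # minimum half-extent of the box (metres)
        self.k_sigma = k_sigma        # how strongly motion spread widens the box
        self.mean = None              # running mean of the hand position
        self.var = np.zeros(3)        # exponentially weighted variance

    def update(self, hand_pos):
        p = np.asarray(hand_pos, dtype=float)
        if self.mean is None:
            self.mean = p.copy()
        else:
            diff = p - self.mean
            self.mean = self.mean + self.alpha * diff
            self.var = (1.0 - self.alpha) * (self.var + self.alpha * diff ** 2)
        half = self.base_half + self.k_sigma * np.sqrt(self.var)
        return self.mean - half, self.mean + half   # current box corners

# Candidate points outside the box are ignored by the hand detector, which is
# how cluttered backgrounds and other nearby objects get filtered out.
volume = AdaptiveTrackingVolume()
lo, hi = volume.update([0.05, -0.10, 0.60])         # hand position from a depth frame
```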

Recognition of 3D hand gestures using partially tuned composite hidden Markov models

  • Kim, In Cheol
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.236-240 / 2004
  • Stroke-based composite HMMs with articulation states are proposed to deal with 3D spatio-temporal trajectory gestures. The direct use of 3D data provides more naturalness in generating gestures, thereby avoiding some of the constraints usually imposed to prevent performance degradation when trajectory data are projected into a specific 2D plane. Also, the decomposition of gestures into more primitive strokes is quite attractive, since reversely concatenating stroke-based HMMs makes it possible to construct a new set of gesture HMMs without retraining their parameters. Any deterioration in performance arising from decomposition can be remedied by a partial tuning process for such composite HMMs.
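
The key construction, chaining stroke HMMs into gesture HMMs without retraining, can be sketched roughly in Python as below; the block-diagonal layout and the single link probability standing in for the articulation states are simplifying assumptions, not the paper's exact composite structure.

```python
import numpy as np

def concatenate_stroke_hmms(strokes, link_prob=0.1):
    """Build a composite left-to-right HMM from per-stroke HMMs by chaining
    their state spaces; a small exit probability links the last state of one
    stroke to the first state of the next, so no retraining is needed.
    Each stroke is a dict with keys 'A' (transition matrix) and 'B'
    (per-state emission parameters, kept opaque here)."""
    n_total = sum(s["A"].shape[0] for s in strokes)
    A = np.zeros((n_total, n_total))
    B = []
    offset = 0
    for i, s in enumerate(strokes):
        n = s["A"].shape[0]
        A[offset:offset + n, offset:offset + n] = s["A"]
        B.extend(s["B"])
        if i + 1 < len(strokes):
            # route some probability mass from this stroke's last state
            # into the first state of the following stroke
            A[offset + n - 1, :] *= (1.0 - link_prob)
            A[offset + n - 1, offset + n] = link_prob
        offset += n
    pi = np.zeros(n_total)
    pi[0] = 1.0            # start in the first state of the first stroke
    return pi, A, B
```

Reordering or reusing the entries of `strokes` yields new gesture models from the same trained stroke HMMs, which is the property the abstract highlights.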

Dual Autostereoscopic Display Platform for Multi-user Collaboration with Natural Interaction

  • Kim, Hye-Mi;Lee, Gun-A.;Yang, Ung-Yeon;Kwak, Tae-Jin;Kim, Ki-Hong
    • ETRI Journal / v.34 no.3 / pp.466-469 / 2012
  • In this letter, we propose a dual autostereoscopic display platform employing a natural interaction method, which will be useful for sharing visual data with users. To provide 3D visualization of a model to users who collaborate with each other, a beamsplitter is used with a pair of autostereoscopic displays, providing a visual illusion of a floating 3D image. To interact with the virtual object, we track the user's hands with a depth camera. The gesture recognition technique we use operates without any initialization process, such as specific poses or gestures, and supports several commands to control virtual objects by gesture recognition. Experiment results show that our system performs well in visualizing 3D models in real-time and handling them under unconstrained conditions, such as complicated backgrounds or a user wearing short sleeves.

HMM-based Intent Recognition System using 3D Image Reconstruction Data (3차원 영상복원 데이터를 이용한 HMM 기반 의도인식 시스템)

  • Ko, Kwang-Enu;Park, Seung-Min;Kim, Jun-Yeup;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.2 / pp.135-140 / 2012
  • The mirror neuron system in the cerebrum is engaged during visual-information-based imitative learning: by observing the activation of the mirror neuron system over a specific range, the intention behind an action can be inferred even when part of that range is hidden. The goal of this paper is to apply such imitative learning to a 3D vision-based intelligent system. In our previous research, we reconstructed 3D images acquired with a stereo camera using optical flow and an unscented Kalman filter. Here, the input is a sequence of 3D images that includes partially hidden regions, and we use a hidden Markov model (HMM) to recognize the intention behind an action from the restored data; the HMM's ability to perform dynamic inference over sequential input data suits tasks such as hand gesture recognition with occluded regions. Building on the object outline and feature extraction simulated in our previous research, we generate temporally continuous feature vectors, apply them to the HMM, and obtain hand gesture classifications according to intention patterns. The classification results are given as posterior probabilities, and they demonstrate the accuracy of the proposed approach.
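
A minimal sketch of the final classification step described above, assuming discrete observation symbols and one trained HMM per intention/gesture class; the function names and the scaled forward recursion are illustrative, not the paper's implementation.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probabilities, A: transitions, B: emissions, states x symbols),
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = alpha / scale
    return log_lik

def classify_gesture(obs, models):
    """Pick the intention/gesture class whose HMM scores the sequence highest;
    with equal class priors, the highest likelihood is the highest posterior."""
    scores = {name: forward_log_likelihood(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get), scores
```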

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association / v.14 no.1 / pp.24-29 / 2014
  • This paper proposes a Kinect-based human motion recognition model for 3D content control, which tracks body gestures with the infrared camera of the Kinect device. The proposed motion model computes the variation of the distances from the shoulder to the left and right hands, wrists, arms, and elbows. These variations are classified into movement commands such as move left, move right, up, down, enlarge, downsize, and select. The proposed Kinect-based motion recognition model is natural and low-cost compared with contact-type gesture recognition technologies and device-based gesture technologies that require expensive hardware.
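
A hedged sketch of the distance-variation idea in Python, assuming per-frame 3D joint positions from the Kinect skeleton; the thresholds, axis conventions, and command names are placeholders, and the paper's select command (likely a hold or pose) is omitted.

```python
import numpy as np

def classify_motion(shoulder, hand_prev, hand_curr, move_thresh=0.15):
    """Map the frame-to-frame change of the hand position, measured relative
    to the shoulder, to one of the discrete commands for 3D content control."""
    d_prev = np.asarray(hand_prev, dtype=float) - np.asarray(shoulder, dtype=float)
    d_curr = np.asarray(hand_curr, dtype=float) - np.asarray(shoulder, dtype=float)
    dx, dy, dz = d_curr - d_prev            # variation of the shoulder-hand vector
    if abs(dx) > max(abs(dy), abs(dz)) and abs(dx) > move_thresh:
        return "move_right" if dx > 0 else "move_left"
    if abs(dy) > abs(dz) and abs(dy) > move_thresh:
        return "move_up" if dy > 0 else "move_down"
    if abs(dz) > move_thresh:
        # depth-axis motion (sign convention assumed) drives zooming
        return "enlarge" if dz < 0 else "downsize"
    return "none"

command = classify_motion(shoulder=[0.0, 0.4, 2.0],
                          hand_prev=[0.1, 0.2, 1.8],
                          hand_curr=[0.4, 0.2, 1.8])   # -> "move_right"
```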

Implementation of a Gesture Recognition Signage Platform for Factory Work Environments

  • Rho, Jungkyu
    • International Journal of Internet, Broadcasting and Communication / v.12 no.3 / pp.171-176 / 2020
  • This paper presents an implementation of a gesture recognition platform that can be used in factory workplaces. The platform consists of signages that display workers' job orders and a control center that is used to manage work orders for factory workers. Workers do not need to carry work order documents and can browse their assigned work orders on the signage at their workplace. The contents of the signage are controlled by the worker's hand and arm gestures: gestures are extracted from body movements tracked by a 3D depth camera and converted into commands that control the displayed content. Using the control center, the factory manager can assign tasks to each worker, upload work order documents to the system, and monitor each worker's progress. The implementation has been applied experimentally to a machining factory workplace. The platform provides convenience for factory workers at their workplaces and improves the security of technical documents, and it can also serve as a building block for smart factories.
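
As an illustration only, the gesture-to-command path might look like the following sketch; the Signage class, gesture names, and command set are hypothetical, since the abstract does not enumerate them.

```python
class Signage:
    """Displays the work orders assigned to one worker."""
    def __init__(self, work_orders):
        self.work_orders = work_orders
        self.page = 0

    def execute(self, command):
        if command == "next_page":
            self.page = min(self.page + 1, len(self.work_orders) - 1)
        elif command == "previous_page":
            self.page = max(self.page - 1, 0)
        return self.work_orders[self.page]

# Hypothetical mapping from gestures recognized on the depth-camera skeleton
# to commands that change what the signage currently displays.
GESTURE_COMMANDS = {"swipe_left": "next_page", "swipe_right": "previous_page"}

def on_gesture(signage, gesture):
    command = GESTURE_COMMANDS.get(gesture)
    return signage.execute(command) if command else None

signage = Signage(["Order #101: milling", "Order #102: deburring"])
on_gesture(signage, "swipe_left")     # shows the next assigned work order
```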

Android Platform based Gesture Recognition using Smart Phone Sensor Data (안드로이드 플랫폼기반 스마트폰 센서 정보를 활용한 모션 제스처 인식)

  • Lee, Yong Cheol;Lee, Chil Woo
    • Smart Media Journal / v.1 no.4 / pp.18-26 / 2012
  • The growing number of smartphone applications has increased the importance of new user interfaces and raised interest in fusing data from multiple sensors. In this paper, we propose a method that combines the acceleration, magnetic, and gyro sensors to recognize gestures from the motion of the user's smartphone. The proposed method first obtains the 3D orientation of the smartphone and then recognizes hand-motion gestures using a hidden Markov model (HMM). The orientation is represented in spherical coordinates and quantized so that the representation is more sensitive to the rotation axis. Experimental results show a recognition success rate of 93%.
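
A small sketch of the quantization step, assuming a fused 3D orientation vector is already available from the acceleration/magnetic/gyro sensors; the bin counts and the symbol encoding are assumptions, not the paper's parameters.

```python
import numpy as np

def orientation_symbol(direction, n_azimuth=8, n_polar=4):
    """Quantize a 3D orientation vector into a discrete symbol by binning its
    spherical coordinates; the resulting symbol stream feeds the HMM."""
    d = np.asarray(direction, dtype=float)
    x, y, z = d / np.linalg.norm(d)
    azimuth = np.arctan2(y, x) % (2 * np.pi)        # angle around the vertical axis
    polar = np.arccos(np.clip(z, -1.0, 1.0))        # angle from the vertical axis
    a_bin = int(azimuth / (2 * np.pi) * n_azimuth) % n_azimuth
    p_bin = min(int(polar / np.pi * n_polar), n_polar - 1)
    return p_bin * n_azimuth + a_bin                # single observation symbol

# A motion gesture becomes a sequence of such symbols, one per sensor sample,
# which is then scored against per-gesture HMMs.
symbols = [orientation_symbol(v) for v in [(0.0, 0.9, 0.1), (0.2, 0.8, 0.3)]]
```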

Action recognition, hand gesture recognition, and emotion recognition using text classification method (Text classification 방법을 사용한 행동 인식, 손동작 인식 및 감정 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.213-216 / 2021
  • In this paper, we propose methods for action recognition, hand gesture recognition, and emotion recognition by applying a deep learning model used for text classification. First, features are extracted from video using a library, and the resulting feature vectors are stored after applying a formula. These vectors are used to train a model that combines Conv1D, Transformer, and GRU layers. With this approach, a single deep learning model can be applied to several different domains. Using the proposed method, we obtained classification accuracies of 99.66% on the SYSU 3D HOI dataset, 99.0% on the eNTERFACE'05 dataset, and 95.48% on the DHG-14 dataset.
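
A rough PyTorch sketch of the Conv1D + Transformer + GRU combination named above; the layer sizes, depths, and classification head are assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class Conv1DTransformerGRU(nn.Module):
    """Sequence classifier combining local convolution, global attention,
    and a recurrent summary, applied to per-frame feature vectors."""
    def __init__(self, feat_dim, n_classes, hidden=128, n_heads=4):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1)
        enc_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                                   # x: (batch, time, feat_dim)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)    # local temporal features
        x = self.transformer(x)                             # global attention
        _, h = self.gru(x)                                  # sequence summary
        return self.head(h[-1])                             # class logits

model = Conv1DTransformerGRU(feat_dim=64, n_classes=14)     # e.g. DHG-14 has 14 classes
logits = model(torch.randn(2, 50, 64))                      # two 50-frame feature sequences
```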

Development of a Hand-posture Recognition System Using 3D Hand Model (3차원 손 모델을 이용한 비전 기반 손 모양 인식기의 개발)

  • Jang, Hyo-Young;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference / 2007.04a / pp.219-221 / 2007
  • Recent changes toward ubiquitous computing require more natural human-computer interaction (HCI) interfaces that provide high information accessibility. Hand gestures, i.e., gestures performed by one or two hands, are emerging as a viable technology to complement or replace conventional HCI technology. This paper deals with hand-posture recognition, for which database construction is important. The human hand is composed of 27 bones, and its movement is modeled with 23 degrees of freedom. Even for the same hand posture, captured images may differ depending on the user's characteristics and the relative position between the hand and the cameras. To resolve the difficulty of defining hand postures and to construct a database of manageable size, we present a method that uses a 3D hand model. The database is built from the hand joint angles for each posture together with the corresponding silhouette images obtained from many viewpoints by projecting the model onto image planes. The proposed method does not require additional equations to define the movement constraints of each joint, and it makes it easy to obtain images of one hand posture from many viewpoints and distances, so the database can be constructed more precisely and concretely. The validity of the method is evaluated by applying it to a hand-posture recognition system.
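
The multi-view projection used to populate the database can be sketched as below, assuming the 3D joint positions of the hand model are already computed from a set of joint angles; the pinhole-camera parameters and viewpoint parameterization are illustrative.

```python
import numpy as np

def project_points(points_3d, yaw, pitch, distance=0.5, focal=500.0,
                   image_size=(256, 256)):
    """Project 3D hand-model points into a 2D image plane seen from one
    viewpoint; iterating over yaw, pitch, and distance yields the multi-view
    images used to build the hand-posture database."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @ \
        np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    cam = points_3d @ R.T + np.array([0.0, 0.0, distance])   # rotate, push away
    u = focal * cam[:, 0] / cam[:, 2] + image_size[0] / 2
    v = focal * cam[:, 1] / cam[:, 2] + image_size[1] / 2
    return np.stack([u, v], axis=1)                          # pixel coordinates

joints = np.zeros((21, 3))                                   # 21 hand joints, placeholder
uv = project_points(joints, yaw=np.pi / 6, pitch=0.2)
# Rendering the hand surface instead of the joints would give the silhouette
# image stored in the database alongside the underlying joint angles.
```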

Interface of Interactive Contents using Vision-based Body Gesture Recognition (비전 기반 신체 제스처 인식을 이용한 상호작용 콘텐츠 인터페이스)

  • Park, Jae Wan;Song, Dae Hyun;Lee, Chil Woo
    • Smart Media Journal / v.1 no.2 / pp.40-46 / 2012
  • In this paper, we describe interactive content that uses vision-based body gesture recognition as its input interface. Because the content features the imp, a figure familiar across Asian cultures, it can be enjoyed with a sense of cultural familiarity, and since players fight the imp with their own gestures, they are naturally absorbed in the game. Users can also choose among multiple endings at the end of the scenario. For gesture recognition, KINECT is used to obtain the three-dimensional coordinates of each limb joint in order to capture the static poses of the actions. Vision-based 3D human pose recognition is used to convey human gestures in HCI (human-computer interaction): a 2D pose model can recognize only simple 2D poses in particular environments, whereas a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses because it can use joint angles and the shape information of body parts. Because gestures can be represented as sequences of static poses, we recognize gestures composed of such poses using an HMM. The content described here uses the gesture recognition result as its input interface, so users can control it naturally with their gestures alone, and the real-time interaction with the imp is intended to improve immersion and interest.
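
A minimal sketch of turning Kinect skeleton frames into the static-pose symbols the HMM consumes; the nearest-template labeling and normalization are assumptions about one plausible realization, not the authors' exact method.

```python
import numpy as np

def normalize_pose(joints):
    """Remove body position and scale from one skeleton frame."""
    j = np.asarray(joints, dtype=float)
    j = j - j.mean(axis=0)
    return j / np.linalg.norm(j)

def pose_symbol(joints, templates):
    """Assign a Kinect skeleton frame to the nearest static-pose template;
    a gesture then becomes the sequence of these labels, which the HMM
    classifies."""
    j = normalize_pose(joints)
    dists = [np.linalg.norm(j - normalize_pose(t)) for t in templates]
    return int(np.argmin(dists))

# Placeholder reference poses; in practice these come from labelled frames.
templates = [np.random.rand(20, 3) for _ in range(5)]
sequence = [pose_symbol(np.random.rand(20, 3), templates) for _ in range(30)]
# 'sequence' is then scored against per-gesture HMMs, as in the other entries.
```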
