• Title/Summary/Keyword: 3D hand gesture

Search Results: 66

Robust Hand Region Extraction Using a Joint-based Model (관절 기반의 모델을 활용한 강인한 손 영역 추출)

  • Jang, Seok-Woo;Kim, Sul-Ho;Kim, Gye-Young
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.9 / pp.525-531 / 2019
  • Efforts to utilize human gestures to implement a more natural and interactive interface between humans and computers have been ongoing in recent years. In this paper, we propose a new algorithm that accepts consecutive three-dimensional (3D) depth images, defines a hand model, and robustly extracts the human hand region based on six palm joints and 15 finger joints. The 3D depth images are then adaptively binarized to exclude areas of non-interest, such as the background, so that only the hand of the person, the area of interest, is accurately extracted. Experimental results show that the presented algorithm detects the human hand region 2.4% more accurately than the existing method. The hand region extraction algorithm proposed in this paper is expected to be useful in various practical applications related to computer vision and image processing, such as gesture recognition, virtual reality, 3D motion games, and sign language recognition.
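The adaptive binarization step described above can be sketched as a depth-band threshold; the depth band, millimetre units, and toy depth map below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def extract_hand_region(depth, near=400, far=900):
    """Binarize a depth image so that only pixels inside an assumed
    hand depth band [near, far] (in millimetres) survive; background
    and other areas of non-interest are suppressed. In practice the
    band would be chosen adaptively, e.g. around the closest blob."""
    mask = (depth >= near) & (depth <= far)
    return mask.astype(np.uint8)

# Toy 4x4 depth map: hand pixels ~600 mm, background ~2000 mm.
depth = np.array([[2000, 2000,  600,  610],
                  [2000,  590,  600,  605],
                  [2000,  595,  600, 2000],
                  [2000, 2000, 2000, 2000]])
mask = extract_hand_region(depth)   # 1 where the hand is, 0 elsewhere
```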

Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un;Whang, Min-Cheol;Lee, Jung-Nyun;Heo, Hwan;Jeong, Yong-Mu
    • Science of Emotion and Sensibility / v.14 no.4 / pp.627-636 / 2011
  • The purpose of this study is to develop a multi-modal interaction system. This system provides a realistic and immersive experience through multi-modal interaction. The system recognizes user behavior, intention, and attention, which overcomes the limitations of uni-modal interaction. The multi-modal interaction system is based upon gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods were based on sensors that were selected to analyze the accuracy of 3D gesture recognition technology using meta-analysis. The elements of intuitive gesture interaction were reflected through the results of experiments. The attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion cognitive system, an eye gaze detecting system, and a bio-reaction sensing system. The first module, the motion cognitive system, uses an accelerometer and flexible sensors to recognize hand and finger movements of the user. The second module, the eye gaze detecting system, detects pupil movements and reactions. The final module is a bio-reaction sensing (attention evaluating) system which tracks cardiovascular and skin temperature reactions. This study will be used for the development of realistic digital entertainment technology.

Segmentation of Pointed Objects for Service Robots (서비스 로봇을 위한 지시 물체 분할 방법)

  • Kim, Hyung-O;Kim, Soo-Hwan;Kim, Dong-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.4 no.2 / pp.139-146 / 2009
  • This paper describes how an unknown object indicated by a person's pointing gesture can be extracted while interacting with a robot. Using a stereo vision sensor, our proposed method consists of three stages: the detection of the operator's face, the estimation of the pointing direction, and the extraction of the pointed object. The operator's face is detected using Haar-like features. We then estimate the 3D pointing direction from the shoulder-to-hand line. Finally, we segment the unknown object from 3D point clouds in the estimated region of interest. On the basis of this proposed method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
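The pointing-direction estimate from the shoulder-to-hand line can be sketched as a ray-plane intersection; the plane height, coordinates, and units below are hypothetical, and the paper's stereo-vision specifics are not reproduced:

```python
import numpy as np

def pointing_ray(shoulder, hand):
    """Unit direction of the 3D shoulder-to-hand line."""
    d = np.asarray(hand, float) - np.asarray(shoulder, float)
    return d / np.linalg.norm(d)

def point_on_table(shoulder, hand, table_z=0.0):
    """Intersect the pointing ray with a horizontal plane z = table_z,
    giving a rough region of interest for the pointed object."""
    s = np.asarray(shoulder, float)
    d = pointing_ray(shoulder, hand)
    t = (table_z - s[2]) / d[2]          # ray parameter at the plane
    return s + t * d

# Shoulder at 1.4 m, hand at 1.0 m, pointing down toward the table.
target = point_on_table(shoulder=[0.0, 0.0, 1.4], hand=[0.3, 0.1, 1.0])
```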

Web-based 3D Virtual Experience using Unity and Leap Motion (Unity와 Leap Motion을 이용한 웹 기반 3D 가상품평)

  • Jung, Ho-Kyun;Park, Hyungjun
    • Korean Journal of Computational Design and Engineering / v.21 no.2 / pp.159-169 / 2016
  • In order to realize the virtual prototyping (VP) of digital products, it is important to provide the people involved in product development with appropriate visualization of and interaction with the products, and vivid simulation of user interface (UI) behaviors in an interactive 3D virtual environment. In this paper, we propose an approach to web-based 3D virtual experience using Unity and Leap Motion. We adopt Unity as an implementation platform which easily and rapidly implements the visualization of the products and the design and simulation of their UI behaviors, and allows remote users to gain easy access to the virtual environment. Additionally, we combine Leap Motion with Unity to provide natural and immersive interaction using the user's hand gestures. Based on the proposed approach, we have developed a testbed system for web-based 3D virtual experience and applied it to the design evaluation of various digital products. A button selection test was done to investigate the quality of the interaction using Leap Motion, and a preliminary user study was also performed to show the usefulness of the proposed approach.

Hand Gesture Recognition Method based on the MCSVM for Interaction with 3D Objects in Virtual Reality (가상현실 3D 오브젝트와 상호작용을 위한 MCSVM 기반 손 제스처 인식)

  • Kim, Yoon-Je;Koh, Tack-Kyun;Yoon, Min-Ho;Kim, Tae-Young
    • Annual Conference of KIPS / 2017.11a / pp.1088-1091 / 2017
  • With recent advances in and growing interest in graphics-based virtual reality technology, hand gesture recognition has been actively studied as a method for natural interaction with 3D objects. This paper proposes MCSVM-based hand gesture recognition for interaction with 3D objects in virtual reality. Various hand gestures are first captured through a Leap Motion device, and the preprocessed hand data are passed on. The hand data are classified at the first stage with a binary decision tree, then resampled and converted into chain codes, whose histograms form the feature data. Based on these features, a second-stage classification is performed through trained MCSVMs to recognize the gesture. Experimental results show an average recognition rate of 99.2% for 16 command gestures for interacting with 3D objects. An affective evaluation against a mouse interface showed that, compared with mouse input, the proposed method enables more intuitive and user-friendly interaction; it can therefore serve as an input interface in many virtual reality applications such as games, learning simulations, design, and medicine, and helps increase immersion in virtual reality.
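The chain-code histogram feature described above can be sketched as follows; the 8-direction quantization is standard Freeman chain coding, the toy stroke is hypothetical, and the MCSVM classifier that would consume the histogram (e.g. a one-vs-one multi-class SVM) is omitted:

```python
import numpy as np

def chain_code_histogram(points):
    """8-direction Freeman chain-code histogram of a resampled 2D
    hand trajectory: each consecutive pair of points is quantized
    into one of 8 directions, and the normalized histogram serves
    as the feature vector for the gesture classifier."""
    pts = np.asarray(points, float)
    d = np.diff(pts, axis=0)                       # step vectors
    angles = np.arctan2(d[:, 1], d[:, 0])          # step directions
    codes = np.round(angles / (np.pi / 4)).astype(int) % 8
    hist = np.bincount(codes, minlength=8).astype(float)
    return hist / hist.sum()

# A straight rightward stroke: every step quantizes to chain code 0.
stroke = [(0, 0), (1, 0), (2, 0), (3, 0)]
feat = chain_code_histogram(stroke)
```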

Vision-based 3D Hand Gesture Recognition for Human-Robot Interaction (휴먼-로봇 상호작용을 위한 비전 기반3차원 손 제스처 인식)

  • Roh, Myung-Cheol;Chang, Hye-Min;Kang, Seung-Yeon;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.421-425 / 2006
  • Interest in robots, including humanoid robots, has grown recently. Accordingly, the importance of robot technologies that allow interaction with people, not merely robots with a human-like appearance, is being emphasized. One of the most efficient and natural methods for such interaction is vision-based gesture recognition. The most important part of gesture recognition is 3D gesture recognition, which recognizes the shape and motion of the hand. In this paper, we introduce a 3D hand model estimation method and a command-gesture recognition system for recognizing 3D hand gestures, and propose a framework that can be extended to sign language and finger spelling.

Dynamic Hand Gesture Recognition Using a CNN Model with 3D Receptive Fields (3 차원 수용영역 구조의 CNN 모델을 이용한 동적 수신호 인식 기법)

  • Park, Jin-Hee;Lee, Joseph S.;Kim, Ho-Joon
    • Annual Conference of KIPS / 2007.05a / pp.459-462 / 2007
  • This study proposes a pattern recognition model for dynamic hand-signal recognition that combines a CNN-based feature extraction technique with an FMM neural-network-based feature analysis technique. For hand-signal recognition, we present a 3D data representation based on the motion information of the target object in image patterns, and a technique for extracting features from it. In the feature extraction module, we propose a CNN model whose receptive fields are extended to a 3D structure, and show that it minimizes the effect of spatial variation of feature points in the training patterns. To improve recognition efficiency, we also define a WFMM-model-based feature analysis technique as a methodology for selecting effective features from a large feature set, and introduce a recognition method that uses the selected features.
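A receptive field extended to three dimensions amounts to convolving a stacked-frame video volume with a 3D kernel. This minimal NumPy sketch shows a single "valid" 3D convolution over a toy clip; it is not the paper's full CNN/WFMM pipeline, and the kernel and clip sizes are assumptions:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """'Valid' 3D convolution (cross-correlation) of a stacked-frame
    video volume (T, H, W) with a 3D receptive field, i.e. a CNN
    filter extended along the time axis."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# A 2x2x2 averaging receptive field over a 3-frame toy clip.
clip = np.arange(3 * 4 * 4, dtype=float).reshape(3, 4, 4)
kernel = np.full((2, 2, 2), 1 / 8)
resp = conv3d_valid(clip, kernel)
```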

A real-time robust body-part tracking system for intelligent environment (지능형 환경을 위한 실시간 신체 부위 추적 시스템 -조명 및 복장 변화에 강인한 신체 부위 추적 시스템-)

  • Jung, Jin-Ki;Cho, Kyu-Sung;Choi, Jin;Yang, Hyun S.
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.411-417 / 2009
  • We proposed a robust body-part tracking system for intelligent environments that does not limit the freedom of users. Unlike previous gesture recognizers, we improved the generality of the system by adding the ability to recognize details, such as distinguishing long sleeves from short sleeves. For precise tracking of each body part, we obtained images of the hands, head, and feet separately from a single camera, and chose an appropriate feature for each part when detecting it. Using a calibrated camera, we converted the detected 2D body parts into a 3D posture. In experiments, this system showed advanced hand-tracking performance in real time (50 fps).

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition technology is normally used to convey human gestures in HCI (Human-Computer Interaction). A 2D pose-model-based recognition method recognizes only simple 2D human poses in particular environments. On the other hand, a 3D pose model, which describes the 3D human skeletal structure, can recognize more complex poses than a 2D pose model because it can use the joint angles and shape information of body parts. In this paper, we describe the development of interactive game contents using a pose recognition interface based on 3D human body joint information. Our system was designed so that users can control the game contents with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates, each consisting of 3D information for 14 human body joints. We implemented the game contents with our pose recognition system and confirmed the efficiency of the proposed system. In the future, we will improve the system so that poses can be recognized robustly in various environments.
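The template-matching step above can be sketched as a nearest-template search over 14-joint poses; the joint data, distance measure, tolerance, and template names below are hypothetical stand-ins for the paper's templates:

```python
import numpy as np

def match_pose(current, templates, tol=0.15):
    """Compare the current pose (14 joints x 3D positions) to
    predefined templates by mean per-joint Euclidean distance;
    return the best-matching template name, or None if nothing
    is within tolerance."""
    best_name, best_err = None, np.inf
    for name, tmpl in templates.items():
        err = np.linalg.norm(current - tmpl, axis=1).mean()
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= tol else None

rng = np.random.default_rng(0)
t_pose = rng.normal(size=(14, 3))                  # synthetic template
templates = {"t_pose": t_pose, "other": t_pose + 1.0}
noisy = t_pose + 0.01                              # perturbed observation
label = match_pose(noisy, templates)
```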

AdaBoost-based Gesture Recognition Using Time Interval Window Applied Global and Local Feature Vectors with Mono Camera (모노 카메라 영상기반 시간 간격 윈도우를 이용한 광역 및 지역 특징 벡터 적용 AdaBoost기반 제스처 인식)

  • Hwang, Seung-Jun;Ko, Ha-Yoon;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.3 / pp.471-479 / 2018
  • Recently, smart TVs and Android/iOS set-top boxes have become widespread. This paper proposes a new approach to controlling the TV using gestures, moving away from the era of control by remote. The AdaBoost algorithm is applied to gesture recognition using a mono camera. First, we extract body coordinates with Camshift-based body tracking and an estimation algorithm based on Gaussian background removal. Using global and local feature vectors, we recognize gestures with speed changes. Tracking the time-interval trajectories of the hand and wrist, the AdaBoost algorithm with the CART algorithm is used to train and classify gestures. Principal feature vectors with a high classification success rate are selected using the CART algorithm. As a result, 24 optimal feature vectors were found, which showed a lower error rate (3.73%) and a higher accuracy rate (95.17%) than the existing algorithm.
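AdaBoost over tree-based weak learners can be sketched as follows; for brevity this uses depth-1 decision stumps rather than the paper's CART trees, and the toy one-feature data set is hypothetical:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """Minimal binary AdaBoost with decision stumps. y must be in
    {-1, +1}. Each round fits the best weighted stump, computes its
    vote weight alpha, and reweights the training samples."""
    n = len(y)
    w = np.full(n, 1 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):               # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = max(err, 1e-10)                     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, f] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # upweight mistakes
        w /= w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def predict_adaboost(ensemble, X):
    score = sum(a * s * np.where(X[:, f] > t, 1, -1)
                for a, f, t, s in ensemble)
    return np.sign(score)

# Toy separable data: feature value > 0.5 means class +1.
X = np.array([[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]])
y = np.array([-1, -1, -1, 1, 1, 1])
model = train_adaboost(X, y, rounds=3)
pred = predict_adaboost(model, X)
```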