• Title/Summary/Keyword: 손 끝점 검출 (fingertip detection)


Multi Fingertip Detection Method (다중 손끝점 검출 기법)

  • Yu, Sunjin;Koh, Wan Ki;Kim, Sang Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1718-1720 / 2013
  • This paper proposes a feature extraction technique and a fingertip detection algorithm based on it for multiple fingertip detection. Local Binary Pattern (LBP) features are used for feature extraction, and Principal Component Analysis (PCA) is applied to reduce the feature dimensionality. A Reduced Multivariate polynomial Model (RM) classifier is used to discriminate fingertips, and experimental results confirm that the proposed fingertip detection method operates in a variety of environments.
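As a minimal sketch of the LBP feature step this abstract describes (the 3x3 neighborhood, clockwise bit order, and function name are illustrative assumptions, not taken from the paper):

```python
def lbp_code(patch):
    """Compute the 8-bit LBP code of a 3x3 grayscale patch.

    Each neighbor is compared against the center pixel; the 8
    comparison bits are packed clockwise into one integer.
    (Illustrative sketch; the paper's exact variant may differ.)
    """
    center = patch[1][1]
    # Clockwise neighbor order starting at the top-left pixel.
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for i, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << i
    return code
```

Codes computed over every pixel would then form the feature vector that PCA reduces before classification.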

Hand Region Tracking and Fingertip Detection based on Depth Image (깊이 영상 기반 손 영역 추적 및 손 끝점 검출)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.18 no.8 / pp.65-75 / 2013
  • This paper proposes a method of tracking the hand region and detecting the fingertip using only depth images. To eliminate the influence of lighting conditions and obtain information quickly and stably, the method relies solely on depth information; it also uses region growing to identify errors that can occur during tracking, and a fingertip detection step that can be applied to the recognition of various gestures. First, the closest point of approach is identified through the process of transferring the center point in order to locate the tracking point, and the region is grown from that point to detect the hand region and its boundary. Next, the ratio of the invalid boundary obtained by region growing is used to calculate the validity of the tracking region and thereby judge whether tracking is normal. If tracking is normal, the contour is extracted from the detected hand region, and curvature, RANSAC, and the convex hull are used to detect the fingertip. Lastly, quantitative and qualitative analyses verify the performance in various situations and demonstrate the efficiency of the proposed tracking and fingertip detection algorithm.
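A common way to realize the curvature part of such a contour-based fingertip detector is the k-curvature test; the sketch below is a generic version under my own parameter choices, not the paper's exact formulation:

```python
import math

def k_curvature_fingertips(contour, k=2, angle_thresh=60.0):
    """Flag contour points whose k-curvature angle is sharp.

    For each point p[i] on a closed contour, measure the angle between
    the vectors to p[i-k] and p[i+k]; angles below angle_thresh degrees
    mark sharp protrusions, i.e. fingertip candidates.
    """
    tips = []
    n = len(contour)
    for i in range(n):
        ax, ay = contour[(i - k) % n]
        bx, by = contour[i]
        cx, cy = contour[(i + k) % n]
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        if norm == 0:
            continue
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        if angle < angle_thresh:
            tips.append(i)
    return tips
```

In practice this is combined with a convex-hull check so that sharp valleys between fingers are rejected.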

Robust Endpoint Detection for Bimodal System in Noisy Environments (잡음환경에서의 바이모달 시스템을 위한 견실한 끝점검출)

  • 오현화;권홍석;손종목;진성일;배건성
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.5 / pp.289-297 / 2003
  • The performance of a bimodal system is affected by the accuracy of endpoint detection from the input signal as well as by the performance of the speech recognition or lipreading system. In this paper, we propose an endpoint detection method that detects endpoints from the audio and video signals respectively and uses the signal-to-noise ratio (SNR) estimated from the input audio signal to select the endpoints that are reliable under acoustic noise. In other words, endpoints are detected from the audio signal under high SNR and from the video signal under low SNR. Experimental results show that the bimodal system using the proposed endpoint detector achieves satisfactory recognition rates, especially when the acoustic environment is quite noisy.
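The SNR-gated modality switch described above can be sketched as follows (the 10 dB threshold and function names are illustrative assumptions; the paper's actual estimator and threshold are not given here):

```python
import math

def estimate_snr_db(signal_power, noise_power):
    """Estimate SNR in dB from average signal and noise power."""
    return 10.0 * math.log10(signal_power / noise_power)

def select_endpoint_modality(snr_db, threshold_db=10.0):
    """Pick which modality's endpoints to trust, as the abstract
    describes: audio endpoints under high SNR, video under low SNR."""
    return "audio" if snr_db >= threshold_db else "video"
```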

Image Processing Based Virtual Reality Input Method using Gesture (영상처리 기반의 제스처를 이용한 가상현실 입력기)

  • Hong, Dong-Gyun;Cheon, Mi-Hyeon;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems / v.24 no.5 / pp.129-137 / 2019
  • Ubiquitous computing technology is emerging as information technology advances, and a number of studies are being carried out to increase device miniaturization and user convenience. Some of the proposed devices, however, are inconvenient to use because they must be held and operated by hand. To address these inconveniences, this paper proposes a virtual button that can be used while watching television. A camera is installed at the top of the TV and, because the user watches the video from the front, it captures the user from above the head. The background and the hand area are extracted separately from the captured image, the contour of the extracted hand area is computed, and the fingertip is detected. Detecting the fingertip produces a virtual button interface at the top of the image captured from the front, and the button activates when the detected fingertip, acting as a pointer, is located inside the button.
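The final activation test (pointer inside button) reduces to a point-in-rectangle check; this is a minimal sketch with an assumed (x, y, w, h) button layout, not the paper's actual interface code:

```python
def button_pressed(fingertip, button):
    """Return True when the fingertip point lies inside the button.

    fingertip: (x, y) pixel position of the detected fingertip.
    button:    (x, y, w, h) rectangle in image coordinates.
    """
    fx, fy = fingertip
    bx, by, bw, bh = button
    return bx <= fx < bx + bw and by <= fy < by + bh
```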

Hand Region Tracking and Finger Detection for Hand Gesture Recognition (손 제스처 인식을 위한 손 영역 추적 및 손가락 검출 방법)

  • Park, Se-Ho;Kim, Tae-Gon;Lee, Ji-Eun;Lee, Kyung-Taek
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.06a / pp.34-35 / 2014
  • This paper presents a method of tracking the hand region and finding the fingertip using a depth camera for finger gesture recognition. For real-time region tracking, the center point of the hand region must be detected and tracked with little computation, and because recognizing various gestures effectively requires identifying fingers from the hand shape, a method of finding the fingertip is presented as well. In addition, to verify that fingers are detected correctly, finger movement and a finger click gesture were mapped to mouse input and the detection results were tested.
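The low-cost center-point step mentioned above is typically a single-pass centroid over the segmented hand mask; a minimal sketch (mask layout and function name assumed, not from the paper):

```python
def region_centroid(mask):
    """Centroid (cx, cy) of the truthy pixels in a 2D binary mask,
    computed in a single pass -- the kind of cheap center-point step
    real-time hand tracking needs. Returns None for an empty mask."""
    sx = sy = n = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                sx += x
                sy += y
                n += 1
    if n == 0:
        return None
    return (sx / n, sy / n)
```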


MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition (손 자세 인식을 이용한 MPEG-U 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering / v.19 no.1 / pp.83-95 / 2014
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the HCI (human computer interaction) field. In this paper, we introduce a hand posture recognition method using a depth camera. Moreover, the method is incorporated into an MPEG-U based advanced user interaction (AUI) interface system, which can provide a natural interface with a variety of devices. The proposed method initially detects the positions and lengths of all open fingers and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when the user makes a gesture representing a pattern of the AUI data formats specified in MPEG-U part 2. The AUI interface system represents the user's hand posture in a compliant MPEG-U schema structure. Experimental results show the performance of the hand posture recognition and verify that the AUI interface system is compatible with the MPEG-U standard.

Hand Posture Recognition using Data of Edge Orientation Histogram (에지 방향성 히스토그램 데이터를 이용한 손 형상 인식)

  • Kim, Jang-Woon;Kim, Song-Gook;Jang, Han-Byul;Bae, Ki-Tae;Lee, Chil-Woo
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.49-53 / 2006
  • This paper describes a system that robustly detects the hand region in images with complex backgrounds, recognizes the hand shape, and uses it to control a picture-matching application. After extracting only the hand region using skin-color information, fingertip template matching is used to find the fingertip. An edge orientation histogram of the hand region is then computed, and principal component analysis is applied to the resulting data to recognize the hand shape. Finally, the picture-matching application is controlled by commands based on the recognized hand shape and fingertip tracking. Applying the proposed algorithm to control of the picture-matching application produced stable experimental results, confirming that it can be used in a variety of HCI applications.
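A minimal form of the edge orientation histogram feature, using central-difference gradients and magnitude weighting (the bin count and gradient scheme are my assumptions, not the paper's):

```python
import math

def edge_orientation_histogram(gray, bins=8):
    """Histogram of gradient orientations over a grayscale image
    (list of rows), weighted by gradient magnitude. Orientations are
    folded into [0, pi) so opposite gradient directions share a bin."""
    hist = [0.0] * bins
    h, w = len(gray), len(gray[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]   # horizontal gradient
            gy = gray[y + 1][x] - gray[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    return hist
```

The resulting histograms would be the data fed into PCA for hand-shape recognition.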


Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a vision and depth information based real-time hand gesture interface method using finger joint estimation. For this, the areas of the left and right hands are segmented after mapping the visual image onto the depth information image, and labeling and boundary noise removal are performed. Then, the centroid point and rotation angle of each hand area are calculated. Afterwards, a circle is expanded at regular intervals from the centroid point of the hand, and the joint points and end points of the fingers are detected by obtaining the midway points where the circle crosses the hand boundary; the hand model is then recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers using both hands, the accuracy was over 90% and the performance over 25 fps. The proposed method can be used as a contactless input interface in HCI control, education, and game applications.
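The expanding-circle step can be sketched as sampling a circle around the hand center and taking the angular midpoints of the arcs that fall inside the hand mask (sampling density, wrap-around handling, and names are my simplifications, not the paper's):

```python
import math

def circle_crossings(mask, center, radius, samples=360):
    """Sample a circle around the hand center and return the angular
    midpoints (radians) of the contiguous runs of samples that fall
    inside the binary mask -- a sketch of the expanding-circle step
    used to locate finger joints and tips."""
    cx, cy = center
    inside = []
    for i in range(samples):
        a = 2 * math.pi * i / samples
        x = int(round(cx + radius * math.cos(a)))
        y = int(round(cy + radius * math.sin(a)))
        ok = 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]
        inside.append(ok)
    # Midpoints of contiguous inside-runs (wrap-around ignored for brevity).
    runs, start = [], None
    for i, ok in enumerate(inside):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            runs.append(2 * math.pi * ((start + i - 1) / 2) / samples)
            start = None
    if start is not None:
        runs.append(2 * math.pi * ((start + samples - 1) / 2) / samples)
    return runs
```

Repeating this for growing radii traces each finger from joint to tip.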

Implementation of Paper Keyboard Piano with a Kinect (키넥트를 이용한 종이건반 피아노 구현 연구)

  • Lee, Jung-Chul;Kim, Min-Seong
    • Journal of the Korea Society of Computer and Information / v.17 no.12 / pp.219-228 / 2012
  • In this paper, we propose a paper keyboard piano implementation that detects finger movement using 3D image data from a Kinect. Keyboard pattern and keyboard depth information are extracted from the color and depth images to detect touch events on the paper keyboard and to identify the touched key. Hand region detection errors are unavoidable when simply comparing the input depth image with the background depth image, and such errors are critical in key touch detection, so skin color is used to minimize them. Fingertips are then detected using contour detection with an area limit and the convex hull. Finally, the key touch decision is made using the keyboard pattern information at the fingertip position. Experimental results showed that the proposed method can detect key touches with high accuracy. The paper keyboard piano can serve as an easy and convenient interface for beginners learning to play the piano with PC-based learning software.
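The touch decision amounts to checking the fingertip's depth against the keyboard plane and mapping its x position to a key; a sketch assuming uniform key widths and a fixed depth tolerance (all parameters illustrative, not from the paper):

```python
def touched_key(fingertip, keyboard_depth, key_width, num_keys, depth_tol=5):
    """Map a fingertip (x, depth) to a key index on the paper keyboard.

    A touch fires when the fingertip depth is within depth_tol of the
    keyboard plane depth; the key index comes from the x position.
    Returns None when there is no touch or the x falls off the keyboard.
    """
    x, depth = fingertip
    if abs(depth - keyboard_depth) > depth_tol:
        return None
    key = x // key_width
    return key if 0 <= key < num_keys else None
```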

A Real-time Hand Pose Recognition Method with Hidden Finger Prediction (은닉된 손가락 예측이 가능한 실시간 손 포즈 인식 방법)

  • Na, Min-Young;Choi, Jae-In;Kim, Tae-Young
    • Journal of Korea Game Society / v.12 no.5 / pp.79-88 / 2012
  • In this paper, we present a real-time hand pose recognition method that provides an intuitive user interface through hand poses or movements without a keyboard and mouse. For this, the areas of the right and left hands are segmented from the depth camera image, and noise removal is performed. Then, the rotation angle and centroid point of each hand area are calculated. Subsequently, a circle is expanded at regular intervals from the centroid point of the hand, and the joint points and end points of the fingers are detected by obtaining the midway points where the circle crosses the hand boundary. Lastly, the hand information calculated above is matched against the hand model of the previous frame, the hand pose is recognized, and the hand model is updated for the next frame. This method enables the prediction of hidden fingers through the hand model information of the previous frame, using temporal coherence across consecutive frames. In experiments on various hand poses with hidden fingers using both hands, the accuracy was over 95% and the performance over 32 fps. The proposed method can be used as a contactless input interface in presentation, advertisement, education, and game applications.
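The temporal-coherence idea (keep an unmatched finger from the previous frame as a prediction for a hidden one) can be sketched as a greedy nearest-neighbor match; the distance threshold and greedy strategy are my assumptions, not the paper's actual matching:

```python
def match_to_previous(prev_tips, cur_tips, max_dist=20.0):
    """Greedy nearest-neighbor match of current fingertips to the
    previous frame's hand model. Unmatched previous tips are kept at
    their old positions as predictions for hidden fingers."""
    matched = {}
    used = set()
    for pi, (px, py) in enumerate(prev_tips):
        best, best_d = None, max_dist
        for ci, (cx, cy) in enumerate(cur_tips):
            if ci in used:
                continue
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = ci, d
        if best is not None:
            matched[pi] = cur_tips[best]
            used.add(best)
        else:
            matched[pi] = (px, py)  # hidden finger: reuse previous position
    return matched
```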