• Title/Abstract/Keyword: hands tracking

52 search results (processing time: 0.022 s)

Real Time Recognition of Finger-Language Using Color Information and Fuzzy Clustering Algorithm

  • Kim, Kwang-Baek;Song, Doo-Heon;Woo, Young-Woon
    • Journal of Information and Communication Convergence Engineering
    • /
    • Vol. 8, No. 1
    • /
    • pp.19-22
    • /
    • 2010
  • A finger language that helps hearing-impaired people communicate is not familiar to ordinary hearing people. In this paper, we propose a method for real-time finger-language recognition with a vision system using color information and a fuzzy clustering algorithm. We use the YCbCr color model and a Canny mask to locate the hands and their boundary lines. After extracting the regions of the two hands with an 8-directional contour tracking algorithm and morphological information, the system uses FCM (fuzzy c-means) to classify the finger-language signals. Experiments show that the proposed method is sufficiently efficient.
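
The front end of this pipeline (skin-region extraction in YCbCr, Canny boundary detection, and contour extraction) can be sketched roughly with OpenCV as below; the Cr/Cb skin thresholds are common illustrative values, not the paper's parameters, and the final FCM classification step is left out.

```python
import cv2
import numpy as np

def extract_hand_regions(frame_bgr):
    """Rough sketch: locate skin-colored hand regions and their boundaries."""
    # OpenCV names this color space YCrCb (note the channel order).
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], np.uint8)    # illustrative skin bounds
    upper = np.array([255, 173, 127], np.uint8)
    skin = cv2.inRange(ycrcb, lower, upper)

    # Morphological clean-up of the skin mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)

    # Boundary lines (the paper uses a Canny mask at this stage).
    edges = cv2.Canny(skin, 50, 150)

    # Hand candidates: the two largest skin contours.
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hands = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    return hands, edges
```

Shape features computed from each extracted hand region would then be passed to fuzzy c-means for classification, which is the part this sketch omits.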

A Study on the Visitors' Behavior by the Exhibition Method and the Presentation in Science Museum

  • 박종래;최준혁;배선화;임채진
    • Proceedings of the Korean Institute of Interior Design Conference
    • /
    • Proceedings of the KIID 2004 Spring Conference
    • /
    • pp.127-130
    • /
    • 2004
  • In order to verify the validity of the exhibition method in a science museum, this study undertakes a visitor follow-up survey and clarifies how the exhibition method influences visitors' behavior and what its characteristics are. The follow-up survey tracked children accompanied by their families. The characteristics of visitor behavior by exhibition method are as follows: use frequency decreases in the order 'Experience type', 'Participation type', 'Fixed type'. The Experience type tends to be used repeatedly and continuously, and its use time is long, whereas the use time of the Fixed type is short. As a result, although hands-on exhibits are used more frequently and for longer than hands-off exhibits, visitor behavior turns out to be influenced by factors such as the exhibition method, the presentation, and the contents of the exhibition.


Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 9, No. 5
    • /
    • pp.1856-1869
    • /
    • 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a structured feature framework. During human tracking and activity recognition, dense depth images are first captured with a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion, and compute the centroid of each activity based on a chain-coding mechanism and centroid point extraction. For body skin joints features, we estimate human body skin color to identify body parts (i.e., head, hands, and feet) from which joint point information is extracted. These joint points are further processed in a feature-extraction stage that includes distance position features and centroid distance features. Lastly, self-organizing maps are used to recognize the different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system should be applicable to consumer application systems such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
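
A self-organizing map of the kind used in the final recognition step can be written in a few lines of NumPy; the grid size, learning rate, and neighborhood schedule below are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def train_som(features, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: `features` is an (N, D) array of joint-distance
    feature vectors; returns the trained weight grid."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    weights = rng.random((n_rows, n_cols, features.shape[1]))
    # Grid coordinates used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                  indexing="ij"), axis=-1)
    n_steps, step = epochs * len(features), 0
    for _ in range(epochs):
        for x in rng.permutation(features):
            # Best-matching unit (BMU) for this sample.
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            # Linearly decaying learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Gaussian neighborhood pulls nearby units toward the sample.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights
```

At recognition time each feature vector is mapped to its best-matching unit, and the activity label associated with that unit during training gives the predicted class.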

Face and Hand Tracking using MAWUPC Algorithm in Complex Background

  • 이상환;안상철;김형곤;김재희
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • Vol. 39, No. 2
    • /
    • pp.39-49
    • /
    • 2002
  • In this paper, based on the concept of moving color, we propose the MAWUPC (Motion Adaptive Weighted Unmatched Pixel Count) algorithm, which performs tracking by efficiently combining an object's color and motion information, and we use it to track the face and hands in image sequences with general backgrounds. MAWUPC improves on the AWUPC algorithm, an earlier study on the moving-color concept; it is an effective combination of a color transform that uses the color information of the target object, a UPC (Unmatched Pixel Count) operation for motion detection, and a discrete Kalman filter that extracts motion information. The proposed algorithm has the advantage of handling the two biggest problems that generally arise in object tracking: occlusion between similarly colored target objects, and complex backgrounds that interfere with tracking. Experiments on color image sequences captured with a single camera against complex backgrounds show that the proposed algorithm resolves the serious occlusion problems that frequently occur when tracking moving faces and hands, namely face-hand and hand-hand occlusions.
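
Two of the building blocks named above, the unmatched-pixel-count motion cue and a discrete Kalman filter for the tracked centroid, can be sketched as follows; the threshold and noise parameters are illustrative assumptions, and the color-transform weighting of the full MAWUPC algorithm is omitted.

```python
import cv2
import numpy as np

def unmatched_pixel_count(prev_gray, curr_gray, thresh=25):
    """UPC motion cue: pixels whose intensity changed by more than
    `thresh` between consecutive frames are marked as moving."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    return (diff > thresh).astype(np.uint8)

def make_centroid_kalman(dt=1.0):
    """Discrete Kalman filter with a constant-velocity model for the
    (x, y) centroid of a tracked face/hand region."""
    kf = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_step(kf, motion_mask):
    """One frame: predict, measure the motion-mask centroid, correct."""
    pred = kf.predict()
    ys, xs = np.nonzero(motion_mask)
    if len(xs) == 0:
        return pred[:2].ravel()          # no motion: keep the prediction
    z = np.array([[xs.mean()], [ys.mean()]], np.float32)
    est = kf.correct(z)
    return est[:2].ravel()               # smoothed (x, y) centroid
```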

Creating Deep Learning-based Acrobatic Videos Using Imitation Videos

  • Choi, Jong In;Nam, Sang Hun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 2
    • /
    • pp.713-728
    • /
    • 2021
  • This paper proposes an augmented reality technique that generates acrobatic scenes from hitting-motion videos. After a user shoots a video of a motion that mimics hitting an object with the hands or feet, the pose is analyzed with deep-learning-based motion tracking to follow the hand or foot movement during the hit. The hitting position and time are then extracted, and the object's trajectory is generated by physics optimization and synchronized with the video. The proposed method can create videos of hitting objects with the feet (e.g., soccer ball lifting) or the fists (e.g., tap ball) and is suitable for augmented reality applications that incorporate virtual objects.
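
The physics step can be pictured with a simple ballistic model: between two consecutive hit events the virtual object follows projectile motion, so the launch velocity that carries it from one hit point to the next is fixed by the two positions and the time gap. The helper below is a minimal sketch of that idea, not the paper's optimization.

```python
import numpy as np

G = np.array([0.0, -9.81, 0.0])  # gravity (m/s^2); y points up

def launch_velocity(p0, p1, dt):
    """Initial velocity so a projectile launched at p0 reaches p1 after
    dt seconds: p1 = p0 + v0*dt + 0.5*G*dt^2  =>  solve for v0."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    return (p1 - p0 - 0.5 * G * dt ** 2) / dt

def trajectory(p0, v0, dt, n=30):
    """Sample the flight between two hits for rendering/synchronization."""
    t = np.linspace(0.0, dt, n)[:, None]
    return np.asarray(p0, float) + v0 * t + 0.5 * G * t ** 2

# Example: two detected hits 0.6 s apart (illustrative coordinates).
v0 = launch_velocity([0.0, 1.0, 0.0], [0.3, 1.2, 0.0], 0.6)
path = trajectory([0.0, 1.0, 0.0], v0, 0.6)
```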

Automatic Gesture Recognition for Human-Machine Interaction: An Overview

  • Nataliia, Konkina
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 1
    • /
    • pp.129-138
    • /
    • 2022
  • With the increasing reliance on computing systems in our everyday life, there is a constant need to improve the ways users can interact with such systems more naturally, effectively, and conveniently. In the initial computing revolution, interaction between humans and machines was limited, and machines were not necessarily meant to be intelligent. This created the need for systems that can automatically identify and interpret our actions. Automatic gesture recognition is one of the popular ways users can control systems with their gestures; it covers various kinds of tracking, including the whole body, hands, head, and face. We also touch on a different line of work, including Brain-Computer Interfaces (BCI) and Electromyography (EMG), as potential additions to the gesture recognition regime. In this work, we present an overview of several applications of automated gesture recognition systems and a brief look at the popular methods employed.

A Proposal of Eye-Voice Method based on the Comparative Analysis of Malfunctions on Pointer Click in Gaze Interface for the Upper Limb Disabled

  • 박주현;박미현;임순범
    • Journal of Korea Multimedia Society
    • /
    • Vol. 23, No. 4
    • /
    • pp.566-573
    • /
    • 2020
  • Computers are the most common tool for using the Internet, and a mouse is typically used to select and execute objects. Eye tracking technology is welcomed as an alternative that helps users who cannot use their hands because of a disability to control a computer. However, the pointer execution methods of existing eye tracking techniques cause many malfunctions. In this paper, we therefore developed a gaze tracking interface combined with voice commands to solve the malfunction problem that arises when people with upper-limb disabilities use existing gaze tracking technology to execute computer menus and objects. Usability was verified through comparative experiments on the reduction of malfunctions. Users with upper-limb disabilities move the pointer with eye tracking while browsing the computer screen and use voice commands such as "okay" for instant clicks. The comparative experiments against existing gaze interfaces showed that our system, Eye-Voice, reduces the malfunction rate of pointer execution and is effective for people with upper-limb disabilities.
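
The interaction model can be pictured as a small event loop that moves the pointer with the latest gaze sample and fires a click only when a spoken keyword arrives. The gaze_position() and last_voice_command() helpers below are hypothetical stand-ins for an eye-tracker SDK and a speech recognizer, not APIs from the paper.

```python
import time

def gaze_position():
    """Hypothetical: return the latest gaze point as (x, y) screen coords."""
    return (640, 360)

def last_voice_command():
    """Hypothetical: return the most recently recognized keyword, or None."""
    return None

def move_pointer(x, y):
    print(f"pointer -> ({x}, {y})")

def click(x, y):
    print(f"click at ({x}, {y})")

def eye_voice_loop(trigger_word="okay", hz=30):
    """Gaze moves the pointer continuously; a spoken trigger word clicks.

    Separating pointing (gaze) from selection (voice) is what avoids the
    accidental activations of dwell- or blink-based clicking.
    """
    period = 1.0 / hz
    while True:
        x, y = gaze_position()
        move_pointer(x, y)
        if last_voice_command() == trigger_word:
            click(x, y)
        time.sleep(period)
```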

Stereo Vision Based 3-D Motion Tracking for Human Animation

  • Han, Seung-Il;Kang, Rae-Won;Lee, Sang-Jun;Ju, Woo-Suk;Lee, Joan-Jae
    • Journal of Korea Multimedia Society
    • /
    • Vol. 10, No. 6
    • /
    • pp.716-725
    • /
    • 2007
  • In this paper we describe a motion tracking algorithm for 3D human animation using a stereo vision system. Motion data for the end effectors of the human body are extracted by following their movement through a segmentation process in the HSI or RGB color model, and blob analysis is then used to detect robust shapes. When the two hands or two feet cross at some position and separate again, an adaptive algorithm recognizes which is the left one and which is the right one. Real motion is motion in 3D coordinates, whereas a monocular image provides only 2D coordinates and no distance from the camera. With stereo vision, as with human vision, we can recover 3D motion: horizontal and vertical movement as well as the distance of objects from the camera. This requires a depth value (z axis) in addition to the x and y image coordinates to obtain 3D coordinates; the depth value is computed from the stereo disparity of the end effectors only. The positions of the inner joints are then calculated, and the 3D character is visualized using inverse kinematics.
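
The disparity-to-depth step rests on the standard rectified-stereo relation Z = f·B/d. The sketch below back-projects an end effector's image coordinates to 3D using that relation; the focal length, baseline, and principal point are illustrative values.

```python
import numpy as np

def backproject(u, v, disparity, f=700.0, baseline=0.12, cx=320.0, cy=240.0):
    """Recover the 3D point of an end effector from a rectified stereo pair.

    u, v      : pixel coordinates in the left image
    disparity : u_left - u_right in pixels (must be > 0)
    f         : focal length in pixels; baseline in meters (illustrative)
    """
    z = f * baseline / disparity     # depth from disparity
    x = (u - cx) * z / f             # back-project to camera coordinates
    y = (v - cy) * z / f
    return np.array([x, y, z])

# Example: a hand blob at pixel (400, 250) with 35 px of disparity.
print(backproject(400, 250, 35.0))
```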


Dynamic Bayesian Network based Two-Hand Gesture Recognition

  • 석흥일;신봉기
    • Journal of KIISE: Software and Applications
    • /
    • Vol. 35, No. 4
    • /
    • pp.265-279
    • /
    • 2008
  • Human-computer interaction using hand gestures has been studied for a long time and has made great progress, but the results are still not satisfactory. In this paper, we propose a hand gesture recognition method based on a dynamic Bayesian network framework. Unlike approaches that use wired gloves, in camera-based approaches the results of the image processing and feature extraction stages strongly affect recognition performance. Before inference with the proposed gesture model, we perform skin color modeling and detection as well as motion tracking. Using a dynamic Bayesian network, which makes it easy to incorporate relations between features and new information into the model, we propose a new model that can recognize both two-hand and one-hand gestures. In experiments on ten isolated gestures, it achieved a high recognition rate of up to 99.59%. The proposed model and related methods should also be applicable to other problems such as sign language recognition.
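
The skin-color modeling step that precedes inference can be sketched as a single Gaussian over the Cb/Cr chrominance channels, fit from labeled skin pixels and thresholded on Mahalanobis distance; the training data and threshold here are illustrative assumptions, not the authors' model.

```python
import numpy as np

def fit_skin_gaussian(skin_pixels_cbcr):
    """Fit a Gaussian skin-color model to an (N, 2) array of Cb/Cr samples."""
    mean = skin_pixels_cbcr.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(skin_pixels_cbcr, rowvar=False))
    return mean, inv_cov

def skin_mask(cbcr_image, mean, inv_cov, max_mahalanobis=3.0):
    """Mark pixels whose Mahalanobis distance to the skin model is small.

    cbcr_image is an (H, W, 2) array of Cb/Cr values.
    """
    d = cbcr_image.reshape(-1, 2).astype(float) - mean
    m2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)   # squared Mahalanobis
    return (m2 < max_mahalanobis ** 2).reshape(cbcr_image.shape[:2])
```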

Real-Time Human Tracking Using Skin Area and Modified Multi-CAMShift Algorithm

  • 민재홍;김인규;백중환
    • Journal of Advanced Navigation Technology
    • /
    • Vol. 15, No. 6
    • /
    • pp.1132-1137
    • /
    • 2011
  • In this paper, for a system that tracks parts of the human body, we propose a multi-CAMShift (Continuously Adaptive Mean Shift) algorithm that extracts skin regions and tracks several regions at once. To extract skin regions from the input image, a threshold adaptive to skin color is applied based on specific RGB values, and initial windows are set on the extracted skin regions such as both hands and the face. While tracking these regions, a Gaussian background model is used to constrain each tracking region so that occlusions between the regions are avoided; in addition, a weight is applied to the occluded area so that the peak of the probability distribution image is shifted away from it. Experimental results show robust tracking of multiple objects and good performance even when objects of similar color occlude one another.
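
OpenCV's built-in CamShift can illustrate the per-region tracking loop: each region (face, each hand) keeps its own hue histogram and search window and is updated independently every frame from a back-projected skin-probability image. The occlusion weighting and background constraints of the modified algorithm are not reproduced in this sketch.

```python
import cv2

TERM = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def init_region(frame_bgr, window):
    """Build a hue histogram for one tracked region (face or a hand)."""
    x, y, w, h = window
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return {"hist": hist, "window": window}

def update_regions(frame_bgr, regions):
    """One multi-CAMShift step: each region is tracked independently."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    for r in regions:
        prob = cv2.calcBackProject([hsv], [0], r["hist"], [0, 180], 1)
        _, r["window"] = cv2.CamShift(prob, r["window"], TERM)
    return regions
```

In use, init_region would be called once per body part on the first frame (e.g., face and both hands), and update_regions once per subsequent frame.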