• Title/Summary/Keyword: predefined gesture

Object Detection Using Predefined Gesture and Tracking (약속된 제스처를 이용한 객체 인식 및 추적)

  • Bae, Dae-Hee; Yi, Joon-Hwan
    • Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.43-53 / 2012
  • In this paper, a gesture-based user interface is proposed, based on object detection using a predefined gesture and on tracking of the detected object. For object detection, moving objects in a frame are computed by comparing multiple previous frames, and the predefined gesture is used to identify the target object among those moving objects; any object performing the predefined gesture can be used for control. We also propose an object tracking algorithm, a density-based mean-shift algorithm, that uses the color distribution of the target object. The proposed tracking algorithm follows a target object crossing a background of similar color more accurately than existing techniques. Experimental results show that the proposed detection and tracking algorithms achieve higher detection capability with lower computational complexity.
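
A minimal sketch of color-distribution-based tracking in the spirit of this abstract; since the paper's exact density-based weighting is not given here, OpenCV's standard histogram back-projection and meanShift are used as a stand-in, and the region-of-interest handling is illustrative only.

```python
import cv2
import numpy as np

def init_tracker(frame, bbox):
    """Build a hue histogram of the target region given as (x, y, w, h)."""
    x, y, w, h = bbox
    roi = frame[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Ignore dark / desaturated pixels so the model captures the object's color.
    mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
    hist = cv2.calcHist([hsv_roi], [0], mask, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track(frame, hist, window):
    """Shift the search window (x, y, w, h) toward the densest back-projection region."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.meanShift(back_proj, window, criteria)
    return window
```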

Interactive visual knowledge acquisition for hand-gesture recognition (손 제스쳐 인식을 위한 상호작용 시각정보 추출)

  • 양선옥; 최형일
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.9 / pp.88-96 / 1996
  • Computer vision-based gesture recognition systems consist of image segmentation, object tracking, and decision stages. However, it is difficult to segment the gesturing object from an image because of various illuminations and backgrounds. In this paper, we describe a method to learn features for segmentation, which improves the performance of computer vision-based hand-gesture recognition systems. The system interacts with the user to acquire exact training data and segmentation information according to a predefined plan: it presents models to the user, captures images of the user's response, and then analyzes those images using the models and prior knowledge. The system sends messages to the user and runs a learning module to extract information from the analyzed results.
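
The abstract describes learning segmentation features from interactively acquired training data. A hedged sketch of that idea, assuming the guided session yields labeled skin pixels and that a single Gaussian color model is fitted (the paper's actual plan and features are not specified here):

```python
import numpy as np

def fit_color_model(sample_pixels):
    """sample_pixels: (N, 3) RGB values collected during the guided session."""
    mean = sample_pixels.mean(axis=0)
    cov = np.cov(sample_pixels.T) + 1e-6 * np.eye(3)   # regularize for invertibility
    return mean, np.linalg.inv(cov)

def segment(image, mean, inv_cov, threshold=9.0):
    """Mark pixels whose Mahalanobis distance to the learned color mean is small."""
    diff = image.reshape(-1, 3).astype(np.float64) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < threshold).reshape(image.shape[:2])
```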

Full-body Skeleton-based Motion Game System with Interactive Gesture Registration (상호작용적 제스처 등록이 가능한 전신 스켈레톤 기반 동작 게임 시스템)

  • Kim, Daehwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.419-420 / 2022
  • This paper presents a method that allows users to interactively register their own gestures for a motion-based game system. Existing motion-based game systems build recognizers by collecting predefined gesture data; however, this sometimes requires specialized expertise or rather involved procedures. To alleviate this complexity, we propose a full-body skeleton-based game system that can interactively register gestures.
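
A rough sketch of interactive gesture registration as described: a demonstrated skeleton sequence is stored as a template, and later sequences are matched by dynamic time warping. The DTW matcher, class names, and threshold are assumptions, not the paper's implementation.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW between two skeleton sequences, each of shape (T, J*3) flattened joints."""
    a, b = np.asarray(seq_a), np.asarray(seq_b)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

class GestureRegistry:
    """Users register their own gestures; recognition is nearest template under DTW."""
    def __init__(self, threshold=0.5):
        self.templates, self.threshold = {}, threshold

    def register(self, name, skeleton_sequence):
        self.templates[name] = skeleton_sequence

    def recognize(self, skeleton_sequence):
        if not self.templates:
            return None
        scores = {n: dtw_distance(skeleton_sequence, t) for n, t in self.templates.items()}
        best = min(scores, key=scores.get)
        return best if scores[best] < self.threshold else None
```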

Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition (바디 제스처 인식을 위한 기초적 신체 모델 인코딩과 선택적 / 비동시적 입력을 갖는 병렬 상태 기계)

  • Kim, Juchang; Park, Jeong-Woo; Kim, Woo-Hyun; Lee, Won-Hyong; Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.1-7 / 2013
  • Body gesture recognition has been one of the active research fields in Human-Robot Interaction (HRI). Most conventional body gesture recognition algorithms use a Hidden Markov Model (HMM) to model gestures, which have spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. Moreover, conventional algorithms must perform gesture segmentation first and then send the extracted gesture to the HMM for recognition; this separated pipeline causes a time delay between two consecutive gestures and makes the system inappropriate for continuous gesture recognition. To overcome these two limitations, this paper suggests primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, and a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. The experimental results showed that the proposed gesture recognition system using primitive body model encoding and SAI-PSM can exclude meaningless gestures well from continuous body model data while performing multiple simultaneous gesture recognition without losing recognition rates compared to previous HMM-based work.
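
An illustrative sketch of the two named ideas, with assumed details: each link's motion is quantized into a small set of primitive codes, and the code stream feeds per-gesture state machines that advance only on their expected codes. The code set, thresholds, and reset policy are placeholders.

```python
import numpy as np

PRIMITIVES = {0: "still", 1: "up", 2: "down", 3: "left", 4: "right"}  # assumed code set

def encode_link_motion(prev_pos, cur_pos, eps=0.02):
    """Map one link's displacement vector to a primitive code."""
    d = np.asarray(cur_pos, dtype=np.float64) - np.asarray(prev_pos, dtype=np.float64)
    if np.linalg.norm(d) < eps:
        return 0
    axis = int(np.argmax(np.abs(d[:2])))        # dominant image-plane axis
    if axis == 1:
        return 1 if d[1] < 0 else 2             # image y grows downward
    return 3 if d[0] < 0 else 4

class GestureStateMachine:
    """Advances only when the next expected primitive code arrives."""
    def __init__(self, name, code_sequence):
        self.name, self.codes, self.state = name, code_sequence, 0

    def feed(self, code):
        if code == self.codes[self.state]:
            self.state += 1
            if self.state == len(self.codes):   # final state reached
                self.state = 0
                return self.name                # gesture recognized
        elif code != 0:
            self.state = 0                      # meaningless motion resets (assumed policy)
        return None
```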

A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications / v.9 no.1 / pp.57-64 / 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that do not use physical devices. One of them is the vision-based gesture recognition that this paper deals with. In this paper, we define some gestures for interaction with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from low-resolution hand images captured by a webcam. We model skin color with two Gaussian distributions in RGB color space and use a blob-matching method to detect and track the hands. Applying the flood-fill algorithm, we extract hand silhouettes and recognize the hand shapes Thumb-Up, Palm, and Cross by detecting and analyzing their modes. Then, by analyzing the context of hand movement, we recognize five predefined one-hand or two-hand gestures. Assuming that a single main user appears for accurate hand detection, the proposed gesture recognition method has proved its efficiency and accuracy in many real-time demos.
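
A hedged sketch of the preprocessing steps named in the abstract: a two-component Gaussian skin-color model in RGB followed by flood fill to pull out a hand silhouette. The parameters and thresholds are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def skin_distance(image, means, inv_covs):
    """Smallest Mahalanobis distance to either Gaussian skin component, per pixel."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    best = np.full(len(pixels), np.inf)
    for mean, inv_cov in zip(means, inv_covs):
        diff = pixels - mean
        d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        best = np.minimum(best, d2)
    return best.reshape(image.shape[:2])            # small value = skin-like

def hand_silhouette(skin_mask, seed):
    """Flood-fill the binary (0/255, uint8) skin mask from a seed inside the hand blob."""
    # e.g. skin_mask = (skin_distance(img, means, inv_covs) < 9.0).astype(np.uint8) * 255
    mask = np.zeros((skin_mask.shape[0] + 2, skin_mask.shape[1] + 2), np.uint8)
    filled = skin_mask.copy()
    cv2.floodFill(filled, mask, seed, 255)          # seed is an (x, y) integer tuple
    return mask[1:-1, 1:-1]                         # 1 where the fill reached
```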

A Study of Pattern-based Gesture Interaction in Tabletop Environments (테이블탑 환경에서 패턴 기반의 제스처 인터렉션 방법 연구)

  • Kim, Gun-Hee; Cho, Hyun-Chul; Pei, Wen-Hua; Ha, Sung-Do; Park, Ji-Hyung
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.696-700 / 2009
  • In this paper, we present a framework that enables users to interact naturally with hand gestures on a digital table. In typical tabletop applications, one gesture is mapped to one function or command, so users must know these mappings and make predefined gestures as input. In contrast, users can make input gestures without this cognitive load in our system. Instead of burdening users, the system possesses knowledge about gesture interaction and proactively infers users' gestures and intentions. When a user makes a gesture on the digital surface, the system analyzes the gesture and designs the response according to the user's intention.

Motion-Understanding Cell Phones for Intelligent User Interaction and Entertainment (지능형 UI와 Entertainment를 위한 동작 이해 휴대기기)

  • Cho, Sung-Jung; Choi, Eun-Seok; Bang, Won-Chul; Yang, Jing; Cho, Joon-Kee; Ki, Eun-Kwang; Sohn, Jun-Il; Kim, Dong-Yoon; Kim, Sang-Ryong
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.684-691 / 2006
  • As many functionalities such as cameras and MP3 players converge onto mobile phones, more intuitive and interesting interaction methods are essential. In this paper, we present applications and their enabling technologies for gesture-interactive cell phones. They employ a gesture recognition algorithm and a real-time shake detection algorithm to support motion-based user interfaces and entertainment applications, respectively. The gesture recognition algorithm classifies users' movements into one of the predefined gestures by modeling basic components of the acceleration signals and their relationships; the recognition performance is further enhanced by discriminating frequently confused classes with support vector machines. The shake detection algorithm detects in real time the exact moment when the phone is shaken significantly, using the variance and mean of the acceleration signals. The gesture interaction algorithms show performance reliable enough for commercialization: with 100 novice users, the average recognition rate was 96.9% on 11 gestures (digits 1-9, O, X), and users' movements were detected in real time. We have applied these motion-understanding technologies to Samsung cell phones in the Korean, American, Chinese, and European markets since May 2005.
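
A small sketch of shake detection as the abstract describes it: flag the moment when the variance of recent acceleration samples around their mean exceeds a threshold. Window size and threshold are assumptions for illustration.

```python
from collections import deque
import math

class ShakeDetector:
    def __init__(self, window=20, var_threshold=4.0):
        self.samples = deque(maxlen=window)     # recent acceleration magnitudes
        self.var_threshold = var_threshold

    def update(self, ax, ay, az):
        """Feed one accelerometer sample; return True at a detected shake moment."""
        self.samples.append(math.sqrt(ax * ax + ay * ay + az * az))
        if len(self.samples) < self.samples.maxlen:
            return False
        mean = sum(self.samples) / len(self.samples)
        var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
        return var > self.var_threshold
```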

Hand Motion Recognition Algorithm Using Skin Color and Center of Gravity Profile (피부색과 무게중심 프로필을 이용한 손동작 인식 알고리즘)

  • Park, Youngmin
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.411-417 / 2021
  • Human-computer interaction (HCI) is the field that studies how humans and computers communicate with each other and exchange information. This study addresses hand gesture recognition for human interaction: it examines the problems of existing recognition methods and proposes an algorithm to improve the recognition rate. The hand region is extracted from an image containing the shape of a human hand based on skin color information, and the center-of-gravity profile is calculated using principal component analysis. We propose a method to increase the recognition rate of hand gestures by comparing the obtained information with predefined shapes. The existing center-of-gravity profile misrecognizes hand gestures when the hand is deformed by rotation; in this study, the contour point farthest from the center of gravity is taken as the starting point of the profile, yielding a more robust, re-improved center-of-gravity profile. No gloves or special markers attached to a sensor are used for hand gesture recognition, and no separate blue screen is installed. To obtain this result, the feature vector at the nearest distance is found to resolve misrecognition, and an appropriate threshold is obtained to distinguish success from failure.
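
A hedged sketch of a center-of-gravity profile as described: distances from the contour points to the hand's centroid, reordered so the profile starts at the contour point farthest from the centroid, then compared against stored shape templates. Sampling length, normalization, and the nearest-template classifier are assumptions.

```python
import numpy as np

def cog_profile(contour, n_samples=64):
    """contour: (N, 2) array of ordered hand-contour points."""
    contour = np.asarray(contour, dtype=np.float64)
    cog = contour.mean(axis=0)                       # center of gravity
    dists = np.linalg.norm(contour - cog, axis=1)
    start = int(np.argmax(dists))                    # farthest point = starting point
    rolled = np.roll(dists, -start)
    # Resample to a fixed length and normalize scale for comparison with templates.
    idx = np.linspace(0, len(rolled) - 1, n_samples).astype(int)
    profile = rolled[idx]
    return profile / profile.max()

def classify(profile, templates):
    """Pick the predefined shape whose stored profile is nearest (Euclidean)."""
    names = list(templates)
    errors = [np.linalg.norm(profile - templates[n]) for n in names]
    best = int(np.argmin(errors))
    return names[best], errors[best]
```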

Hidden Markov Model for Gesture Recognition (제스처 인식을 위한 은닉 마르코프 모델)

  • Park, Hye-Sun; Kim, Eun-Yi; Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.1 s.307 / pp.17-26 / 2006
  • This paper proposes a novel hidden Markov model (HMM)-based gesture recognition method and applies it to an HCI to control a computer game. The novelty of the proposed method is two-fold: 1) the proposed method uses a continuous stream of human motion as the input to the HMM instead of isolated or pre-segmented data sequences, and 2) gesture segmentation and recognition are performed simultaneously. The proposed method consists of a single HMM composed of thirteen gesture-specific HMMs that independently recognize certain gestures. It takes a continuous stream of pose symbols as input, where a pose is composed of coordinates that indicate the face, left hand, and right hand. Whenever a new input pose arrives, the HMM continuously updates its state probabilities and then recognizes a gesture if the probability of a distinctive state exceeds a predefined threshold. To assess the validity of the proposed method, it was applied to a real game, Quake II, and the results demonstrated that the proposed HMM provides very useful information to enhance the discrimination between different classes and to reduce the computational cost.
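
A minimal sketch of the continuous-stream idea, assuming a single gesture-specific HMM: forward state probabilities are updated for every incoming pose symbol, and a gesture is reported when a designated final state becomes sufficiently probable. The matrices and threshold are stand-ins, not the paper's thirteen-gesture model.

```python
import numpy as np

class StreamingHMM:
    def __init__(self, trans, emit, start, final_state, threshold=0.8):
        self.trans, self.emit = trans, emit          # (S, S) and (S, V) matrices
        self.alpha = np.asarray(start, dtype=np.float64)  # current state probabilities
        self.final_state, self.threshold = final_state, threshold

    def step(self, symbol):
        """Update state probabilities with one pose symbol; return True on detection."""
        self.alpha = (self.alpha @ self.trans) * self.emit[:, symbol]
        total = self.alpha.sum()
        if total > 0:
            self.alpha /= total                      # normalize to avoid underflow
        return self.alpha[self.final_state] > self.threshold
```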

Design and Implementation of Immersive Media System Based on Dynamic Projection Mapping and Gesture Recognition (동적 프로젝션 맵핑과 제스처 인식 기반의 실감 미디어 시스템 설계 및 구현)

  • Kim, Sang Joon; Koh, You Jon; Choi, Yoo-Joo
    • KIPS Transactions on Software and Data Engineering / v.9 no.3 / pp.109-122 / 2020
  • Recently, projection mapping has attracted high attention in the field of realistic media as a technology that increases users' immersion. However, most existing methods perform projection mapping on static objects. In this paper, we developed a technology to track users' movements and dynamically map media content onto their bodies. The projected media content is created through predefined gestures using only the user's bare hands, without special devices. An interactive immersive media system has been implemented by integrating this dynamic projection mapping technology with gesture-based drawing technology. The proposed realistic media system recognizes the movements and open/closed states of the user's hands and selects the functions needed to draw a picture. Users can freely draw by changing the brush color using the colors of real objects. In addition, the drawing is dynamically projected onto the user's body, allowing the user to design and "wear" a t-shirt in real time.
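
A loose sketch of the gesture-to-drawing mapping the abstract suggests: the open/closed state of the tracked hand toggles between moving the brush and drawing. The open/closed test here (contour-to-convex-hull area ratio) is an assumed stand-in for the paper's recognition method.

```python
import cv2
import numpy as np

def hand_is_open(hand_contour, ratio_threshold=0.8):
    """A closed fist nearly fills its convex hull; an open hand does not."""
    hull = cv2.convexHull(hand_contour)
    area = cv2.contourArea(hand_contour)
    hull_area = cv2.contourArea(hull)
    return hull_area > 0 and (area / hull_area) < ratio_threshold

def update_canvas(canvas, hand_pos, hand_contour, brush_color, radius=6):
    """Draw on the canvas only while the hand is closed (i.e. 'holding the brush')."""
    if not hand_is_open(hand_contour):
        cv2.circle(canvas, hand_pos, radius, brush_color, -1)
    return canvas
```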