• Title/Summary/Keyword: Gesture-based Interaction

Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition (바디 제스처 인식을 위한 기초적 신체 모델 인코딩과 선택적 / 비동시적 입력을 갖는 병렬 상태 기계)

  • Kim, Juchang;Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.8 no.1 / pp.1-7 / 2013
  • Body gesture recognition has been one of the research fields of interest in Human-Robot Interaction (HRI). Most conventional body gesture recognition algorithms use Hidden Markov Models (HMMs) to model gestures, which have spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. In addition, conventional algorithms must perform gesture segmentation first and then send the extracted gesture to the HMM for recognition. This separated pipeline causes a time delay between two consecutive gestures and makes the system inappropriate for continuous gesture recognition. To overcome these two limitations, this paper proposes primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, and a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. The experimental results show that the proposed system can reliably exclude meaningless gestures from continuous body model data while performing multiple simultaneous gesture recognition without losing recognition rate compared to the previous HMM-based work.
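
The encoding and parallel-machine idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's algorithm: the directional codes, gesture vocabularies, and the simplified no-reset matching rule are all assumptions.

```python
import math

# Hypothetical primitive encoding: quantize a 2D joint displacement
# into one of 8 directional codes (0-7), or -1 for "no motion"
# (the spatial quantization threshold eps is an assumption).
def primitive_code(dx, dy, eps=0.05):
    if math.hypot(dx, dy) < eps:
        return -1                          # motion too small to encode
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int((angle + math.pi / 8) // (math.pi / 4)) % 8

# One state machine per gesture; it advances only on the code it
# currently "selects" and silently ignores everything else, so many
# machines can run in parallel over one asynchronous code stream.
class GestureMachine:
    def __init__(self, name, code_sequence):
        self.name, self.seq, self.state = name, code_sequence, 0

    def feed(self, code):
        if code == self.seq[self.state]:
            self.state += 1
            if self.state == len(self.seq):
                self.state = 0
                return self.name           # full sequence seen: recognized
        return None                        # simplified: no reset on mismatch

# Run several machines in parallel over one code stream.
machines = [GestureMachine("wave", [2, 6, 2, 6]),
            GestureMachine("push", [0, 0])]
stream = [2, -1, 6, 3, 2, 6]
hits = [m.feed(c) for c in stream for m in machines]
print([h for h in hits if h])
```

Meaningless codes (the `-1` and the stray `3` above) simply fail to advance any machine, which is the intuition behind excluding meaningless gestures without a separate segmentation step.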

Interface of Interactive Contents using Vision-based Body Gesture Recognition (비전 기반 신체 제스처 인식을 이용한 상호작용 콘텐츠 인터페이스)

  • Park, Jae Wan;Song, Dae Hyun;Lee, Chil Woo
    • Smart Media Journal / v.1 no.2 / pp.40-46 / 2012
  • In this paper, we describe interactive content that uses vision-based body gesture recognition as its input interface. Because the content takes the imp, a figure common to Asian culture, as its subject, players can enjoy it with cultural familiarity; and since players use their own gestures to fight the imp in the game, they are naturally absorbed in it. Users can also choose among multiple endings at the end of the scenario. For gesture recognition, Kinect is used to obtain the three-dimensional coordinates of each joint of the limbs and to capture the static poses that make up the actions. Vision-based 3D human pose recognition is a method for conveying human gestures in HCI (Human-Computer Interaction). A 2D pose model can recognize only simple 2D human poses in a particular environment; a 3D pose model, which describes the 3D human skeletal structure, can recognize more complex poses because it can use joint angles and the shape information of body parts. Because gestures can be presented as sequences of static poses, we recognize gestures composed of such poses using an HMM. Using the gesture recognition results as the input interface, the content can be controlled naturally by the user's gestures alone, and the real-time interaction with the imp is intended to improve immersion and interest.
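
The "gesture as a sequence of static poses scored by an HMM" step can be sketched with the standard forward algorithm over discrete pose symbols. This is a generic illustration with hypothetical numbers, not the paper's trained model.

```python
# Score a sequence of discrete pose symbols against an HMM with the
# forward algorithm; the gesture whose model gives the highest
# probability would be the recognition result.
def forward_prob(obs, start, trans, emit):
    """obs: symbol indices; start[i], trans[i][j], emit[i][k] are probabilities."""
    alpha = [start[i] * emit[i][obs[0]] for i in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha)))
                 * emit[j][o] for j in range(len(alpha))]
    return sum(alpha)

# Two-state left-right HMM over two pose symbols (all numbers hypothetical).
start = [1.0, 0.0]
trans = [[0.5, 0.5], [0.0, 1.0]]   # left-right: no backward transitions
emit  = [[0.9, 0.1], [0.1, 0.9]]   # state 0 favors pose 0, state 1 pose 1
p = forward_prob([0, 0, 1], start, trans, emit)
```

In practice each captured Kinect pose would first be quantized to the nearest pose symbol before scoring.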

Human Robot Interaction Using Face Direction Gestures

  • Kwon, Dong-Soo;Bang, Hyo-Choong
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.171.4-171 / 2001
  • This paper proposes a method of human-robot interaction (HRI) using face-directional gestures. A single CCD color camera is used to capture the face region, and the robot recognizes the face-directional gesture from the positions of the facial features. One can give commands such as stop, go, left turn, and right turn to the robot using these gestures. Since the robot also has ultrasonic sensors, it can detect obstacles and determine a safe direction at its current position. By combining the user's command with the sensed obstacle configuration, the robot selects a safe and efficient motion direction. Simulation results show that the robot with HRI is more reliable in navigation.
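
The "combine the user's command with the sensed obstacle configuration" step might look like the following sketch: among the headings the ultrasonic sensors report as clear, pick the one closest to the commanded direction. The heading set, safety threshold, and tie-breaking rule are assumptions for illustration.

```python
# Hypothetical fusion of a face-direction command with ultrasonic
# range readings: choose the safe heading closest to the command.
def choose_heading(command_deg, ranges, safe_dist=0.5):
    """ranges: {heading_deg: measured distance in meters}."""
    safe = [h for h, d in ranges.items() if d >= safe_dist]
    if not safe:
        return None                       # no safe direction: stop
    return min(safe, key=lambda h: abs(h - command_deg))

# "Go straight" command, but the front and front-left are blocked,
# so the robot deflects to the nearest clear heading.
ranges = {-90: 2.0, -45: 0.3, 0: 0.2, 45: 1.5, 90: 2.0}
heading = choose_heading(0, ranges)
```

Returning `None` corresponds to the stop command when every direction is obstructed.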

A Controlled Study of Interactive Exhibit based on Gesture Image Recognition (제스처 영상 인식기반의 인터렉티브 전시용 제어기술 연구)

  • Cha, Jaesang;Kang, Joonsang;Rho, Jung-Kyu;Choi, Jungwon;Koo, Eunja
    • Journal of Satellite, Information and Communications / v.9 no.1 / pp.1-5 / 2014
  • Recently, buildings have rapidly become more intelligent owing to industrial development, and people seek comfort, efficiency, and convenience in office and living environments. As smart TVs and smartphones have become widely available, interest in interaction between humans and devices has increased. Various interaction methods have been studied, but using a dedicated controller brings discomfort and limitations. In this paper, a user can easily interact with and control LEDs using Kinect and hand gestures, without any controller. We designed an interface that controls LEDs using the joint information of gestures obtained from Kinect; with the implemented interface, a user can control each LED individually through hand movements. We expect the developed interface to be useful for LED control and in various other fields.
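
A mapping from Kinect joint positions to LED commands could be sketched as below. The joint names, coordinate convention (y grows upward), and command strings are hypothetical, not the paper's interface.

```python
# Hypothetical rule: raising a hand above the head switches an LED
# channel; joints is a dict of joint name -> (x, y, z) in meters.
def gesture_command(joints):
    head_y = joints["head"][1]
    if joints["hand_right"][1] > head_y:
        return "LED1_ON"                  # right hand raised
    if joints["hand_left"][1] > head_y:
        return "LED2_ON"                  # left hand raised
    return "NO_OP"                        # no recognized gesture

# One captured skeleton frame (coordinates are made-up values).
pose = {"head": (0.0, 1.7, 2.0),
        "hand_right": (0.3, 1.9, 2.0),
        "hand_left": (-0.3, 1.0, 2.0)}
cmd = gesture_command(pose)
```

A real system would debounce over several frames before issuing the command, so a momentary tracking glitch does not toggle the LED.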

Human-Computer Interaction Survey for Intelligent Robot (지능형 로봇을 위한 인간-컴퓨터 상호작용(HCI) 연구동향)

  • Hong, Seok-Ju;Lee, Chil-Woo
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.507-511 / 2006
  • An intelligent robot is defined as a system that judges autonomously based on sensory organs such as sight and hearing, analogously to a human. Humans communicate using nonverbal means such as gesture in addition to language; if a robot understands such nonverbal communication, it may become more familiar to humans. HCI (Human-Computer Interaction) technologies, including face recognition and gesture recognition, are being studied vigorously, but many problems remain to be solved under real conditions. In this paper, we introduce the importance of such contents and give application examples of the technology, focusing on recent research results in gesture recognition as one of the most natural means of communication with humans.

Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • Min, Byung-Woo;Yoon, Ho-Sub;Soh, Jung;Ejima, Toshiaki
    • Journal of KIISE: Software and Applications / v.26 no.1 / pp.157-157 / 1999
  • Research on gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications, such as HCI (Human-Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as data gloves or color-marked gloves. In this research, we capture gesture images without external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm on a polar-coordinate space, to which the point-tokens on the Cartesian space are converted. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred data samples per gesture were collected from twenty persons: fifty for training and another fifty for the recognition experiment. The result shows about a 95% recognition rate and suggests that these results can be applied to several potential systems operated by gestures. The developed system runs in real time for editing basic graphic primitives on a Pentium Pro (200 MHz) with a Matrox Meteor graphics board and a CCD camera, under Windows 95 and Visual C++.
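
The Cartesian-to-polar token conversion mentioned above can be sketched briefly; each pair of successive trajectory points becomes an (r, θ) motion token, which a quantizer such as LBG would then map to a codebook index. This is an illustrative sketch, not the paper's exact feature definition.

```python
import math

# Convert a point-token trajectory on the Cartesian plane into
# (magnitude, direction) motion tokens on a polar-coordinate space.
def polar_tokens(points):
    tokens = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        tokens.append((math.hypot(dx, dy), math.atan2(dy, dx)))
    return tokens

# A short rightward-then-upward trajectory yields two motion tokens.
traj = [(0, 0), (1, 0), (1, 1)]
toks = polar_tokens(traj)
```

Working in polar coordinates makes the direction of motion explicit, which is what the discrete HMM ultimately models.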

An Emotional Gesture-based Dialogue Management System using Behavior Network (행동 네트워크를 이용한 감정형 제스처 기반 대화 관리 시스템)

  • Yoon, Jong-Won;Lim, Sung-Soo;Cho, Sung-Bae
    • Journal of KIISE: Software and Applications / v.37 no.10 / pp.779-787 / 2010
  • Since robots have come into wide use recently, research on human-robot communication is actively in progress. Typically, natural language processing or gesture generation has been applied to human-robot interaction. However, existing methods for communication between robots and humans are limited to static communication, so a method for more natural and realistic interaction is required. In this paper, an emotional gesture-based dialogue management system is proposed for sophisticated human-robot communication. The proposed system communicates using Bayesian networks and pattern matching, and generates the robot's emotional gestures in real time while the user talks with it; through emotional gestures, the robot can communicate more efficiently and realistically. We use behavior networks for gesture generation to deal with dialogue situations that change dynamically. Finally, we designed a usability test to confirm the usefulness of the proposed system by comparison with an existing dialogue system.
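
The behavior-network selection step can be caricatured as follows: each gesture behavior has preconditions on the current dialogue state and an activation level, and the most activated executable behavior is chosen. This is a minimal activation-selection sketch with hypothetical behaviors and numbers, not the paper's network (which also spreads activation between behaviors).

```python
# Pick the most activated behavior whose preconditions hold in the
# current dialogue state (state is a set of true propositions).
def select_behavior(behaviors, state):
    runnable = [b for b in behaviors if b["pre"] <= state]
    if not runnable:
        return None
    return max(runnable, key=lambda b: b["activation"])["name"]

behaviors = [
    {"name": "nod",   "pre": {"user_speaking"},         "activation": 0.6},
    {"name": "wave",  "pre": {"greeting"},              "activation": 0.9},
    {"name": "shrug", "pre": {"question", "no_answer"}, "activation": 0.7},
]
chosen = select_behavior(behaviors, {"greeting", "user_speaking"})
```

In a full behavior network the activation values would be updated each cycle by the dialogue situation rather than fixed.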

Accelerometer-based Mobile Game Using the Gestures and Postures (제스처와 자세를 이용한 가속도센서 기반 모바일 게임)

  • Baek, Jong-Hun;Jang, Ik-Jin;Yun, Byoung-Ju
    • Proceedings of the IEEK Conference / 2006.06a / pp.379-380 / 2006
  • With the growth of sensor-enabled mobile devices such as PDAs, cellular phones, and other computing devices, users can now enjoy diverse digital contents everywhere and anytime. However, the interfaces of mobile applications are often unnatural due to limited resources and miniaturized input/output; users feel this problem especially in applications such as mobile games. Therefore, novel interaction forms have been developed to complement the poor user interface of the mobile device and to increase interest in the mobile game. In this paper, we describe a demonstration of gesture and posture input supported by an accelerometer. The application example we created is the AM-Fishing game on a mobile device, which employs the accelerometer as the main interaction modality. The demo shows the usability of gesture and posture interaction.
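
Posture sensing from a 3-axis accelerometer typically reduces to tilt estimation from the static gravity vector; the sketch below shows the standard trigonometry. The axis convention, threshold, and posture names are assumptions, not the paper's mapping.

```python
import math

# Static gravity readings (in g) give pitch and roll tilt angles.
def tilt_deg(ax, ay, az):
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Classify a posture from the tilt angles (threshold is hypothetical).
def posture(ax, ay, az, thresh=30.0):
    pitch, roll = tilt_deg(ax, ay, az)
    if pitch > thresh:
        return "tilt_forward"
    if pitch < -thresh:
        return "tilt_back"
    if roll > thresh:
        return "tilt_right"
    if roll < -thresh:
        return "tilt_left"
    return "flat"

p = posture(0.0, 0.0, 1.0)   # device lying flat: gravity entirely on z
```

Dynamic gestures (a casting flick in a fishing game, say) would instead look at the high-frequency part of the signal rather than the static tilt.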

A Study of Hand Gesture Recognition for Human Computer Interface (컴퓨터 인터페이스를 위한 Hand Gesture 인식에 관한 연구)

  • Chang, Ho-Jung;Baek, Han-Wook;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 2000.07d / pp.3041-3043 / 2000
  • The GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). The GUI-based style of interaction has made computers simpler and easier to use. However, GUIs do not easily support the range of interaction necessary to meet users' needs for natural, intuitive, and adaptive interfaces. In this paper, we study an approach to tracking and recognizing a hand in each frame of an image sequence, to replace the mouse as a pointing device for virtual reality. An algorithm for real-time processing is proposed that estimates the position of the hand and performs segmentation, considering the orientation of motion and the color distribution of the hand region.
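
Color-based hand segmentation of the kind described can be sketched with a crude per-pixel skin rule followed by a centroid estimate for the pointer position. The RGB thresholds below are a commonly used rough heuristic, not the paper's model.

```python
# Keep pixels whose RGB values fall inside a crude skin-color range;
# img is a list of rows of (r, g, b) tuples with values 0-255.
def skin_mask(img):
    return [[(r > 95 and g > 40 and b > 20 and r > g and r > b)
             for (r, g, b) in row] for row in img]

# The centroid of the skin pixels serves as the hand-position estimate.
def centroid(mask):
    pts = [(x, y) for y, row in enumerate(mask)
           for x, on in enumerate(row) if on]
    if not pts:
        return None                       # no hand found in this frame
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

# A tiny 2x2 "frame": two skin-colored pixels on the left column.
img = [[(200, 120, 90), (10, 10, 10)],
       [(210, 130, 100), (0, 0, 255)]]
c = centroid(skin_mask(img))
```

Tracking the centroid across frames, together with motion orientation as the abstract notes, keeps the estimate stable enough to drive a pointer.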

Hand gesture recognition for player control

  • Shi, Lan Yan;Kim, Jin-Gyu;Yeom, Dong-Hae;Joo, Young-Hoon
    • Proceedings of the KIEE Conference / 2011.07a / pp.1908-1909 / 2011
  • Hand gesture recognition has been widely used in virtual reality and HCI (Human-Computer Interaction) systems, and it is a challenging and interesting subject in the vision-based area. Existing approaches to vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking, and gesture recognition. The purpose of this paper is to combine a finite state machine (FSM) with a gesture recognition method in order to control Windows Media Player: play/pause, next, previous, and volume up/down.
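
The FSM side of such a combination can be sketched as a transition table from (player state, recognized gesture) to (next state, command). The state and gesture names below are assumptions for illustration, not the paper's definitions.

```python
# (state, gesture) -> (next state, player command); unknown pairs
# leave the state unchanged and issue no command.
TRANSITIONS = {
    ("paused",  "open_palm"):   ("playing", "play"),
    ("playing", "open_palm"):   ("paused",  "pause"),
    ("playing", "swipe_right"): ("playing", "next"),
    ("playing", "swipe_left"):  ("playing", "previous"),
    ("playing", "point_up"):    ("playing", "volume_up"),
    ("playing", "point_down"):  ("playing", "volume_down"),
}

def step(state, gesture):
    return TRANSITIONS.get((state, gesture), (state, None))

# Feed a stream of recognized gestures through the machine.
state, cmds = "paused", []
for g in ["open_palm", "swipe_right", "open_palm"]:
    state, cmd = step(state, g)
    if cmd:
        cmds.append(cmd)
```

Gating commands through the FSM means a gesture like `swipe_right` only takes effect in states where it is meaningful, which filters out spurious recognitions.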
