• Title/Abstract/Keyword: Gesture-based interaction

Search results: 152

The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV

  • Jo, Chun-Ik;Lim, Ji-Hyoun;Park, Jun
    • Journal of the Ergonomics Society of Korea / Vol. 31, No. 4 / pp.525-531 / 2012
  • Objective: The aim of this study is to investigate the learnability of gesture-command pairs when people use gestures to control a device. Background: In vision-based gesture recognition systems, the choice of gesture-command pairing is critical to learnability and usability. Subjective preference and its agreement score, used in a previous study (Lim et al., 2012), were used to group four gesture-command pairings. To quantify learnability, two learning models, an average time model and a marginal time model, were used. Method: Two sets of eight gestures (sixteen gestures in total) were compiled from the agreement score and preference data. Fourteen participants, divided into two groups, memorized one set of gesture-command pairs and then performed the gestures. For a given command, the time to recall the paired gesture was collected. Results: The average recall time for initial trials differed by preference and agreement score, as did the learning rate R derived from the two learning models. Conclusion: Preference and agreement score influenced the learning of gesture-command pairs. Application: This study can be applied to any device that adopts a gesture interaction system for device control.
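The abstract does not give the equations of the two learning models; purely as an illustration, a power-law learning curve T_n = T_1 · n^(-r) is a common way to extract a learning rate r from per-trial recall times. A minimal NumPy sketch (function name and recall-time data are hypothetical):

```python
import numpy as np

def learning_rate(times):
    """Fit a power-law learning curve T_n = T_1 * n**(-r) to recall times.

    Returns (T_1, r): estimated first-trial time and learning rate.
    The fit is an ordinary least-squares line in log-log space:
    log T_n = log T_1 - r * log n.
    """
    n = np.arange(1, len(times) + 1)
    slope, intercept = np.polyfit(np.log(n), np.log(times), 1)
    return float(np.exp(intercept)), float(-slope)

# Hypothetical recall times (seconds) over five trials for one pairing
t1, r = learning_rate([4.0, 2.83, 2.31, 2.0, 1.79])
```

A larger r indicates faster learning of the pairing, which is the kind of comparison the study draws between preference-based and agreement-based gesture sets.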

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / Vol. 7, No. 1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed interface recognizes a system action (e.g. a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart-environment scenario in which a user interacts, through gestures, with digital information embedded in physical objects.
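The abstract does not spell out the probabilistic framework; assuming a naive-Bayes-style fusion in which the observed gesture and its spatial target are conditionally independent given the intended command, the integration step might be sketched like this (all commands, labels, and probability tables are hypothetical):

```python
import numpy as np

commands = ["turn_on", "turn_off", "open"]
p_gesture = {  # P(observed gesture | command), one entry per command
    "wave":  np.array([0.7, 0.2, 0.1]),
    "point": np.array([0.2, 0.2, 0.6]),
}
p_context = {  # P(observed gesture target | command)
    "lamp": np.array([0.8, 0.8, 0.1]),
    "door": np.array([0.1, 0.1, 0.9]),
}
prior = np.array([1 / 3, 1 / 3, 1 / 3])  # P(command)

def recognize(gesture, target):
    """Fuse gesture and spatial context: posterior ∝ P(g|c) P(t|c) P(c)."""
    post = p_gesture[gesture] * p_context[target] * prior
    post /= post.sum()
    return commands[int(np.argmax(post))], post

cmd, post = recognize("point", "door")
```

The point of the fusion is that an ambiguous gesture ("point") becomes a confident command once the spatial target ("door") is taken into account.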

Road Traffic Control Gesture Recognition using Depth Images

  • Le, Quoc Khanh;Pham, Chinh Huu;Le, Thanh Ha
    • IEIE Transactions on Smart Processing and Computing / Vol. 1, No. 1 / pp.1-7 / 2012
  • This paper presents a system that automatically recognizes the road traffic control gestures of police officers. In this approach, the control gestures of traffic police officers are captured as depth images. A human skeleton is then constructed using a kinematic model. The feature vector describing a traffic control gesture is built from the relative angles among the joints of the constructed skeleton. We use Support Vector Machines (SVMs) to perform the gesture recognition. Experiments show that the proposed method is robust, efficient, and suitable for real-time application. We also present a testbed system, based on the trained SVMs, for real-time traffic gesture recognition.
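As a rough illustration of the classification stage only, the sketch below trains an SVM on synthetic joint-angle vectors standing in for the paper's skeleton features (scikit-learn's SVC, the class labels, and the angle values are all assumptions, not the paper's implementation):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: each sample is a vector of relative joint
# angles (radians) from a skeleton; two gesture classes, "stop" and "go".
stop = rng.normal(loc=[1.5, 0.2, 0.1, 1.5], scale=0.05, size=(30, 4))
go = rng.normal(loc=[0.3, 1.2, 1.1, 0.4], scale=0.05, size=(30, 4))
X = np.vstack([stop, go])
y = np.array([0] * 30 + [1] * 30)  # 0 = stop, 1 = go

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# A new pose near the "go" prototype should be classified as class 1.
pred = int(clf.predict([[0.32, 1.18, 1.12, 0.41]])[0])
```

Because joint angles are relative, the feature vector is largely invariant to the officer's position in the frame, which is what makes this representation attractive for depth-image input.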


Towards Establishing a Touchless Gesture Dictionary based on User Participatory Design

  • Song, Hae-Won;Kim, Huhn
    • Journal of the Ergonomics Society of Korea / Vol. 31, No. 4 / pp.515-523 / 2012
  • Objective: The aim of this study is to investigate users' intuitive stereotypes about touchless gestures and to establish a gesture dictionary that can be applied to gesture-based interaction designs. Background: Recently, interaction based on touchless gestures has emerged as an alternative for natural interaction between humans and systems. However, for touchless gestures to become a universal interaction method, studies of which gestures are intuitive and effective are a prerequisite. Method: As applicable domains for touchless gestures, four devices (TV, audio, computer, car navigation) and sixteen basic operations (power on/off, previous/next page, volume up/down, list up/down, zoom in/out, play, cancel, delete, search, mute, save) were drawn from a focus group interview and a survey. A user participatory design was then performed: participants designed three gestures suitable for each operation on each device and evaluated the intuitiveness, memorability, convenience, and satisfaction of their derived gestures. Through the participatory design, the agreement scores, frequencies, and planning times of each distinct gesture were measured. Results: The derived gestures did not differ across the four devices, but diverse yet common gestures emerged across kinds of operations. In particular, manipulative gestures were suitable for all kinds of operations, whereas semantic or descriptive gestures were appropriate only for one-shot operations such as power on/off, play, cancel, or search. Conclusion: A touchless gesture dictionary was established by mapping intuitive and valuable gestures onto each operation. Application: The dictionary can be applied to interaction designs based on touchless gestures and can serve as a basic reference for standardizing them.
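The abstract does not define its agreement score; one widely used formulation in gesture elicitation studies (Wobbrock et al.'s guessability agreement score) sums the squared proportions of participants who proposed the same gesture for an operation. A minimal sketch with hypothetical proposals:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement A = sum over distinct gestures g of (|P_g| / |P|)**2,
    where P is the set of proposals for one operation and P_g the subset
    proposing gesture g. A = 1 means everyone proposed the same gesture;
    A -> 1/|P| as proposals become entirely dissimilar."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

# Hypothetical proposals from eight participants for "volume up"
a = agreement_score(["raise_palm"] * 6 + ["thumb_up"] * 2)
```

Whether this is the exact score the authors computed is an assumption; the formulation is shown only because agreement scores of this kind are the standard currency of participatory gesture-dictionary studies.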

Improvement of Gesture Recognition using 2-stage HMM

  • 정훤재;박현준;김동한
    • Journal of Institute of Control, Robotics and Systems / Vol. 21, No. 11 / pp.1034-1037 / 2015
  • In recent years, various methods have been developed in robotics to create a closer relationship between people and robots, including speech, vision, and biometric recognition as well as gesture-based interaction. These recognition technologies are used in wearable devices, smartphones, and other electronic devices for convenience. Among them, gesture recognition is the most commonly used and the most appropriate technology for wearable devices. Gesture recognition can be classified as contact or non-contact. This paper proposes contact gesture recognition with IMU and EMG sensors that applies the hidden Markov model (HMM) twice. Several simple motions form main gestures through the first-stage HMM, the standard HMM process well known in pattern recognition. The sequence of main gestures output by the first stage then forms higher-order gestures through the second-stage HMM. In this way, more natural and intelligent gestures can be implemented from simple ones. This layered process can play a larger role in gesture-recognition-based UX for wearable and smart devices.
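The two-stage idea can be sketched with a plain NumPy Viterbi decoder: the first stage decodes sensor symbols into main-gesture labels, and the second stage decodes that label sequence into higher-order gestures. All probability tables and gesture names below are hypothetical, not taken from the paper:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-observation HMM.
    pi: initial probs (S,), A: transition probs (S, S), B: emission
    probs (S, O). Standard max-product dynamic programming in log space."""
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        cand = logd[:, None] + np.log(A)       # cand[from, to]
        back.append(cand.argmax(axis=0))       # best predecessor per state
        logd = cand.max(axis=0) + np.log(B[:, o])
    path = [int(logd.argmax())]
    for bp in reversed(back):                  # backtrack
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Stage 1: decode raw sensor symbols into main gestures (0 = swipe, 1 = fist)
pi1 = np.array([0.5, 0.5])
A1 = np.array([[0.9, 0.1], [0.1, 0.9]])
B1 = np.array([[0.9, 0.1], [0.1, 0.9]])       # symbol 0 ~ swipe, 1 ~ fist
main = viterbi([0, 0, 1, 1], pi1, A1, B1)

# Stage 2: decode the main-gesture sequence into higher-order gestures
# (0 = "scroll", 1 = "grab-and-hold"), using the stage-1 output as input.
pi2 = np.array([0.5, 0.5])
A2 = np.array([[0.8, 0.2], [0.2, 0.8]])
B2 = np.array([[0.95, 0.05], [0.05, 0.95]])
high = viterbi(main, pi2, A2, B2)
```

The key structural point is that the second HMM treats the first HMM's output labels as its observation alphabet, which is what lets composite gestures be built from simple ones.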

Survey: Gesture Recognition Techniques for Intelligent Robot

  • 오재용;이칠우
    • Journal of Institute of Control, Robotics and Systems / Vol. 10, No. 9 / pp.771-778 / 2004
  • Recently, various applications of robot systems have become popular, in line with the rapid development of computer hardware and software, artificial intelligence, and automatic control technology. Robots were formerly used mainly in industrial fields, but they are now expected to play an important role in home service applications. To make robots more useful, further research is required on natural communication methods between humans and robot systems and on autonomous behavior generation. Gesture recognition is one of the most convenient methods for natural human-robot interaction, and it must be solved to implement intelligent robot systems. In this paper, we describe the state of the art in gesture recognition technologies for intelligent robots according to four approaches: sensor-based, feature-based, appearance-based, and 3D model-based methods. We also discuss open problems and real applications in this research field.

Design of Gaming Interaction Control using Gesture Recognition and VR Control in FPS Game

  • 이용환;안효창
    • Journal of the Semiconductor & Display Technology / Vol. 18, No. 4 / pp.116-119 / 2019
  • User interface/experience and realistic game manipulation play an important role in virtual reality first-person shooter (FPS) games. This paper presents an intuitive, hands-free gaming interaction scheme for FPS games based on user gesture recognition and VR controllers. Starting from conventional VR FPS interaction, we design the player interaction around a head-mounted display and two motion devices: a Leap Motion to handle low-level physics interaction and a VIVE tracker to control the movement of the player's joints in the VR world. The FPS prototype system shows that the designed interface supports immersive FPS play and gives players a new gaming experience.

A Robust Fingertip Extraction and Extended CAMSHIFT based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction

  • 이래경;안수용;오세영
    • Journal of Institute of Control, Robotics and Systems / Vol. 18, No. 4 / pp.328-336 / 2012
  • In this paper, we propose robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based hand gesture recognition for natural, human-like HRI (Human-Robot Interaction). First, for efficient and rapid hand detection, hand candidate regions are segmented by combining a robust $YC_bC_r$ skin color model with Haar-like feature based AdaBoost. From the extracted candidate regions, we estimate the palm region and fingertip position using distance transform based voting and geometrical features of the hand. From the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region, with an accurately extracted fingertip and angle, but also robustly tracks the hand under different illumination, size, and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
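The core iteration behind CAMSHIFT (the standard algorithm, not the paper's extension) repeatedly shifts a search window to the centroid of the probability mass inside it. A minimal NumPy sketch on a hypothetical binary skin mask, with the adaptive-window-size step omitted:

```python
import numpy as np

def mean_shift(mask, window, iters=10):
    """Shift a (row, col, height, width) window to the centroid of the
    nonzero mass inside it, iterating until the window stops moving.
    This is the mean-shift core that CAMSHIFT runs per frame."""
    r, c, h, w = window
    for _ in range(iters):
        roi = mask[r:r + h, c:c + w]
        if roi.sum() == 0:
            break                          # nothing to track inside window
        ys, xs = np.nonzero(roi)
        cy, cx = ys.mean(), xs.mean()      # centroid within the window
        nr = int(round(r + cy - h / 2))    # recenter window on centroid
        nc = int(round(c + cx - w / 2))
        nr = min(max(nr, 0), mask.shape[0] - h)
        nc = min(max(nc, 0), mask.shape[1] - w)
        if (nr, nc) == (r, c):
            break                          # converged
        r, c = nr, nc
    return r, c, h, w

# Hypothetical 40x40 skin mask with a hand blob at rows 25-30, cols 22-27
mask = np.zeros((40, 40))
mask[25:31, 22:28] = 1.0
track = mean_shift(mask, window=(20, 18, 10, 10))
```

In practice the mask would be a back-projected skin-probability image rather than binary, and CAMSHIFT additionally rescales and reorients the window from the zeroth and second moments; those parts are left out here for brevity.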

An Outlook for Interaction Experience in Next-generation Television

  • Kim, Sung-Woo
    • Journal of the Ergonomics Society of Korea / Vol. 31, No. 4 / pp.557-565 / 2012
  • Objective: This paper focuses on the trend of applying NUI (natural user interface) techniques such as gesture interaction to television and investigates the design improvements needed in their application. The intention is to find a better design direction for NUI in the television context, helping make the new features and behavioral changes of next-generation television practically usable and meaningful experience elements. Background: Traditional television is rapidly evolving into next-generation television under the influence of "smartness" from the mobile domain. Many new features and behavioral changes arising from this evolution are coming to be characterized as new experience elements of next-generation television. Method: A series of expert reviews by television UX professionals, based on AHP (Analytic Hierarchy Process), was conducted to assess the relative appropriateness of applying gesture interaction to a number of selected television user experience scenarios. Conclusion: It is critical not to apply new interaction techniques like gesture to television indiscriminately; doing so may demonstrate new technology effectively but generally results in a poor user experience. Consistent validation of practical appropriateness in real contexts is imperative. Application: This research will be helpful in applying gesture interaction to next-generation television to deliver an optimal user experience.
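AHP, the method the expert reviews used, derives priority weights from a pairwise comparison matrix. The sketch below uses the common geometric-mean (row) approximation of the principal eigenvector; the three scenarios and all judgment values are purely hypothetical:

```python
import numpy as np

def ahp_weights(M):
    """Approximate AHP priority vector via the geometric-mean method:
    take the geometric mean of each row of the pairwise comparison
    matrix, then normalize the means to sum to 1."""
    g = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return g / g.sum()

# Hypothetical pairwise comparisons of three TV scenarios for gesture
# input (channel change, volume control, text entry); M[i, j] is how
# strongly scenario i is preferred over scenario j on the 1-9 AHP scale.
M = np.array([
    [1.0, 2.0, 6.0],
    [1 / 2, 1.0, 4.0],
    [1 / 6, 1 / 4, 1.0],
])
w = ahp_weights(M)
```

A full AHP analysis would also compute a consistency ratio to check that the expert judgments are not self-contradictory; that step is omitted here.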

Virtual Block Game Interface based on the Hand Gesture Recognition

  • 윤민호;김윤제;김태영
    • Journal of Korea Game Society / Vol. 17, No. 6 / pp.113-120 / 2017
  • With recent advances in virtual reality technology, user-friendly hand gesture interfaces that enable natural interaction with virtual 3D objects have been studied actively. However, most studies support only a few simple kinds of hand gestures. This paper proposes a more intuitive hand gesture interface for 3D objects in virtual environments. For gesture recognition, preprocessed hand data are first classified with a binary decision tree. The classified data are resampled, chain codes are generated from them, and the feature data are built as histograms of those chain codes. Based on these features, a trained MCSVM performs a second-stage classification to recognize the gesture. To validate the method, we implemented a game called 'Virtual Block' in which 3D blocks are manipulated through hand gestures; experiments showed a recognition rate of 99.2% over 16 gestures, and the interface proved more intuitive and user-friendly than existing interfaces.
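The resampling → chain code → histogram stage of the pipeline above can be sketched with an 8-direction Freeman chain code over an already-resampled trajectory; the trajectory data and normalization choice are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def chain_code_histogram(points):
    """8-direction Freeman chain code of a resampled point sequence,
    returned as a normalized direction histogram (a feature vector of
    the kind fed to a multi-class SVM)."""
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts, axis=0)                       # step vectors
    angles = np.arctan2(d[:, 1], d[:, 0])          # direction of each step
    codes = np.round(angles / (np.pi / 4)).astype(int) % 8  # quantize to 8 dirs
    hist = np.bincount(codes, minlength=8).astype(float)
    return hist / hist.sum()

# Hypothetical resampled trajectory: four steps right, then four steps up
traj = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0),
        (4, 1), (4, 2), (4, 3), (4, 4)]
h = chain_code_histogram(traj)
```

The histogram discards where the gesture happened and keeps only its directional profile, which is what makes it a compact, position-invariant input for the second-stage classifier.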
  • 최근 가상현실 기술의 발전으로 가상의 3D 객체와 자연스러운 상호작용이 가능하도록 하는 사용자 친화적인 손 제스처 인터페이스에 대한 연구가 활발히 진행되고 있다. 그러나 대부분의 연구는 단순하고 적은 종류의 손 제스처만 지원되고 있는 실정이다. 본 논문은 가상환경에서 3D 객체와 보다 직관적인 방식의 손 제스처 인터페이스 방법을 제안한다. 손 제스처 인식을 위하여 먼저 전처리 과정을 거친 다양한 손 데이터를 이진 결정트리로 1차 분류를 한다. 분류된 데이터는 리샘플링을 한 다음 체인코드를 생성하고 이에 대한 히스토그램으로 특징 데이터를 구성한다. 이를 기반으로 학습된 MCSVM을 통해 2차 분류를 수행하여 제스처를 인식한다. 본 방법의 검증을 위하여 3D 블록을 손 제스처를 통하여 조작하는 'Virtual Block'이라는 게임을 구현하여 실험한 결과 16개의 제스처에 대해 99.2%의 인식률을 보였으며 기존의 인터페이스보다 직관적이고 사용자 친화적임을 알 수 있었다.