• Title/Summary/Keyword: Gesture-based Interaction


Mobile Browser UX Based on Mobile User Behavior (모바일 사용 행태에 따른 모바일 브라우저 UX)

  • Lee, Kate T.S.
    • Journal of the Ergonomics Society of Korea / v.29 no.4 / pp.547-551 / 2010
  • In mobile browsers, two mental models coexist: one for mobile users and the other for PC users. This research shows that users apply both mental models simultaneously while using mobile browsers. However, in cases where the two mental models conflict, usability of UX designs based on the mobile user's mental model deteriorated rapidly. Usability was also poor for use cases such as "View Mode" or "Copy and Send Mode," and the research shows that these modes could be replaced by gesture interactions with which users were already familiar.

Hand posture recognition robust to rotation using temporal correlation between adjacent frames (인접 프레임의 시간적 상관 관계를 이용한 회전에 강인한 손 모양 인식)

  • Lee, Seong-Il;Min, Hyun-Seok;Shin, Ho-Chul;Lim, Eul-Gyoon;Hwang, Dae-Hwan;Ro, Yong-Man
    • Journal of Korea Multimedia Society / v.13 no.11 / pp.1630-1642 / 2010
  • Recently, there is an increasing need to develop Hand Gesture Recognition (HGR) techniques for vision-based interfaces. Since a hand gesture is defined as a consecutive change of hand postures, an algorithm for Hand Posture Recognition (HPR) is required. Among the factors that degrade HPR performance, we focus on rotation. To achieve rotation-invariant HPR, we propose a method that exploits the property of video that adjacent frames are highly correlated, considering the environment of HGR. The proposed method introduces the template update of object tracking based on this property, which differs from previous works based on still images. To compare our method with previous methods such as template matching, PCA, and LBP, we performed experiments on video containing hand rotation. The accuracy of the proposed method is 22.7%, 14.5%, 10.7%, and 4.3% higher than ordinary template matching, template matching using the KL-transform, PCA, and LBP, respectively.
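
The paper's key mechanism, borrowing the template update of object tracking so that the matching template follows the hand across adjacent, highly correlated frames, can be sketched roughly as below. This is a minimal illustration under assumed details, not the authors' implementation: the blending factor alpha and the use of OpenCV's matchTemplate are choices made here for concreteness.

    import cv2
    import numpy as np

    def track_hand_template(frames, init_template, alpha=0.2):
        """Match a hand template frame by frame, blending the matched
        region back into the template so that gradual rotation between
        adjacent frames stays within the matcher's tolerance."""
        template = init_template.astype(np.float32)
        locations = []
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (x, y) = cv2.minMaxLoc(scores)   # best-match location
            h, w = template.shape
            locations.append((x, y))
            # Template update: exploit the temporal correlation of video
            # by letting the template drift with the observed posture.
            template = (1 - alpha) * template + alpha * gray[y:y + h, x:x + w]
        return locations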

Autonomous Mobile Robot Control using the Wearable Devices Based on EMG Signal for detecting fire (EMG 신호 기반의 웨어러블 기기를 통한 화재감지 자율 주행 로봇 제어)

  • Kim, Jin-Woo;Lee, Woo-Young;Yu, Je-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.3 / pp.176-181 / 2016
  • In this paper, an autonomous mobile robot control system for detecting fire was proposed, using a wearable device based on EMG (electromyogram) signals. A Myo armband is used to detect the user's EMG signals. Gestures were classified after the EMG data were sent to a computer over Bluetooth. A robot named 'uBrain' was then implemented to move according to the data received over Bluetooth in our experiment. 'Move front', 'Turn right', 'Turn left', and 'Stop' are the controllable commands for the robot. If the robot cannot receive a Bluetooth signal from the user, or if the user wants to switch from manual to autonomous mode, the robot enters autonomous mode. The robot flashes an LED when its IR sensor detects fire while moving.
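
The control logic the abstract describes, manual gesture commands with an autonomous fallback when Bluetooth data stops arriving, reduces to a small dispatch loop. The sketch below is hypothetical: the gesture labels map onto the paper's four commands, but the classifier hook, the callback names, and the two-second timeout are assumptions.

    import time

    # The four commands from the paper; the gesture labels are illustrative.
    COMMANDS = {'fist': 'Stop', 'wave_in': 'Turn left',
                'wave_out': 'Turn right', 'fingers_spread': 'Move front'}

    TIMEOUT_S = 2.0  # assumed: silence longer than this switches modes

    def control_loop(read_gesture, send_to_robot, autonomous_step):
        """Send classified EMG gestures to the robot; fall back to
        autonomous mode when no Bluetooth data arrives in time."""
        last_rx = time.time()
        while True:
            gesture = read_gesture()   # label from the EMG classifier, or None
            if gesture in COMMANDS:
                last_rx = time.time()
                send_to_robot(COMMANDS[gesture])   # manual control
            elif time.time() - last_rx > TIMEOUT_S:
                autonomous_step()                  # autonomous navigation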

Skin Color Based Hand and Finger Detection for Gesture Recognition in CCTV Surveillance (CCTV 관제에서 동작 인식을 위한 색상 기반 손과 손가락 탐지)

  • Kang, Sung-Kwan;Chung, Kyung-Yong;Rim, Kee-Wook;Lee, Jung-Hyun
    • The Journal of the Korea Contents Association / v.11 no.10 / pp.1-10 / 2011
  • In this paper, we propose a skin-color-based hand and finger detection technology for gesture recognition in CCTV surveillance. The aim of this paper is to present a methodology for hand detection and to propose a finger detection method. The detected hand and finger can be used to implement a non-contact mouse, and the technology can control home devices such as home theaters and televisions. Skin color is used to segment the hand region from the background, and a contour is extracted from the segmented hand. Analysis of the contour yields the location of the fingertip within the hand. After detecting the fingertip location, the system tracks the fingertip using the R channel alone and applies difference images when recognizing hand motions, which makes it robust, for example against irrelevant image regions. We describe experiments on fingertip tracking and finger gesture recognition; the results show an accuracy above 96%.
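
The pipeline this abstract describes, skin-color segmentation, contour extraction, and fingertip localization on the contour, can be sketched as follows. The YCrCb thresholds and the topmost-point fingertip heuristic are common textbook choices assumed here, not the paper's exact values.

    import cv2
    import numpy as np

    def detect_fingertip(frame_bgr):
        """Segment the hand by skin color, take the largest contour, and
        return the topmost contour point as a crude fingertip estimate."""
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # skin range
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)      # largest skin blob
        x, y = min(((int(p[0][0]), int(p[0][1])) for p in hand),
                   key=lambda pt: pt[1])               # smallest y = topmost
        return (x, y)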

Interaction between BIM Model and Physical Model During Conceptual Design Stage (설계 초기 단계에서 BIM 모델과 물리적 모델의 상호작용 방안)

  • Yi, Ingeun;Kim, Sung-Ah
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.4 / pp.344-350 / 2013
  • It is essential to consider geometry in the early design stage for rational design decisions. However, crucial decisions have typically been made through conversation, physical models, and gestures rather than through a BIM model, which can analyze geometry efficiently. This research proposes a framework for interaction between a BIM model and a physical model to enable real-time BIM analysis. Through this real-time framework linking the two models, architects can adopt BIM data in the early design stage to review analyses of the BIM model. It should facilitate dynamic design based on rich BIM information from the early stage to the final stage.

A Virtual Reality System for Molecular Modeling (분자 모델링을 위한 가상현실 시스템)

  • Kim, Jee-In;Park, Sung-Jun;Lee, Jun;Choi, Young-Jin;Jung, Seun-Ho
    • Journal of the Korea Computer Graphics Society / v.10 no.2 / pp.1-9 / 2004
  • In this paper, we propose a virtual reality molecular modeling system for visualizing and observing the molecular structures of biochemical substances such as viruses as 3D models, and for manipulating those models in an intuitive way. With this system, a stereoscopic display, data gloves, and a motion tracker let users manipulate 3D molecular models realistically, so modeling tasks such as observing, joining, and separating molecules can be performed efficiently. Instead of devices such as a mouse or keyboard, users perform the modeling operations with natural body and hand motions. To simulate molecular binding chemically accurately and in real time, we implemented an energy-calculation algorithm and proposed a new data structure for representing molecular structures. To validate the proposed gesture-based VR molecular modeling system, we performed modeling tasks on an HIV virus molecule and ran user tests comparing task performance and user satisfaction against conventional methods.
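
The abstract mentions an energy-calculation algorithm for simulating molecular binding in real time; the paper's own algorithm is not reproduced here, but a generic pairwise Lennard-Jones evaluation of the kind such systems commonly build on is sketched below, with illustrative parameter values.

    import numpy as np

    def lennard_jones_energy(coords, epsilon=0.2, sigma=3.4):
        """Sum a 12-6 Lennard-Jones potential over all unique atom pairs.
        coords: (N, 3) array of atom positions; epsilon and sigma are
        illustrative constants, not values from the paper."""
        diff = coords[:, None, :] - coords[None, :, :]
        r = np.sqrt((diff ** 2).sum(axis=-1))
        i, j = np.triu_indices(len(coords), k=1)   # unique pairs only
        sr6 = (sigma / r[i, j]) ** 6
        return float(np.sum(4.0 * epsilon * (sr6 ** 2 - sr6)))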

Interactive media facade based on image processing (영상처리 기반의 인터랙티브 미디어파사드에 관한 연구)

  • Jun, Ha-Ree;Yun, Hyun-Jung
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.46-54 / 2015
  • Development of digital technology is advancing to a level that influences the formation of city landscapes and spaces. The use of media facades creates modernized city spaces through the contemporary application of various digital media. In this way, a media facade functions as media art from an artistic point of view, while also providing the means for buildings to become landmarks from a cityscape point of view. Going a step further, interactive media facades are drawing much attention because they enable communication with city inhabitants instead of one-way content provision. This paper studies such an interactive media facade using transparent display glass, which is currently used as a building material, and its growth potential.

Gesture-Based Display Control Using Nature Interaction System (자연스러운 상호작용 시스템을 이용한 동작 기반 디스플레이 제어)

  • Kim, Sung-Woo;Jin, Moon-Sup;Uhm, Tae Young;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.387-389 / 2011
  • In this paper, we propose an interface for controlling a display from a distance. The proposed interface defines the user's face and hands as regions of interest, tracks them, and uses specific user motions as the interface input. Because it offers hand motions familiar to users as input and relies on a vision-based interaction method that requires no additional equipment, users can control the display comfortably without a separate training process. To detect the user's hands quickly and accurately, a multi-vision method that fuses infrared and color images is used, and finger motions are recognized through fingertip detection. The recognized motions are applied to an actual display via remote communication to verify the interface's utility.
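
The multi-vision hand detection described here, fusing an infrared image with a color image, can be approximated by intersecting an IR intensity mask with a skin-color mask; the thresholds below are assumptions for illustration, not the authors' calibration.

    import cv2

    def fused_hand_mask(ir_gray, color_bgr, ir_thresh=200):
        """Intersect an infrared intensity mask with a skin-color mask so
        that skin-colored but cold background regions are suppressed."""
        _, ir_mask = cv2.threshold(ir_gray, ir_thresh, 255, cv2.THRESH_BINARY)
        ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
        skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        return cv2.bitwise_and(ir_mask, skin_mask)   # warm AND skin-colored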

Vision-based 3D Hand Gesture Recognition for Human-Robot Interaction (휴먼-로봇 상호작용을 위한 비전 기반3차원 손 제스처 인식)

  • Roh, Myung-Cheol;Chang, Hye-Min;Kang, Seung-Yeon;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.421-425 / 2006
  • Recently, interest in robots, including humanoid robots, has been increasing. Accordingly, the importance of robot technologies that allow robots to interact with people, rather than merely resemble them in appearance, is being emphasized. One of the most efficient and natural methods for such interaction is vision-based gesture recognition. The most important part of gesture recognition is 3D gesture recognition, which captures both the shape and the movement of the hand. In this paper, we introduce a 3D hand-model estimation method and a command-gesture recognition system for recognizing 3D hand gestures, and we propose a framework that can be extended to sign language and finger spelling.

W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering / v.20 no.1 / pp.140-152 / 2015
  • HCI (Human-Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface for various devices and service environments, advanced HCI methods using multiple modalities have been studied intensively. However, multimodal interfaces face the difficulty that modalities have different data formats and are hard to coordinate efficiently. To solve this problem, a multimodal communicator is introduced, based on the W3C (World Wide Web Consortium) standards EMMA (Extensible MultiModal Annotation markup language) and the MMI (Multimodal Interaction) framework. This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and provides wide expansion capability toward other modalities. Experimental results demonstrate the multimodal communicator with the modalities of eye tracking and gesture recognition in a map-browsing scenario.
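
The division of labor in the W3C MMI architecture, modality components producing interpretations, an interaction manager routing them, and a presentation component rendering the outcome, can be sketched as below for the map-browsing scenario. All class and field names are illustrative stand-ins, not the paper's code or the W3C API.

    # Modality components emit EMMA-like interpretation events; the
    # interaction manager routes them to the presentation component.

    class InteractionManager:
        def __init__(self, presentation):
            self.presentation = presentation
            self.handlers = {}                 # modality name -> handler

        def register(self, modality, handler):
            self.handlers[modality] = handler

        def on_event(self, event):
            # event: {'modality': ..., 'interpretation': {...}}
            handler = self.handlers.get(event['modality'])
            if handler:
                self.presentation.render(handler(event['interpretation']))

    class MapPresentation:
        def render(self, command):
            print('map action:', command)

    im = InteractionManager(MapPresentation())
    im.register('gaze', lambda i: ('center_at', i['x'], i['y']))
    im.register('gesture', lambda i: ('zoom', i['direction']))
    im.on_event({'modality': 'gesture',
                 'interpretation': {'direction': 'in'}})  # map action: ('zoom', 'in')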