• Title/Summary/Keyword: Gesture-Based User Interface

Augmented Reality Authoring Tool with Marker & Gesture Interactive Features (마커 및 제스처 상호작용이 가능한 증강현실 저작도구)

  • Shim, Jinwook;Kong, Minje;Kim, Hayoung;Chae, Seungho;Jeong, Kyungho;Seo, Jonghoon;Han, Tack-Don
    • Journal of Korea Multimedia Society / v.16 no.6 / pp.720-734 / 2013
  • In this paper, we suggest an augmented reality authoring tool with which users can easily create augmented reality content using hand-gesture and marker-based interaction methods. Previous augmented reality authoring tools focused on augmenting a virtual object, and interaction with the resulting content relied on markers or sensors. We address this limited interaction by combining a marker-based interaction method with a gesture interaction method that uses a depth-sensing camera, the Kinect. In the suggested system, a user can easily develop simple marker-based augmented reality content through the interface. Beyond presenting fragmentary content, the system also provides ways for the user to actively interact with the augmented reality content. Two marker-based interaction methods are provided: one uses two markers, and the other exploits marker occlusion. In addition, by recognizing and tracking the user's bare hand, the system provides gesture interaction for zooming in, zooming out, moving, and rotating an object. A heuristic evaluation of the authoring tool and a usability comparison of the marker and gesture interactions confirmed positive results.
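
The marker-occlusion interaction mentioned above reduces to a simple rule: a marker that was stably tracked disappears while other markers stay visible, suggesting a hand is covering it rather than the whole scene being lost. A minimal sketch of that rule, where the per-frame visibility sets are hypothetical stand-ins for the output of a real marker tracker:

```python
# Sketch of a marker-occlusion "click" detector. Per-frame sets of
# visible marker ids would come from a real AR tracker; `frames`
# below is invented data for illustration.

def occlusion_events(frames, hold_frames=5):
    """Yield (frame_index, marker_id) when a marker that was stably
    visible disappears while at least one other marker stays visible
    (i.e. a hand is covering it, not the whole board)."""
    visible_count = {}  # marker_id -> consecutive visible frames
    for i, visible in enumerate(frames):
        for m in visible:
            visible_count[m] = visible_count.get(m, 0) + 1
        for m in list(visible_count):
            if m not in visible:
                # A stably tracked marker vanished while others remain:
                # interpret it as an intentional occlusion trigger.
                if visible_count[m] >= hold_frames and visible:
                    yield i, m
                del visible_count[m]

# Marker 1 is covered at frame 6 while marker 2 stays visible.
frames = [{1, 2}] * 6 + [{2}] + [{1, 2}] * 3
print(list(occlusion_events(frames)))  # -> [(6, 1)]
```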

Motion-Understanding Cell Phones for Intelligent User Interaction and Entertainment (지능형 UI와 Entertainment를 위한 동작 이해 휴대기기)

  • Cho, Sung-Jung;Choi, Eun-Seok;Bang, Won-Chul;Yang, Jing;Cho, Joon-Kee;Ki, Eun-Kwang;Sohn, Jun-Il;Kim, Dong-Yoon;Kim, Sang-Ryong
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.684-691 / 2006
  • As functionalities such as cameras and MP3 players converge into mobile phones, more intuitive and interesting interaction methods become essential. In this paper, we present applications, and their enabling technologies, for gesture-interactive cell phones. They employ a gesture recognition algorithm and a real-time shake detection algorithm to support motion-based user interfaces and entertainment applications, respectively. The gesture recognition algorithm classifies a user's movement into one of a set of predefined gestures by modeling the basic components of the acceleration signals and their relationships; recognition performance is further enhanced by discriminating frequently confused classes with support vector machines. The shake detection algorithm detects, in real time, the exact moment the phone is shaken significantly, using the variance and mean of the acceleration signals. The gesture interaction algorithms perform reliably enough for commercialization: with 100 novice users, the average recognition rate was 96.9% on 11 gestures (digits 1-9, O, X), and users' movements were detected in real time. We have applied these motion-understanding technologies to Samsung cell phones in the Korean, American, Chinese, and European markets since May 2005.
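
Of the two algorithms described, the shake detector is the most self-contained: it thresholds statistics of the acceleration signal over a short window. A minimal sketch using the variance of the acceleration magnitude, with an assumed threshold rather than the paper's tuned value:

```python
import math

def is_shaking(samples, var_threshold=4.0):
    """Decide whether a window of 3-axis accelerometer samples
    (in m/s^2) represents a significant shake, by thresholding the
    variance of the acceleration magnitude. The threshold is an
    assumption, not the paper's tuned value."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var > var_threshold

# A still phone (gravity only) vs. a vigorously shaken one.
still = [(0.0, 0.0, 9.8)] * 32
shaken = [(0.0, 0.0, 9.8 + (12.0 if i % 2 else -12.0)) for i in range(32)]
print(is_shaking(still), is_shaking(shaken))  # -> False True
```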

A Research for Interface Based on EMG Pattern Combinations of Commercial Gesture Controller (상용 제스처 컨트롤러의 근전도 패턴 조합에 따른 인터페이스 연구)

  • Kim, Ki-Chang;Kang, Min-Sung;Ji, Chang-Uk;Ha, Ji-Woo;Sun, Dong-Ik;Xue, Gang;Shin, Kyoo-Sik
    • Journal of Engineering Education Research / v.19 no.1 / pp.31-36 / 2016
  • These days, ICT-related products are pouring out, driven by the development of mobile technology and the spread of smartphones. Among them, wearable devices are in the spotlight with the advent of the hyper-connected society. In this paper, a body-attached wearable device using EMG (electromyography) sensors is studied. Research on EMG sensors falls into two areas: medical applications and control devices. This study belongs to the latter, which transmits a user's manipulation intention to robots, games, or computers through EMG measurements. We used the commercial MYO device developed by Thalmic Labs in Canada and matched the EMG of the arm muscles to a gesture controller. In the experiments, we first defined various arm motions for controlling devices; we then identified several distinguishable motions through analysis of the EMG signals and substituted those motions for a joystick.
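
As a rough illustration of matching EMG activation patterns to joystick-like commands, the sketch below reduces each of eight electrode channels (mirroring the MYO's eight sensors) to an RMS feature and picks the nearest stored pattern. The templates, gesture names, and command mapping are invented for illustration, not the authors' measured data:

```python
import math

# Hypothetical 8-channel RMS activation templates (one value per
# electrode) and the joystick commands they stand in for.
TEMPLATES = {
    "wrist_flex":   [0.9, 0.8, 0.2, 0.1, 0.1, 0.1, 0.2, 0.7],
    "wrist_extend": [0.1, 0.2, 0.8, 0.9, 0.8, 0.2, 0.1, 0.1],
    "fist":         [0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7],
}
COMMANDS = {"wrist_flex": "left", "wrist_extend": "right", "fist": "fire"}

def rms(window):
    return math.sqrt(sum(v * v for v in window) / len(window))

def classify(channels):
    """channels: eight lists of raw EMG samples, one per electrode.
    Reduce each to an RMS activation, then pick the nearest template."""
    feat = [rms(w) for w in channels]
    best = min(TEMPLATES, key=lambda g: sum(
        (a - b) ** 2 for a, b in zip(feat, TEMPLATES[g])))
    return COMMANDS[best]

# Hypothetical windows whose RMS (~0.7 on every channel) matches "fist".
windows = [[0.7, -0.7, 0.7, -0.7]] * 8
print(classify(windows))  # -> fire
```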

A Compensation Algorithm for the Position of User Hands Based on Moving Mean-Shift for Gesture Recognition in HRI System (HRI 시스템에서 제스처 인식을 위한 Moving Mean-Shift 기반 사용자 손 위치 보정 알고리즘)

  • Kim, Tae-Wan;Kwon, Soon-Ryang;Lee, Dong Myung
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.5 / pp.863-870 / 2015
  • A Compensation Algorithm for the Position of the User's Hands based on the Moving Mean-Shift ($CAPUH_{MMS}$) is proposed for a Human-Robot Interaction (HRI) system running on the Kinect sensor, in order to improve gesture recognition performance. Using a purpose-built real-time performance simulator, the average error improvement ratio of the trajectories ($AEIR_{TJ}$) for hand movements under the $CAPUH_{MMS}$ is compared with that of two other compensation algorithms: one based on the Kalman filter ($CA_{KF}$) and one based on the least-squares method ($CA_{LSM}$). For up-down hand movements, the $AEIR_{TJ}$ of the $CAPUH_{MMS}$ is measured as 19.35%, higher than that of the $CA_{KF}$ (13.88%) and the $CA_{LSM}$ (16.68%).
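
The paper's exact moving mean-shift update is not reproduced here, but the underlying idea of compensating a noisy hand trajectory can be illustrated with a centered moving-window mean, evaluated as an error improvement against a known ground truth. Everything below (window size, toy data, error metric) is an assumption for illustration:

```python
# Smooth noisy (x, y) hand positions with a centered moving mean and
# measure how much the trajectory error improves on toy data.

def compensate(points, window=5):
    half = window // 2
    out = []
    for i in range(len(points)):
        seg = points[max(0, i - half): i + half + 1]
        out.append((sum(p[0] for p in seg) / len(seg),
                    sum(p[1] for p in seg) / len(seg)))
    return out

def mean_abs_error(traj, truth):
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(traj, truth)) / len(traj)

# Ground truth: a straight left-right sweep. Noise: jitter in y.
truth = [(float(x), 0.0) for x in range(20)]
noisy = [(x, 1.5 if i % 2 else -1.5) for i, (x, _) in enumerate(truth)]

raw = mean_abs_error(noisy, truth)
fixed = mean_abs_error(compensate(noisy), truth)
print(f"mean error {raw:.2f} -> {fixed:.2f}")  # -> 1.50 -> 0.44
```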

Human-Computer Natural User Interface Based on Hand Motion Detection and Tracking

  • Xu, Wenkai;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.501-507 / 2012
  • Human body motion is a non-verbal channel of interaction that can connect the real world and a virtual world. In this paper, we present a study on a natural user interface (NUI) for human hand motion recognition using the RGB and depth information provided by a Kinect camera from Microsoft Corporation. So that hand tracking and gesture recognition have no major dependencies on the work environment, lighting, or the user's skin color, we use libraries built for natural interaction together with the Kinect device, which supplies RGB images of the environment and a depth map of the scene. An improved Camshift tracking algorithm is used to track hand motion; the experimental results show that it outperforms the standard Camshift algorithm, with higher stability and accuracy.
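
OpenCV ships both back-projection and CamShift, so a depth-assisted variant in the spirit of the paper's improved Camshift can be sketched directly: the skin-color back-projection is masked to a depth band around the hand before each CamShift update. The synthetic frame below stands in for a real Kinect RGB-D stream, and the band width is an assumption:

```python
import numpy as np
import cv2

def track_hand(hsv, depth, window, hand_depth_mm, hist, band=150):
    """One depth-gated CamShift update. `hsv` is the HSV frame,
    `depth` the aligned depth map in mm, `window` the previous
    (x, y, w, h) search window."""
    # Color likelihood from the hand's hue histogram.
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Keep only pixels within +/- band mm of the hand's depth.
    mask = cv2.inRange(depth, hand_depth_mm - band, hand_depth_mm + band)
    prob = cv2.bitwise_and(prob, prob, mask=mask)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.CamShift(prob, window, crit)
    return window

# Synthetic 240x320 scene: a "hand" patch of hue 10 at 800 mm depth.
hsv = np.zeros((240, 320, 3), np.uint8)
hsv[100:140, 150:190] = (10, 200, 200)
depth = np.full((240, 320), 2000, np.uint16)
depth[100:140, 150:190] = 800
hist = np.zeros((180, 1), np.float32)
hist[10] = 255.0
print(track_hand(hsv, depth, (140, 90, 40, 40), 800, hist))
```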

Gesture interface with 3D accelerometer for mobile users (모바일 사용자를 위한 3 차원 가속도기반 제스처 인터페이스)

  • Choe, Bong-Whan;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.378-383 / 2009
  • Nowadays, many systems are designed to infer people's intentions and provide corresponding services. People always carry their own mobile devices with various sensors, and the accelerometer plays a central role in this environment. The accelerometer collects motion information, which is useful for developing gesture-based user interfaces. Since recognizing time-series patterns such as gestures requires heavy computation, an effective method is needed for the mobile environment, which has relatively limited computational capability. In this paper, we propose a two-stage motion recognizer composed of low-level and high-level motions based on a motion library. The low-level motion recognizer uses dynamic time warping on 3D acceleration data, and high-level motions are defined linguistically in terms of the low-level motions.
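
The low-level stage is classic dynamic time warping (DTW) over 3D acceleration samples. A minimal sketch with hypothetical two-gesture templates; a high-level motion would then be defined as a sequence of these low-level labels:

```python
def dtw(a, b):
    """DTW distance between two sequences of (x, y, z) acceleration
    samples, using Euclidean point distance and the standard recurrence."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sum((p - q) ** 2 for p, q in zip(a[i - 1], b[j - 1])) ** 0.5
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def recognize(motion, templates):
    """Low-level stage: nearest gesture template under DTW."""
    return min(templates, key=lambda name: dtw(motion, templates[name]))

templates = {  # hypothetical motion library stubs
    "swing_right": [(0, 0, 0), (5, 0, 0), (0, 0, 0)],
    "lift_up":     [(0, 0, 0), (0, 0, 5), (0, 0, 0)],
}
motion = [(0, 0, 0), (2, 0, 0), (5, 1, 0), (4, 0, 0), (0, 0, 0)]
print(recognize(motion, templates))  # -> swing_right
```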

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include keyboards, mice, and other traditional devices, the two basic ways people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through spoken words and hand pointing. The system utilizes a second, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. The interface first uses hand pointing to locate the general position the user is referring to, and then uses the user's voice commands to fine-tune the location and change the camera zoom if requested. The image of the location captured by the pan/tilt camera is sent to a color TV monitor for display. This type of system has applications in teleconferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
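
The interface's coarse-to-fine control flow (pointing fixes the rough direction, speech then refines it) can be summarized in a few lines. The command vocabulary and step sizes below are assumptions, not the paper's:

```python
# Coarse stage: a pan/tilt estimate from the pointing gesture.
# Fine stage: spoken commands nudge the aim and change the zoom.

VOICE_STEPS = {"left": (-2.0, 0.0), "right": (2.0, 0.0),
               "up": (0.0, 2.0), "down": (0.0, -2.0)}  # degrees

def drive_camera(pointing_pan, pointing_tilt, spoken_words):
    pan, tilt, zoom = pointing_pan, pointing_tilt, 1.0  # from pointing
    for word in spoken_words:                           # voice refinement
        if word in VOICE_STEPS:
            dp, dt = VOICE_STEPS[word]
            pan, tilt = pan + dp, tilt + dt
        elif word == "zoom":
            zoom *= 2.0
    return pan, tilt, zoom

# Pointing put the target near pan 30, tilt 10; voice refines it.
print(drive_camera(30.0, 10.0, ["left", "left", "up", "zoom"]))
# -> (26.0, 12.0, 2.0)
```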

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointed-at region in the real world from camera images. In general, an arm-pointing gesture encodes a direction extending from the user's fingertip to a target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points to estimate the pointing direction: one from the user's face and another from the user's fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) employing two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experimental results on several real video sequences.
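
Given triangulated 3D positions for the face and fingertip, the pointed-at target is the intersection of the face-to-fingertip ray with a known surface. A minimal sketch, assuming the surface is the floor plane z = 0 and that triangulation from the two CCD cameras has already produced the 3D points:

```python
def point_on_plane(face, fingertip, plane_z=0.0):
    """Intersect the ray face -> fingertip with the horizontal plane
    z = plane_z; returns (x, y) or None if the ray never reaches it."""
    (fx, fy, fz), (tx, ty, tz) = face, fingertip
    dz = tz - fz
    if abs(dz) < 1e-9:
        return None          # ray parallel to the plane
    s = (plane_z - fz) / dz  # ray parameter at the plane
    if s <= 0:
        return None          # plane is behind the pointing direction
    return (fx + s * (tx - fx), fy + s * (ty - fy))

# Face at 1.7 m height, fingertip at 1.3 m and slightly ahead:
# the pointing ray hits the floor in front of the user.
print(point_on_plane((0.0, 0.0, 1.7), (0.3, 0.1, 1.3)))
# -> approximately (1.275, 0.425)
```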

User-centric Immersible and Interactive Electronic Book based on the Interface of Tabletop Display (테이블탑 디스플레이 기반 사용자 중심의 실감형 상호작용 전자책)

  • Song, Dae-Hyeon;Park, Jae-Wan;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.9 no.6 / pp.117-125 / 2009
  • In this paper, we propose a user-centric, immersive, and interactive electronic book based on a tabletop display interface. An electronic book is typically used by readers who want a text book accompanied by multimedia content such as video, audio, and animation. Because the proposed system is built on a tabletop display platform, conventional input devices such as a keyboard and mouse are not needed: users interact with the content through finger-touch gestures defined for the tabletop display interface, which makes the electronic book effective and engaging to use. The interface supports multiple users, enabling more diverse effects than conventional electronic content made for a single user. Our method offers a new take on the conventional electronic book: it lets user-centric gestures be defined and helps users interact with the book easily. We expect it can be utilized for many edutainment contents.

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association / v.14 no.1 / pp.24-29 / 2014
  • This paper proposes a Kinect-based human motion recognition model for controlling 3D contents, which tracks the user's body gestures through the camera of the infrared Kinect device. The proposed model computes the variation in distance between the shoulder and the left and right hands, wrists, arms, and elbows as the body moves. The movements are classified into directional commands: left, right, up, down, enlarge, reduce, and select. Compared with contact-type gesture recognition technologies and device-based approaches that require expensive hardware, the proposed Kinect-based model is natural and low-cost.
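
The classification described above amounts to comparing the hand's offset from the shoulder across frames and naming the dominant change. A minimal sketch; the thresholds, the axis conventions, and the treatment of small motions as "select" are assumptions:

```python
def classify_motion(shoulder, hand_prev, hand_now, thresh=0.15):
    """Joints are (x, y, z) positions in meters from the Kinect
    skeleton stream. Compare the hand's offset from the shoulder
    across two frames and name the dominant change."""
    dx = (hand_now[0] - shoulder[0]) - (hand_prev[0] - shoulder[0])
    dy = (hand_now[1] - shoulder[1]) - (hand_prev[1] - shoulder[1])
    dz = (hand_now[2] - shoulder[2]) - (hand_prev[2] - shoulder[2])
    moves = {"right": dx, "left": -dx, "up": dy, "down": -dy,
             "enlarge": -dz, "reduce": dz}  # toward the sensor = enlarge
    name, value = max(moves.items(), key=lambda kv: kv[1])
    return name if value > thresh else "select"  # small motion = select

shoulder = (0.0, 1.4, 2.0)
print(classify_motion(shoulder, (0.2, 1.2, 1.8), (0.6, 1.2, 1.8)))  # right
print(classify_motion(shoulder, (0.2, 1.2, 1.8), (0.2, 1.2, 1.5)))  # enlarge
```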