• Title/Summary/Keyword: Gesture-based Interaction


3D Feature Based Tracking using SVM

  • Kim, Se-Hoon; Choi, Seung-Joon; Kim, Sung-Jin; Won, Sang-Chul
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1458-1463 / 2004
  • Tracking is an essential prerequisite for many applications such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Many of these applications now require 3D information about objects in real time. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the camera's image formation process; for this reason, many vision systems use a stereo camera for 3D tracking. 3D feature-based tracking (3DFBT), one such stereo-vision tracking approach, has many advantages over other tracking methods. Assuming the correspondence problem, one of the subproblems of 3DFBT, is solved, tracking accuracy depends on the accuracy of camera calibration. However, existing calibration methods are based on an exact camera model, so modelling error and sensitivity to lens distortion are built in. This paper therefore proposes a 3D feature-based tracking method that uses an SVM to solve the reconstruction problem (a rough sketch of the idea follows below).
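
The abstract does not give the regression details, so the following is only a minimal sketch of the idea, assuming an SVM regressor (scikit-learn's SVR, with made-up focal length, baseline, and hyperparameters) learns the stereo 2D-to-3D mapping directly from known point correspondences instead of going through an explicitly calibrated camera model:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Toy ground truth: 3D points seen by an ideal stereo pair with focal
# length f and baseline b (hypothetical values, not from the paper).
rng = np.random.default_rng(0)
world_xyz = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 4.0], size=(300, 3))
f, b = 500.0, 0.12
u_left = f * world_xyz[:, 0] / world_xyz[:, 2]
u_right = f * (world_xyz[:, 0] - b) / world_xyz[:, 2]
v = f * world_xyz[:, 1] / world_xyz[:, 2]
stereo_px = np.stack([u_left, v, u_right, v], axis=1)  # (uL, vL, uR, vR)

# One RBF-kernel SVR per output coordinate: the learned mapping absorbs
# the camera geometry (and, on real data, lens distortion) instead of
# relying on an explicit camera model.
reconstructor = MultiOutputRegressor(SVR(kernel="rbf", C=100.0))
reconstructor.fit(stereo_px, world_xyz)

# At tracking time, matched left/right feature positions map straight
# to a 3D estimate without explicit triangulation.
print(reconstructor.predict(stereo_px[:1]), "vs. true", world_xyz[0])
```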


Effects of whole body movements in using virtual reality headsets on visually induced motion sickness (전신 움직임을 요구하는 컨트롤러가 가상현실 디바이스에서 시지각과 가상현실 멀미에 끼치는 영향)

  • Kim, Sung-ho; Shin, Dong-Hee
    • Journal of Digital Contents Society / v.18 no.2 / pp.283-291 / 2017
  • Although new body-movement-based input systems have emerged in virtual reality (VR), VR must still overcome visually induced motion sickness (VIMS) to gain user acceptance. VIMS is caused by changes in visually perceived movement that conflict with the vestibular system's sense of movement. Not only head and body movements but also command hand gestures and torso movement can affect visual movement perception by enhancing immersion and its psychological product, presence. This raises the question: do whole-body movements and command hand gestures have a stronger effect on arousal, presence, and VIMS? To address this question, we conducted a between-subjects experiment with a "2 (IV1: head-body movements only vs. whole-body movements) * 1" design. The results showed a significant effect of whole-body movements on arousal and a marginally significant effect on presence. Eyewear usage moderated the relationship between hand gesture and presence.

Design and Performance Analysis of ML Techniques for Finger Motion Recognition (손가락 움직임 인식을 위한 웨어러블 디바이스 설계 및 ML 기법별 성능 분석)

  • Jung, Woosoon; Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems / v.25 no.2 / pp.129-136 / 2020
  • Recognizing finger movements is an intuitive way of realizing human-computer interaction. In this study, we implement a wearable device for finger motion recognition and evaluate the accuracy of several machine learning (ML) techniques. We compare not only the hidden Markov model (HMM) and dynamic time warping (DTW) techniques traditionally used for time-series analysis, but also a neural network (NN) technique, and analyze the accuracy of each. To minimize the computational requirement, we also apply pre-processing to each ML technique. Our extensive evaluations demonstrate that the NN-based gesture recognition system achieves 99.1% recognition accuracy, while HMM and DTW achieve 96.6% and 95.9% recognition accuracy, respectively (a DTW baseline of this kind is sketched below).
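
Of the three techniques compared above, DTW is the simplest to show compactly. The following is a minimal nearest-neighbour sketch under assumed conditions: generic (T, d) sensor sequences and two toy template gestures, none of which come from the paper:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two (T, d) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Best of match, insertion, deletion at each cell.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify(query, templates):
    """templates: list of (label, sequence); return the nearest label."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

# Toy example: two gesture classes with different temporal shapes.
t = np.linspace(0.0, 1.0, 50)[:, None]
templates = [("tap", np.sin(2 * np.pi * t)), ("swipe", t ** 2)]
print(classify(np.sin(2 * np.pi * t * 1.1), templates))  # -> "tap"
```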

Multimodal Interaction Framework for Collaborative Augmented Reality in Education

  • Asiri, Dalia Mohammed Eissa; Allehaibi, Khalid Hamed; Basori, Ahmad Hoirul
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.268-282 / 2022
  • Augmented reality (AR) is one of today's most important technologies; it lets users experience the real world combined with virtual objects. This technology has been applied in many sectors, such as shopping and medicine, and has also entered education, where it is widely used because of its effectiveness. It has many benefits, such as arousing students' interest in imaginative concepts that are difficult to understand. Studies have also shown that collaboration between students increases learning opportunities through the exchange of information, an approach known as collaborative learning. Multimodal input creates a distinctive and engaging experience, especially for students, because it increases users' interaction with the technology. This research develops a framework for improving the achievement of 6th graders by integrating a collaborative framework with multimodal input (hand gesture and touch), aiming for an effective, fun, and easy-to-use multimodal AR experience. The framework was applied to reformulate the genetics and traits lesson (science textbook, 6th grade, first semester, second lesson) in an interactive manner: a video was created based on the science teachers' consultations, along with a puzzle game into which the lesson images were inserted, and the framework relied on cooperation between students to solve the questions. The findings showed a significant difference between the experimental group's post-test and pre-test mean scores in the science course at the levels of remembering, understanding, and applying, which indicates the success of the framework; in addition, 43 students preferred using the framework over traditional instruction.

A Study on Flow-emotion-state for Analyzing Flow-situation of Video Content Viewers (영상콘텐츠 시청자의 몰입상황 분석을 위한 몰입감정상태 연구)

  • Kim, Seunghwan; Kim, Cheolki
    • Journal of Korea Multimedia Society / v.21 no.3 / pp.400-414 / 2018
  • Today's video content is expected to interact with its viewers in order to provide a more personalized experience than before. To provide a friendly experience from the video content system's perspective, the viewer's situation must first be understood and analyzed. For this purpose, it is effective to infer the viewer's state from his or her behavior while watching the content, and to classify that behavior into the viewer's emotion and state during flow. The term 'Flow-emotion-state' presented in this study denotes the state of a viewer, inferred from the emotions that subsequently arise in relation to the video content while the viewer is already engaged in viewing. A viewer's Flow-emotion-state can be used to identify the characteristics of the viewer's flow situation by observing and analyzing the gestures and facial expressions that serve as the viewer's input modalities to the video content.

Presentation Training System based on 3D Virtual Reality (3D 가상현실기반의 발표훈련시스템)

  • Jung, Young-Kee
    • The Journal of the Convergence on Culture Technology / v.4 no.4 / pp.309-316 / 2018
  • In this study, we propose a 3D virtual-reality-based presentation training system that recreates a realistic presentation environment so that users can go on to present confidently in the real world. The proposed system provides a realistic and highly engaging presentation and interview environment by analyzing the speaker's voice and behavior in real time and reflecting them in the audience of the virtual space. Using a 6DOF-tracked HMD and VR controller together with Kinect, the presenter can interact with the virtual space, and the virtual space can be switched among various settings chosen by the user. The presenter studies the presentation files and scripts displayed in separate views within the virtual space to understand the content and master the presentation.

A Visual Programming Environment on Tablet PCs to Control Industrial Robots (산업용 로봇 제어를 위한 태블릿 PC 기반의 비주얼 프로그래밍 연구)

  • Park, Eun Ji; Seo, Kyeong Eun; Park, Tae Gon; Sun, Duk Han; Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering / v.5 no.2 / pp.107-116 / 2016
  • Industrial robots have usually been controlled using text-based programming languages provided by each manufacturer, through its button-based TP (teaching pendant) terminal. Considering that the people who operate TPs on manufacturing sites are mostly unskilled, with no background in computer programming, these text-based languages driven through button-based interaction are too difficult for them to learn and use. To overcome the weaknesses of text-based programming languages, we propose a visual programming language that can be used easily on gesture-enabled devices. In our visual programming environment, each command is represented as a block, and robots are controlled by stacking those blocks with drag-and-drop gestures, which even beginners can easily learn (a sketch of this block-stack model follows below). In this paper, we use a widely available device, the tablet PC, as the gesture-enabled TP. Because a tablet PC has limited display space compared to a PC environment, we designed several different sets of command blocks and conducted user tests. Based on the experimental results, we propose an effective set of command blocks for the tablet PC environment.
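
As a rough illustration of that block-stacking model, the sketch below represents a program as an ordered list of command blocks executed in sequence; the block names (move_to, grip) and the LoggingRobot stand-in are hypothetical, not the paper's actual command set:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One visual command block: a name plus its parameters."""
    name: str
    params: dict = field(default_factory=dict)

    def execute(self, robot):
        # Dispatch to the robot method named by the block.
        getattr(robot, self.name)(**self.params)

class LoggingRobot:
    """Stand-in for a real robot controller; prints each call."""
    def move_to(self, x, y, z):
        print(f"move_to({x}, {y}, {z})")
    def grip(self, close):
        print(f"grip(close={close})")

# The "program" a user builds by stacking blocks on the tablet.
stack = [
    Block("move_to", {"x": 10, "y": 0, "z": 5}),
    Block("grip", {"close": True}),
    Block("move_to", {"x": 0, "y": 0, "z": 5}),
]
robot = LoggingRobot()
for block in stack:
    block.execute(robot)
```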

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji; Park, Jae-Wan; Song, Dae-Hyeon; Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition is commonly used to convey human gestures in HCI (human-computer interaction). 2D pose-model-based methods recognize simple 2D human poses in particular environments. In contrast, a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses than a 2D pose model because it can use the joint angles and shape information of each body part. In this paper, we describe the development of interactive game content using a pose recognition interface based on 3D body-joint information. Our system is designed so that users can control the game content with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates consisting of 3D information for 14 body joints (a minimal sketch of this matching follows below). We implemented the game content with our pose recognition system and confirmed the efficiency of the proposed system. In future work, we will improve the system so that poses can be recognized robustly in various environments.
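
A minimal sketch of that template matching, assuming a pose is a (14, 3) array of joint positions; the normalization scheme, distance threshold, and random template poses are illustrative assumptions rather than the paper's method:

```python
import numpy as np

def normalize(pose: np.ndarray) -> np.ndarray:
    """Center a (14, 3) pose on its centroid and scale to unit size,
    so matching is invariant to where and how large the person is."""
    centered = pose - pose.mean(axis=0)
    return centered / np.linalg.norm(centered)

def recognize(pose, templates, threshold=0.5):
    """Return the label of the nearest template, or None if too far."""
    q = normalize(pose)
    label, dist = min(
        ((name, np.linalg.norm(q - normalize(t))) for name, t in templates.items()),
        key=lambda x: x[1],
    )
    return label if dist < threshold else None

# Toy templates and a noisy observation of one of them.
rng = np.random.default_rng(1)
templates = {"arms_up": rng.normal(size=(14, 3)), "t_pose": rng.normal(size=(14, 3))}
current = templates["arms_up"] + rng.normal(scale=0.05, size=(14, 3))
print(recognize(current, templates))  # -> "arms_up"
```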

An ANN-based gesture recognition algorithm for smart-home applications

  • Huu, Phat Nguyen; Minh, Quang Tran; The, Hoang Lai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.5 / pp.1967-1983 / 2020
  • The goal of this paper is to analyze and build an algorithm for recognizing hand gestures in smart-home applications. The proposed algorithm combines image processing techniques with artificial neural network (ANN) approaches to help users interact with computers through common gestures. We use five types of gestures: Stop, Forward, Backward, Turn Left, and Turn Right. Users control devices through a camera connected to a computer; the algorithm analyzes the gestures and performs the appropriate action according to the user's request. The results show that the average accuracy of the proposed algorithm is 92.6 percent for images and more than 91 percent for video, both of which satisfy the performance requirements of real-world applications, specifically smart-home services. The processing time is approximately 0.098 seconds on datasets at 10 frames/sec. However, the accuracy still depends on the number of training images (or videos) and their resolution. A toy version of such a classifier is sketched below.
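
A toy version of such a pipeline: crude segment-and-flatten preprocessing followed by a small neural network over the five gesture classes. The synthetic frames, binarization threshold, and network shape are assumptions, not the authors' design:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

GESTURES = ["stop", "forward", "backward", "turn_left", "turn_right"]
rng = np.random.default_rng(0)

def make_frame(gesture_id: int) -> np.ndarray:
    """Toy 32x32 'camera frame': a bright block whose horizontal
    position stands in for the segmented hand shape of each gesture."""
    frame = rng.normal(scale=0.1, size=(32, 32))
    frame[:, gesture_id * 6 : gesture_id * 6 + 6] += 1.0
    return frame

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the paper's image processing: binarize and flatten."""
    return (frame > 0.5).astype(np.float32).ravel()

# 40 frames per gesture class, in class order to match the labels.
X = np.vstack([preprocess(make_frame(g)) for g in range(5) for _ in range(40)])
y = np.repeat(GESTURES, 40)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X, y)
print(clf.predict(preprocess(make_frame(0)).reshape(1, -1)))  # -> ['stop']
```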

Hand Gesture Recognition Method based on the MCSVM for Interaction with 3D Objects in Virtual Reality (가상현실 3D 오브젝트와 상호작용을 위한 MCSVM 기반 손 제스처 인식)

  • Kim, Yoon-Je; Koh, Tack-Kyun; Yoon, Min-Ho; Kim, Tae-Young
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.1088-1091 / 2017
  • As graphics-based virtual reality technology has advanced and attracted growing interest, hand gesture recognition has been actively studied as a method for natural interaction with 3D objects. This paper proposes MCSVM-based hand gesture recognition for interaction with 3D objects in virtual reality. First, various hand gestures are captured with a Leap Motion sensor and the pre-processed hand data is passed on. The hand data is then given a first-stage classification with a binary decision tree, resampled, and converted into a chain code, and a histogram of the chain code forms the feature data. Based on these features, a second-stage classification with a trained MCSVM recognizes the gesture (a sketch of this pipeline follows below). Experimental results showed an average recognition rate of 99.2% over 16 command gestures for interacting with 3D objects, and an affective evaluation against a mouse interface showed that the proposed method enables more intuitive and user-friendly interaction than mouse input. This suggests that it can serve as an input interface in many VR application areas such as games, learning simulations, design, and medicine, and that it helps increase immersion in virtual reality.
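
A minimal sketch of the chain-code stage of that pipeline, assuming 2D trajectories: toy gestures are converted to 8-direction Freeman chain codes, histogrammed, and classified with scikit-learn's SVC (one-vs-one multi-class, standing in for the paper's MCSVM). The Leap Motion capture, resampling details, and first-stage decision tree are omitted:

```python
import numpy as np
from sklearn.svm import SVC

def chain_code_histogram(points: np.ndarray) -> np.ndarray:
    """8-direction Freeman chain code histogram of a (T, 2) trajectory."""
    deltas = np.diff(points, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])
    codes = np.round(angles / (np.pi / 4)).astype(int) % 8
    hist = np.bincount(codes, minlength=8).astype(float)
    return hist / hist.sum()

# Toy gestures: circles vs. horizontal swipes, with a little noise.
rng = np.random.default_rng(0)

def circle() -> np.ndarray:
    t = np.linspace(0.0, 2.0 * np.pi, 64)
    pts = np.stack([np.cos(t), np.sin(t)], axis=1)
    return pts + rng.normal(scale=0.003, size=pts.shape)

def swipe() -> np.ndarray:
    t = np.linspace(0.0, 1.0, 64)
    pts = np.stack([t, np.zeros_like(t)], axis=1)
    return pts + rng.normal(scale=0.003, size=pts.shape)

X = np.array([chain_code_histogram(g()) for g in [circle] * 30 + [swipe] * 30])
y = [0] * 30 + [1] * 30
clf = SVC(kernel="rbf")  # SVC handles multi-class one-vs-one out of the box
clf.fit(X, y)
print(clf.predict([chain_code_histogram(circle())]))  # -> [0]
```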