• Title/Summary/Keyword: Hand-motion recognition


Interaction Technique in Smoke Simulations using Mouth-Wind on Mobile Devices (모바일 디바이스에서 사용자의 입 바람을 이용한 연기 시뮬레이션의 상호작용 방법)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.24 no.4 / pp.21-27 / 2018
  • In this paper, we propose a real-time interaction method that uses the user's mouth wind on a mobile device. In mobile and virtual reality, user interaction technology is important, but the range of available user interface methods is still limited: most interaction techniques rely on screen touch or motion recognition. In this study, we propose an interface technology that supports real-time interaction using the user's mouth wind. The direction of the wind is determined from the angle and position between the user and the mobile device, and the strength of the wind is calculated from the magnitude of the user's blowing. To demonstrate the effectiveness of the proposed technique, we integrate the mouth-wind interface into a Navier-Stokes solver and visualize the flow of the resulting vector field in real time. We present our results on mobile devices, but the method can also be applied in augmented reality (AR) and virtual reality (VR) applications that require such interface technology.
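
A minimal sketch of the kind of force mapping the abstract describes, assuming hypothetical inputs (a microphone level as the wind strength, a user/device angle as the wind direction); the paper's actual solver and interface details are not given in the abstract:

```python
import numpy as np

def mouth_wind_force(grid_shape, entry_cell, direction_deg, amplitude, radius=6.0):
    """Convert a detected mouth-wind event into an external force field.

    entry_cell    -- (x, y) cell where the wind enters the simulation domain
    direction_deg -- wind direction estimated from the user/device angle
    amplitude     -- wind strength estimated from the microphone input level
    """
    h, w = grid_shape
    theta = np.deg2rad(direction_deg)
    dx, dy = np.cos(theta), np.sin(theta)
    ys, xs = np.mgrid[0:h, 0:w]
    # Gaussian falloff around the entry point keeps the force local.
    falloff = np.exp(-((xs - entry_cell[0])**2 + (ys - entry_cell[1])**2) / radius**2)
    return amplitude * dx * falloff, amplitude * dy * falloff

# Each frame, the force would be added to the velocity field of a
# Navier-Stokes solver before advection: u += dt * fx; v += dt * fy.
```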

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.125-132 / 2021
  • In this paper, we propose a method for constructing a collaborative mixed reality environment and for working with wearable IoT devices. Mixed reality (MR) combines virtual reality and augmented reality, so users can view real and virtual objects at the same time. Unlike VR, MR HMDs do not cause motion sickness; they are wireless and are attracting attention as a technology to be applied in industrial fields. The Myo wearable device enables arm rotation tracking and hand gesture recognition using a three-axis sensor, an EMG sensor, and an accelerometer. Although various MR-related studies are in progress, discussion of environments in which multiple people can participate in mixed reality and manipulate virtual objects with their own hands remains insufficient. In this paper, we propose a method of constructing an environment where collaboration is possible, together with an interaction method for smooth manipulation, in order to apply mixed reality in real industrial fields. As a result, two people could participate in the mixed reality environment at the same time, sharing a unified state for each object, and each could interact with it using the Myo wearable interface device.
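
The abstract does not detail the gesture-to-manipulation mapping, but a minimal sketch under assumed gesture labels ("fist", "spread") and a generic object API (set_rotation) might look like the following; the network layer that keeps both participants' object state unified is likewise omitted here:

```python
from dataclasses import dataclass

@dataclass
class WearableSample:
    gesture: str            # e.g. "fist", "spread", "rest" (assumed labels)
    arm_quaternion: tuple   # arm orientation from the wearable's IMU

class VirtualObjectController:
    """Maps wearable gestures to manipulation of a shared virtual object.

    In a multi-user session, each participant runs one controller and the
    resulting transform is broadcast so both users see a unified object.
    """
    def __init__(self, obj):
        self.obj = obj
        self.grabbing = False

    def on_sample(self, sample: WearableSample):
        if sample.gesture == "fist":        # closed fist starts a grab
            self.grabbing = True
        elif sample.gesture == "spread":    # open hand releases the object
            self.grabbing = False
        if self.grabbing:
            # While grabbed, the object follows the tracked arm rotation.
            self.obj.set_rotation(sample.arm_quaternion)
```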

Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un;Whang, Min-Cheol;Lee, Jung-Nyun;Heo, Hwan;Jeong, Yong-Mu
    • Science of Emotion and Sensibility / v.14 no.4 / pp.627-636 / 2011
  • The purpose of this study is to develop a multi-modal interaction system that provides a realistic and immersive experience through multi-modal interaction. The system recognizes user behavior, intention, and attention, which overcomes the limitations of uni-modal interaction. The system is based on gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods were based on sensors selected by analyzing the accuracy of 3-D gesture recognition technologies using meta-analysis. The elements of intuitive gesture interaction were derived from experimental results, and the attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion cognitive system, which uses an accelerometer and flexible sensors to recognize the user's hand and finger movements; an eye-gaze detecting system, which detects pupil movements and reactions; and a bio-reaction sensing (attention evaluating) system, which tracks cardiovascular and skin temperature reactions. This study will be used for the development of realistic digital entertainment technology.
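
As an illustration of how the three modules could feed one interaction loop, here is a rough sketch; the thresholds, weights, and baselines are placeholders, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class ModalityFrame:
    hand_motion: tuple   # accelerometer and flex-sensor readings
    gaze_point: tuple    # (x, y) from the eye-gaze detecting module
    heart_rate: float    # cardiovascular reading, bio-reaction module
    skin_temp: float     # skin temperature, bio-reaction module

def estimate_attention(f: ModalityFrame, hr_base=70.0, temp_base=33.0):
    """Fuse physiological signals into a rough attention score in [0, 1]."""
    hr = min(max((f.heart_rate - hr_base) / 30.0, 0.0), 1.0)
    temp = min(max((f.skin_temp - temp_base) / 2.0, 0.0), 1.0)
    return 0.6 * hr + 0.4 * temp

def dispatch(f: ModalityFrame):
    """Gate gesture input by attention, and use gaze to pick the target."""
    if estimate_attention(f) > 0.5:
        return ("interact", f.gaze_point, f.hand_motion)
    return ("idle", None, None)
```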


Hand Gesture Segmentation Method using a Wrist-Worn Wearable Device

  • Lee, Dong-Woo;Son, Yong-Ki;Kim, Bae-Sun;Kim, Minkyu;Jeong, Hyun-Tae;Cho, Il-Yeon
    • Journal of the Ergonomics Society of Korea / v.34 no.5 / pp.541-548 / 2015
  • Objective: We introduce a hand gesture segmentation method using a wrist-worn wearable device that can recognize the simple gestures of clenching and unclenching one's fist. Background: There are many types of smart watches and fitness bands on the market, and most of them already adopt gesture interaction for ease of use. However, a user's gesture commands are often hard to distinguish from everyday motions, which causes malfunctions; a simple and clear gesture segmentation method is needed to improve gesture interaction performance. Method: First, we defined the gestures of making a fist (start of a gesture command) and opening one's fist (end of a gesture command) as segmentation gestures; clenching and unclenching one's fist is simple and intuitive. We also designed a single gesture as a sequence of making a fist, a command gesture, and opening one's fist. To detect the segmentation gestures at the bottom of the wrist, we used a wrist strap on which an array of infrared sensors (emitters and receivers) was mounted. When the user makes or opens a fist, the shape of the bottom of the wrist changes, which simultaneously changes the amount of reflected infrared light detected by the receiver sensors. Results: An experiment was conducted to evaluate gesture segmentation performance. Twelve participants took part: 10 males and 2 females, with an average age of 38. The recognition rates of the segmentation gestures, clenching and unclenching one's fist, were 99.58% and 100%, respectively. Conclusion: Through the experiment, we evaluated gesture segmentation performance and its usability; the results show the potential of the suggested segmentation method. Application: The results of this study can be used to develop guidelines to prevent injury in auto workers at mission assembly plants.
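
The clench/command/unclench structure maps naturally onto a small state machine. A sketch follows, with illustrative thresholds on the averaged reflected-IR level (the paper's detection method and values are not specified in the abstract):

```python
class GestureSegmenter:
    """Segments gesture commands between fist-clench and fist-open markers."""
    CLENCH_T, UNCLENCH_T = 0.7, 0.3   # illustrative reflectance thresholds

    def __init__(self):
        self.in_gesture = False
        self.samples = []

    def feed(self, ir_values, motion_sample):
        """ir_values: reflected-IR intensities from the wrist-strap array."""
        level = sum(ir_values) / len(ir_values)
        if not self.in_gesture and level > self.CLENCH_T:
            self.in_gesture = True        # fist clenched: command starts
            self.samples = []
        elif self.in_gesture and level < self.UNCLENCH_T:
            self.in_gesture = False       # fist opened: command ends
            return list(self.samples)     # motion captured between markers
        elif self.in_gesture:
            self.samples.append(motion_sample)
        return None                       # no complete command yet
```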

Secondary camera position optimization for observing the close space between objects (근접한 물체 사이의 공간 관찰을 위한 보조 카메라 위치 최적화)

  • Lee, Ji Hye;Han, Yun Ha;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.33-41 / 2018
  • We present a secondary-camera optimization method that helps a user exploring a 3D virtual environment precisely observe possible collisions between objects. The first role of our secondary camera is to automatically detect the area where a collision is most likely. The second role is to show the detected area from an angle of view that the current main camera cannot provide. However, when the shapes of the target objects are complex, the empty space between them is also complex and narrow, so the feasible space for the secondary camera's position is highly constrained and its optimization can be very difficult. To avoid this difficulty and increase the efficiency of the optimization, we first compute a bisector surface between the two target objects and then restrict the domain of the secondary camera's position to that surface during optimization. To verify the utility of our method, we built a demonstration program in which the user can explore a 3D virtual world and interact with objects using a hand-motion recognition device, and we conducted a user study.
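
A simplified sketch of the constrained search, using point clouds in place of meshes and nearest-point distances in place of the paper's objective (which the abstract does not spell out):

```python
import numpy as np

def on_bisector(p, pts_a, pts_b, tol=0.05):
    """True if point p is roughly equidistant from objects A and B, i.e.
    near their bisector surface (a point-cloud approximation)."""
    da = np.min(np.linalg.norm(pts_a - p, axis=1))
    db = np.min(np.linalg.norm(pts_b - p, axis=1))
    return abs(da - db) < tol

def best_secondary_camera(candidates, pts_a, pts_b, focus):
    """Among candidate positions restricted to the bisector surface, pick
    the one closest to the likely-collision region 'focus' (a stand-in
    for a visibility-based objective)."""
    feasible = [p for p in candidates if on_bisector(p, pts_a, pts_b)]
    return min(feasible, key=lambda p: np.linalg.norm(p - focus))
```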