• Title/Summary/Keyword: virtual interaction


Provision of Effective Spatial Interaction for Users in Advanced Collaborative Environment (지능형 협업 환경에서 사용자를 위한 효과적인 공간 인터랙션 제공)

  • Ko, Su-Jin;Kim, Jong-Won
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.677-684 / 2009
  • With various sensor-network and ubiquitous technologies, the interaction area can be extended from the virtual domain to the physical-space domain. Spatial interaction differs from traditional interaction in that the latter is mainly performed directly on the computer that is the target machine or that provides the interaction tools, whereas spatial interaction is performed indirectly, between users equipped with smart interaction tools and the many distributed components of the space. This gives users a way to control all manageable space components by registering and recognizing objects. Finally, this paper provides an effective spatial interaction method based on a template-based task mapping algorithm, ranked by historical interaction data, to support the user's intended task. We then analyze how much the task mapping algorithm improves system performance and conclude by introducing a GUI method for visualizing the results of spatial interaction.
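
The paper itself does not include code; the following is a minimal Python sketch of what a template-based task mapping ranked by historical interaction data could look like. The `TaskTemplate`/`TaskMapper` names, the object-coverage matching rule, and the frequency-based ranking are illustrative assumptions, not the authors' algorithm.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TaskTemplate:
    """A candidate task described by the space components it requires."""
    name: str
    required_objects: frozenset   # e.g. frozenset({"projector", "screen"})

@dataclass
class TaskMapper:
    templates: list
    history: Counter = field(default_factory=Counter)  # past confirmed tasks

    def rank(self, observed_objects):
        """Templates whose required objects are covered by the objects the
        user just interacted with, most frequently confirmed first."""
        observed = frozenset(observed_objects)
        candidates = [t for t in self.templates if t.required_objects <= observed]
        return sorted(candidates, key=lambda t: self.history[t.name], reverse=True)

    def confirm(self, template):
        """Record the user's final choice so future rankings improve."""
        self.history[template.name] += 1

# usage: after the user touches the projector and screen, "presentation" ranks first
mapper = TaskMapper(templates=[
    TaskTemplate("presentation", frozenset({"projector", "screen"})),
    TaskTemplate("lighting_control", frozenset({"light"})),
])
mapper.confirm(mapper.templates[0])
print([t.name for t in mapper.rank({"projector", "screen", "light"})])
```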


Effects of Image Resolution and HMD Luminance on Virtual Reality Viewing Experience (영상의 해상도와 HMD의 휘도가 가상현실 시청 경험에 미치는 영향)

  • Lee, Hyejin;Chung, Donghun
    • Journal of Broadcast Engineering / v.23 no.1 / pp.74-85 / 2018
  • This research investigated the interaction effect of video resolution and device luminance on perceived characteristics, presence, and fatigue when viewing virtual reality. The experiment used a mixed design, with resolution at three levels (HD, 2K, and 4K) and luminance at three levels (20, 60, and 100). Participants watched 6-minute videos in random order and answered a questionnaire after each video. The results showed no interaction effect. However, there were statistically significant effects of luminance on depth perception and of resolution on visual intervention and on adjustment fatigue. Higher resolution and luminance also yielded higher cognitive function, presence, and fatigue.

Input Device for Immersive Virtual Education (몰입형 가상교육을 위한 입력장치)

  • Jeong, GooCheol;Im, SungMin;Kim, Sang-Youn
    • The Journal of Korean Institute for Practical Engineering Education / v.5 no.1 / pp.34-39 / 2013
  • This paper proposes an input device that allows a user not only to interact naturally with educational content in a virtual environment but also to sense haptic feedback according to his/her interaction. The proposed system measures the user's motion and then creates haptic feedback based on the measured position. To create haptic information in response to the user's interaction with educational content in the virtual environment, we developed a motion input device that consists of a motion controller, a haptic actuator, a wireless communication module, and a motion sensor. An accelerometer is used as the motion sensor to measure the user's motion input. The experiment shows that the proposed system creates a continuous haptic sensation without any jerky motion or vibration.
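
For intuition only, here is a toy single-axis Python loop of the kind of processing such a device implies: accelerometer samples are low-pass filtered, integrated to a position estimate, and converted into a spring-like force against a virtual wall. The filter constant, stiffness, wall position, and the single-axis simplification are assumptions for illustration, not the paper's controller.

```python
class MotionHapticLoop:
    """Toy 1-axis loop: smooth accelerometer input, integrate to position,
    and command a spring-like force when a virtual wall is penetrated."""

    def __init__(self, dt=0.001, alpha=0.1, stiffness=200.0, wall=0.05):
        self.dt = dt                # control period [s]
        self.alpha = alpha          # low-pass coefficient in (0, 1]
        self.stiffness = stiffness  # virtual wall stiffness [N/m]
        self.wall = wall            # wall position along the axis [m]
        self.acc_f = 0.0            # filtered acceleration
        self.vel = 0.0
        self.pos = 0.0

    def step(self, acc_raw):
        # filtering suppresses sensor noise that would make the force jerky
        self.acc_f += self.alpha * (acc_raw - self.acc_f)
        # naive double integration (drifts in practice and would need correction)
        self.vel += self.acc_f * self.dt
        self.pos += self.vel * self.dt
        # penalty force only while the estimated position penetrates the wall
        penetration = self.pos - self.wall
        return -self.stiffness * penetration if penetration > 0 else 0.0

# usage: feed one accelerometer sample per control period
loop = MotionHapticLoop()
force = loop.step(acc_raw=0.3)   # value sent to the haptic actuator
```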


Control of Haptic Hand Controller Using Collision Detection Algorithm (충돌감지 알고리듬을 적용한 햅틱 핸드 컨트롤러의 제어)

  • 손원선;조경래;송재복
    • Proceedings of the Korean Society of Precision Engineering Conference / 2003.06a / pp.992-995 / 2003
  • A haptic device operated by the user's hand can receive information on the position and orientation of the hand and display the force and moment generated in the virtual environment to the hand. For realistic haptic display, detailed information on collisions between objects is necessary. In the past, a point-based graphic environment has been used, in which the end effector of the haptic device is represented as a point and the interaction of this point with the virtual environment is investigated. In this paper, a shape-based graphic environment is proposed, in which the interaction of the shape with the environment is considered in order to analyze collision or contact more accurately. To this end, the Gilbert-Johnson-Keerthi (GJK) algorithm is adopted to compute collision points and collision instants between two shapes in 3D space. The 5-DOF haptic hand controller is used with the GJK algorithm to demonstrate a peg-in-hole operation in the virtual environment. Various experiments show that the shape-based representation with the GJK algorithm provides more realistic haptic display for peg-in-hole operations.
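
GJK itself is a standard algorithm; for reference, below is a minimal 2D boolean version in Python. It only answers whether two convex shapes overlap (not the collision points and instants computed in the paper), and representing each shape as a list of vertices is an assumption for illustration.

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def neg(a):
    return (-a[0], -a[1])

def triple(a, b, c):
    """(a x b) x c using only the out-of-plane component of a x b."""
    z = a[0] * b[1] - a[1] * b[0]
    return (-z * c[1], z * c[0])

def support(verts_a, verts_b, d):
    """Farthest point of the Minkowski difference A - B in direction d."""
    pa = max(verts_a, key=lambda p: dot(p, d))
    pb = max(verts_b, key=lambda p: dot(p, neg(d)))
    return sub(pa, pb)

def gjk_intersect(verts_a, verts_b, max_iter=32):
    """True if the two convex vertex sets overlap (origin lies in A - B)."""
    d = (1.0, 0.0)
    simplex = [support(verts_a, verts_b, d)]
    d = neg(simplex[0])
    for _ in range(max_iter):
        p = support(verts_a, verts_b, d)
        if dot(p, d) <= 0:
            return False              # no support point past the origin
        simplex.append(p)
        if len(simplex) == 2:         # segment case: search toward the origin
            b, a = simplex            # a is the newest vertex
            ab, ao = sub(b, a), neg(a)
            d = triple(ab, ao, ab)
        else:                         # triangle case: test the two edge regions
            c, b, a = simplex
            ab, ac, ao = sub(b, a), sub(c, a), neg(a)
            ab_perp = triple(ac, ab, ab)
            ac_perp = triple(ab, ac, ac)
            if dot(ab_perp, ao) > 0:
                simplex, d = [b, a], ab_perp
            elif dot(ac_perp, ao) > 0:
                simplex, d = [c, a], ac_perp
            else:
                return True           # origin enclosed by the simplex
    return False

# usage: two overlapping squares (e.g. peg and hole cross-sections)
print(gjk_intersect([(0, 0), (2, 0), (2, 2), (0, 2)],
                    [(1, 1), (3, 1), (3, 3), (1, 3)]))   # True
```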


Effective 3D Object Selection Interface in Non-immersive Virtual Environment (비몰입형 가상환경에서 효과적인 3D객체선택 인터페이스)

  • 한덕수;임윤호;최윤철;임순범
    • Journal of Korea Multimedia Society / v.6 no.5 / pp.896-908 / 2003
  • The interaction technique in a 3D virtual environment is a decisive factor that affects the immersion and presence felt by users in the virtual space. Especially in fields that require precise manipulation of objects, such as electronic manuals in a desktop environment, interaction techniques that support effective and natural manipulation of objects are in demand. In this paper, the 3D scene graph is internally divided and reconstructed into a candidate list depending on the user's selection, and by moving the focus through the list of selection candidates, the user can select a 3D object more accurately. Also, by providing various kinds of feedback at each manipulation stage, more effective manipulation is possible. The proposed technique can be used as a 3D user interface in areas that require precise object manipulation.
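
As a rough illustration of focus cycling over a selection candidate list, the Python sketch below collects every object hit by a pick, sorts the candidates by depth, and lets the user step the focus before confirming; the method names, the dictionary-based pick records, and the highlight callback are assumptions, not the paper's scene-graph reconstruction.

```python
class CandidateSelector:
    """Toy focus-cycling selector: the pick step returns every object under
    the cursor; the user then steps through this candidate list and confirms."""

    def __init__(self, highlight):
        self.highlight = highlight   # feedback callback, e.g. outline the node
        self.candidates = []
        self.index = -1

    def begin(self, picked_nodes):
        """Start a selection with all nodes hit by the pick ray, nearest first."""
        self.candidates = sorted(picked_nodes, key=lambda n: n["depth"])
        self.index = 0 if self.candidates else -1
        self._focus()

    def next(self):
        """Move the focus to the next candidate (wraps around)."""
        if self.candidates:
            self.index = (self.index + 1) % len(self.candidates)
            self._focus()

    def confirm(self):
        """Return the currently focused node as the final selection."""
        return self.candidates[self.index] if self.index >= 0 else None

    def _focus(self):
        if self.index >= 0:
            self.highlight(self.candidates[self.index])

# usage with hypothetical pick results (the nearest object occludes the target)
sel = CandidateSelector(highlight=lambda n: print("focus:", n["name"]))
sel.begin([{"name": "cover", "depth": 0.4}, {"name": "screw", "depth": 0.7}])
sel.next()                       # focus moves to the occluded "screw"
print(sel.confirm()["name"])
```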


Augmented System for Immersive 3D Expansion and Interaction

  • Yang, Ungyeon;Kim, Nam-Gyu;Kim, Ki-Hong
    • ETRI Journal / v.38 no.1 / pp.149-158 / 2016
  • In the field of augmented reality technologies, commercial optical see-through-type wearable displays have difficulty providing immersive visual experiences, because users perceive different depths between virtual views on display surfaces and see-through views to the real world. Many cases of augmented reality applications have adopted eyeglasses-type displays (EGDs) for visualizing simple 2D information, or video see-through-type displays for minimizing virtual- and real-scene mismatch errors. In this paper, we introduce an innovative optical see-through-type wearable display hardware, called an EGD. In contrast to common head-mounted displays, which are intended for a wide field of view, our EGD provides more comfortable visual feedback at close range. Users of an EGD device can accurately manipulate close-range virtual objects and expand their view to distant real environments. To verify the feasibility of the EGD technology, subject-based experiments and analysis are performed. The analysis results and EGD-related application examples show that EGD is useful for visually expanding immersive 3D augmented environments consisting of multiple displays.

Volume Haptic Rendering Algorithm for Realistic Modeling (실감형 모델링을 위한 볼륨 햅틱 렌더링 알고리즘)

  • Jung, Ji-Chan;Park, Joon-Young
    • Korean Journal of Computational Design and Engineering / v.15 no.2 / pp.136-143 / 2010
  • Realistic modeling aims to maximize the realism of an environment that is perceived through two or more human senses, whether in a virtual environment or through remote control. In particular, the field of haptic rendering, which provides realism through the interaction of the visual and tactile senses with a realistic model, has drawn attention. Haptic rendering calculates the force caused by model deformation during interaction with a virtual model and returns it to the user. A deformable model is more complex to render haptically than a rigid body because the deformation must be calculated inside as well as outside the model. For such models, Gibson suggested the 3D ChainMail algorithm, which uses volumetric data. However, for deformable models with non-homogeneous materials, there are discrepancies between the visual and tactile information when the force feedback is calculated in real time. Therefore, we propose a volume haptic rendering algorithm for non-homogeneous deformable objects that reflects the force feedback consistently in real time, based on the visual information (the amount of deformation), without any post-processing.
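
For intuition about ChainMail-style propagation with non-homogeneous stiffness, here is a 1D Python simplification: moving one element drags its neighbours only as far as each link's stretch limit allows, and the reaction force is the sum of per-element stiffness times displacement. The single axis, the uniform stretch limit, and the linear force law are assumptions, not the volumetric algorithm proposed in the paper.

```python
def chainmail_1d(rest_pos, stiffness, grabbed, new_x, max_stretch=0.2):
    """Move element `grabbed` to `new_x`, propagate the link constraints to
    its neighbours, and return (deformed positions, reaction force)."""
    pos = list(rest_pos)
    pos[grabbed] = new_x
    # propagate to the right: each link may deviate from its rest gap by at most max_stretch
    for i in range(grabbed + 1, len(pos)):
        gap = pos[i] - pos[i - 1]
        rest_gap = rest_pos[i] - rest_pos[i - 1]
        if gap > rest_gap + max_stretch:
            pos[i] = pos[i - 1] + rest_gap + max_stretch
        elif gap < rest_gap - max_stretch:
            pos[i] = pos[i - 1] + rest_gap - max_stretch
    # propagate to the left symmetrically
    for i in range(grabbed - 1, -1, -1):
        gap = pos[i + 1] - pos[i]
        rest_gap = rest_pos[i + 1] - rest_pos[i]
        if gap > rest_gap + max_stretch:
            pos[i] = pos[i + 1] - rest_gap - max_stretch
        elif gap < rest_gap - max_stretch:
            pos[i] = pos[i + 1] - rest_gap + max_stretch
    # force felt at the grabbed element: stiffness-weighted sum of displacements,
    # so soft regions of a non-homogeneous object contribute less than stiff ones
    force = -sum(k * (p - r) for k, p, r in zip(stiffness, pos, rest_pos))
    return pos, force

# usage: a stiff segment embedded in a soft chain (values made up)
rest = [0.0, 1.0, 2.0, 3.0, 4.0]
k = [10.0, 10.0, 100.0, 10.0, 10.0]
deformed, f = chainmail_1d(rest, k, grabbed=2, new_x=2.5)
print(deformed, f)
```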

Interactive Rehabilitation Support System for Dementia Patients

  • Kim, Sung-Ill
    • Journal of the Institute of Convergence Signal Processing / v.11 no.3 / pp.221-225 / 2010
  • This paper presents a preliminary study of an interactive rehabilitation support system for both dementia patients and their caregivers, the goal of which is to improve the quality of life (QOL) of patients suffering from dementia through virtual interaction. To achieve the virtual interaction, three kinds of recognition modules are studied, for speech, facial images, and pen-mouse gestures. The results of both practical tests and questionnaire surveys show that the proposed system needs further improvement, especially in speech recognition and in its user interface, before real-world application. The surveys also revealed that pen-mouse gesture recognition, as one possible interactive aid, has the potential to compensate for the weaknesses of speech recognition.

Dual Autostereoscopic Display Platform for Multi-user Collaboration with Natural Interaction

  • Kim, Hye-Mi;Lee, Gun-A.;Yang, Ung-Yeon;Kwak, Tae-Jin;Kim, Ki-Hong
    • ETRI Journal / v.34 no.3 / pp.466-469 / 2012
  • In this letter, we propose a dual autostereoscopic display platform employing a natural interaction method, which is useful for sharing visual data among users. To provide 3D visualization of a model to users who collaborate with each other, a beamsplitter is used with a pair of autostereoscopic displays, providing the visual illusion of a floating 3D image. To interact with the virtual object, we track the user's hands with a depth camera. The gesture recognition technique we use operates without any initialization process, such as specific poses or gestures, and supports several commands for controlling virtual objects. Experimental results show that our system performs well in visualizing 3D models in real time and handling them under unconstrained conditions, such as complicated backgrounds or a user wearing short sleeves.
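
As a loose illustration of mapping tracked hand states to object-control commands, consider the Python sketch below; the state fields, thresholds, and command names are hypothetical and are not the recognizer described in this letter.

```python
def interpret_gesture(hands):
    """Map tracked hand states to object-control commands.  `hands` is a list
    of dicts like {"pos": (x, y, z), "open": bool} produced by a depth-camera
    hand tracker (an assumed interface, for illustration only)."""
    if len(hands) == 1:
        h = hands[0]
        # a closed hand drags the object; an open hand only hovers
        return ("move", h["pos"]) if not h["open"] else ("hover", h["pos"])
    if len(hands) == 2:
        a, b = hands
        if not a["open"] and not b["open"]:
            # two closed hands: the distance between them drives uniform scaling
            dist = sum((p - q) ** 2 for p, q in zip(a["pos"], b["pos"])) ** 0.5
            return ("scale", dist)
    return ("idle", None)

# usage with two hypothetical tracker frames
print(interpret_gesture([{"pos": (0.1, 0.2, 0.5), "open": False}]))
print(interpret_gesture([{"pos": (0.0, 0.0, 0.5), "open": False},
                         {"pos": (0.3, 0.0, 0.5), "open": False}]))
```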

Development of a VR based epidural anesthesia trainer using a robotic device (로봇을 이용한 경막외마취 훈련기의 개발)

  • Kim J.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.10a / pp.135-138 / 2005
  • Robotic devices have been widely used in many medical applications due to their accuracy and programmability. One such application is the virtual reality medical simulator, which trains medical personnel in a computer-generated environment. In this paper, we present one of these applications: an epidural anesthesia trainer. Because performing epidural injections is a delicate task, it demands a high level of skill and precision from the physician. This trainer uses a robotic device and a computer-controlled solenoid valve to recreate the interaction forces between the needle and the various layers of tissue around the spinal cord. The robotic device generates the interaction forces in real time and can also provide haptic guidance that allows the user to follow a previously recorded expert procedure and feel the forces encountered.
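
As a very rough illustration of rendering layered needle-insertion forces and expert-path guidance, here is a Python sketch; the piecewise-linear tissue model, the layer values, and the spring-based guidance gain are illustrative assumptions, not the force model identified for the trainer.

```python
def needle_force(depth, layers):
    """Piecewise force model for needle insertion: each tissue layer is a
    (thickness, stiffness) pair that resists with a spring-like force until
    the needle has passed through it."""
    top = 0.0
    for thickness, stiffness in layers:
        if depth < top + thickness:
            return stiffness * (depth - top)   # still inside this layer
        top += thickness
    return 0.0   # past the last modelled layer (e.g. epidural space reached)

def guidance_force(pos, expert_pos, gain=50.0):
    """Spring pulling the trainee's needle toward the recorded expert
    position at the same point in the procedure."""
    return gain * (expert_pos - pos)

# usage: two tissue layers before a low-resistance space (values made up)
layers = [(0.01, 400.0), (0.03, 900.0)]
for d in (0.005, 0.03, 0.05):
    print(f"depth {d * 1000:.0f} mm -> {needle_force(d, layers):.1f} N")
```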
