• Title/Summary/Keyword: immersive user interface

MyWorkspace: VR Platform with an Immersive User Interface (MyWorkspace: 몰입형 사용자 인터페이스를 이용한 가상현실 플랫폼)

  • Yoon, Jong-Won; Hong, Jin-Hyuk; Cho, Sung-Bae
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.52-55 / 2009
  • With the recent development of virtual reality, user interfaces for immersive interaction have been actively investigated. Immersive user interfaces improve the efficiency and capability of information processing in virtual environments that provide various services, and enable effective interaction in ubiquitous and mobile computing. In this paper, we propose a virtual reality platform, "MyWorkspace", which renders a 3D virtual workspace through an immersive user interface. We develop an interface that integrates an optical see-through head-mounted display, a Wii remote controller, and a helmet with infrared LEDs, and that estimates the user's gaze direction as horizontal and vertical angles based on a model of head movements. MyWorkspace expands the conventional monitor-based 2D workspace into a layered 3D workspace and renders the part of the 3D workspace corresponding to the gaze direction, so the user can arrange tasks in the virtual workspace and switch between them by moving his or her head (a sketch of this gaze-to-pane mapping follows the entry). We also verify the performance of the immersive user interface and its usefulness through a usability test.

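The abstract above describes estimating gaze as horizontal and vertical head angles and rendering the matching pane of a layered 3D workspace. Below is a minimal sketch of that gaze-to-pane mapping, assuming the tracker reports yaw and pitch in degrees; the pane dimensions and function name are hypothetical, not taken from the paper.

    import math

    # Hypothetical layered workspace: task panes tiled by horizontal
    # sector (yaw) and vertical layer (pitch).
    SECTOR_WIDTH_DEG = 30.0   # horizontal extent of one pane (assumed)
    LAYER_HEIGHT_DEG = 20.0   # vertical extent of one layer (assumed)

    def gaze_to_pane(yaw_deg: float, pitch_deg: float) -> tuple:
        """Map a head pose (yaw, pitch) to the (sector, layer) pane to render."""
        sector = math.floor((yaw_deg + SECTOR_WIDTH_DEG / 2) / SECTOR_WIDTH_DEG)
        layer = math.floor((pitch_deg + LAYER_HEIGHT_DEG / 2) / LAYER_HEIGHT_DEG)
        return int(sector), int(layer)

    # Looking 35 degrees right and slightly up selects pane (1, 0).
    print(gaze_to_pane(35.0, 5.0))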

A Full Body Gumdo Game with an Intelligent Cyber Fencer using a Multi-modal (3D Vision and Speech) Interface (멀티모달 인터페이스(3차원 시각과 음성)를 이용한 지능적 가상검객과의 전신 검도게임)

  • Yoon, Jung-Won; Kim, Se-Hwan; Ryu, Je-Ha; Woo, Woon-Tack
    • Journal of KIISE: Computing Practices and Letters / v.9 no.4 / pp.420-430 / 2003
  • This paper presents an immersive multimodal Gumdo simulation game that lets a user experience whole-body interaction with an intelligent cyber fencer. The proposed system consists of three modules: (i) a non-distracting multimodal interface with 3D vision and speech, (ii) an intelligent cyber fencer, and (iii) immersive feedback through a large screen and sound. First, the multimodal interface with 3D vision and speech allows the user to move around and shout without being encumbered. Second, the intelligent cyber fencer interacts with the user through perception and reaction modules built from analysis of real Gumdo play (a toy version of such a rule is sketched after this entry). Finally, immersive audio-visual feedback from the large screen and sound effects helps the user experience immersive interaction. The system thus provides an immersive Gumdo experience involving whole-body movement and can be applied to education, exercise, art performance, and other domains.
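
The fencer's behavior above comes from perception and reaction modules derived from real Gumdo play. As a toy illustration only (the event names and rules below are invented, not the paper's), a perception-to-reaction step might look like:

    from dataclasses import dataclass
    from typing import Optional
    import random

    @dataclass
    class VisionEvent:
        stance: str       # e.g. "high_guard" or "strike_head" from 3D vision

    @dataclass
    class SpeechEvent:
        shout: str        # e.g. "kiai" recognized from the microphone

    def fencer_reaction(vision: VisionEvent, speech: Optional[SpeechEvent]) -> str:
        """Toy perception->reaction rule table for the cyber fencer."""
        if speech and speech.shout == "kiai" and vision.stance == "strike_head":
            return "block_head"                              # committed attack: defend
        if vision.stance == "high_guard":
            return random.choice(["strike_wrist", "feint"])  # probe an open target
        return "hold_center"

    print(fencer_reaction(VisionEvent("strike_head"), SpeechEvent("kiai")))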

Immersive user interfaces for visual telepresence in human-robot interaction (사람과 로봇간 원격작동을 위한 몰입형 사용자 인터페이스)

  • Jang, Su-Hyeong
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.406-410 / 2009
  • As research on more realistic human-robot interfaces is actively carried out, interest is growing in telepresence, in which a user remotely controls a robot and obtains environmental information through a video display. Providing natural telepresence services by moving a remote robot requires recognizing the user's behavior, but the movement-recognition methods used in previous telepresence systems were difficult and costly to implement, which limited their application to human-robot interaction. In this paper, using the Nintendo Wii controller and infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and delivers remote video through an HMD (a sketch of gaze estimation from the controller's IR camera follows the entry).

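The Wii remote's built-in IR camera tracks the helmet's infrared LEDs as bright blobs on a 1024x768 grid. A minimal sketch of turning blob positions into gaze angles follows; the field-of-view constants are approximate and the function is an illustration, not the paper's implementation.

    IR_RES_X, IR_RES_Y = 1024, 768      # Wii remote IR camera resolution
    FOV_X_DEG, FOV_Y_DEG = 33.0, 23.0   # approximate camera field of view (assumed)

    def head_angles(blobs):
        """Estimate gaze yaw/pitch from the centroid of helmet LED blobs."""
        cx = sum(x for x, _ in blobs) / len(blobs)
        cy = sum(y for _, y in blobs) / len(blobs)
        yaw = (cx / IR_RES_X - 0.5) * FOV_X_DEG      # left/right of camera axis
        pitch = (0.5 - cy / IR_RES_Y) * FOV_Y_DEG    # above/below camera axis
        return yaw, pitch

    # Two helmet LEDs seen left of center: the user is looking slightly left.
    print(head_angles([(380, 384), (420, 384)]))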

Prototyping Training Program in Immersive Virtual Learning Environment with Head Mounted Displays and Touchless Interfaces for Hearing-Impaired Learners

  • HAN, Insook; RYU, Jeeheon; KIM, Minjeong
    • Educational Technology International / v.18 no.1 / pp.49-71 / 2017
  • The purpose of the study was to identify key design features of virtual reality with head-mounted displays (HMD) and a touchless interface for hearing-impaired and hard-of-hearing learners. The virtual reality based training program was aimed at helping hearing-impaired learners learn to operate machinery, a task that requires spatial understanding. We developed an immersive virtual learning environment prototype with an HMD (Oculus Rift) and a touchless natural user interface (Leap Motion) to identify the key design features required to enhance virtual reality for hearing-impaired and hard-of-hearing learners (a sketch of a touchless selection gesture follows the entry). Two usability tests of the prototype were conducted; they revealed that several features of the system need revision, and that the technology has enormous potential to help hearing-impaired learners by providing realistic and immersive learning experiences. After the usability tests, in which hearing-impaired students explored the 3D virtual space, interviews were conducted; these likewise established that further revision of the system is needed to take the learners' physical as well as cognitive characteristics into account.
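
For a sense of what a touchless "select" gesture can look like with a hand tracker such as the Leap Motion, here is a minimal sketch of pinch detection from fingertip positions; the threshold and function are assumptions for illustration, not the prototype's actual logic.

    import math

    PINCH_THRESHOLD_MM = 25.0   # assumed thumb-index distance for a pinch

    def is_pinch(thumb_tip, index_tip):
        """Touchless 'select': thumb and index fingertips close together.
        Tip positions are (x, y, z) in millimeters, as hand trackers report."""
        return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

    # A learner selects a machine part by pinching near it.
    print(is_pinch((0.0, 150.0, 20.0), (10.0, 160.0, 25.0)))  # True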

Interface Application of a Virtual Assistant Agent in an Immersive Virtual Environment (몰입형 가상환경에서 가상 보조 에이전트의 인터페이스 응용)

  • Giri Na; Jinmo Kim
    • Journal of the Korea Computer Graphics Society / v.30 no.1 / pp.1-10 / 2024
  • In immersive virtual environments, including mixed reality (MR) and virtual reality (VR), avatars and agents, i.e. virtual humans, are being studied and applied in various ways as factors that increase users' social presence. Recently, studies have applied generative AI as an agent to improve learning effects or to support collaborative environments in immersive settings. This study proposes a novel method for the interface application of a virtual assistant agent (VAA) using OpenAI's ChatGPT in immersive VR and MR environments. The proposed method consists of an information agent that responds to user queries and a control agent that manipulates virtual objects and environments according to user needs. We set up a development environment that integrates the Unity 3D engine, OpenAI, and the packages and development tools needed for user participation in MR and VR. We also define a workflow that leads from voice input either to a question query and its answer, or to a control-request query and a control script (this routing is sketched after the entry). Based on this, MR and VR experience environments were produced, and experiments confirming the performance of the VAA measured the information agent's response time and the control agent's accuracy. We confirmed that the proposed VAA interface can increase efficiency in simple, repetitive tasks while remaining user-friendly. We present a novel direction for interface applications in immersive virtual environments through the proposed VAA and discuss the problems and limitations discovered so far.
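
A minimal sketch of the voice-to-query routing described above, with the LLM call stubbed out; the verb list, prompts, and function names are assumptions for illustration rather than the paper's implementation.

    # A recognized utterance goes either to the information agent
    # (question -> answer) or to the control agent (request -> control script).
    CONTROL_VERBS = {"move", "rotate", "scale", "hide", "show"}  # assumed set

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for an OpenAI chat-completion call")

    def route_utterance(text: str) -> str:
        words = text.strip().lower().split()
        if words and words[0] in CONTROL_VERBS:
            # Control agent: ask the model to emit a script for the scene.
            return ask_llm("Emit a control script for: " + text)
        # Information agent: answer the user's question directly.
        return ask_llm("Answer the user's question: " + text)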

A Study on Comparative Experiment of Hand-based Interface in Immersive Virtual Reality (몰입형 가상현실에서 손 기반 인터페이스의 비교 실험에 관한 연구)

  • Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.25 no.2 / pp.1-9 / 2019
  • This study compares hand-based interfaces with the aim of improving a user's virtual reality (VR) presence by enhancing immersion in VR interactions. To give users direct hand control over the virtual environment and its objects while minimizing the device burden of immersive VR systems, we designed two experimental interfaces: hand-motion-recognition sensor-based interaction and controller-based interaction (both are sketched behind a common action interface after this entry). Sensor-based interaction reflects accurate hand movements, direct gestures, and motion representations in the virtual environment and requires no device beyond the VR head-mounted display (HMD). Controller-based interaction is a generalized interface that maps each gesture to a key on the controller bundled with the VR HMD, keeping the controller easy to access. The comparative experiments in this study confirm the convenience and intuitiveness of VR interactions using the user's hands.
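
The two interfaces above can be compared behind a common action abstraction. The sketch below is hypothetical (thresholds and class names are assumed): the same "grab" action is read either from the finger-curl values of a hand-tracking sensor or from the grip trigger of the bundled controller.

    from abc import ABC, abstractmethod

    class GrabInput(ABC):
        """Common 'grab object' action behind both interfaces under comparison."""
        @abstractmethod
        def is_grabbing(self) -> bool: ...

    class HandSensorInput(GrabInput):
        """Hand-tracking variant: a grab is a fist, i.e. all fingers curled."""
        def __init__(self, finger_curl):            # per-finger 0.0 open .. 1.0 curled
            self.finger_curl = finger_curl
        def is_grabbing(self) -> bool:
            return all(c > 0.8 for c in self.finger_curl)   # threshold assumed

    class ControllerInput(GrabInput):
        """Controller variant: the same gesture mapped to the grip trigger."""
        def __init__(self, grip_value):             # trigger axis 0.0 .. 1.0
            self.grip_value = grip_value
        def is_grabbing(self) -> bool:
            return self.grip_value > 0.9                    # threshold assumed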

Gesture Recognition based on Mixture-of-Experts for Wearable User Interface of Immersive Virtual Reality (몰입형 가상현실의 착용식 사용자 인터페이스를 위한 Mixture-of-Experts 기반 제스처 인식)

  • Yoon, Jong-Won; Min, Jun-Ki; Cho, Sung-Bae
    • Journal of the HCI Society of Korea / v.6 no.1 / pp.1-8 / 2011
  • As virtual reality has become a medium for immersive services, user interfaces for immersive interaction have been actively investigated. In this paper, we propose a gesture-recognition-based immersive user interface that uses a helmet with embedded IR LEDs and data gloves to reflect the user's movements in the virtual reality environment effectively. The system recognizes the user's head movements using the IR LED helmet and an IR signal transmitter, and recognizes hand gestures from data gathered by the data gloves. Hand gestures are difficult to recognize accurately with a single general model: the human hand has many articulations, and users differ in hand size and movement. We therefore apply Mixture-of-Experts-based gesture recognition to handle the varied hand gestures of multiple users accurately (a schematic of the gating computation follows the entry). The user's head movement changes the perspective in the virtual environment to match movement in the real world, and hand gestures serve as inputs to the virtual environment. A head-mounted display (HMD) can be used with the proposed system to immerse the user in the virtual environment. To evaluate the usefulness of the proposed interface, we developed an interface for a virtual orchestra environment. The experiment verified that users can operate the system easily, intuitively, and enjoyably.

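Schematically, a Mixture-of-Experts classifier lets a gating network decide how much to trust each expert for a given input, which suits gesture data that varies across users. The sketch below uses random stand-in parameters, not the paper's trained model.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def moe_predict(x, experts, gate_w):
        """Gate-weighted combination of per-expert class scores.
        x: glove feature vector; experts: one weight matrix per expert
        (e.g. per user group or gesture family); gate_w: gating weights."""
        gate = softmax(gate_w @ x)                        # responsibility of each expert
        scores = np.stack([softmax(W @ x) for W in experts])
        return gate @ scores                              # combined class posterior

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)                                   # glove features
    experts = [rng.normal(size=(5, 16)) for _ in range(3)]    # 3 experts, 5 gestures
    gate_w = rng.normal(size=(3, 16))
    print(moe_predict(x, experts, gate_w).round(3))           # probabilities over 5 gestures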

Mobile Haptic Interface for Large Immersive Virtual Environments: PoMHI v0.5 (대형 가상환경을 위한 이동형 햅틱 인터페이스: PoMHI v0.5)

  • Lee, Chae-Hyun; Hong, Min-Sik; Lee, In; Choi, Oh-Kyu; Han, Kyung-Lyong; Kim, Yoo-Yeon; Choi, Seung-Moon; Lee, Jin-Soo
    • The Journal of Korea Robotics Society / v.3 no.2 / pp.137-145 / 2008
  • We present the initial results of ongoing research on a novel Mobile Haptic Interface (MHI) that can provide an unlimited haptic workspace in large immersive virtual environments. When a user explores a large virtual environment, the MHI senses the user's position and orientation, moves itself into an appropriate configuration, and delivers force feedback, thereby enabling a virtually limitless workspace (the base-placement logic is sketched after this entry). Our MHI (PoMHI v0.5) features omnidirectional mobility, a collision-free motion-planning algorithm, and force feedback for general environment models. We also provide experimental results demonstrating the fidelity of our mobile haptic interface.

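A minimal sketch of the base-placement idea, assuming a desktop haptic device carried by an omnidirectional base; the standoff distance, reach, and function names are invented for illustration, and collision-free planning is omitted.

    import math

    REACH_M = 0.4   # usable radius of the carried haptic device (assumed)

    def base_target(user_xy, user_heading_rad, standoff_m=0.6):
        """Place the mobile base in front of the user so the haptic tool
        stays near the center of the device's local workspace."""
        ux, uy = user_xy
        return (ux + standoff_m * math.cos(user_heading_rad),
                uy + standoff_m * math.sin(user_heading_rad))

    def needs_repositioning(tool_xy, device_center_xy):
        """Re-plan the base when the tool nears the local workspace edge."""
        return math.dist(tool_xy, device_center_xy) > 0.8 * REACH_M

    print(base_target((1.0, 2.0), 0.0))   # base 0.6 m ahead of the user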

User Interfaces for Visual Telepresence in Human-Robot Interaction Using Wii Controller (WII 컨트롤러를 이용한 사람과 로봇간 원격작동 사용자 인터페이스)

  • Jang, Su-Hyung; Yoon, Jong-Won; Cho, Sung-Bae
    • Journal of the HCI Society of Korea / v.3 no.1 / pp.27-32 / 2008
  • As research on more realistic human-robot interfaces is actively carried out, interest is growing in telepresence, in which a user remotely controls a robot and obtains environmental information through a video display. Providing natural telepresence services by moving a remote robot requires recognizing the user's behavior, but the movement-recognition methods used in previous telepresence systems were difficult and costly to implement, which limited their application to human-robot interaction. In this paper, using the Nintendo Wii controller and infrared LEDs, we propose an immersive user interface that easily recognizes the user's position and gaze direction and provides remote video information through an HMD (a sketch of forwarding the estimated gaze as robot camera commands follows the entry).

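As a companion to the gaze-estimation sketch earlier in this list, the estimated angles would be forwarded as pan/tilt targets for the remote robot's camera; the message format and servo limits below are assumptions, not the paper's protocol.

    import json

    def pan_tilt_command(yaw_deg: float, pitch_deg: float) -> bytes:
        """Serialize a gaze pose into a hypothetical robot camera command."""
        pan = max(-90.0, min(90.0, yaw_deg))      # clamp to servo limits (assumed)
        tilt = max(-30.0, min(30.0, pitch_deg))
        return json.dumps({"cmd": "look", "pan": pan, "tilt": tilt}).encode()

    print(pan_tilt_command(-3.6, 0.0))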

Motion-based Controlling 4D Special Effect Devices to Activate Immersive Contents (실감형 콘텐츠 작동을 위한 모션 기반 4D 특수효과 장치 제어)

  • Kim, Kwang Jin; Lee, Chil Woo
    • Smart Media Journal / v.8 no.1 / pp.51-58 / 2019
  • This paper describes a gesture-based application that controls the physical special-effect devices of 4D content using pulse width modulation (PWM). User actions recognized by an infrared sensor are interpreted as commands for 3D content control, several of which drive the devices that generate special effects, delivering physical stimuli to the user. Because the content is controlled through a natural user interface (NUI), the user is drawn directly into an immersive experience, which yields a higher degree of interest and attention. To measure the efficiency of the proposed method, we implemented a PWM-based real-time linear control system that manages the motion-recognition and animation-controller parameters obtained from the infrared sensor and transmits the corresponding events (the duty-cycle mapping is sketched after this entry).
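
In PWM control, effect strength is set by the fraction of each fixed period the output is held high. A minimal sketch of mapping a motion event's intensity to a duty cycle follows; the period and mapping are assumptions for illustration.

    PWM_PERIOD_US = 20_000   # 50 Hz period, typical for hobby actuators (assumed)

    def duty_from_intensity(intensity: float) -> int:
        """Map a normalized motion intensity (0..1) from the infrared sensor
        to a PWM high-time in microseconds for the effect device."""
        intensity = min(max(intensity, 0.0), 1.0)   # clamp to valid range
        return int(intensity * PWM_PERIOD_US)

    # A fast swipe (intensity 0.75) drives a fan at 75% duty cycle.
    print(duty_from_intensity(0.75))   # 15000 us high per 20000 us period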