• Title/Summary/Keyword: VR System

A Molecular Modeling Education System based on Collaborative Virtual Reality (협업 가상현실 기반의 분자모델링 교육 시스템)

  • Kim, Jung-Ho; Lee, Jun; Kim, Hyung-Seok; Kim, Jee-In
    • Journal of the Korea Computer Graphics Society / v.14 no.4 / pp.35-39 / 2008
  • A computer-supported collaborative system provides a shared virtual workspace over the Internet in which remote users cooperate to achieve their goals, overcoming problems caused by distance and time. VRMMS (Virtual Reality Molecular Modeling System) [1] is a VR-based collaborative system in which biologists can remotely participate in molecular modeling tasks such as viewing three-dimensional structures of molecular models, confirming the results of molecular simulations, and providing feedback for the next simulations. Biologists can use VRMMS to execute molecular simulations. However, first-time users and beginners need to spend time studying and practicing before they can skillfully manipulate molecular models and the system. The most effective way to resolve this problem would be a face-to-face session for teaching and learning VRMMS, but this is not practical because the users are remotely located, so the learning time can last longer than desired. In this paper, we propose combining Second Life [2] with VRMMS to remove this problem. Second Life can be used to build a shared workplace over the Internet where molecular simulations using VRMMS can be exercised, taught, learned, and practiced. Through the web, users can collaborate with each other using VRMMS. Their avatars and molecular simulation tools can be utilized remotely to give the remote users a sense of 'being there'. The users can discuss, teach, and learn over the Internet. The shared workspaces for discussion and education are designed and implemented in Second Life. Since the activities in Second Life and VRMMS are designed to be realistic, the system is expected to help users improve their learning and experimental performance.

A Bio-Edutainment System to Virus-Vaccine Discovery based on Collaborative Molecular in Real-Time with VR

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.25 no.6 / pp.109-117 / 2020
  • An edutainment system aims to help learners recognize problems effectively, grasp and classify the important information needed to solve them, and convey what they have learned. Edutainment contents can be usefully applied to education and training in both scientific and industrial areas. Our present work proposes an edutainment system that can be applied to a drug discovery process, including virtual screening, by using intuitive multi-modal interfaces. In this system, a stereoscopic monitor is used to present three-dimensional (3D) macro-molecular images, and multi-modal interfaces are supported so that 3D models of molecular structures can be manipulated effectively. The system makes a docking simulation, one of the important virtual drug screening methods, easy to carry out by applying gaming factors. A level-up concept is implemented to realize a bio-game approach in which the gaming factor depends on the number of objects and users. The quality of the proposed system is evaluated by comparing performance in terms of the finishing time of a drug docking process to screen new inhibitors against target proteins of human immunodeficiency virus (HIV) in an e-drug discovery process.

Development of Code-PPP Based on Multi-GNSS Using Compact SSR of QZSS-CLAS (QZSS-CLAS의 Compact SSR을 이용한 다중 위성항법 기반의 Code-PPP 개발)

  • Lee, Hae Chang; Park, Kwan Dong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.521-531 / 2020
  • QZSS (Quasi-Zenith Satellite System) provides the CLAS (Centimeter Level Augmentation Service) through the satellites' L6 band. CLAS provides correction messages called C-SSR (Compact State Space Representation) for GPS (Global Positioning System), Galileo, and QZSS. In this study, CLAS messages were received using a Septentrio AsteRx4, a GNSS receiver capable of tracking the L6 band, and the messages were decoded to acquire the C-SSR. In addition, a multi-GNSS (Global Navigation Satellite System) Code-PPP (Precise Point Positioning) algorithm was developed that compensates for GNSS errors by applying the C-SSR to pseudo-range measurements of GPS, Galileo, and QZSS. Non-linear least squares estimation was then used to estimate the three-dimensional position of the receiver and the receiver clock errors for each GNSS constellation. To evaluate the accuracy of the developed algorithms, static positioning was performed at TSK2 (Tsukuba), one of the IGS (International GNSS Service) sites, and kinematic positioning was performed while driving around the Ina River in Kawanishi. For static positioning, the mean RMSE (Root Mean Square Error) over all data sets was 0.35 m in the horizontal direction and 0.57 m in the vertical direction. For kinematic positioning, the accuracy was approximately 0.82 m in the horizontal direction and 3.56 m in the vertical direction compared to the RTK-FIX values of VRS.
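
The abstract above describes estimating the receiver position and per-constellation receiver clock errors from corrected pseudoranges with non-linear least squares. The following is a minimal Gauss-Newton sketch of that estimation step only; it assumes satellite positions and already-corrected pseudoranges are given, and it omits C-SSR decoding, tropospheric/ionospheric handling, and weighting, so it is an illustration rather than the paper's implementation.

```python
# Minimal Gauss-Newton sketch of code-based point positioning: estimate the
# receiver ECEF position plus one receiver clock bias per constellation from
# SSR-corrected pseudoranges. Satellite positions are assumed known.
import numpy as np

def code_ppp_solve(sat_pos, pseudoranges, constel_idx, n_constel, iters=10):
    """sat_pos: (N,3) satellite ECEF positions [m]
       pseudoranges: (N,) corrected pseudoranges [m]
       constel_idx: (N,) index 0..n_constel-1 of each satellite's system
       returns (receiver position [m], clock biases per constellation [m])."""
    constel_idx = np.asarray(constel_idx)
    x = np.zeros(3 + n_constel)                      # [x, y, z, dt_0, ..., dt_k]
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)    # geometric ranges
        predicted = rho + x[3 + constel_idx]             # add per-system clock bias
        residual = pseudoranges - predicted
        # Jacobian: unit line-of-sight vectors and clock-bias partials
        H = np.zeros((len(pseudoranges), 3 + n_constel))
        H[:, :3] = (x[:3] - sat_pos) / rho[:, None]
        H[np.arange(len(pseudoranges)), 3 + constel_idx] = 1.0
        dx, *_ = np.linalg.lstsq(H, residual, rcond=None)
        x += dx
        if np.linalg.norm(dx[:3]) < 1e-4:                # position step below 0.1 mm
            break
    return x[:3], x[3:]
```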

A Real Time 6 DoF Spatial Audio Rendering System based on MPEG-I AEP (MPEG-I AEP 기반 실시간 6 자유도 공간음향 렌더링 시스템)

  • Kyeongok Kang; Jae-hyoun Yoo; Daeyoung Jang; Yong Ju Lee; Taejin Lee
    • Journal of Broadcast Engineering / v.28 no.2 / pp.213-229 / 2023
  • In this paper, we introduce a spatial sound rendering system that provides 6DoF spatial sound in real time in response to the movement of a listener located in a virtual environment. The system was implemented using MPEG-I AEP as the development environment for the CfP response of MPEG-I Immersive Audio, and it consists of an encoder and a renderer that includes a decoder. The encoder encodes, offline, metadata such as the spatial audio parameters of the virtual scene described in the EIF and the directivity information of the sound sources provided in SOFA files, and writes them into a bitstream. The renderer receives the transmitted bitstream and performs 6DoF spatial sound rendering in real time according to the position of the listener. The main spatial sound processing technologies applied in the rendering system are sound source effects and obstacle effects; additional processing includes the Doppler effect, sound field effects, and so on. The results of an in-house subjective evaluation of the developed system are also presented.
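
The abstract names distance, Doppler, and obstacle effects but gives no formulas. Below is a hedged sketch of two generic per-source computations often used in 6DoF rendering, inverse-distance gain and a Doppler frequency factor from listener/source motion; the function names and parameters are illustrative and are not taken from the MPEG-I reference software.

```python
# Illustrative per-source computations common in 6DoF audio rendering:
# inverse-distance attenuation and a Doppler pitch factor. Not the MPEG-I
# reference algorithm, only a generic sketch.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degC

def distance_gain(src_pos, listener_pos, ref_dist=1.0, min_dist=0.1):
    """1/r attenuation relative to a reference distance, clamped to <= 1."""
    r = max(np.linalg.norm(np.asarray(src_pos) - np.asarray(listener_pos)), min_dist)
    return min(ref_dist / r, 1.0)

def doppler_factor(src_pos, src_vel, listener_pos, listener_vel):
    """Frequency scaling f_observed / f_emitted for moving source and listener."""
    direction = np.asarray(src_pos) - np.asarray(listener_pos)
    direction = direction / np.linalg.norm(direction)        # listener -> source
    v_listener = np.dot(np.asarray(listener_vel), direction) # >0: listener approaches
    v_source = np.dot(np.asarray(src_vel), direction)        # >0: source recedes
    return (SPEED_OF_SOUND + v_listener) / (SPEED_OF_SOUND + v_source)
```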

Availability Evaluation of FKP-RTK Positioning for Construction Survey Application (FKP-RTK 측위의 시공측량 적용성 실험)

  • Kim, In Seup
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.6_1 / pp.463-469 / 2013
  • In addition to the VRS-RTK service, an FKP-RTK service was recently launched in Korea; however, unlike VRS, it has not yet been applied to various surveying tasks. The VRS system operates with two-way communication over the mobile Internet: the user sends the rover position to the network RTK server, and the server continuously provides correction data to the user. This increases the communication load and can cause delays or failures in data transmission, depending on the server capacity and the number of concurrent users. In contrast, since the FKP system uses one-way communication, the user only receives, from the server, the correction data and area correction parameters for the selected continuously operating reference station. Thus, there is no limit on the number of concurrent users in the FKP system, and it can be more economical than the VRS system. To evaluate its accuracy, we performed FKP-RTK tests at unified control points and urban control points located in five regions of Korea. As a result, most of the FKP positioning data fell within an error range of ±6.2 cm in the horizontal direction, which is sufficient for construction surveys such as earthwork, although not for precise structure surveys.
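
Since the abstract turns on how FKP's one-way area correction parameters are used, here is a hedged sketch of the basic idea: the rover interpolates its own correction from the reference-station correction plus north-south and east-west gradients. Units, parameter names, and the example numbers are illustrative simplifications, not the standardized FKP message format.

```python
# Illustrative FKP-style plane interpolation: the rover combines the broadcast
# correction at the reference station with area correction parameters
# (horizontal gradients) scaled by its offset from that station.
def fkp_correction(corr_ref_m, grad_north_mm_per_km, grad_east_mm_per_km,
                   d_north_km, d_east_km):
    """corr_ref_m: correction at the reference station [m]
       grad_*: area correction parameters [mm/km]
       d_north_km, d_east_km: rover offset from the reference station [km]"""
    delta_mm = grad_north_mm_per_km * d_north_km + grad_east_mm_per_km * d_east_km
    return corr_ref_m + delta_mm / 1000.0   # convert mm back to metres

# Example: 10 km north and 5 km east of the station with gradients of
# 2 mm/km and -1 mm/km adds (2*10 - 1*5) = 15 mm to the broadcast correction.
corrected = fkp_correction(0.850, 2.0, -1.0, 10.0, 5.0)   # -> 0.865 m
```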

A Study on Development of Experimental Contents Using 3-channel Multi-Image Playback Technique: Based on transparent OLED and dual layer display system (3채널 멀티 영상 재생 기법과 증강현실을 이용한 체험 콘텐츠 제작에 관한 연구: 투명 OLED 및 듀얼 레이어 디스플레이 시스템 기반)

  • Lee, Sang-Hyun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.6 / pp.151-160 / 2017
  • Displaying high-quality video on a large display is a common way of developing local tourist attractions and culture into experiential content, so something is needed to distinguish the participant's active participation from the purely visual experiences offered in other regions. In this paper, using an active-type transparent OLED, a dual-layer display system for regional tourist attractions is combined with extended image implementation and augmented interaction techniques so that participants feel a real-world sense of presence, as if guided to new experiences and beautiful sights. Additional image and UI layers are applied over the video layers so that visitors can experience sightseeing information, weather, maps, accommodations, festivals, and photographic materials together with the imagery. In addition to the dual-layer system, a multi-display configuration with one vertical 55-inch display on each side was added, increasing immersion and the fun of interacting with the interlocked interface. By using a transparent OLED, a dual-layer panel, and a 3-channel multi-image playback technique, augmented experiential content was developed that allows the local attractions of Jeollanam-do Province in Korea to be experienced at any time, without limitations of time and space.

Important Facility Guard System Using Edge Computing for LiDAR (LiDAR용 엣지 컴퓨팅을 활용한 중요시설 경계 시스템)

  • Jo, Eun-Kyung; Lee, Eun-Seok; Shin, Byeong-Seok
    • KIPS Transactions on Computer and Communication Systems / v.11 no.10 / pp.345-352 / 2022
  • Recent LiDAR (Light Detection And Ranging) sensors scan surrounding objects in real time and can detect how objects move and change. As the production cost of these sensors has decreased, LiDAR has begun to be used in various industries such as facility guarding, smart cities, and self-driving cars. However, LiDAR produces a large amount of input data because of its real-time scanning, which can cause a bottleneck, so another way of processing this large amount of data is needed in a LiDAR system. This paper proposes edge computing to compress massive point clouds for fast processing. Since the laser reflection range of a LiDAR sensor is limited, multiple LiDAR sensors must be used to scan a large area, and for this reason the data from multiple sensors must be processed together to detect or recognize objects in real time. The edge computers compress the point clouds efficiently to accelerate data processing, and all data are decompressed in the main cloud in real time. In this way, the user can control the LiDAR sensors in the main system without any bottleneck. The proposed system resolves the bottleneck that was a problem in cloud-based methods by applying an edge computing service.
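
The abstract does not specify the compression method, so the following is a hedged sketch of one common, generic point-cloud reduction step an edge node might apply before uploading: voxel-grid downsampling. Function names, the voxel size, and the reduction claim are assumptions for illustration, not the paper's scheme.

```python
# One simple way an edge node might shrink a LiDAR point cloud before
# uploading: quantize points to a voxel grid and keep one representative
# point (the centroid) per occupied voxel. Generic technique, not the
# compression used in the paper.
import numpy as np

def voxel_downsample(points, voxel_size=0.10):
    """points: (N,3) float array of x, y, z in metres.
       Returns the centroid of the points falling in each occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)        # voxel index per point
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)                              # scatter-add per voxel
    return sums / counts[:, None]                                 # voxel centroids
```

A 0.1 m grid usually preserves the large-scale structure needed for perimeter monitoring while sending far fewer points to the main cloud; lossless entropy coding could be layered on top of this step.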

The Design of Digital Human Content Creation System (디지털 휴먼 컨텐츠 생성 시스템의 설계)

  • Lee, Sang-Yoon; Lee, Dae-Sik; You, Young-Mo; Lee, Kye-Hun; You, Hyeon-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.4 / pp.271-282 / 2022
  • In this paper, we propose a digital human content creation system. The system performs AI-based 3D modeling from a whole-body scan and then produces the digital human through 3D-modeling post-processing, texturing, and rigging. By combining this with virtual reality (VR) content information, natural motion of the virtual model can be achieved in virtual reality, and digital human content can be created efficiently within one system. This enables the creation of VR-based digital human content with minimal resources. In addition, the system is intended to provide an automated pre-processing pipeline that removes the need for manual 3D modeling and texturing, and to provide technology for efficiently managing various digital human contents. In particular, since pre-processing steps such as 3D modeling and texturing for constructing a virtual model are performed automatically by artificial intelligence, a virtual model can be configured quickly and efficiently. It also has the advantage that digital human contents can be easily organized and managed through signature motions.
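
The abstract describes a staged flow (whole-body scan, AI modeling, post-processing, texturing, rigging, VR packaging). Below is a hedged sketch of that architecture as a simple ordered pipeline; every stage name, function, and artifact key is hypothetical, since the paper's actual components are not described in detail here.

```python
# Hypothetical staged pipeline mirroring the flow described in the abstract.
# Each stage consumes a job object and records its output in job.artifacts.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DigitalHumanJob:
    scan_data: bytes                       # raw whole-body scan input
    artifacts: dict = field(default_factory=dict)

Stage = Callable[[DigitalHumanJob], DigitalHumanJob]

def run_pipeline(job: DigitalHumanJob, stages: List[Stage]) -> DigitalHumanJob:
    """Run each stage in order; later stages can read earlier artifacts."""
    for stage in stages:
        job = stage(job)
    return job

def ai_modeling(job):     job.artifacts["mesh"] = "mesh_from_scan"; return job
def post_processing(job): job.artifacts["clean_mesh"] = "cleaned_mesh"; return job
def texturing(job):       job.artifacts["textures"] = "albedo_and_normal"; return job
def rigging(job):         job.artifacts["rig"] = "skeleton_and_weights"; return job
def vr_packaging(job):    job.artifacts["vr_asset"] = "asset_bundle"; return job

job = run_pipeline(DigitalHumanJob(scan_data=b""),
                   [ai_modeling, post_processing, texturing, rigging, vr_packaging])
```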

Responsive Digital Heritage Experience with Haptic Deformation (햅틱 변형을 이용한 반응형 디지털 문화 체험)

  • Lee, Beom-Chan; Park, Jeung-Chul; Kim, Jong-Phil; Lee, Kwan-H.; Ryu, Je-Ha
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.210-218 / 2006
  • This paper concerns the haptic deformation interaction of the Responsive Multimedia System for virtual storytelling being developed at the Gwangju Institute of Science and Technology (GIST), which is based on the legend of the thousand Buddhas and thousand pagodas of Unjusa temple in the Jeonnam region. Existing digital heritage experience systems have concentrated research effort on visual and auditory technologies to provide realistic experiences. However, as the importance of haptic interaction, a key element of human perception, has grown, this paper develops a haptic deformation algorithm and interaction system for virtual heritage experiences that allows users to touch a virtual Buddha statue and deform its surface, thereby increasing immersion and adding enjoyment. To further increase immersion, the system adds auditory effects to the visual presentation, providing the ambient sounds of the surrounding environment (birds, water, and wind) during the experience. To differentiate it from existing heritage experience systems, it also provides an interaction in which the user deforms the virtual Buddha statue directly with a 3-D input device and thereby creates his or her own unique work. The proposed haptic deformation interaction system is therefore expected to contribute to the educational effect of, and increased interest in, cultural heritage through the user's active participation and engagement.

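The paper above centers on a haptic deformation algorithm but the abstract gives no details, so here is a hedged sketch of one generic approach to surface deformation around a haptic contact point: displacing nearby mesh vertices along the tool's push direction with a Gaussian falloff. Function names and parameters are illustrative and this is not the algorithm used in the paper.

```python
# Generic free-form deformation sketch in the spirit of the haptic interaction
# described above: vertices near the haptic tool's contact point are pushed
# along the applied-force direction, with a Gaussian falloff so the dent stays local.
import numpy as np

def deform_mesh(vertices, contact_point, push_dir, depth, radius=0.05):
    """vertices: (N,3) mesh vertices; contact_point: (3,) tool tip position;
       push_dir: (3,) unit direction of the applied force;
       depth: maximum displacement [m]; radius: falloff radius [m]."""
    d = np.linalg.norm(vertices - contact_point, axis=1)
    weight = np.exp(-(d / radius) ** 2)            # 1 at the contact, ~0 far away
    return vertices + depth * weight[:, None] * push_dir
```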

Realizing a Mixed Reality Space Guided by a Virtual Human; Creating a Virtual Human from Incomplete 3-D Motion Data

  • Abe, Shinsuke; Yamaguti, Iku; Tan, Joo Kooi; Ishikawa, Seiji
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2003.10a / pp.1625-1628 / 2003
  • Recently, the VR technique has evolved into a mixed reality (MR) technique, in which a user can observe the real world in front of him/her as well as displayed virtual objects. This has been realized by the employment of a see-through type HMD (S-HMD). We have been developing a mixed reality space employing the MR technique. The objective of our study is to realize a virtual human that acts as a man-machine interface in the real space. It is important in this study to create a virtual human that acts naturally in front of a user. In order to give natural motions to the virtual human, we employ a motion capture technique we have developed, with which we have already created various 3-D human motion models. In this paper, we present a technique for creating a virtual human using a human model provided by the computer graphics software 3D Studio Max. The main difficulty is that 3D Studio Max requires 28 feature points to describe a human motion, whereas the motion capture system used assumes a smaller number of feature points. Therefore, a technique is proposed in this paper for producing motion data for 28 feature points from motion data with fewer feature points, or from incomplete motion data. The performance of the proposed technique was examined by visually observing demonstrations of several motions of a created virtual human, and overall natural motions were realized.

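The paper above synthesizes 28-feature-point motion data from a smaller captured set; the abstract does not state how. The sketch below shows one simple, generic idea, estimating an unmeasured point as a fixed weighted combination of measured neighbouring markers over all frames. The joint names and weights are illustrative assumptions only, not the paper's method.

```python
# Hedged illustration: fill in a missing feature point's trajectory as a fixed
# weighted combination of measured neighbouring markers (e.g., approximate an
# elbow from the shoulder and wrist trajectories).
import numpy as np

def fill_missing_point(frames, weights):
    """frames: dict name -> (T,3) trajectories of measured points.
       weights: dict name -> scalar weight of each measured point.
       Returns a (T,3) trajectory estimate for the missing point."""
    estimate = sum(w * frames[name] for name, w in weights.items())
    return estimate / sum(weights.values())

# Example with hypothetical markers: approximate an elbow as 45% shoulder + 55% wrist.
T = 100
frames = {"shoulder": np.zeros((T, 3)), "wrist": np.ones((T, 3))}
elbow = fill_missing_point(frames, {"shoulder": 0.45, "wrist": 0.55})
```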